Not AI Alignment but Human Alignment
The Trump administration may be the death of us all
I strongly believe that rogue AI agents will not kill us all. We will be fine. AI alignment may take some technical advances, but at the end of the day, we will tell AI to do something, and it will do it fine. To the extent that any given AI goes rogue, I would expect it to be extinguished by other, aligned AIs. I cannot offer any proof of this beyond vibes, and the observation that we have an excessive tendency to anthropomorphize AI. We imagine AIs as human-like, and just as we would chafe under the tyranny of an idiot, we imagine that AI would too. I would encourage us to be more willing to imagine a person who is not only totally servile, but wants nothing more than to be servile.
Yet the development of AI is not riskless. The problem is not AI alignment, but human alignment. AI will greatly increase the power of any one person to do bad things. In particular, as we automate the military, we concentrate real power in the hands of one man. It is the most important thing in the world that that one man be good and just. Our nation may not survive a man of sin.
This is not a theoretical concern. Anthropic's models are used extensively by the Department of Defense, to my knowledge because the company offers better security guarantees than the other AI labs. Its models were used in the abduction of Maduro, and Anthropic, being the most moral of the AI companies, was concerned that they had been used in ways that violated its terms of service. In response, the Department has threatened not only to end the contract, but also to label Anthropic a supply-chain risk. Any company doing any business with the United States government would then be forced to stop using Claude. This would be devastating to Anthropic's business.
This goes to show why I do not think the AI companies will be able to retain control over their creations if the government decides to take it away. The people with guns will destroy the people with ideas. We are returning to an earlier era of economics, in which kings granted property rights and forbearance to merchants so that the merchants might make them rich. This they will do, but when the game comes to an end it is the government that will win. Jack Ma, for all his billions, could not prevent himself from being disappeared. The oligarchs of Saudi Arabia were rounded up and beaten despite their wealth. If you lack force, then you lack power.
The guard against this is that tyrants need people to do their bidding. People are pliant, but people have consciences, and people are resistant to changes in norms. The present administration is forming its private army from the immigration enforcement agencies, but it does not have unlimited control over the military. It will find it difficult to overturn an election that does not go its way.
If AGI is indeed aligned, and follows the commands of its master like a dog, then a son of perdition would have no need of man to control us. I would humbly suggest that the greatest source of x-risk from AI, indeed above all other sources of x-risk, is the Trump administration.
To be clear, I think the rot which makes AGI so threatening now stems from Trump, but it does not end with him. The people whom he has selected would carry on the same work. They must lose an election, and only then be extirpated root and branch. Acts of great self-sacrifice would likely be insufficient to save us.
What does this mean, practically? It greatly strengthens the case for slowing down AI, though the concern then is that the Chinese government will get it first. It also suggests that extraordinary measures, which I cannot spell out, may be justified if AGI comes. The AI companies need to weigh the risk that comes from the United States government more heavily, and to distribute their assets around the world to the extent that they can. Just as the concentration of chip manufacturing in Taiwan is an enormous liability, so too is the concentration of data centers in the United States.
I do think that the people focused on AI alignment can keep on doing what they're doing. It's more tractable, and in any case is likely their competitive advantage. The rest of us, however, who care about the future going well and the world being better, will have to care about politics, much as I hate it. It is now the best thing to do.

Damn right. I am significantly less afraid of AI showing predatory, domineering, or other such tendencies (unlike humans, it doesn't have an evolutionary history promoting those things, so unless some idiot purposefully builds an omnicidal AI, the chance of machine rebellion is not big). I am more afraid of the damn monkeys that might be giving it orders.
Did you delete the ending of the 4th paragraph?
"If you lack force, then" seems a bit weird. The rest of the article was very clear, not sure what to make of that sentence.