Armstrong: "AI is new; capitalism is not"

AI companies like to present themselves as stewards of a technology that could change humanity's future for the better. But when safety promises collide with investors' demands for returns, it is profit that decides, writes Robert Armstrong in the Financial Times.
He points to the enormous flows of capital into the AI sector, and warns that cautious executives can quickly be pushed out.
"AI is new; capitalism is not," writes Armstrong, arguing that the risks must be regulated by citizens and lawmakers, not by the companies themselves.
AI companies are just companies
As we leap into a new technological age, the old rules of capitalism still apply.
AI enthusiasts wave off the notion that the technology will lead to mass unemployment. A lot of people once drove horse-drawn carts and made buggy whips, they say. Losing those jobs to automobiles didn’t lead to breadlines; on the contrary.
Doomers respond that, in the case of AI, we’re not the drivers; we’re the horses. The optimists’ retort, that horses’ lives got better as they went from work animals to luxury items, is no help. Have a look at what happened to the equine population in the first half of the 20th century.
Whatever AI’s ultimate impact on unemployment, this back-and-forth highlights the idea that AI is unlike all the technologies that went before, with greater complexity, greater upsides and greater risks — for labour, cyber security, national defence, mental health and so on. So those controlling it have special responsibilities. Everyone in the AI industry acknowledges this. It is expressed in OpenAI’s “Model Spec” and in papers on the topic by Anthropic CEO Dario Amodei, which lay down guidelines about what AI companies will allow their models to do.
But AI companies and their models will follow one rule before all others: they will seek to maximise returns for their shareholders, up to the limits set by law. When the law of profit conflicts with the company’s internal principles, profit will win every time.
This is not to be regretted. It is the outcome our system of corporate capitalism was intended to create. It has made us free and prosperous by encouraging risk-taking and creativity. And, in most cases, the profit motive and the common good line up beautifully. But as we leap into a new technological age, the old rules of capitalism still apply. Corporations only manage or pay for the economic externalities they create when they are forced to.
The amounts of money AI has attracted are staggering. The Big Tech “hyperscalers” plan to invest more than $600bn in the space this year alone. AI start-ups raised $73bn in the first quarter of 2025. OpenAI raised $122bn in a single funding round last month. The capital comes from people who demand a high return, and who know the industry will soon need more capital to buy computing power. This ensures that excessively cautious executives will be pushed aside, and sets up an arms race in which prioritising safety will open the way for technological irrelevance.
Amodei argues that there is a tension between building AI systems that won’t “autonomously threaten humanity” and staying ahead of authoritarian nations (or is it nation?) that might use such systems against us.
Before that tension comes into play, though, AI company CEOs will have to balance safety and competition. If Amodei or OpenAI’s Sam Altman strike that balance in a way that displeases their investors, they will be sacked. The industry’s sensitivity to revenue growth expectations is extreme. This week, The Wall Street Journal reported that OpenAI had missed internal sales and user targets. The story moved the whole of the Nasdaq, and OpenAI quickly released a statement calling it “clickbait”.
When Amodei says that he is “focused day and night on how to steer us away from [AI’s] negative outcomes and towards the positive ones”, I’m sure he is sincere. I’m also sure that, from the point of view of how the conflict between AI profit and safety plays out, his words are just noise. The relevant incentive structures don’t care what he is focused on.
This simple observation — that some of the risks created by AI can only be managed by citizens, not companies — leaves hard questions about how to regulate it. Figuring it out will be messy. Some AI companies’ fears about unintended consequences will be realised.
What might good regulation look like? Horse anecdotes aside, it should not try to protect specific job categories, which always ends in paying people to be unproductive. It should match specific regulatory tools to specific harms — physical, digital, psychological, financial — rather than taking the form of a monolithic law. On the liability side, it should take seriously the example of how other useful but inherently dangerous products like explosives are treated, and it should rethink agency law applied to non-human agents. It should emphasise liability, rather than companies’ duty to warn. Investors’ skin needs to be in the safety game.
At the outset, though, the key is to reject any suggestion that this product is different, and somehow too complicated for citizens to have a say in. AI is new; capitalism is not.
©The Financial Times Limited 2026. All Rights Reserved. FT and Financial Times are trademarks of the Financial Times Ltd. Not to be redistributed, copied or modified in any way.