
The Pentagon dispute turns ethics into a weapon in the AI talent race

Anthropic CEO Dario Amodei, right, and Chief Product Officer Mike Krieger, left, in San Francisco in 2025. (Don Feria / AP)

The conflict and the broken agreement between the startup Anthropic and the Pentagon have become more than a dispute over military AI contracts. They have also become a litmus test for what kind of company attracts the industry's most sought-after talent, writes The Wall Street Journal.

Since OpenAI's Pentagon deal, several top names have left the company. Meanwhile, Anthropic is strengthening its profile as an AI company where values carry real weight.

“This was about principle, not people,” wrote Caitlin Kalinowski, former head of hardware within OpenAI’s robotics division.

The Wall Street Journal

Anthropic’s Standoff With the Pentagon Shakes Up AI Talent Race

A dispute over how AI can be used by the military shows top employees are looking for more than just nine-figure pay packages.

By Meghan Bobrowsky

The Wall Street Journal, 9 March 2026

Anthropic’s standoff with the Defense Department has cost it Uncle Sam as a customer, but it has brought a momentary advantage in the ferocious talent war between rival artificial intelligence labs.

At least two high-level employees have resigned from OpenAI, citing values and principles, since the company said it had reached a deal with the Pentagon to deploy its AI models in classified settings. Anthropic was renegotiating its own Pentagon deal and seeking to weave in new safeguards around domestic surveillance and autonomous weapons when talks broke down, prompting the Trump administration to declare the company a supply-chain risk.

Caitlin Kalinowski, who oversaw hardware within OpenAI’s robotics division, wrote in a post on X over the weekend that she was quitting over the contract.

“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote. “This was about principle, not people.”

OpenAI’s CEO Sam Altman. (AP/TT)

In a separate post a few days earlier, Max Schwarzer, a vice president of research at OpenAI, wrote on X that he was leaving to join Anthropic.

“Many of [the] people I most trust and respect have joined Anthropic over the last couple of years, and I’m excited to work with them again. I have also been very impressed with Anthropic’s talent, research taste and values, and I’m excited to be part of what the company does next!” he wrote, adding that he was also leaving to return to individual contributor research work.

“We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” a spokeswoman for OpenAI said.

Anthropic has seen a surge of public goodwill since Chief Executive Dario Amodei said the company wouldn’t compromise its red lines to do a deal. Its Claude chatbot hit No. 1 in downloads on the Apple App Store, surpassing OpenAI’s ChatGPT for the first time. San Franciscans chalked messages of appreciation outside its headquarters. Even pop star Katy Perry posted a screenshot of her subscribing to a paid Claude plan with a heart around the page.

But no front of competition has mattered more than the red-hot contest for technical talent. And while researchers juggling competing offers have been able to command pay packages worth hundreds of millions of dollars, OpenAI’s recent defections offer a reminder that values have frequently trumped money for the most sought-after prospects.

“I think leading from values first has kind of allowed us to build a team that really believes in what we’re doing”

Dario Amodei, CEO of Anthropic

Anthropic was founded in 2021 by former OpenAI researchers, including Amodei, who felt that the company was rushing to build a commercial product without prioritizing safety. Since then, it has cultivated a reputation as a haven for researchers focused on the potential downsides of powerful AI, even as its models have established themselves as the preference of a wide range of business customers.

“I think leading from values first has kind of allowed us to build a team that really believes in what we’re doing,” Amodei said last week during a private session at Morgan Stanley’s Technology, Media & Telecom conference in San Francisco.

The Pentagon sought to use Anthropic’s customer relationships as a point of leverage in threatening to declare Anthropic a supply-chain risk, saying the designation would cut off other Defense Department contractors from working with the company. But Anthropic said the status applies only to the work contractors do for the department, and on Monday it sued to challenge the designation. A group of 37 employees of OpenAI and Google DeepMind filed a brief asking the court to side with Anthropic.

A test of how much Anthropic’s employees care about the company’s ethical orientation arrived last summer when Meta Platforms, seeking to supercharge its model development, mounted an aggressive campaign to woo researchers and engineers from rival labs, including OpenAI and Anthropic, dangling compensation packages in some cases worth hundreds of millions of dollars.

Anthropic develops the Claude chatbot. (Don Feria / AP)

OpenAI countered Meta’s offers with bonuses, and in December, got rid of its vesting cliff, meaning that new employees no longer had to wait an initial period before any of their stock began to vest.

Anthropic, according to Amodei, told employees it wasn’t going to alter employees’ compensation structure to respond to Meta’s blitz. “The thing we said to everyone was look, you’re here for the mission,” he said in the Morgan Stanley session. “We have a system and we’re going to hold to that. If you leave, you leave.”

Anthropic ended up losing only two employees to Meta, versus “several dozen” from OpenAI, he said. He noted that his company not only has all of its original co-founders but almost all of its first 20 employees.

“If you look at our retention rate, it’s just the best in the industry,” Amodei said. “We’re just clearly doing something different.”

In the days after OpenAI signed its contract with the Pentagon, the company went into damage control mode. Saying the rush to get a deal had “looked opportunistic and sloppy,” Chief Executive Sam Altman posted messages on X trying to explain and assuage concerns.

“The thing we said to everyone was look, you’re here for the mission”

Dario Amodei, CEO of Anthropic

Over the weekend following the contract signing, Altman hosted an “ask-me-anything” session on X, where he and two other employees answered questions. And on the next business day, Altman posted that he had been working with the Department of Defense to make some additions to the agreement “to make our principles very clear.”

The following day, Schwarzer resigned. Kalinowski announced she had quit a few days later.

“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the OpenAI spokeswoman said.

Anthropic isn’t immune to scrutiny of its own.

An Anthropic safety researcher, Mrinank Sharma, said in early February that he was leaving the company to explore a poetry degree and wrote in a letter to colleagues that the world was in peril from AI.

Sharma’s decision to leave was related in part to a recent company decision to modify a long-held safety policy, The Wall Street Journal previously reported.
