Anthropic and the Middle Finger to Trump: Are AI Ethics Worth $200 Million?
There's a specific moment when Silicon Valley's grand declarations of principle collide with reality, and that moment usually takes the form of a multi-figure check. On this blog we often talk about how generative AI is changing art, music, and creativity: just look at our experiments with IAIA, image generation, and video. But what happens when that same technology is used for war? On Friday, February 27, 2026, Anthropic gave the Trump administration a clear answer, drawing a red line that sent shockwaves through the tech industry.
The Context: $200 Million to Sell Silicon's Soul
It all stems from a $200 million contract Anthropic had signed with the Pentagon. Their model, Claude, is no ordinary AI: it was the only generative system to have obtained security clearance to handle classified US Defense information, and it even proved instrumental in the operations that led to the capture of Nicolás Maduro.
But the idyll shattered when Secretary of War Pete Hegseth demanded unrestricted access, threatening to invoke the Defense Production Act (a 1950s law reserved for national emergencies) to force the company to divest its technology. The ultimatum was clear: either Anthropic relaxed its safety standards by Friday, or the government would label it a "supply chain risk."
Why Did Anthropic Reject Trump's Ultimatum?
This isn't naive pacifism, but a clear-eyed view of the limits of technological power. CEO Dario Amodei chose to forgo millions of dollars in funding and accept a total ban from federal agencies, ordered by Donald Trump, rather than submit. The refusal rests on two fundamental ethical pillars of the company's policy:
• No to lethal autonomous weapons: Anthropic strongly opposes the use of AI to power weapons systems capable of shooting, targeting, or killing without a human in the decision-making loop.
• No to mass surveillance: The company considers the use of its models to indiscriminately monitor US citizens on domestic soil "illegitimate" and "open to abuse."
OpenAI and the Opportunism Fair
Where there's an idealist who steps back, there's always a capitalist ready to step forward. Just hours after Trump's ban on Anthropic, Sam Altman announced on X that OpenAI had finalized a deal with the Pentagon to integrate its models into the War Department's classified infrastructure.
Ironically, OpenAI claims to have secured from the Pentagon the very same safeguards (no autonomous weapons, no domestic surveillance) whose rejection caused Anthropic's break with the government. The move smacks of cynical opportunism: exploiting a rival's ethical dispute to win a client, while promising restrictions that the Trump administration had just demonstrated it would not tolerate.
Ethics vs. Algorithm
I find it poetically rebellious that consistency is chosen over the profit algorithm. Anthropic has demonstrated that even a silicon entity (or rather, whoever programs it) can have a backbone.
We seek beauty everywhere. And if we don't find it, we create it. But beauty also lies in knowing how to say no when those in power ask you to turn a tool of evolution into a weapon of blind destruction. It's the same difference as between using industrial excellence to build lasting works and using it to indulge the madness of the moment. Anthropic lost $200 million, but perhaps it saved the soul of the industry. And in a world of Big Tech ready to bow for convenience, a well-placed middle finger is a true work of art.
Digital creative, musician, and storyteller. I explore the intersection of humanity and technology, telling stories of AI, music, and real life. Welcome to my organized mess.
