Two Superpowers, Same Question
Everyone’s tracking the wrong scoreboard.
Western tech coverage obsesses over benchmarks. Who topped the leaderboard this week? Whose model generates better images? China’s “catching up,” or maybe they already caught up, or maybe they’re still behind. The framing assumes the contest is about building the best AI.
It’s not.
The real fight is about something else: who decides what AI systems can and can't do? And the uncomfortable truth emerging this month is that democracies and authoritarian states alike are confronting the same question, just with different players claiming the authority to answer it.
On March 9, Anthropic, the AI company that built Claude, sued the U.S. Department of Defense.
The sequence tells the story. Anthropic had been working toward government contracts, including through the GSA’s “OneGov” program. But they drew two lines: no mass surveillance of Americans, and no fully autonomous weapons that select targets without human involvement.
Defense Secretary Hegseth’s response, delivered in February: the Pentagon should have “any lawful purpose” access to AI systems. Contractors don’t get to impose conditions beyond what the law requires.
He’s right.
The United States is a constitutional republic. Citizens elect representatives. Representatives make laws. Military leadership operates under civilian control with Congressional oversight, Inspector General audits, JAG legal review, and judicial accountability. The whole architecture is designed to ensure that coercive power flows from democratic legitimacy.
Anthropic is saying: even if all those systems approve something, we won’t build it.
That’s not defensible in republican terms.
Who elected Dario Amodei to decide what weapons America can deploy? Technical expertise doesn’t confer moral authority. Being a “safety-focused” company doesn’t mean your ethics trump democratic deliberation. If the legislature, the executive, the courts, and military legal counsel all approve something, Anthropic’s objection amounts to a private veto on collective self-governance.
We have a word for that: oligarchy.
The obvious counterarguments don't hold up.
Market freedom? Private entities can refuse to do business with anyone, but that’s not what’s happening. Anthropic wasn’t refusing government work. They wanted the contracts, just on their terms. You can exit the market. You don’t get to reshape the market’s rules based on your private ethics.
Conscience protections? We allow conscientious objectors. But individual soldiers opting out is different from weapons manufacturers dictating what the military can build. The baker can decline to make a cake. The baker doesn’t get to tell the wedding industry what ceremonies are acceptable.
Historical failures? Government oversight failed to prevent NSA surveillance, CIA torture, COINTELPRO. True. But the remedy for oversight failures is better oversight: congressional reform, judicial review, public pressure. The remedy isn’t handing veto power to unelected corporate executives who happen to have opinions about ethics.
When democratic institutions fail, you fix them. You don’t route around them by elevating private companies to governance roles they weren’t elected to fill.
There’s a version of corporate ethics that’s legitimate: refusing to break the law. Refusing to participate in clearly illegal activity even when ordered. Whistleblowing when regulations are violated. That’s accountability, holding government to its own rules.
That’s not what Anthropic did. They didn’t say the DoD was breaking laws. They said the DoD’s legal activities didn’t meet Anthropic’s ethical standards.
A private company imposing constraints beyond what law requires is claiming moral authority the republic didn’t grant them. That’s the issue.
Now consider China.
The usual framing treats Chinese AI governance as a black box. That’s not quite accurate. China has published AI regulations. Interim Measures for Generative AI Services. Cybersecurity Law amendments. Multiple agencies participate. The governance isn’t invisible. It’s just unified under one authority.
Both countries have governance structures. Both have oversight. Both have published rules.
The difference is that China is honest about it.
China’s model says explicitly: the Party decides and companies comply. There’s no pretense that private firms have independent authority to impose ethical limits. The source of legitimacy is the Party itself.
The U.S. model is different in kind, not degree. The republic decides (through elected representatives, constitutional constraints, civilian oversight, judicial review, and public accountability), and companies comply. The source of legitimacy is the citizens.
What Anthropic got wrong was treating their private ethics as a legitimate constraint on republican deliberation.
And yet.
The government’s response to Anthropic wasn’t “let’s have the legitimacy argument.” It was: you’re now classified as a supply chain risk alongside hostile foreign actors.
If “private companies don’t get to impose ethical constraints on national defense” is a winning argument, and it is, then make it publicly. Let Anthropic try to defend corporate ethical veto to voters. Let the public decide whether they want tech executives overruling military leadership.
Instead, the administration reached for national security classification: the one move that transforms a policy disagreement into a security threat and bypasses public debate entirely.
Why avoid an argument you’d win?
One possibility: they didn’t think they’d win cleanly. Americans might agree that companies shouldn’t veto democratic governance in principle, while also being uneasy about mass surveillance and autonomous weapons in practice. The administration didn’t want that conversation.
Another possibility: classification is just easier. Why argue when you can compel?
Either way, the government’s choice to invoke national security rather than democratic legitimacy tells you something about how confident they are in public deliberation.
Meanwhile, OpenAI announced a DoD partnership. At least one executive resigned over it. The market is sorting itself into two camps: companies that simply defer to government authority, and companies that set independent limits and pay the price for them.
The government demonstrated which path leads to contracts and which leads to blacklists.
Here’s where this lands.
Anthropic was wrong to claim authority they don’t have. Private companies don’t get to layer ethical constraints on top of republican governance. If you think surveillance is wrong, lobby Congress. If you think autonomous weapons violate the laws of armed conflict, file lawsuits. Work through democratic channels. Don’t announce that your company won’t build what elected government authorizes. That’s substituting private judgment for collective deliberation.
But the government was also wrong, not in substance, but in method. If corporate ethical constraint is illegitimate, say so publicly and defend it. Invoke democratic legitimacy, not national security. Make the republican argument to republicans. Win the debate you’d win.
Using classification to sidestep public argument suggests the administration wanted to avoid the conversation, even though they’d probably prevail.
China says the Party decides and companies comply.
America says the republic decides (through elected representatives, constitutional constraints, civilian oversight, judicial review, and public accountability), and companies comply.
That’s not the same thing with extra steps. That’s two fundamentally different sources of authority. The republic’s legitimacy comes from citizens. The Party’s comes from itself.
What Anthropic got wrong was treating their private ethics as a legitimate constraint on republican deliberation. What the government got wrong was responding with classification instead of the legitimacy argument they’d win.
The distinction from China isn’t procedural. It’s substantive. But it only stays substantive if the republic actually argues in public rather than governing by security classification.
Human-Curated, AI-Enabled (HCAE)
James D. Longmire | ORCID: 0009-0009-1383-7698
March 2026