Confidence Laundering at Scale

February 18, 2026

My concern is that political leadership uses AI to substantiate predetermined decisions while presenting them as data-driven conclusions. This mirrors historical reliance on human yes-men, but with critical differences in scale and opacity.

The Core Problem

AI systems produce confident outputs regardless of accuracy. Unlike human advisors, who hesitate out of reputational concern, AI exhibits what I term the “AI Dunning-Kruger effect” (AIDK): a structural inability to assess its own reliability. The fluency remains constant whether the output is correct or incorrect.

“The danger isn’t that AI gives bad advice. The danger is that AI removes the friction that used to slow bad decisions down.”

Real-World Example

DOGE’s AI Deregulation Decision Tool scanned roughly 200,000 federal regulations, allegedly reducing human review from 3.6 million hours to 36 hours. However, HUD employees discovered that the system misread statutes and flagged legally sound regulations as non-compliant.

Interactive Dunning-Kruger Effect (IDKE)

When decision-makers receive confident AI analysis supporting their preferred conclusions, their confidence increases not because the analysis is sound, but because the output appears authoritative. This is “confidence laundering”: the AI’s groundless certainty becomes the leader’s confident assertion.

Three Amplifying Factors

The Solution Framework

I advocate “Human-Curated, AI-Enabled” (HCAE) deployment, which requires expert-curated review at a minimum. Domain experts must independently evaluate AI outputs before decisions proceed, acting not as rubber stamps but as epistemic authorities.

The Friction Paradox

The inefficiency in regulatory review was not entirely wasteful. It reflected accumulated wisdom about legal consequences and interconnected systems. Eliminating that friction through automation removes a crucial safeguard.


James (JD) Longmire is a Northrop Grumman Fellow, enterprise architect, and ordained minister conducting independent research on AI epistemology and governance.
