I Know You’re Using AI to Write That. Here’s What You’re Getting Wrong.

The Concern
The problem isn’t that you’re using AI. The problem is that you stopped thinking when you started prompting.
I can tell. Your audience can tell. LinkedIn’s algorithm can tell. Originality.ai’s research found that 54% of long-form LinkedIn posts are now likely AI-generated. Merriam-Webster made “slop” its 2025 Word of the Year. The backlash isn’t coming. It arrived.
But here’s what bothers me more than the aesthetics of AI slop: the epistemic problem underneath it.
You’re not just publishing bland content. You’re publishing unverified content with your name on it. And the system you used to produce it is structurally incapable of telling you when it got something wrong.
The Confidence Problem
Large language models don’t know what they don’t know. That’s not a metaphor. It’s architecture.
When ChatGPT or Claude produces a paragraph for your LinkedIn post or blog article, the system generates text by predicting the most probable next token given everything before it. The output sounds authoritative because fluency is what the system optimized for. Accuracy is a different axis entirely, and the two are only loosely correlated.
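To make that concrete, here is a toy sketch of the decoding step, with made-up scores standing in for a real model. The point is structural: nothing in this loop checks whether the chosen token is true, only whether it is probable given the context.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the context "The capital of Australia is".
# A model trained on the web may score the famous city higher than
# the correct answer, because frequency drives probability.
logits = {"Sydney": 3.1, "Canberra": 2.8, "Melbourne": 1.2}
probs = softmax(logits)

# Greedy decoding: emit the single most probable token.
# Fluent, confident, and in this toy case factually wrong.
next_token = max(probs, key=probs.get)
```

In this illustration the decoder emits “Sydney” because it is the most probable continuation, not because it is the capital. Accuracy never enters the computation.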
I’ve written extensively about what I call the AI Dunning-Kruger effect: the structural condition where an AI system produces outputs with uniform confidence regardless of reliability, lacks mechanisms for detecting its own competence boundaries, and cannot self-correct through encounter with reality. Human Dunning-Kruger is developmental and correctable. You can learn what you didn’t know. AI Dunning-Kruger is architectural and permanent. The system can’t learn what it doesn’t know because it has no access to “knowing” in the first place.
That’s the system you’re trusting to write your thought leadership.
The Interaction Makes It Worse
Here’s where it gets dangerous. When AI confidence meets human uncertainty, something I call the Interactive Dunning-Kruger Effect kicks in. The mechanism is straightforward:
You don’t know the details of a topic well enough to evaluate the output. The AI doesn’t know whether its output is reliable. The AI produces something fluent and confident-sounding. You read it, it sounds right, you hit publish. Your confidence increases despite no warrant for that increase.
The AI’s structural inability to flag its own unreliability gets laundered into your confident assertion. If someone challenges you on it, you defend the position. You’ve inherited the system’s groundless certainty as your own.
The people most vulnerable to this are exactly the people most likely to lean heavily on AI for content: those writing outside their deep expertise. The expertise gap that makes AI attractive is the same gap that makes its errors invisible.
The Minimum Viable Fix
I’m not telling you to stop using AI. I use AI tools constantly in my research and writing. The question is how you use them.
At minimum, you need a verification layer: a second system whose job is to check what the first system produced. Just as you wouldn’t publish a report without review, you shouldn’t publish AI-assisted content without a fact check.
My practical recommendation: use Perplexity.
Perplexity operates differently from ChatGPT or Claude in a way that matters here. It’s built around search and citation. When it gives you an answer, it shows you where that answer came from. You can follow the links. You can verify the claims. You can see whether the confident-sounding assertion in your draft actually has a source behind it or whether your primary model confabulated it.
This isn’t perfect. Perplexity has its own limitations. But as a verification step, it provides something your primary generation model cannot: traceable sourcing. You go from “this sounds right” to “here’s where this claim originates, and I can check it.”
That’s a meaningful upgrade in epistemic hygiene, and it takes five minutes.
What Good Practice Looks Like
If you’re going to use AI in your content workflow (and you probably should, for efficiency), here’s the discipline that separates credible work from slop:
Draft with your primary model. Use ChatGPT, Claude, whatever fits your workflow. Let it accelerate the writing process.
Verify with a secondary model. Run key claims, statistics, and attributions through Perplexity. Follow the citations. If a claim doesn’t trace back to a real source, cut it or find one yourself.
Add what the machine can’t. Your experience. Your judgment. Your specific perspective on why this matters. The AI can produce the structure. Only you can provide the grounding. If you don’t have something original to say about the topic, the AI isn’t going to find it for you.
Own the output. When your name goes on it, you’re the one making the epistemic claim. You’re asserting that this content is reliable enough to stake your professional reputation on. Act accordingly.
The Deeper Issue
This goes beyond content hygiene. What we’re watching on LinkedIn and across professional publishing is a live demonstration of what happens when powerful derivation tools get deployed without grounding.
AI systems are extraordinary at derivation: transforming inputs according to learned patterns, generating plausible text, synthesizing existing knowledge into coherent narratives. What they cannot do is originate: access novel insight, evaluate truth against reality, or know when they’ve crossed from reliable territory into fabrication.
That boundary matters. When you use AI to draft and publish without verification, you’re treating a derivation engine as if it were an origination engine. You’re assuming the system’s confidence tracks truth. It doesn’t. It tracks training data probability.
The 70-95% enterprise AI failure rate we keep seeing isn’t primarily a technology problem. It’s a deployment problem. Organizations focus on what the system can do (horizontal capability) without asking whether it should be trusted to do it in this context (grounding). The same error plays out every time someone publishes AI-generated content without checking whether it’s actually true.
The Ask
Slow down.
Not by much. Five minutes of verification per piece of content. Run your claims through Perplexity or a similar search-grounded tool. Follow the sources. Cut what you can’t verify. Add what only you can contribute.
Your audience can already detect the difference between someone who used AI thoughtfully and someone who outsourced their thinking. The detection tools will only get better. The algorithm penalties will only get sharper. And the reputational cost of publishing something confidently wrong keeps compounding.
The tool doesn’t think for you. It was never designed to. Use it like the powerful derivation engine it is, and provide the grounding it structurally cannot.
That’s not a limitation of AI. That’s your job.
James (JD) Longmire is a Northrop Grumman Fellow conducting independent research on AI epistemology and governance.