I watched a senior analyst spend three days last week doing something I did in forty minutes.

Same deliverable. Same data sources. Same quality bar. The difference: I used Claude to process the source material, cross-reference findings, and draft the initial structure. Then I spent the remaining time doing what actually matters: thinking about whether the conclusions held up.

He’s a smart guy. Decades of experience. But he’s decided that AI is a fad, or a crutch, or somehow beneath the work. So he’s doing it the way he’s always done it. And every week, the gap between his output and mine gets wider.

This isn’t a story about me being clever. It’s a story about what happens when capable people refuse to pick up the tools that are reshaping their profession.


I’m a Northrop Grumman Fellow and Chief Architect. I build digital ecosystems for one of the world’s largest defense contractors. I use AI every single day. Multiple models, multiple contexts, integrated into my actual workflow. I write with it. I architect with it. I debug with it. I research with it.

And here’s what I’ve learned: the people most at risk aren’t the ones who lack technical skill. They’re the ones who have plenty of skill and have decided that’s enough.


There’s a persistent misconception that “learning AI” means learning to write prompts. That the skill is in crafting the right incantation to make the machine produce good output.

It isn’t.

The skill is knowing what to ask. And knowing what to ask requires knowing your domain deeply enough to recognize when the answer is wrong, incomplete, or subtly misleading.

I can use Claude effectively for systems architecture because I’ve spent years doing systems architecture. I know what a good integration pattern looks like. I know when a proposed solution will fall apart at scale. I know which constraints the model doesn’t know about because they live in institutional knowledge, not documentation.

The model gives me leverage. My expertise gives me direction.

Someone who doesn’t understand the domain can prompt all day and produce confident, fluent, wrong output. Someone who understands the domain can use AI to move at a pace that was previously impossible.


This is the part that most AI discourse gets backwards. The conventional anxiety is that AI devalues human expertise. That if a machine can write code, analyze data, draft proposals, and summarize research, then the humans who used to do those things are redundant.

The opposite is true.

AI makes domain expertise more valuable, not less. Every organization now has access to a tireless junior analyst that can process information at scale. What they don’t have, and can’t automate, is the person who knows which questions to ask, which outputs to trust, and which conclusions require a second look.

That person is the domain expert who has integrated AI into their workflow. They’re doing the work of three people, and doing it better, because the bottleneck was never their thinking. It was the mechanical overhead that surrounded it.


A colleague asked me recently whether he should be worried about AI taking his job.

Wrong question.

The right question: should he be worried about the colleague down the hall, the one who uses AI, taking his job?

Because that’s the actual competitive dynamic playing out across every knowledge profession right now. It’s happening in law firms, consulting shops, engineering organizations, research labs, and newsrooms. The people who integrate AI into their work aren’t just slightly faster. They’re operating at a fundamentally different throughput. They take on more complex problems. They iterate faster. They catch things that manual review would miss.

And they’re making the people who refuse to adapt look slow.

This is uncomfortable to say directly, so most people don’t. They talk about AI in abstract, future-tense terms. “AI will eventually change how we work.” It’s changing how we work right now. The people who’ve adopted it are already pulling ahead. The gap compounds daily.


I should be clear about what “integrating AI” actually means, because it doesn’t mean what most corporate training programs think it means.

It doesn’t mean attending a lunch-and-learn about ChatGPT. It doesn’t mean using Copilot to autocomplete a few lines of code. It doesn’t mean asking a chatbot to summarize a meeting.

It means restructuring how you approach your core work to take advantage of what AI is good at, while applying your judgment to what it isn’t.

For me, that looks like this: when I’m designing a system architecture, I’ll describe the requirements and constraints to Claude and ask it to propose candidate approaches. It produces three or four options, each with tradeoffs articulated. I evaluate them against institutional knowledge the model doesn’t have. I push back. I ask it to stress-test the weakest option. I ask it to find failure modes in the strongest one.

The result: I get to the right answer faster, and I’ve pressure-tested it more thoroughly than I would have working alone. The model didn’t replace my judgment. It gave me more surface area to apply it.
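
Concretely, that loop is easy to script. Here’s a minimal sketch using the Anthropic Python SDK; the model id, prompts, and placeholder requirements are illustrative stand-ins, not my actual setup.

```python
# Minimal sketch of the propose / push-back / stress-test loop.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. Prompts and model id are illustrative.
import anthropic

client = anthropic.Anthropic()
history: list[dict] = []

def ask(prompt: str) -> str:
    """Send a prompt along with all prior turns, so the model keeps context."""
    history.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=2000,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

# Step 1: candidate approaches, tradeoffs made explicit.
candidates = ask(
    "Requirements and constraints: <your requirements here>. "
    "Propose three or four candidate architectures. "
    "For each, state the tradeoffs explicitly."
)

# Step 2: push back with constraints the model can't know about.
# Institutional knowledge goes here; this is where the human earns their keep.
revised = ask(
    "A constraint you don't have: <institutional constraint here>. "
    "Which candidates survive it, and why?"
)

# Step 3: stress-test the strongest surviving option.
failure_modes = ask(
    "Take the strongest surviving option and enumerate its failure modes "
    "at ten times the current scale."
)
print(failure_modes)
```

The design choice that matters is passing the full history on every call: the later stress-tests are then grounded in the model’s own earlier proposals and your pushback, not a fresh context.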


The resistance I encounter most often comes from experienced professionals. People who are good at what they do and see no reason to change. I understand the instinct. When you’ve built expertise over decades, the suggestion that you need a new tool can feel like an insult.

It isn’t. Your expertise is the whole point. AI without expertise produces plausible nonsense. Expertise without AI produces excellent work, slowly. The combination produces excellent work at speed.

And speed matters. Throughput matters. In every knowledge profession, the volume and complexity of work are increasing. The people who can handle that increase without burning out are the ones who’ve augmented their capacity. Everyone else is either drowning or triaging.


I’ll make the prediction plainly.

Within two years, “doesn’t use AI” will carry the same professional connotation as “doesn’t use email” did in 2005. You could still do the job. You’d just be doing it at a disadvantage that your peers wouldn’t tolerate for long.

The window for treating AI adoption as optional is closing. The knowledge workers who thrive will be the ones who treated AI as a force multiplier for their existing expertise. The ones who struggle will be the ones who waited for someone to tell them it was mandatory.

Nobody is going to tell you it’s mandatory. They’ll just start giving the interesting work to the person who gets it done in a third of the time.

Get ahead of this. Learn what AI can do. Learn what it can’t. Integrate it into the work you already do well. Or watch while someone else does exactly that, and wonder why the assignments stopped coming your way.


James (JD) Longmire is a Northrop Grumman Fellow, Chief Architect of Digital Ecosystems, and independent researcher in AI epistemology and governance. He writes at aithinkr.net.
