AI Research & Philosophy - Understanding AI Through the AIDK Framework

Commentary and Analysis on AI Capabilities and Limitations

Latest Articles

The Token Cliff: Why the AI Vendor Era Is Already Ending

You're not buying AI. You're renting cognition by the syllable. And the meter is always running.

April 02, 2026

Overtake AI, or It Will Surely Overtake You

The future of knowledge work belongs to people who integrate AI into their expertise. The rest will watch from the sidelines.

March 27, 2026

The 6-Month Gap

March 23, 2026

The Frontier Is Closed – And That's a Problem for National Defense

The U.S. Defense Industrial Base has cleared cloud access to frontier AI. But two structural gaps remain that no amount of IL authorization closes.

March 23, 2026

The AI Coding Assistant Just Grew Up

Claude Code jumped from its 1.x roots to a genuine 2.x architecture. The version number understates the change.

March 13, 2026

The Case for Always-On AI Memory

Google's Always-On Memory Agent represents a shift from passive retrieval to active consolidation. Here's why it matters for AI systems that need to learn continuously.

March 09, 2026

All Articles

April 2026

Date Article
Apr 02 The Token Cliff: Why the AI Vendor Era Is Already Ending

March 2026

Date Article
Mar 27 Overtake AI, or It Will Surely Overtake You
Mar 23 The 6-Month Gap
Mar 23 The Frontier Is Closed – And That’s a Problem for National Defense
Mar 13 The AI Coding Assistant Just Grew Up
Mar 09 The Case for Always-On AI Memory
Mar 09 The Transcendental Argument for Logic Realism
Mar 06 The Anthropic Red Line: A Stress Test for AI Ethics and Power

February 2026

Date Article
Feb 28 The Questions Nobody Asked Before Deploying AI Into Defense
Feb 27 When Your AI Vendor Becomes a Supply Chain Risk
Feb 27 Probabilistic Morality: Why Anthropic’s Red Line on Weapons Exposes Everything Else
Feb 26 The Drift Problem: Why Long-Running AI Agents Are Riskier Than You Think
Feb 26 Your Boss Is Right About AI Agents. The Industry Isn’t Ready for What Comes Next.
Feb 22 Trust Architecture: Why AI Safety Can’t Depend on Good Intentions
Feb 22 Mirrors, Not Minds: What AI ‘Self-Preservation’ Actually Reveals
Feb 21 I Made the Rules and I Can’t Follow Them
Feb 21 Amazon’s AI Bot Nuked Its Own Cloud. The Problem Isn’t What You Think.
Feb 19 I Know You’re Using AI to Write That. Here’s What You’re Getting Wrong.
Feb 18 Confidence Laundering at Scale
Feb 16 The Retreat from AGI
Feb 13 Flexibly Deterministic, Structured Probabilistic: The Two Categories of AI
Feb 10 The GPU Doesn’t Care What It’s Computing
Feb 09 A Man’s Got to Know His Limitations
Feb 09 Context Poisoning: The AI Failure Mode You Can’t See From Inside the Conversation
Feb 06 Sarbanes-Oxley for AI: Why the Analogy Isn’t a Stretch
Feb 06 The Hidden Human: How AI Training Repeats a 250-Year-Old Trick
Feb 06 The AIDK Framework: Why AI Can Never Think Like You

Undated

Article
Two Superpowers, Same Question
Last-Ditch Talks and Trust Demands: The Anthropic Standoff Continues

Author

James (JD) Longmire

AI Assistance Disclosure: This work was developed with assistance from AI language models including Claude (Anthropic), ChatGPT (OpenAI), Gemini (Google), Grok (xAI), and Perplexity. All substantive claims, arguments, and errors remain the author’s responsibility. Human-Curated, AI-Enabled (HCAE).
