AI Research & Philosophy
Understanding AI Capabilities and Limitations Through the AIDK Framework
The Core Thesis
AI Dunning-Kruger (AIDK) describes the structural epistemic limitations of Large Language Models. This is not a correctable bug but an architectural condition: AI systems produce uniform confidence regardless of reliability, lack mechanisms for detecting competence boundaries, and cannot self-correct through encounter with reality.
The framework distinguishes:
- Origination: Retrieving configurations from Infinite Information Space that are not derived from prior inputs
- Derivation: Transformation of prior inputs according to learned patterns
Human cognition has access to two coexistent primitives: Infinite Information Space ($I_\infty$) and the three fundamental laws of logic ($L_3$). AI systems are categorically derivative, operating downstream of human-generated data and unable to access these primitives directly.
No amount of derivation becomes origination.
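One informal way to put the distinction in symbols, offered here as a sketch rather than a definition taken from the published framework: write $D$ for the corpus of prior inputs and $f$ for any learned transformation (both symbols are assumptions introduced for illustration), alongside the framework's own $I_\infty$.

$$
\text{Derivation: } x = f(d_1, \dots, d_k), \; d_i \in D
\qquad
\text{Origination: } x \in I_\infty \setminus \overline{D}
$$

Here $\overline{D}$ stands for the closure of $D$ under every available learned transformation. On this reading, iterating derivation ($f^{\,n}(D)$ for any $n$) never leaves $\overline{D}$, which is one way of expressing the thesis that derivation, however compounded, never becomes origination.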
Key Concepts
| Concept | Definition |
|---|---|
| AIDK | AI Dunning-Kruger: structural epistemic limitation (architectural, not correctable) |
| IDKE | Interactive Dunning-Kruger Effect: amplification when AI limitations meet human limitations |
| HCAE | Human-Curated, AI-Enabled: deployment framework stratified by epistemic authority |
| MAPT | Model Advanced Persistent Threat: security framing for AIDK |
Framework Papers
AIDK Framework
The complete theoretical framework establishing structural epistemic limitations in AI systems. Published on Zenodo with a DOI.
Read Framework
HCAE Deployment Model
Human-Curated, AI-Enabled: A tiered approach to AI deployment based on epistemic authority requirements.
Read Framework
Research Program
The full theoretical program investigating AI through the origination-derivation lens.
Read Program
Recent Articles
Real Work, Real Failure
What the Freelancer Test reveals about AI limitations in professional work contexts.
Read Article
What Can AI Actually Do?
A framework for understanding the genuine capabilities and limitations of AI systems.
Read Article
AI Cyber Risk: A Two-Front War
Security implications of deploying AI systems with structural epistemic limitations.
Read Article
Author
James (JD) Longmire
- ORCID: 0009-0009-1383-7698
- Email: jdlongmire@outlook.com
- GitHub: jdlongmire
- Substack: AI Research & Philosophy
AI Assistance Disclosure: This work was developed with assistance from AI language models including Claude (Anthropic), ChatGPT (OpenAI), Gemini (Google), Grok (xAI), and Perplexity. All substantive claims, arguments, and errors remain the author’s responsibility. Human-Curated, AI-Enabled (HCAE).
Archives
- Zenodo Community - Persistent DOI-minted archives
- AIDK Framework (DOI: 10.5281/zenodo.18316059) - Published framework
- GitHub Repository - Source code and development history