Research Program: AI Limitations and Opportunities
Abstract
A framework for understanding artificial intelligence through the origination-derivation distinction. Human cognition has access to two coexistent primitives - Infinite Information Space ($I_\infty$) and the three fundamental laws of logic ($L_3$) - while AI systems are categorically derivative: they operate downstream of human-generated data and cannot access these primitives directly.
Program Overview
Central Research Question
What are the fundamental capabilities and limitations of AI systems, and can these be understood through a principled theoretical framework grounded in the distinction between origination and derivation?
Scope and Aims
This research program investigates AI through a novel theoretical lens: the claim that human cognition has access to two coexistent primitive systems - Infinite Information Space ($I_\infty$) and the three fundamental laws of logic ($L_3$) - whereas AI systems are categorically derivative, confined to operating downstream of human-generated data and unable to access these primitives directly.
Theoretical Foundation
Two Coexistent Primitives
Infinite Information Space ($I_\infty$)
- A non-physical space containing all possible configurations of information
- Includes contradictory configurations
- Not derived from or reducible to physical reality
- Serves as the “what” - the totality of conceivable content
Three Fundamental Laws of Logic ($L_3$)
- Law of Identity (A = A)
- Law of Non-Contradiction (not both A and not-A)
- Law of Excluded Middle (either A or not-A)
- Ontologically primitive - not derived from more basic principles
- Serve as the “how” - the navigation system through $I_\infty$
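The classical, truth-functional face of the three laws can be checked mechanically over the two truth values. A minimal sketch in Python (function names are illustrative; the framework's stronger claim - that the laws are ontologically primitive rather than derived - is of course not something a brute-force check can establish):

```python
# Brute-force check of the three fundamental laws of logic (L_3)
# over classical truth values. Names are illustrative.

TRUTH_VALUES = (True, False)

def law_of_identity(a: bool) -> bool:
    # A = A
    return a == a

def law_of_non_contradiction(a: bool) -> bool:
    # not both A and not-A
    return not (a and not a)

def law_of_excluded_middle(a: bool) -> bool:
    # either A or not-A
    return a or not a

# Each law holds for every classical truth value.
assert all(law_of_identity(a) for a in TRUTH_VALUES)
assert all(law_of_non_contradiction(a) for a in TRUTH_VALUES)
assert all(law_of_excluded_middle(a) for a in TRUTH_VALUES)
```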
Coexistence
- Neither primitive is derived from the other
- The laws do not generate the coherent subset of the space; they navigate it
- The space is not defined as “what the laws permit” - it exists independently
The Hierarchy of Actualization
- $I_\infty$ - all possible configurations, including contradictions
- Conceptually accessible - humans can explore contradictions, hold them in mind
- Logically navigable - the coherent subset, filtered by $L_3$
- Physically actualizable ($A_\Omega$) - what can be instantiated in reality
- AI outputs - derivative of human-generated data, already a filtered subset
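On one natural reading, the levels above form a chain of inclusions. Writing $C$ for the conceptually accessible configurations, $N_{L_3}$ for the logically navigable (coherent) subset, and $D$ for AI-reachable outputs - symbols introduced here purely for illustration, not part of the framework's notation:

```latex
D \;\subseteq\; A_\Omega \;\subseteq\; N_{L_3} \;\subseteq\; C \;\subseteq\; I_\infty
```

The placement of $D$ inside $A_\Omega$ reflects the claim that AI outputs are derivative of already-actualized, human-generated data.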
Origination vs Derivation
Origination
- Retrieving configurations from $I_\infty$ not derived from prior inputs
- Requires access to the space itself and $L_3$ for navigation
- The causal chain includes something entering from outside prior experience
Derivation
- Transformation of prior inputs according to learned patterns
- Causal chain: inputs → processing → outputs with no external entry point
- However sophisticated, the process remains confined to transformations within the training distribution
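The derivation pattern - inputs transformed into outputs with no external entry point - can be made concrete with a toy bigram text generator. A deliberately minimal sketch, not a claim about any real model: every word the generator emits is a recombination of its training corpus.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Learn which word follows which - pure derivation from inputs."""
    followers = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev].append(nxt)
    return followers

def generate(followers, start, length, seed=0):
    """Emit words by recombining learned transitions; nothing enters
    the causal chain from outside the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return out

corpus = "the cat sat on the mat and the dog sat on the rug".split()
followers = train_bigrams(corpus)
sample = generate(followers, "the", 8)

# Every emitted word already occurs in the corpus: derivation, not origination.
assert set(sample) <= set(corpus)
```

However long the generated sequence, its vocabulary and transitions are bounded by the training data - the toy analogue of "inputs → processing → outputs with no external entry point."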
Research Questions
Primary Question
What are the fundamental capabilities and limitations of AI systems, and how does the origination-derivation distinction illuminate both?
Subsidiary Questions
Theoretical
- Is the origination-derivation distinction categorical or a matter of degree?
- What is the nature of the faculty by which humans access the two primitives?
- Can the framework generate testable predictions?
Empirical (AI Limitations)
- Why is hallucination mathematically inevitable in LLMs?
- Why do reasoning failures persist despite scaling?
- Why does brittleness occur at distribution boundaries?
- Why do creativity measures show ceilings?
Empirical (AI Opportunities)
- In what domains do derivative systems excel?
- What forms of human-AI collaboration leverage both origination and derivation?
- How can AI augment rather than replace human origination?
Ethical
- What are the ethical implications of deploying derivative systems in domains requiring origination?
- How does the framework inform the alignment problem?
- What epistemic obligations follow from the origination-derivation distinction?
Research Agenda
AI Limitations
Hallucination as Mathematically Inevitable
- LLMs cannot learn all computable functions (Xu et al. 2024)
- Hallucinations stem from fundamental mathematical structure (Banerjee et al. 2024)
- Framework mapping: Derivative systems cannot verify against ground truth they don’t access
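One way to see the flavor of such impossibility results - a hedged counting sketch, not a reconstruction of the cited proofs: a model with a fixed description budget of $b$ bits can realize at most finitely many distinct functions, while the class of computable functions is countably infinite, so some functions necessarily lie outside any fixed model's reach:

```latex
\bigl|\{\text{functions realizable by a } b\text{-bit model}\}\bigr| \;\le\; 2^{b}
\;<\; \bigl|\{\text{computable functions}\}\bigr| \;=\; \aleph_0
```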
Reasoning Failures as Structural
- LLMs cannot perform provably correct general-purpose formal reasoning
- 29-90% reasoning failure rates across models (LogicAsker)
- Performance degrades with minor wording changes (Apple GSM-Symbolic)
- Framework mapping: Mimicking logical patterns ≠ grasping $L_3$ as primitives
Brittleness and Out-of-Distribution Failure
- Performance degrades sharply at distribution boundaries
- Models learn surface correlations, not underlying principles
- Framework mapping: Confined to training distribution, not navigating $I_\infty$
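Distribution-boundary brittleness can be illustrated with the simplest possible "model": an ordinary least-squares line fit to quadratic data on a bounded range. A purely illustrative sketch - the surface correlation (a line) tracks the data in-distribution but fails badly outside it:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

true_fn = lambda x: x * x            # the underlying principle
xs = [i / 10 for i in range(21)]     # training distribution: x in [0, 2]
ys = [true_fn(x) for x in xs]
a, b = fit_line(xs, ys)

predict = lambda x: a * x + b
in_dist_err = max(abs(predict(x) - true_fn(x)) for x in xs)
ood_err = abs(predict(10.0) - true_fn(10.0))   # far outside training range

# The learned surface correlation degrades sharply beyond the
# distribution boundary.
assert ood_err > 10 * in_dist_err
```

Here the in-distribution worst-case error stays below 1, while the error at x = 10 is roughly two orders of magnitude larger - the toy analogue of learning surface correlations rather than the underlying principle.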
Scaling Limits
- Diminishing returns documented across major labs
- 76% of surveyed AI researchers (AAAI 2025 report) consider scaling current approaches unlikely to achieve AGI
- Framework mapping: More derivation does not become origination
AI Opportunities
Domains Where Derivative Systems Excel
- Pattern recognition at scale
- Consistency checking within defined parameters
- Synthesis and summarization of existing information
- Execution of well-specified tasks
- Augmentation of human origination
Human-AI Collaboration Models
- Human origination + AI derivation as complementary
- AI as tool for exploring implications of human-originated ideas
- AI for identifying patterns humans might miss within existing data
- Human verification and selection from AI-generated options
Methodology
Theoretical Analysis
- Conceptual clarification of primitives and their relationships
- Logical analysis of the hierarchy of actualization
- Examination of entailments and implications
Literature Synthesis
- Systematic review of documented AI limitations
- Integration of findings across hallucination, reasoning, brittleness research
- Engagement with philosophy of mind, logic, and AI ethics literatures
Framework Application
- Mapping documented phenomena to theoretical framework
- Testing explanatory power against alternative accounts
- Identifying gaps and refinements needed
Open Questions
Theoretical
- What contains the two coexistent primitives? (Brute fact? Mind? Something else?)
- What is the nature of the faculty by which humans access the primitives?
- How does a physical brain connect to non-physical primitives?
Empirical
- What experimental designs could test the framework’s predictions?
- Are there documented phenomena that would falsify key claims?
- How does the framework account for apparent AI “insights” or novel outputs?
Ethical
- How do we determine appropriate vs inappropriate AI deployment domains?
- What governance structures follow from the framework?
- How do we balance AI benefits against risks of category confusion?
Planned Outputs
Commentary Articles
- Analysis of new hallucination research through framework lens
- Commentary on scaling debates and diminishing returns
- Response to claims of emergent capabilities
- Critique of AGI timeline predictions
Original Research Articles
| Topic | Audience | Status |
|---|---|---|
| AI Limitation Argument | AI/AGI researchers | Published |
| Logical Laws as Ontologically Primitive | Metaphysics | Planned |
| AI Ethics and Origination-Derivation | AI ethics | Planned |
Related Work
- AIDK Framework - Operational framework for AI limitations
- HCAE Model - Deployment implications
- Logic Realism Theory - Foundational $L_3$ framework