Published Research

IQIUU Research

Advancing the frontier of intelligence. Our work spans recursive memory systems, temporal reasoning, world model synthesis, and the decomposition of cognition itself.

7 Papers Published · 4 Foundation Models · 8 Novel Concepts
Publications

Technical Reports

IQIUU-TR-2026-001

Recursive Memory Architecture: Persistent State in Post-Transformer Systems

M. Zenji, K. Narada, S. Orvall
IQIUU Research, 2026
Published
Abstract

We propose the Recursive Memory Architecture (RMA), in which memory does not merely store information but learns from its own access patterns. Unlike attention mechanisms, which are stateless between forward passes, RMA maintains a recursive memory graph that evolves with each interaction. The graph is parameterized by self-referential attention heads that track their own retrieval history, enabling the system to develop access-pattern eigenvalues that optimize future retrievals. We demonstrate a 47% improvement in long-horizon task completion over transformer baselines across six benchmark suites, with particularly strong gains on multi-step reasoning and persistent-state tasks.

Keywords: Recursive Memory Graph, Self-Referential Attention, Memory Eigenvalues, Persistence Gradient
Fig. 1 — Recursive Memory Architecture (RMA) data flow: input token stream → Layer 1 (memory graph core) → recursive attention → Layer 2 (persistence gradient) → output state vector, with an eigenvalue feedback loop driving self-referential updates. Key figures: +47% task completion, 12.8B parameters, 0.94 persistence score.
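The report's code is not public; as a rough illustration of the mechanism the abstract describes, here is a toy sketch of a memory whose retrieval weights adapt to its own access history. The class name, the log-count bias term, and the update rule are all hypothetical, not taken from the report.

```python
import numpy as np

class RecursiveMemoryGraph:
    """Toy sketch: memory slots whose retrieval scores are biased by
    their own past access frequency (hypothetical RMA-style update)."""

    def __init__(self, n_slots, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.standard_normal((n_slots, dim))
        self.values = rng.standard_normal((n_slots, dim))
        self.access_counts = np.zeros(n_slots)  # retrieval history
        self.lr = lr

    def retrieve(self, query):
        # Content similarity plus a bonus for frequently accessed slots
        scores = self.keys @ query + self.lr * np.log1p(self.access_counts)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        self.access_counts += weights  # memory "learns" from its own access
        return weights @ self.values
```

Each call to `retrieve` both reads the memory and updates the access history, so the read distribution itself drifts over time, which is the stateful behavior the abstract contrasts with stateless attention.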
IQIUU-TR-2026-002

Temporal Intelligence Graphs: Causal Reasoning Beyond Sequence Prediction

K. Narada, M. Zenji, L. Vostok
IQIUU Research, 2026
Published
Abstract

Current large language models process time as sequential tokens, collapsing rich temporal structure into flat positional encodings. Temporal Intelligence Graphs (TIG) introduce a graph-based temporal representation where events are nodes, causality is directed edges, and prediction emerges from graph traversal rather than next-token prediction. Each node carries a time-aware embedding that encodes not just content but temporal context, duration, and causal weight. TIG-augmented models show 3.2x improvement on causal reasoning benchmarks and demonstrate emergent abilities in counterfactual reasoning and temporal abstraction.

Keywords: Temporal Nodes, Causal Edges, Prediction Traversal, Time-Aware Embeddings
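The abstract's core move, prediction as graph traversal rather than next-token prediction, can be sketched with a minimal event graph. This toy version (all names and the greedy traversal rule are hypothetical, not from the report) follows the highest-weight causal edge from a starting event:

```python
from collections import defaultdict

class TemporalIntelligenceGraph:
    """Toy sketch: events as nodes, causality as weighted directed
    edges, prediction as greedy traversal (hypothetical TIG)."""

    def __init__(self):
        # event -> list of (effect_event, causal_weight)
        self.edges = defaultdict(list)

    def add_causal_edge(self, cause, effect, weight):
        self.edges[cause].append((effect, weight))

    def predict(self, event, horizon=3):
        """Follow the strongest causal edge up to `horizon` steps."""
        path = [event]
        for _ in range(horizon):
            successors = self.edges.get(path[-1])
            if not successors:
                break
            path.append(max(successors, key=lambda e: e[1])[0])
        return path
```

For example, with edges storm → power outage (0.8), storm → delayed flight (0.4), and power outage → data loss (0.6), `predict("storm")` traverses the causal chain storm → power outage → data loss instead of scoring flat token continuations.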
IQIUU-TR-2026-003

World Model Synthesis: From Language to Internal Reality Simulation

S. Orvall, M. Zenji
IQIUU Research, 2026
Under Review
Abstract

World Model Synthesis (WMS) enables artificial intelligence systems to build and maintain internal world models — not as explicit knowledge graphs, but as continuous latent spaces that simulate reality. The core innovation is the Latent World Tensor, a high-dimensional continuous representation that captures physical laws, social dynamics, and market structures as emergent properties of training. We show that WMS-equipped agents can predict physical outcomes, social dynamics, and market movements with accuracy surpassing specialized models, suggesting that general intelligence may require an internal simulation of the world it inhabits.

Keywords: Latent World Tensor, Reality Gradient, Simulation Coherence, Emergent Physics
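A continuous latent state that absorbs observations and can be rolled forward without new input is the standard shape of a world model. As a rough, assumption-laden sketch of the idea (linear maps and `tanh` dynamics chosen for brevity; none of this is the report's actual parameterization):

```python
import numpy as np

class LatentWorldModel:
    """Toy sketch: a continuous latent "world tensor" updated by
    observations and rolled forward by a learned transition
    (hypothetical WMS-style dynamics)."""

    def __init__(self, obs_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.encode = rng.standard_normal((latent_dim, obs_dim)) * 0.1
        self.transition = rng.standard_normal((latent_dim, latent_dim)) * 0.1
        self.decode = rng.standard_normal((obs_dim, latent_dim)) * 0.1
        self.state = np.zeros(latent_dim)

    def observe(self, obs):
        # Fold a new observation into the latent world state
        self.state = np.tanh(self.transition @ self.state + self.encode @ obs)

    def predict(self, steps=1):
        # Simulate forward internally, with no further observations
        z = self.state.copy()
        for _ in range(steps):
            z = np.tanh(self.transition @ z)
        return self.decode @ z
```

The key property is that `predict` runs the internal simulation alone: the agent answers "what happens next" by evolving its latent state, not by consulting an explicit knowledge graph.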
IQIUU-TR-2026-004

Eigenintelligence: A Unified Theory of Cognitive Modes in Artificial Systems

M. Zenji
IQIUU Research, 2026
Published
Abstract

We propose Eigenintelligence — the decomposition of intelligence into orthogonal cognitive modes: analytical, creative, empathetic, strategic, and predictive. Each mode is characterized by its own activation function, routing logic, and representational geometry within the latent space. Eigenintelligence Gradient Activation (EGA) allows dynamic mode switching mid-inference, enabling a single model to operate across the full cognitive spectrum without mode collapse. Empirically, EGA-equipped models outperform both specialist and mixture-of-experts baselines on composite intelligence benchmarks, suggesting that intelligence is not monolithic but decomposes into a finite basis of cognitive eigenvectors.

Keywords: Cognitive Modes, Eigenvalue Decomposition, Mode Switching, Gradient Activation
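The dynamic mode switching described above resembles a softmax gate over a small basis of modes. The following toy sketch (the routing matrix, temperature knob, and blending rule are hypothetical illustrations, not the paper's EGA mechanism) shows how lowering the temperature sharpens commitment to a single mode:

```python
import numpy as np

MODES = ["analytical", "creative", "empathetic", "strategic", "predictive"]

def mode_weights(x, routing, temperature=1.0):
    """Softmax routing over cognitive modes; lower temperature
    commits harder to the single best-matching mode."""
    logits = routing @ x / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

def ega_forward(x, routing, mode_outputs, temperature=1.0):
    """Blend per-mode outputs by their routing weights
    (soft mode switching rather than a hard argmax)."""
    w = mode_weights(x, routing, temperature)
    return sum(wi * out for wi, out in zip(w, mode_outputs))
```

Because the blend is soft, the model can sit between modes mid-inference rather than collapsing to one, which is the failure mode ("mode collapse") the abstract says EGA avoids.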
Domains

Research Areas

M
Memory Systems
Recursive and persistent memory architectures that evolve beyond stateless attention. Self-referential graphs, memory eigenvalues, and persistence gradients.
2 papers · 1 model
T
Temporal Intelligence
Graph-based temporal reasoning, causal edge networks, and time-aware embeddings. Moving prediction beyond sequence to causality.
1 paper · 1 model
W
World Models
Continuous latent world tensors that simulate reality. Internal physics, social dynamics, and emergent system modeling from language alone.
1 paper · 1 model
C
Cognitive Architecture
Eigenintelligence decomposition, cognitive mode switching, and gradient activation. A unified theory of how intelligence decomposes into basis vectors.
2 papers · 1 model
D
Dark Cognition
Investigating the latent representations that emerge between explicit reasoning steps. Understanding what models know that they cannot articulate.
1 paper · upcoming
Q
Quantum-Classical Interface
Theoretical frameworks for hybrid quantum-classical intelligence systems. Exploring superposition-based cognition and entangled reasoning paths.
Exploratory · 2027
Collaborate

Join Our Research Team

We are looking for researchers who think beyond current paradigms. If you work on memory, temporal reasoning, world models, or cognitive architecture — we want to talk.

Contact Research