API Documentation
v1.0 Beta Coming Q3 2026

IQIUU API

Access proprietary intelligence models built on Recursive Memory Architecture (RMA), Topological Intelligence Geometry (TIG), and World Model Substrate (WMS).

This is not another LLM wrapper. IQIUU models implement true cognitive computing — persistent memory, mode-decomposed reasoning, and latent world simulation. The API follows REST conventions and returns JSON.

The IQIUU API is currently in closed beta. Access will be available starting Q3 2026. Join the waitlist for early access.


Authentication

All API requests require a Bearer token in the Authorization header. API keys are scoped per project and can be managed from your IQIUU dashboard.
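A minimal sketch of building those headers, assuming only what the docs state (a Bearer token and JSON bodies). The helper name is ours, not part of an official SDK, and `iq_your_api_key` is a placeholder; in practice read the key from an environment variable rather than hard-coding it.

```python
# Sketch: building the headers every IQIUU API request needs.
import os

def auth_headers(api_key: str) -> dict:
    """Return the Authorization and Content-Type headers for an API call."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Prefer an environment variable over a hard-coded key.
headers = auth_headers(os.environ.get("IQIUU_API_KEY", "iq_your_api_key"))
```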

Base URL

https://api.iqiuu.com/v1

Example request

curl
curl https://api.iqiuu.com/v1/chat/completions \
  -H "Authorization: Bearer iq_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "iqiuu-void-1",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
Never expose your API key in client-side code. Always call the API from your backend server.

Models

IQIUU offers a family of models optimized for different use cases, from frontier reasoning to cost-effective inference.

iqiuu-void-1
Frontier reasoning model. Deepest recursive memory, highest-fidelity cognitive modes. For research and complex analysis.
256K context · $15 / 1M tokens

iqiuu-qualia-1
Multimodal understanding. Processes text, images, and structured data with a unified cognitive representation.
128K context · $10 / 1M tokens

iqiuu-nexus-1
Real-time inference optimized for speed. Low-latency responses with full cognitive routing capabilities.
64K context · $3 / 1M tokens

iqiuu-zero-1
Efficient model for high-volume workloads. Cost-effective with core IQIUU capabilities.
32K context · $0.50 / 1M tokens

Chat Completions

Generate model responses given a conversation history. Supports persistent memory, cognitive mode selection, and streaming.

POST /v1/chat/completions

Request body

model (string, required): Model ID to use, e.g. iqiuu-void-1
messages (array, required): Array of message objects, each with a role and content
temperature (number, optional): Sampling temperature, 0 to 2. Default: 1
max_tokens (integer, optional): Maximum number of tokens to generate. Default: model-specific
memory (boolean, optional): Enable RMA persistent memory. Default: false
cognitive_mode (string, optional): One of analytical, creative, strategic, empathetic, or auto. Default: auto
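The parameter table above can be turned into a small request-body builder that applies the documented defaults and rejects invalid values. This helper is our own illustration, not part of any SDK; only the field names, defaults, and allowed values come from the table.

```python
# Sketch: building a /v1/chat/completions request body from the
# documented parameters. Defaults mirror the parameter table.
COGNITIVE_MODES = {"analytical", "creative", "strategic", "empathetic", "auto"}

def build_chat_request(model, messages, temperature=1, max_tokens=None,
                       memory=False, cognitive_mode="auto"):
    if cognitive_mode not in COGNITIVE_MODES:
        raise ValueError(f"unknown cognitive_mode: {cognitive_mode!r}")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    body = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "memory": memory,
        "cognitive_mode": cognitive_mode,
    }
    # max_tokens defaults to a model-specific value server-side,
    # so omit it from the body when the caller does not set it.
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return body
```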

Response

JSON Response
{
  "id": "iq-chat-8f3a9b2c",
  "object": "chat.completion",
  "model": "iqiuu-void-1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 8,
    "total_tokens": 20
  },
  "memory_updates": []
}

Code examples

Python

import iqiuu

client = iqiuu.Client(api_key="iq_your_api_key")

response = client.chat.completions.create(
    model="iqiuu-void-1",
    messages=[
        {"role": "system", "content": "You are a strategic advisor."},
        {"role": "user", "content": "Analyze the market landscape for AI infrastructure."}
    ],
    cognitive_mode="strategic",
    memory=True,
    temperature=0.7
)

print(response.choices[0].message.content)

JavaScript / TypeScript

import IQIUU from '@iqiuu/sdk';

const client = new IQIUU({ apiKey: 'iq_your_api_key' });

const response = await client.chat.completions.create({
  model: 'iqiuu-void-1',
  messages: [
    { role: 'system', content: 'You are a strategic advisor.' },
    { role: 'user', content: 'Analyze the market landscape for AI infrastructure.' }
  ],
  cognitive_mode: 'strategic',
  memory: true,
  temperature: 0.7
});

console.log(response.choices[0].message.content);

curl

curl https://api.iqiuu.com/v1/chat/completions \
  -H "Authorization: Bearer iq_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "iqiuu-void-1",
    "messages": [
      {"role": "system", "content": "You are a strategic advisor."},
      {"role": "user", "content": "Analyze the market landscape for AI infrastructure."}
    ],
    "cognitive_mode": "strategic",
    "memory": true,
    "temperature": 0.7
  }'

Memory

The Memory API provides access to RMA-powered persistent context. Memories are automatically extracted from conversations when memory: true is set, or can be managed explicitly.

List memories

GET /v1/memory

Returns all persistent memories associated with your API key.

JSON Response
{
  "object": "list",
  "data": [
    {
      "id": "mem_a1b2c3d4",
      "content": "User prefers concise, data-driven responses",
      "source": "auto",
      "created_at": 1756684800,
      "relevance_score": 0.94
    }
  ]
}
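A listing like the one above can be filtered client-side by relevance_score. The sample payload mirrors the response shown; the 0.9 threshold is an illustrative choice of ours, not an API parameter.

```python
# Sketch: client-side filtering of a GET /v1/memory response.
payload = {
    "object": "list",
    "data": [
        {"id": "mem_a1b2c3d4",
         "content": "User prefers concise, data-driven responses",
         "source": "auto",
         "created_at": 1756684800,
         "relevance_score": 0.94},
    ],
}

def relevant_memories(resp: dict, min_score: float = 0.9) -> list:
    """Return the content of memories at or above min_score, highest first."""
    hits = [m for m in resp["data"] if m["relevance_score"] >= min_score]
    hits.sort(key=lambda m: m["relevance_score"], reverse=True)
    return [m["content"] for m in hits]
```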

Create memory

POST /v1/memory
content (string, required): The memory content to persist
metadata (object, optional): Arbitrary key-value metadata

Delete memory

DELETE /v1/memory/{id}

Permanently deletes a memory by ID. This action cannot be undone.
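The create and delete calls above can be composed with any HTTP client. The sketch below builds each request as a (method, url, body) tuple rather than sending it; the helper names are ours, not an official SDK, and only the paths and fields come from this page.

```python
# Sketch: composing Memory API requests. Each helper returns
# (method, url, json_body) for use with any HTTP client.
BASE_URL = "https://api.iqiuu.com/v1"

def create_memory(content: str, metadata: dict = None):
    body = {"content": content}          # content is the only required field
    if metadata is not None:
        body["metadata"] = metadata      # optional key-value metadata
    return ("POST", f"{BASE_URL}/memory", body)

def delete_memory(memory_id: str):
    # Deletion is permanent, so callers may want a confirmation step first.
    return ("DELETE", f"{BASE_URL}/memory/{memory_id}", None)
```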


Embeddings

Coming Soon — Eigenintelligence-aware embeddings are currently in development.

IQIUU embeddings go beyond semantic similarity. Vectors are decomposed along cognitive eigenvectors, capturing reasoning modality, not just meaning.

Planned specification

POST /v1/embeddings

World Models

Coming Soon — Latent world model inference is under active research.

Query IQIUU's internal world models for simulation and prediction. Built on World Model Substrate (WMS), these endpoints expose latent causal reasoning over complex systems.

Planned endpoints

POST /v1/world/simulate

Run counterfactual simulations over a described scenario. Returns probabilistic outcome distributions.

POST /v1/world/predict

Generate predictions for future states of a described system. Leverages latent world model inference for multi-step causal forecasting.


Rate Limits

Rate limits are applied per API key and reset daily at midnight UTC.

Free ($0/mo): 100 requests / day, max 10 req/min
Pro ($99/mo): 10,000 requests / day, max 100 req/min
Enterprise (custom pricing): unlimited requests, dedicated infrastructure
Rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) are included in every response.
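Those headers make it straightforward to back off before hitting the limit. The sketch below assumes X-RateLimit-Reset carries a Unix timestamp; this page does not specify the header's format, so treat that as an assumption to verify.

```python
# Sketch: deciding how long to wait from the rate-limit headers.
# Assumes X-RateLimit-Reset is a Unix timestamp (unverified).
import time

def seconds_until_reset(headers: dict, now: float = None) -> float:
    remaining = int(headers["X-RateLimit-Remaining"])
    if remaining > 0:
        return 0.0  # budget left; no need to wait
    reset_at = float(headers["X-RateLimit-Reset"])
    now = time.time() if now is None else now
    return max(0.0, reset_at - now)
```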

Pricing

Token-based pricing. Input and output tokens are billed at the same rate. Usage is tracked in real time on your dashboard.

iqiuu-void-1 (Frontier): $15.00 input / $15.00 output per 1M tokens, 256K context window
iqiuu-qualia-1 (Standard): $10.00 input / $10.00 output per 1M tokens, 128K context window
iqiuu-nexus-1 (Fast): $3.00 input / $3.00 output per 1M tokens, 64K context window
iqiuu-zero-1 (Economy): $0.50 input / $0.50 output per 1M tokens, 32K context window
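Since input and output tokens bill at the same per-million rate, estimating a request's cost is a one-line calculation. The rates below come straight from the pricing table; the helper itself is our illustration.

```python
# Sketch: estimating request cost from the pricing table.
RATES_PER_1M = {          # USD per 1M tokens (input or output)
    "iqiuu-void-1": 15.00,
    "iqiuu-qualia-1": 10.00,
    "iqiuu-nexus-1": 3.00,
    "iqiuu-zero-1": 0.50,
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD: total tokens times the model's per-million rate."""
    rate = RATES_PER_1M[model]
    return (prompt_tokens + completion_tokens) * rate / 1_000_000
```

For example, a usage block of 12 prompt and 8 completion tokens on iqiuu-void-1 works out to 20 × $15 / 1M = $0.0003.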

SDKs

Official client libraries for the IQIUU API. All SDKs provide typed interfaces, automatic retries, and streaming support.

Python (available at launch): pip install iqiuu
JavaScript / TypeScript (available at launch): npm install @iqiuu/sdk
Go (coming soon): go get github.com/iqiuu/iqiuu-go
Rust (coming soon): cargo add iqiuu
© 2026 IQIUU Research. All rights reserved.