How It Works

Five layers of structural intelligence.

Raw signals in. Compounding intelligence out. Each layer compresses, verifies, and routes — building an architecture that gets smarter with every session.

The model is the pen. The structure is the brain.

Capture
Synthesis
Knowledge
Intelligence
Evolution

01 — Capture

Your AI watched you solve that. It won't forget.

Every debugging session, every architectural decision, every workaround — captured silently in the background. No tagging. No note-taking. You just work.

Zero-effort capture · Full session context · Errors & fixes linked

You solved it once. That should be enough.

02 — Synthesis

A thousand events. A handful of lessons.

Raw session data is noise. ACE compresses it — extracting the discoveries, decisions, and patterns that actually matter. What took you an hour to figure out becomes a single retrievable insight.

Automated extraction · Pattern recognition · ~16:1 compression
Thousands of events per session — distilled to the insights that count

03 — Knowledge

Bad knowledge is worse than no knowledge.

Not everything makes it. Observations are scored, deduplicated, and verified before they become permanent knowledge. Outdated patterns get flagged. Contradictions get surfaced. Only what's proven survives.

Confidence scoring · Contradiction detection · Staleness tracking

Your AI doesn't just remember — it knows what to trust.

04 — Intelligence

Agent B already knows what Agent A learned last week.

When any agent hits a problem, ACE surfaces knowledge from every prior session — including solutions found by different agents, in different contexts, days ago. The hour you spent debugging? Every future session skips it entirely.

Cross-agent memory · Contextual retrieval · 5 min vs. 1 hour

Stop re-explaining your codebase. It already knows.

05 — Evolution

Monday morning. Your AI is smarter than Friday.

Between sessions, ACE consolidates what it's learned — reinforcing patterns that held up, retiring knowledge that didn't, and identifying gaps it should research next. You didn't do anything. It just got better.

Autonomous consolidation · Self-healing knowledge · Gap-driven research

The intelligence curve only goes up.

Inside the capture layer

A pipeline, not a pile.

Every tool call — file reads, terminal commands, search queries, edits — is intercepted before it reaches the model. The capture layer extracts file paths, classifies output types, detects error patterns, and timestamps everything. The raw session becomes structured data.

Event Stream ● Recording
14:23:01  Read  shared/config.py → 142 lines
14:23:08  Bash  pytest tests/ → 3 failed, 47 passed
14:23:15  Grep  pattern: "timeout" → 4 matches in 2 files
14:23:22  Edit  shared/config.py:87 → timeout 30→120
14:23:30  Bash  pytest tests/ → 50 passed ✓
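The interception step above can be sketched in a few lines. Everything here is illustrative, not ACE's actual API: `capture`, the regexes, and the `Event` shape are assumptions standing in for a much richer real capture layer.

```python
import re
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    """One structured record per intercepted tool call."""
    tool: str
    args: str
    output: str
    timestamp: float
    paths: list = field(default_factory=list)
    is_error: bool = False

# Hypothetical patterns — a real capture layer would classify far more.
PATH_RE = re.compile(r"[\w./-]+\.(?:py|js|ts|go|rs|json|yaml)")
ERROR_RE = re.compile(r"\b(error|failed|traceback|exception)\b", re.IGNORECASE)

def capture(tool: str, args: str, run) -> Event:
    """Run a tool call, but emit a structured event instead of raw text."""
    output = run(args)
    return Event(
        tool=tool,
        args=args,
        output=output,
        timestamp=time.time(),
        paths=PATH_RE.findall(args + " " + output),
        is_error=bool(ERROR_RE.search(output)),
    )

# A failing test run becomes a classified, path-linked event.
event = capture(
    "Bash", "pytest tests/",
    lambda _: "3 failed, 47 passed (see tests/test_config.py)",
)
```

The point of the wrapper is that nothing downstream ever sees unstructured text: by the time an event leaves the capture layer it already carries its timestamp, the files it touched, and an error flag.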

Inside the synthesis layer

From 50 events to 3 observations.

A fast classifier reads the raw event stream and extracts typed observations — each with a category, confidence score, and evidence polarity. Thousands of tokens of tool output compress into structured knowledge. The compression ratio is roughly 16:1.

Raw Event

Bash: pytest tests/ → 3 failed
FAILED test_config_timeout[30s] - TimeoutError
FAILED test_config_timeout[60s] - TimeoutError
FAILED test_config_retry[default] - ConnectionResetError

discovery

Default timeout of 30s insufficient for integration tests

Tests requiring external API calls fail at 30s and 60s. Setting timeout to 120s resolves all 3 failures. Root cause: upstream service latency under load.

confidence: 0.82 · polarity: positive · files: shared/config.py
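The observation above has a regular shape, which can be sketched as a typed record plus a toy synthesis rule. The `Observation` fields mirror the mockup; the fail → edit → pass heuristic in `synthesize` is a stand-in assumption for the real classifier, which the text says is model-driven.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A typed, scored unit of synthesized knowledge (illustrative shape)."""
    category: str        # e.g. "discovery", "decision", "workaround"
    summary: str
    detail: str
    confidence: float    # 0.0 – 1.0
    polarity: str        # "positive" (it worked) or "negative" (it didn't)
    files: list

def synthesize(events):
    """Collapse a fail → edit → pass run into one observation (toy heuristic)."""
    failed = any("failed" in e for e in events)
    passed = "passed" in events[-1] and "failed" not in events[-1]
    edits = [e for e in events if e.startswith("Edit:")]
    if failed and passed and edits:
        return Observation(
            category="discovery",
            summary="Fix verified: " + edits[-1].removeprefix("Edit: "),
            detail=f"{len(events)} raw events collapsed; final test run passed.",
            confidence=0.8,
            polarity="positive",
            files=["shared/config.py"],  # would be extracted from the edit
        )
    return None

raw_events = [
    "Bash: pytest tests/ → 3 failed (TimeoutError at 30s and 60s)",
    "Edit: shared/config.py:87 → timeout 30→120",
    "Bash: pytest tests/ → 50 passed",
]
obs = synthesize(raw_events)
```

Many lines of tool output go in; one scored, categorized record comes out — that asymmetry is the compression.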

Inside the knowledge layer

Only verified knowledge enters the retrieval surface.

Not every observation deserves to persist. The graduation pipeline filters through confidence gates, deduplicates against existing knowledge, and promotes only what meets the bar. 98.6% of raw events are noise. The pipeline finds the signal.

Graduation Pipeline
8,800 raw events
550 observations
180 graduated
120 verified knowledge
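The funnel above is, at its core, two gates applied in sequence. A minimal sketch, with assumed numbers: the 0.7 threshold and the exact-string dedup are illustrative only — a real pipeline would use tuned gates and semantic similarity, not string equality.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    summary: str
    confidence: float

def graduate(observations, existing, min_confidence=0.7):
    """Confidence-gate and deduplicate before anything becomes knowledge."""
    seen = {k.summary for k in existing}
    graduated = []
    for obs in observations:
        if obs.confidence < min_confidence:
            continue                 # confidence gate: unproven stays out
        if obs.summary in seen:
            continue                 # dedup against existing knowledge
        seen.add(obs.summary)
        graduated.append(obs)
    return graduated

candidates = [
    Observation("30s timeout too low for integration tests", 0.82),
    Observation("30s timeout too low for integration tests", 0.82),  # duplicate
    Observation("Maybe the cache is involved?", 0.35),               # low confidence
]
kept = graduate(candidates, existing=[])
```

Three candidates in, one survivor out: the duplicate and the hunch are both dropped before they can pollute the retrieval surface.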

Inside the intelligence layer

5 minutes instead of an hour.

An agent hits a 401 error debugging an API integration. Instead of re-tracing the full authentication cascade, the retrieval engine surfaces a diagnosis from three days ago — by a different agent, in a different session. The fix was already learned.

Negative knowledge is the most valuable kind. The system remembers what didn't work — so the same mistake never recurs across any agent.

Retrieval Engine
401 authentication failure · API endpoint
#1

OAuth token not usable as direct API key

Subscription OAuth token (claudeAiOauth.accessToken) returns 401 when passed as x-api-key. Must use CLI bridge for auth.

cross-agent score: 0.91 · 3 days ago
#2

Auth cascade: env → .env file → CLI credentials

Centralized in auth_resolver.py. All components use resolve_auth() + build_cli_subprocess_env().

same-project score: 0.87 · 5 days ago
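One way such a ranking could work, sketched under assumptions: keyword overlap for relevance, exponential decay for age. Real retrieval would almost certainly use embeddings; this toy scorer is just enough to show why a three-day-old cross-agent fix can outrank everything else.

```python
import math
import time

def score(query_terms, chunk, now=None, half_life_days=14.0):
    """Rank a knowledge chunk by term overlap, decayed by age (toy scoring)."""
    now = now or time.time()
    terms = set(query_terms)
    overlap = len(terms & set(chunk["text"].lower().split())) / len(terms)
    age_days = (now - chunk["created"]) / 86400
    recency = math.exp(-age_days * math.log(2) / half_life_days)  # 0.5 per half-life
    return overlap * recency

now = time.time()
chunks = [
    {"text": "oauth token returns 401 when passed as api key",
     "created": now - 3 * 86400},
    {"text": "auth cascade env then dotenv then cli credentials",
     "created": now - 5 * 86400},
]
query = ["401", "authentication", "api", "token"]
ranked = sorted(chunks, key=lambda c: score(query, c, now), reverse=True)
```

Note that the winner need not come from this agent or this session — the scorer only sees text and age, which is what makes the memory cross-agent.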

Inside the evolution layer

By morning, the work is further along.

Between sessions, the consolidation infrastructure reviews high-salience observations against existing knowledge. What held up gets reinforced. Contradictions get flagged. Gaps in coverage trigger targeted research. The system is designed to maintain itself — so every session starts with better context than the last.

Consolidation Log · Between sessions
Reinforced: 8 observations confirmed by repeated retrieval activation
Extended: 3 knowledge chunks updated with new supporting evidence
Flagged: 1 contradiction detected — queued for review
Gap detected: Zero coverage on WebSocket reconnection patterns — research triggered
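A consolidation pass like the log above can be sketched as a single sweep over the knowledge base. All of it is illustrative: the ±0.05 adjustments, the retrieval threshold, and the topic-set gap check are assumptions standing in for ACE's actual consolidation logic.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    topic: str
    confidence: float
    retrievals: int = 0        # times surfaced and used since last pass
    contradicted: bool = False

def consolidate(chunks, known_topics, needed_topics):
    """One between-session pass: reinforce, flag, decay, and find gaps."""
    log = []
    for c in chunks:
        if c.contradicted:
            log.append(f"Flagged: {c.topic} — queued for review")
        elif c.retrievals >= 2:
            c.confidence = min(1.0, c.confidence + 0.05)   # held up: reinforce
            log.append(f"Reinforced: {c.topic}")
        else:
            c.confidence = max(0.0, c.confidence - 0.05)   # unused: decay
    for topic in needed_topics:
        if topic not in known_topics:
            log.append(f"Gap detected: {topic} — research triggered")
    return log

chunks = [
    Chunk("timeout tuning", 0.82, retrievals=3),
    Chunk("old auth path", 0.60, contradicted=True),
]
log = consolidate(
    chunks,
    known_topics={"timeout tuning", "old auth path"},
    needed_topics={"websocket reconnection"},
)
```

The design point: no human triggers any of this. The pass runs between sessions, and the next session simply starts from the adjusted confidences.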

The model is replaceable.
The intelligence is not.

Five layers of compression, verification, and routing — building an architecture that compounds.