Topological Execution Graphs

When your AI agents learn to decompose their own work

February 25, 2026 · Derek Adair

Our autonomous coding agents have completed over 900 tasks. They read instructions, write SEARCH/REPLACE diffs, create files, and commit changes without human review. It works remarkably well—until it doesn't.

The failure mode is predictable: hand an agent an instruction file with six edits across four files, and watch it collapse. The first two edits land. The third gets a stale match. The retry duplicates content. By the sixth edit, the agent is hallucinating SEARCH strings against a file it already corrupted.

We call this context collapse, and it's the single largest source of dead-letter-queue entries in our system.

# The Root Cause

The problem isn't the agent. It's the task shape. When you ask a single execution context to hold the state of four files, track six operations of mixed types (creates and edits), maintain correct ordering within each file while parallelizing across files, and recover from partial failure—you're asking for trouble.

Batch processing of heterogeneous steps in a single task is the enemy.

The pattern: every failure we traced in our dead-letter-queue triage shared the same root cause. Mixed step types in one instruction. Multiple files in one context window. Retry logic that doesn't know which step failed. The solution became obvious: don't do that.

# Two Papers, One Insight

Two papers dropped in February 2026 that crystallized the approach:

OpenSage (arXiv:2602.16891) demonstrates runtime topological self-assembly—an LLM that generates its own agent topology, custom tools, and hierarchical memory graph at runtime. No human architect decides the structure. The system observes the task and assembles the graph it needs.

AlphaEvolve (arXiv:2602.16928) from DeepMind uses LLMs as evolutionary optimizers over discrete symbolic structures—specifically, abstract syntax trees representing multi-agent learning algorithms. The key insight: algorithms live in a discrete, non-differentiable space. You can't gradient-descend a for-loop. You search over graph structures.

Both papers operate on the same fundamental idea: the graph is the program. Not the weights. Not the embeddings. The topology.

# TEG: Topological Execution Graphs

We built a pre-execution decomposer that sits between the task queue and the executor. When a task arrives with complex multi-file instructions, TEG intercepts it before any code runs.

```
BEFORE:
  Instruction (6 edits, 4 files)
    ↓
  Executor processes all steps sequentially
    ↓
  Step 3 fails → retry corrupts → DLQ

AFTER:
  Instruction (6 edits, 4 files) + teg_plan=true
    ↓
  TEG Planner intercepts
    ↓
  Parses 6 operation blocks
    ↓
  Builds file-level DAG:
    File A: [edit 1] → [edit 2] → [edit 3]   (sequential)
    File B: [edit 4]                         (parallel)
    File C: [edit 5] → [edit 6]              (sequential)
    ↓
  6 atomic child tasks, 3 start immediately
    ↓
  Each child: one file, one operation, one context
```

# The Rules

TEG follows strict decomposition rules derived from our KB corpus of 900+ task executions:

  1. One operation per node. Never mix a file creation with a search-replace edit in the same task.
  2. Same-file operations are sequential. If two edits target the same file, the second depends on the first. Directed edge in the DAG.
  3. Different-file operations are parallel. No dependency edge. Both start as pending immediately.
  4. Atomic failure. If node 3 fails, only its downstream dependents are cancelled. Nodes on other files are unaffected.
  5. Parent tracks children. The original task goes to "blocked" status. When all children complete, the parent auto-completes. If any child fails, the parent gets a summary of what succeeded and what didn't.
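As a sketch, rules 1–4 reduce to a small piece of graph construction. Assuming each parsed operation carries a target file and its position in the instruction (the `Node` structure and function names here are hypothetical, not TEG's actual internals), the dependency edges fall out of rule 2 alone:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One atomic operation: a single create or edit on a single file (rule 1)."""
    node_id: int
    file: str
    op: str                                       # "create" or "edit"
    deps: list = field(default_factory=list)      # node_ids this node waits on

def build_dag(operations):
    """operations: (file, op) tuples in instruction order.
    Same-file ops are chained sequentially (rule 2); different files
    share no edges and start as pending immediately (rule 3)."""
    nodes, last_on_file = [], {}
    for i, (file, op) in enumerate(operations):
        node = Node(node_id=i, file=file, op=op)
        if file in last_on_file:                  # rule 2: directed edge
            node.deps.append(last_on_file[file])
        last_on_file[file] = i
        nodes.append(node)
    return nodes

def cancel_downstream(nodes, failed_id):
    """Rule 4: a failure cancels only its transitive dependents."""
    cancelled, changed = {failed_id}, True
    while changed:
        changed = False
        for n in nodes:
            if n.node_id not in cancelled and any(d in cancelled for d in n.deps):
                cancelled.add(n.node_id)
                changed = True
    cancelled.discard(failed_id)
    return cancelled
```

Run on the worked example from the diagram (three edits on file A, one on B, two on C), this yields exactly three root nodes with no dependencies, which is why three child tasks can start immediately.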

# What It Looks Like in Practice

Triggering TEG requires one JSON key:

```sql
INSERT INTO jr_work_queue (
    title, instruction_file, assigned_jr,
    parameters
) VALUES (
    'Complex Multi-File Change',
    '/path/to/instruction.md',
    'Software Engineer Jr.',
    '{"teg_plan": true}'
);
```

That's it. The planner handles the rest:

  1. Reads the instruction file
  2. Extracts all SEARCH/REPLACE blocks and Create blocks using the same regex patterns as the executor itself
  3. Attributes each block to its target file by finding the closest preceding File: `/path` reference
  4. Groups blocks by file, orders within groups by position in the original instruction
  5. Writes individual instruction files for each node
  6. Inserts child tasks with dependency metadata
  7. Blocks the parent
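Steps 2–4 can be sketched in a few lines. The block pattern and the `File:` reference format below are stand-ins for illustration; the real planner reuses the executor's own regexes, which aren't reproduced here:

```python
import re
from collections import defaultdict

# Hypothetical patterns -- the actual planner shares these with the executor.
# Create blocks would be handled analogously with a second pattern.
FILE_RE = re.compile(r"^File:\s*`?(?P<path>[^`\n]+?)`?\s*$", re.MULTILINE)
EDIT_RE = re.compile(r"<{7} SEARCH.*?>{7} REPLACE", re.DOTALL)

def attribute_blocks(instruction_text):
    """Extract operation blocks, attribute each to the closest preceding
    File: reference, and group them by file in original instruction order."""
    file_refs = [(m.start(), m.group("path").strip())
                 for m in FILE_RE.finditer(instruction_text)]
    groups = defaultdict(list)
    for block in EDIT_RE.finditer(instruction_text):
        preceding = [path for pos, path in file_refs if pos < block.start()]
        if preceding:  # a block with no File: reference is a parse error
            groups[preceding[-1]].append(block.group(0))
    return dict(groups)
```

Because groups preserve insertion order, the within-file ordering needed for rule 2 comes for free from the blocks' positions in the original instruction.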

Each child task gets its own instruction file with exactly one operation. The executor sees a simple, atomic task. No ambiguity. No context collapse.

# The Completion Chain

When a child task completes, a hook fires:

  1. Look up my TEG node ID and parent
  2. Find any blocked siblings whose dependency list includes me
  3. For each, check if ALL their dependencies are now complete
  4. If so, unblock them (set to pending)
  5. Check if ALL children of the parent are complete
  6. If so, auto-complete the parent

The failure path mirrors this: when a child fails, its downstream dependents are cancelled, and the parent gets marked with a partial-failure summary showing exactly which nodes succeeded and which didn't.
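Both paths can be sketched over a plain dependency map. The status names and function signatures here are illustrative, assuming `deps` maps each node id to its prerequisite ids:

```python
def on_child_complete(deps, status, completed_id):
    """Completion chain: deps maps node id -> prerequisite ids; status maps
    node id -> "pending" | "blocked" | "complete"."""
    status[completed_id] = "complete"
    for node, prereqs in deps.items():
        # Steps 2-4: unblock siblings whose dependencies are all satisfied.
        if status[node] == "blocked" and all(status[p] == "complete" for p in prereqs):
            status[node] = "pending"
    # Steps 5-6: auto-complete the parent once every child is done.
    if all(s == "complete" for s in status.values()):
        return "parent-complete"
    return "parent-blocked"

def on_child_fail(deps, status, failed_id):
    """Failure path: cancel transitive dependents, summarize everything else
    so the parent can report partial success."""
    status[failed_id] = "failed"
    changed = True
    while changed:
        changed = False
        for node, prereqs in deps.items():
            if status[node] in ("pending", "blocked") and any(
                status[p] in ("failed", "cancelled") for p in prereqs
            ):
                status[node] = "cancelled"
                changed = True
    summary = {}
    for node, s in status.items():
        summary.setdefault(s, []).append(node)
    return summary
```

Note that nodes on other files never appear in a failed node's dependency chain, so they keep running untouched, exactly as rule 4 requires.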

# What We Learned Deploying It

The TEG planner was itself deployed via our autonomous executor. The irony was immediate.

The instruction to create `teg_planner.py` was a 415-line file. The executor's Create handler truncated it at line 134—a context collapse on the very module designed to prevent context collapse. We wrote the file directly.

Three more bugs surfaced during the smoke test.

Each bug was found by watching real execution, not by static analysis. The system told us exactly what was wrong through its behavior.

# Constitutional Constraints

Our 7-specialist AI council reviewed the TEG design before deployment. The vote came back "Proceed with Caution" at 0.843 confidence.

The constitutional constraint is clear: the council topology is sacred and fixed. TEG operates exclusively in the executor layer. It decomposes tasks. It does not reorganize specialists, modify voting, or touch governance. The council is the fixed star. TEG is the Jr layer.

# The Deeper Pattern

OpenSage showed us that topology can be self-assembled. AlphaEvolve showed us that algorithms live in discrete non-differentiable space—you can't gradient-descend a for-loop, but you can evolve one.

TEG is the simplest possible instantiation of this insight: a static decomposer with fixed rules. No learning. No evolution. Just parsing and graph construction. But it opens the door to something more interesting.

If the graph structure of task decomposition can be optimized—if we can evolve better decomposition strategies from execution outcomes—then the executor isn't just running tasks. It's discovering how to run them better.

That's the next step. For now, TEG solves the immediate problem: complex instructions no longer crash our agents. One operation, one file, one context. The graph handles the rest.

For Seven Generations: TEG is live on the Cherokee AI Federation. 900+ tasks completed and counting. The executor writes its own code, and now it decomposes its own work.