Handoff Protocol and Procedural Patterns
Structure context passing between agents with handoff packets and return packets. Use procedural patterns for validation loops, iteration policies, and error handling.
Subagents start with fresh context, so reliable outcomes depend on structured handoffs. This page covers how to pass context between agents and the procedural patterns that make agent behavior predictable and bounded.
Parent → subagent: handoff packet
When spawning a subagent, provide a structured handoff packet that covers the objective, constraints, any known target files, and the expected output contract.
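As a sketch, such a packet could be modeled with fields like these (all field names are illustrative, not a fixed schema):

```python
from dataclasses import dataclass, field

@dataclass
class HandoffPacket:
    """Context a parent passes when spawning a subagent (illustrative fields)."""
    objective: str                                          # one-sentence goal
    constraints: list[str] = field(default_factory=list)    # e.g. "do not edit files"
    target_files: list[str] = field(default_factory=list)   # known relevant paths, if any
    prior_findings: str = ""                                # minimal context from earlier phases
    output_contract: str = "respond with a structured return packet"

packet = HandoffPacket(
    objective="Review the auth module for input-validation gaps",
    constraints=["read-only: do not edit files"],
    target_files=["src/auth/login.py"],
)
```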
If you already know the target files, include them. If the agent must avoid edits, say so explicitly and restrict tools in frontmatter.
Subagent → parent: return packet
Require the subagent to respond in a structured format rather than free-form prose.
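A return packet might mirror the handoff packet's shape; the fields below are a hypothetical example of the kind of contract to require:

```python
from dataclasses import dataclass, field

@dataclass
class ReturnPacket:
    """Structured response a subagent sends back to the parent (illustrative fields)."""
    status: str                                         # "success" | "partial" | "failed"
    summary: str                                        # short description of what was done
    findings: list[str] = field(default_factory=list)   # key results, one per line
    artifacts: list[str] = field(default_factory=list)  # paths to files written
    errors: list[str] = field(default_factory=list)     # failed actions, kept for context

result = ReturnPacket(
    status="success",
    summary="Reviewed login flow; 2 findings",
    findings=["missing rate limit on /login", "weak password policy"],
)
```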
Iteration patterns
Pattern 1: One-shot summary
Best when:
- The task is self-contained
- The parent will do the implementation
The subagent does its job and returns one comprehensive report.
Pattern 2: Resume loop
Best when:
- The task is iterative or large
- You want the same subagent instance to keep its own context and history
Instruction for the parent: "Continue the previous subagent work and now do X. Keep the same output contract."
Pattern 3: Chain subagents via the parent
Best when:
- You need different expertise phases (review → implement → validate)
1. Parent delegates to subagent A and gets the return packet.
2. Parent extracts the minimal relevant pieces and passes them into subagent B's handoff packet.
3. Repeat for additional phases.
Avoid copying entire transcripts between subagents — this wastes tokens. Extract only the minimal context needed for the next phase.
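The extraction step above can be sketched as a simple filter over the return packet; the field names here are assumptions, not a required schema:

```python
def extract_for_next_phase(return_packet: dict, keys=("findings", "artifacts")) -> dict:
    """Pull only the fields the next subagent needs, not the full transcript."""
    return {k: return_packet[k] for k in keys if k in return_packet}

review = {
    "status": "success",
    "transcript": "...thousands of tokens of tool calls and reasoning...",
    "findings": ["missing null check in parser"],
    "artifacts": ["notes/review.md"],
}

# Only findings and artifact paths go into subagent B's handoff packet.
handoff_context = extract_for_next_phase(review)
```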
Artifact passing rules of thumb
| Artifact size | Strategy |
|---|---|
| Small (< ~2-3 KB) | Pass directly in the next handoff packet |
| Large (plans, research notes, many findings) | Write to a file and pass the path forward |
This prevents token bloat and enables resume/fork workflows.
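The size cutoff from the table can be applied mechanically; the threshold and function below are illustrative, not a prescribed API:

```python
import os

INLINE_LIMIT_BYTES = 3 * 1024  # ~2-3 KB rule of thumb from the table above

def package_artifact(name: str, content: str, scratch_dir: str = "/tmp") -> dict:
    """Pass small artifacts inline; write large ones to a file and pass the path."""
    if len(content.encode()) <= INLINE_LIMIT_BYTES:
        return {"kind": "inline", "name": name, "content": content}
    path = os.path.join(scratch_dir, f"{name}.md")
    with open(path, "w") as f:
        f.write(content)
    return {"kind": "path", "name": name, "path": path}
```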
Error retention
When a subagent action fails, keep the failed action and error in the handoff context. This enables implicit belief updating — the agent learns what doesn't work without explicit "don't do X" instructions. Summarize patterns if errors accumulate, but don't strip them entirely.
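One way to implement "summarize but don't strip" is to cap the number of verbatim errors while tracking the running total; the cap and field names here are assumptions:

```python
MAX_VERBATIM_ERRORS = 5  # illustrative cap before older failures are summarized

def record_error(context: dict, action: str, error: str) -> dict:
    """Keep failed actions in the handoff context; summarize once they pile up."""
    context["error_count"] = context.get("error_count", 0) + 1
    errors = context.setdefault("errors", [])
    errors.append({"action": action, "error": error})
    if len(errors) > MAX_VERBATIM_ERRORS:
        # Summarize the pattern instead of stripping the history entirely.
        context["error_summary"] = (
            f"{context['error_count']} failed actions total; "
            f"keeping last {MAX_VERBATIM_ERRORS} verbatim"
        )
        context["errors"] = errors[-MAX_VERBATIM_ERRORS:]
    return context
```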
Procedural patterns
When agent tasks involve iteration, validation, or error handling, use these patterns to make behavior predictable and bounded.
Validation loops
Use when an agent needs to verify its work before returning.
Pattern: Do → Verify → Fix (if needed) → Re-verify → Return
Key elements:
- Bounded iterations (max N attempts)
- Clear termination condition
- Explicit failure path with useful output
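The three key elements above can be sketched as a small driver; `do`, `verify`, and `fix` are placeholder callables supplied by the caller:

```python
def validate_loop(do, verify, fix, max_attempts=3):
    """Do → Verify → Fix → Re-verify, bounded at max_attempts (assumed >= 1)."""
    result = do()
    for attempt in range(1, max_attempts + 1):
        ok, report = verify(result)          # clear termination condition
        if ok:
            return {"status": "success", "result": result, "attempts": attempt}
        if attempt == max_attempts:
            break                            # bounded: stop fixing after N attempts
        result = fix(result, report)
    # Explicit failure path: return what we have plus the last verification report.
    return {"status": "failed", "result": result, "last_report": report}
```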
Iteration policies
Use when an agent may need multiple passes to complete a task.
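A minimal sketch of a pass budget, assuming caller-supplied `step` and `is_done` callables:

```python
def run_with_policy(step, is_done, max_passes=5):
    """Run repeated passes until the task is done or the pass budget is exhausted."""
    state = None
    for n in range(1, max_passes + 1):
        state = step(state)                  # one pass over the task
        if is_done(state):
            return {"done": True, "passes": n, "state": state}
    # Budget exhausted: report partial progress rather than looping forever.
    return {"done": False, "passes": max_passes, "state": state}
```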
Error handling (graceful degradation)
Use when an agent might encounter errors that shouldn't crash the entire task.
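Graceful degradation can be as simple as capturing each failure and continuing; this sketch assumes tasks arrive as `(name, callable)` pairs:

```python
def run_all(tasks):
    """Attempt every task; record failures instead of letting one abort the run."""
    results, failures = [], []
    for name, fn in tasks:
        try:
            results.append((name, fn()))
        except Exception as exc:             # degrade gracefully: capture and continue
            failures.append((name, repr(exc)))
    return results, failures
```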
Decision trees
Use when an agent needs to choose between paths based on conditions.
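A decision tree reduces to ordered condition checks with an explicit outcome on every branch; the conditions and threshold below are illustrative:

```python
def choose_path(context: dict) -> str:
    """Branch on observable conditions; every branch names an explicit next step."""
    if context.get("tests_failing"):
        return "fix_tests_first"
    if context.get("change_size", 0) > 500:   # illustrative line-count threshold
        return "split_into_smaller_changes"
    if context.get("touches_security"):
        return "request_security_review"
    return "proceed_with_implementation"
```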
Severity levels
Use when an agent needs to prioritize or categorize findings:
| Level | Meaning | Examples |
|---|---|---|
| CRITICAL | Must fix before proceeding; blocks the task | Security vulnerabilities, data loss risks, breaking API changes |
| HIGH | Should fix; significant impact | Bugs affecting users, performance regressions, missing error handling |
| MEDIUM | Worth fixing; moderate impact | Code quality issues, missing tests, inconsistent patterns |
| LOW | Nice to have; minor improvement | Style preferences, minor refactoring, documentation gaps |
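The levels in the table are ordered, which makes prioritization a sort; a sketch using an ordered enum:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Ordered severity levels, so findings can be compared and sorted."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

findings = [("style nit", Severity.LOW), ("SQL injection", Severity.CRITICAL)]
findings.sort(key=lambda f: f[1], reverse=True)          # most severe first
blocking = [f for f in findings if f[1] is Severity.CRITICAL]
```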
Emphasis markers
Use these markers consistently to signal importance:
| Marker | Meaning | When to use |
|---|---|---|
| CRITICAL: | Non-negotiable | Safety constraints, security rules, data integrity |
| MUST / NEVER | Hard requirement / prohibition | Core correctness rules |
| SHOULD | Strong default with exceptions | Best practices with escape hatches |
| CONSIDER / MAY | Suggestion; use judgment | Optional improvements |
Combining patterns
For complex agents, combine these patterns into a cohesive workflow.
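As one hypothetical combination, the sketch below chains phases (decision-tree ordering), validates each phase with a bounded retry (validation loop), and records failures while continuing (graceful degradation):

```python
def combined_workflow(phases, max_attempts=2):
    """Run phases in order; each gets a bounded retry; failed phases are recorded."""
    report = []
    for name, do, verify in phases:
        for attempt in range(1, max_attempts + 1):
            result = do()
            ok, note = verify(result)        # bounded validation loop per phase
            if ok:
                break
        entry = {"phase": name, "ok": ok, "attempts": attempt, "result": result}
        if not ok:
            entry["note"] = note             # graceful degradation: record and continue
        report.append(entry)
    return report
```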
Workflow Orchestrators
Design multi-phase orchestrator agents that spawn subagents, aggregate results, implement quality gates, and iterate with bounded loops.
Testing & Iteration
Tune delegation behavior, debug agent underperformance, distinguish designer failures from runtime failures, and update existing agents without drift.