Context Optimization & Positioning
Optimize context window usage with strategic positioning, trimming, and summarization techniques while avoiding common pitfalls.
Context management techniques:
Progressive summarization risks: important details can be lost through repeated summarization
'Lost in the middle' effect: information in the middle of long contexts is less likely to be recalled
'Case facts' blocks: structured reference sections that preserve critical information
Trimming verbose tool outputs: remove noise while retaining essential data
Position-aware ordering: put the most important information at the beginning and end of context
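The techniques above can be sketched together in one assembly helper. This is an illustrative function, not an API from any SDK: the 'case facts' block is preserved verbatim at the top, verbose reference material is placed in the middle where 'lost in the middle' loss is most tolerable, and the task is restated at the end to benefit from recency.

```python
def assemble_context(case_facts: dict, reference_docs: list[str], task: str) -> str:
    # Structured 'case facts' block: preserved verbatim, never summarized away.
    facts_block = "CASE FACTS (authoritative, do not infer beyond these):\n" + "\n".join(
        f"- {key}: {value}" for key, value in case_facts.items()
    )
    # Bulky reference material lands in the middle, where recall is weakest.
    middle = "\n\n".join(reference_docs)
    # Restate the task at the end so it sits in the high-recall tail position.
    return f"{facts_block}\n\n{middle}\n\nTASK:\n{task}"

context = assemble_context(
    {"order_id": "A-1042", "refund_policy": "30 days"},
    ["...long shipping FAQ...", "...long warranty FAQ..."],
    "Decide whether the refund request is within policy.",
)
```

The key design choice is that summarization or trimming is only ever applied to the middle segment; the facts block survives every compaction intact.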
Anti-Patterns to Avoid
Progressive summarization of critical details without preserving originals
Ignoring the 'lost in the middle' effect in long context windows
Escalation & Error Propagation
Design escalation patterns and error propagation strategies that provide enough context for recovery or human intervention.
Escalation and error handling:
Escalation triggers: explicit customer demands and policy gaps, not just sentiment
Structured error context vs generic errors: always include what was attempted
Access failures vs empty results: distinguish between 'could not check' and 'checked and found nothing'
Local recovery before coordinator escalation: try to fix locally first
Partial results + what was attempted: always report progress even on failure
Anti-Patterns to Avoid
Sentiment-based escalation (sentiment does not equal task complexity)
Generic error propagation that loses the original error context
Silently suppressing errors instead of escalating with context
Context Degradation & Extended Sessions
Handle context degradation in long-running sessions. Use scratchpad files, /compact, and subagent delegation to maintain quality.
Managing extended sessions:
Context degradation: quality decreases in extended sessions as context fills up
Scratchpad files: external files to persist important state across context resets
/compact: compress conversation history to reclaim context space
Subagent delegation: delegate verbose exploration to subagents to keep coordinator context clean
Crash recovery manifests: persistent state files that enable session recovery
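Scratchpad files and crash recovery manifests can share one mechanism, sketched below with an assumed file path and field layout: state is written to disk after each step, so a fresh session (or a post-/compact context) can resume without replaying the whole history.

```python
import json
from pathlib import Path

# Illustrative location; real agents would make this configurable.
MANIFEST = Path("scratchpad/manifest.json")

def save_state(state: dict) -> None:
    # Persist after every meaningful step, not just at shutdown.
    MANIFEST.parent.mkdir(parents=True, exist_ok=True)
    MANIFEST.write_text(json.dumps(state, indent=2))

def load_state() -> dict:
    # Empty state on first run, or after a crash that pre-dates any save.
    if not MANIFEST.exists():
        return {"completed_steps": [], "findings": {}}
    return json.loads(MANIFEST.read_text())

state = load_state()
state["completed_steps"].append("fetched customer record")
save_state(state)
```

Because the manifest lives outside the context window, it survives both context resets and process crashes, which is exactly what in-context state cannot do.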
Anti-Patterns to Avoid
Running extended sessions without monitoring context degradation
Not using scratchpad files for important intermediate state
Human Review & Information Provenance
Design human-in-the-loop review systems and maintain information provenance through claim-source mappings and temporal data.
Human review and provenance:
Stratified sampling: review samples across different categories, not just random selection
Field-level confidence: provide confidence indicators for individual data fields
Accuracy by document type: track performance per document category, not just aggregate
Claim-source mappings: link each output claim to its source for traceability
Temporal data: preserve timestamps and version information so the freshness of each claim can be assessed
Conflict annotation: explicitly mark conflicting sources rather than silently choosing one
Anti-Patterns to Avoid
Aggregate accuracy metrics that mask per-document-type failures
Not maintaining claim-source mappings for traceability
Silently resolving source conflicts instead of annotating them
Exam Tips for Domain 5
Progressive summarization loses critical details; use 'case facts' blocks instead
Sentiment ≠ complexity for escalation decisions
Always distinguish access failures from genuinely empty results
Track accuracy per document type, not just aggregate
Related Exam Scenarios
Customer Support Resolution Agent
Design an AI-powered customer support agent that handles inquiries, resolves issues, and escalates complex cases. Tests Agent SDK usage, MCP tools, and escalation logic.
Multi-Agent Research System
Build a coordinator-subagent system for parallel research tasks. Tests multi-agent orchestration, context passing, error propagation, and result synthesis.
Structured Data Extraction
Build a structured data extraction pipeline from unstructured documents. Tests JSON schemas, tool_use, validation-retry loops, and few-shot prompting.
Test Your Knowledge of Context & Reliability
Practice with scenario-based questions covering this domain.