Explicit Criteria & Instruction Design
Write prompts with explicit, measurable criteria instead of vague instructions. Understand how false positives impact developer trust.
Prompt design principles:
Explicit criteria over vague instructions: 'flag functions over 50 lines' vs 'flag long functions'
False positive impact: too many false positives erode developer trust in the system
Specificity reduces ambiguity and improves consistency across runs
Measurable criteria enable automated validation of output quality
Anti-Patterns to Avoid
Vague instructions like 'make it better' or 'improve the code'
Not considering the downstream impact of false positives
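A minimal sketch of the contrast above. The prompt wording, the 50-line threshold, and the other criteria are illustrative assumptions, not canonical values; the point is that a measurable criterion can be checked by code after the fact:

```python
# Vague instruction: no way to verify the model's flags automatically.
VAGUE_PROMPT = "Review this code and flag long functions."

# Explicit, measurable criteria: every flag can be traced to a rule,
# and the rules themselves can be validated programmatically.
EXPLICIT_PROMPT = (
    "Review this code. Flag a function ONLY if it meets a criterion below, "
    "and cite which one:\n"
    "- more than 50 lines long\n"
    "- more than 4 levels of nesting\n"
    "- more than 5 parameters\n"
    "If no function qualifies, return an empty list. Do not flag style issues."
)

def exceeds_line_limit(source: str, limit: int = 50) -> bool:
    """A measurable criterion: the model's claim is automatically checkable."""
    return len(source.splitlines()) > limit

short_fn = "def add(a, b):\n    return a + b\n"
```

Because each criterion is checkable, false positives can be measured and kept low enough that developers keep trusting the reviewer's output.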
Few-Shot Prompting
Use few-shot examples to guide Claude's output format and reasoning. Know when and how many examples to provide.
Few-shot prompting techniques:
2-4 examples: optimal for ambiguous cases to establish format and reasoning patterns
Format consistency: all examples should follow the same output structure
Edge case coverage: include at least one example that handles an edge case
Few-shot is most valuable when the task has ambiguous boundaries
Anti-Patterns to Avoid
Too many examples (>6) that bloat the prompt without adding value
Inconsistent formatting across examples, which confuses the model
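The points above can be sketched as a small prompt builder. The classification task, labels, and example texts are hypothetical; what matters is the 2-4 example count, the identical output structure across shots, and the deliberate edge case:

```python
# 2-4 examples, all following the same "Change: ... / Answer: ..." structure.
EXAMPLES = [
    {"input": "Fix typo in README",
     "output": '{"category": "docs", "risk": "low"}'},
    {"input": "Rewrite auth middleware",
     "output": '{"category": "refactor", "risk": "high"}'},
    # Edge case: a change with nothing substantive to review.
    {"input": "Bump version number only",
     "output": '{"category": "chore", "risk": "low"}'},
]

def build_few_shot_prompt(task: str, examples: list[dict]) -> str:
    # Guard the recommended range: fewer than 2 under-specifies the format,
    # more than ~6 bloats the prompt without adding value.
    assert 2 <= len(examples) <= 4, "use 2-4 examples"
    shots = "\n\n".join(
        f"Change: {ex['input']}\nAnswer: {ex['output']}" for ex in examples
    )
    return (
        "Classify the change. Respond with JSON only.\n\n"
        f"{shots}\n\nChange: {task}\nAnswer:"
    )

prompt = build_few_shot_prompt("Add retry logic to API client", EXAMPLES)
```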
Tool Use for Structured Output
Use tool_use to guarantee JSON schema compliance. Understand the difference between schema compliance and semantic correctness.
Structured output via tool_use:
tool_use guarantees JSON schema compliance — the output will match the defined structure
Semantic errors are still possible: the structure is correct but the content may be wrong
tool_choice options: 'auto', 'any', or forced specific tool for guaranteed invocation
Schema design: required vs optional fields, enums with 'other' + detail, nullable fields
Anti-Patterns to Avoid
Assuming tool_use eliminates all errors (it only guarantees structural compliance)
Not using enums with 'other' category for fields that may have unexpected values
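A sketch of the schema-design points above, using the Anthropic tools shape (name / description / input_schema). The field names inside the schema are illustrative assumptions, and `structurally_valid` is a simplified stand-in for what tool_use enforces, not a full JSON Schema validator:

```python
EXTRACT_TOOL = {
    "name": "record_invoice",
    "description": "Record fields extracted from an invoice.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            # Enum with an 'other' escape hatch plus a free-text detail field,
            # so unexpected document types don't force a wrong label.
            "doc_type": {"type": "string",
                         "enum": ["invoice", "receipt", "other"]},
            "doc_type_detail": {"type": ["string", "null"]},
            # Nullable: distinguishes 'absent from the document' from "".
            "po_number": {"type": ["string", "null"]},
        },
        "required": ["vendor", "doc_type"],
    },
}

def structurally_valid(output: dict, tool: dict) -> bool:
    """Checks only what tool_use guarantees: required fields present,
    enum values respected. Says nothing about semantic correctness."""
    schema = tool["input_schema"]
    if any(key not in output for key in schema["required"]):
        return False
    for key, value in output.items():
        allowed = schema["properties"].get(key, {}).get("enum")
        if allowed is not None and value not in allowed:
            return False
    return True

# Structurally valid -- yet "Acme" could still be the wrong vendor.
# tool_use guarantees the shape of the output, not its truth.
ok = structurally_valid({"vendor": "Acme", "doc_type": "invoice"}, EXTRACT_TOOL)
```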
Validation-Retry Loops & Multi-Pass Review
Implement validation-retry patterns and multi-pass review strategies for reliable output. Understand when retries are effective and when they are not.
Validation and review patterns:
Validation-retry loops: append specific errors to the prompt and retry for self-correction
detected_pattern fields: track dismissal patterns to identify systematic issues
Multi-pass review: per-file local analysis + cross-file integration pass
Self-review limitations: same session retains reasoning context, reducing effectiveness
Batch processing: synchronous for blocking tasks, batch for latency-tolerant workloads
Anti-Patterns to Avoid
Same-session self-review (the model retains its reasoning context, creating bias)
Generic retry without appending specific error information
Aggregate accuracy metrics that mask per-document-type failures
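The validation-retry pattern above can be sketched as follows. `call_model` is a stub standing in for a real API call (its behavior is contrived so the sketch runs offline); the prompt text and the `score` key are assumptions. The key move is appending the specific error rather than a generic "try again":

```python
import json

def call_model(prompt: str) -> str:
    # Stub: emits malformed output first, valid JSON once the retry prompt
    # includes the error report. A real implementation calls the API here.
    return '{"score": 7}' if "Previous attempt failed" in prompt else "score: 7"

def generate_with_retry(prompt: str, max_retries: int = 3) -> dict:
    current = prompt
    for _ in range(max_retries):
        raw = call_model(current)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError as err:
            # Append the concrete parse error so the model can self-correct;
            # a generic retry gives it nothing new to work with.
            current = (f"{prompt}\n\nPrevious attempt failed: {err}. "
                       "Return valid JSON only.")
            continue
        if "score" not in parsed:
            current = (f"{prompt}\n\nPrevious attempt failed: "
                       "missing required key 'score'.")
            continue
        return parsed
    raise RuntimeError("output still invalid after retries")

result = generate_with_retry("Rate this diff from 1-10 as JSON with key 'score'.")
```

Note the loop validates structure only; pairing it with per-document-type metrics (rather than one aggregate accuracy number) is what surfaces systematic failures.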
Exam Tips for Domain 4
Explicit, measurable criteria > vague instructions (always)
2-4 few-shot examples is the sweet spot for ambiguous tasks
tool_use = structural compliance, NOT semantic correctness
Same-session self-review is an anti-pattern — use separate sessions
Related Exam Scenarios
Code Generation with Claude Code
Configure Claude Code for a development team workflow. Tests CLAUDE.md configuration, plan mode, slash commands, and iterative refinement strategies.
Claude Code for CI/CD
Integrate Claude Code into continuous integration and delivery pipelines. Tests -p flag usage, structured output, batch API, and multi-pass code review.
Structured Data Extraction
Build a structured data extraction pipeline from unstructured documents. Tests JSON schemas, tool_use, validation-retry loops, and few-shot prompting.
Test Your Knowledge of Prompt Engineering
Practice with scenario-based questions covering this domain.