Domain 4 · ~20%

Prompt Engineering & Structured Output

Master prompt engineering techniques for production systems. Covers explicit criteria, few-shot prompting, tool_use for structured output, JSON schema design, validation-retry loops, and multi-pass review strategies.

d4.1

Explicit Criteria & Instruction Design

Write prompts with explicit, measurable criteria instead of vague instructions. Understand how false positives impact developer trust.

Prompt design principles:

Explicit criteria over vague instructions: 'flag functions over 50 lines' is measurable; 'flag long functions' is not

False positive impact: too many false positives erode developer trust in the system

Specificity reduces ambiguity and improves consistency across runs

Measurable criteria enable automated validation of output quality

Anti-Patterns to Avoid

Vague instructions like 'make it better' or 'improve the code'

Not considering the downstream impact of false positives
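The contrast above can be sketched in code. This is a minimal, illustrative example: the thresholds (50 lines, 5 parameters, complexity 10) and the prompt wording are assumptions, not canonical values.

```python
# Illustrative contrast: a vague instruction vs. explicit, measurable criteria.
# All thresholds below are example values, not prescribed ones.

VAGUE_PROMPT = "Review this code and flag long functions."

EXPLICIT_PROMPT = """Review the code below and flag only functions that meet
at least one of these criteria:
- longer than 50 lines (excluding blank lines and comments)
- more than 5 parameters
- cyclomatic complexity above 10

For each flag, cite the specific criterion it violates. Do not flag anything else."""


def build_review_prompt(code: str) -> str:
    """Attach the explicit criteria to the code under review."""
    return f"{EXPLICIT_PROMPT}\n\n<code>\n{code}\n</code>"
```

Because each criterion is measurable, a flagged result can be checked automatically, and false positives can be traced back to a specific rule rather than to the model's taste.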

d4.2

Few-Shot Prompting

Use few-shot examples to guide Claude's output format and reasoning. Know when and how many examples to provide.

Few-shot prompting techniques:

2-4 examples: optimal for ambiguous cases to establish format and reasoning patterns

Format consistency: all examples should follow the same output structure

Edge case coverage: include at least one example that handles an edge case

Few-shot is most valuable when the task has ambiguous boundaries
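These techniques can be sketched as a small prompt builder. The example inputs and labels are hypothetical; the point is the consistent Input/Label format across all shots and the inclusion of one edge case.

```python
# Sketch: assemble a few-shot prompt from 2-4 consistently formatted examples.
# Example inputs and labels are illustrative placeholders.

EXAMPLES = [
    {"input": "def add(a, b): return a + b", "label": "ok"},
    {"input": "def process(data): ...  # 80 lines omitted", "label": "flag: too long"},
    # Edge case: an empty function body still gets a label in the same format.
    {"input": "def noop(): pass", "label": "ok"},
]


def build_few_shot_prompt(task: str, query: str) -> str:
    """Render every shot in the same Input/Label structure, then the query."""
    shots = "\n\n".join(
        f"Input: {ex['input']}\nLabel: {ex['label']}" for ex in EXAMPLES
    )
    return f"{task}\n\n{shots}\n\nInput: {query}\nLabel:"
```

Ending the prompt with a bare `Label:` invites the model to complete in exactly the format the shots established.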

Anti-Patterns to Avoid

Too many examples (>6) that bloat the prompt without adding value

Inconsistent formatting across examples, which confuses the model and degrades output consistency

d4.3

Tool Use for Structured Output

Use tool_use to guarantee JSON schema compliance. Understand the difference between schema compliance and semantic correctness.

Structured output via tool_use:

tool_use guarantees JSON schema compliance — the output will match the defined structure

Semantic errors are still possible: the structure is correct but the content may be wrong

tool_choice options: 'auto', 'any', or forced specific tool for guaranteed invocation

Schema design: required vs optional fields, enums with 'other' + detail, nullable fields
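The schema-design points above can be illustrated with a sketch of a tool definition plus a semantic check. The tool name, fields, and enum values are hypothetical; the shape (name, description, JSON Schema `input_schema`) follows the standard tool-definition format, and a forced call would use `tool_choice={"type": "tool", "name": "report_issue"}`.

```python
# Illustrative tool definition for structured output. Field names and enum
# values are assumptions made up for this sketch.
ISSUE_TOOL = {
    "name": "report_issue",
    "description": "Report one code-review finding as structured JSON.",
    "input_schema": {
        "type": "object",
        "properties": {
            "category": {
                # Enum with an 'other' escape hatch for unexpected values.
                "enum": ["bug", "style", "performance", "security", "other"],
            },
            "category_detail": {
                # Optional free-text detail, expected when category == "other".
                "type": "string",
            },
            "line": {
                # Nullable: some findings are file-level, not line-level.
                "type": ["integer", "null"],
            },
            "summary": {"type": "string"},
        },
        "required": ["category", "summary"],  # the rest are optional
    },
}


def passes_semantic_check(output: dict) -> bool:
    """tool_use guarantees the shape; semantic checks are still on us.
    E.g. category 'other' with no detail is structurally valid but unhelpful."""
    if output.get("category") == "other" and not output.get("category_detail"):
        return False
    return bool(output.get("summary", "").strip())
```

This is exactly the compliance-vs-correctness split: the schema rules out malformed JSON, while `passes_semantic_check` catches outputs that are well-formed but wrong in content.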

Anti-Patterns to Avoid

Assuming tool_use eliminates all errors (it only guarantees structural compliance)

Not using enums with 'other' category for fields that may have unexpected values

d4.4

Validation-Retry Loops & Multi-Pass Review

Implement validation-retry patterns and multi-pass review strategies for reliable output. Understand when retries are effective and when they are not.

Validation and review patterns:

Validation-retry loops: append specific errors to the prompt and retry for self-correction

detected_pattern fields: track dismissal patterns to identify systematic issues

Multi-pass review: per-file local analysis + cross-file integration pass

Self-review limitations: same session retains reasoning context, reducing effectiveness

Batch processing: synchronous for blocking tasks, batch for latency-tolerant workloads
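The validation-retry pattern can be sketched as follows. `call_model` is a stand-in for a real API call, and the `severity` field is a hypothetical requirement; the point is that each retry appends the specific validation errors rather than a generic "try again".

```python
# Sketch of a validation-retry loop with specific error feedback.
import json


def validate(raw: str) -> list[str]:
    """Return a list of specific error messages (empty list = valid)."""
    errors = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"Output is not valid JSON: {e}"]
    if "severity" not in data:  # hypothetical required field
        errors.append("Missing required field: severity")
    return errors


def generate_with_retries(prompt: str, call_model, max_retries: int = 2) -> str:
    current = prompt
    for _ in range(max_retries + 1):
        raw = call_model(current)
        errors = validate(raw)
        if not errors:
            return raw
        # Append the *specific* errors so the model can self-correct.
        current = (
            prompt
            + "\n\nYour previous output had these errors:\n"
            + "\n".join(f"- {e}" for e in errors)
            + "\nPlease correct them."
        )
    raise ValueError(f"Still invalid after {max_retries} retries: {errors}")
```

A loop like this converges only when the errors are actionable; if the same error recurs after a retry or two, the prompt or schema usually needs fixing, not more retries.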

Anti-Patterns to Avoid

Same-session self-review (the model retains its reasoning context, creating bias)

Generic retry without appending specific error information

Aggregate accuracy metrics masking per-document-type failures

Exam Tips for Domain 4

1. Explicit, measurable criteria > vague instructions (always)

2. 2-4 few-shot examples is the sweet spot for ambiguous tasks

3. tool_use = structural compliance, NOT semantic correctness

4. Same-session self-review is an anti-pattern — use separate sessions
