
Claude Certified Architect – Foundations Exam Guide

Everything you need to know about Anthropic's first official technical certification. Exam format, domains, scenarios, and how to prepare.

5 domains · 6 scenarios · 10 anti-patterns · ~15 min read

Exam Overview

Certification Name

Claude Certified Architect – Foundations

Issued By

Anthropic

Exam Format

Multiple choice, scenario-based

Passing Score

720 / 1000

Scenarios

4 of 6 scenarios randomly selected

Target Audience

Solution architects building production applications with Claude

Cost

Free for first 5,000 partner company employees

Availability

Available now for Anthropic partners

Register for the Exam

Request access through Anthropic's official Skilljar portal. Currently available for partner company employees.

Go to Registration Portal

5 Exam Domains

The exam covers five domains. Each domain has a specific weight indicating how many questions relate to that topic. Understanding all five domains is essential for passing.

6 Exam Scenarios

The exam presents you with 4 of 6 possible scenarios, randomly selected. Each scenario places you in a real-world context where you must make architectural decisions for production Claude applications.

1

Customer Support Resolution Agent

Design an AI-powered customer support agent that handles inquiries, resolves issues, and escalates complex cases. Tests Agent SDK usage, MCP tools, and escalation logic.

Key Skills Tested

Agent SDK implementation
Escalation pattern design
Hook-based compliance enforcement
Structured error handling

2

Code Generation with Claude Code

Configure Claude Code for a development team workflow. Tests CLAUDE.md configuration, plan mode, slash commands, and iterative refinement strategies.

Key Skills Tested

CLAUDE.md hierarchy setup
Plan mode vs direct execution
Custom slash commands and skills
TDD iteration pattern

3

Multi-Agent Research System

Build a coordinator-subagent system for parallel research tasks. Tests multi-agent orchestration, context passing, error propagation, and result synthesis.

Key Skills Tested

Hub-and-spoke architecture
Context isolation and passing
Error propagation patterns
Information provenance and synthesis

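The coordinator-subagent pattern above can be sketched as follows. This is a minimal, illustrative skeleton, not the Agent SDK API: `run_subagent` is a stub standing in for spawning a separate Claude session per subtask, and all names here are hypothetical.

```python
# Hub-and-spoke sketch: a coordinator fans subtasks out to subagents,
# each with an isolated context, then synthesizes results with provenance.
# run_subagent is a stub; a real system would start a fresh session per task.

def run_subagent(task):
    # Each subagent sees only its own task, never its siblings' contexts.
    return {"task": task, "finding": f"summary of {task}"}

def coordinator(research_question, subtasks):
    findings = []
    for task in subtasks:
        try:
            findings.append(run_subagent(task))
        except Exception as exc:
            # Propagate failures explicitly instead of silently dropping them.
            findings.append({"task": task, "error": str(exc)})
    # Synthesis keeps provenance: every finding is tied to its subtask.
    return {"question": research_question, "findings": findings}

report = coordinator("market overview", ["competitors", "pricing"])
print(len(report["findings"]))  # 2
```

The key design choices the scenario tests are visible even in this stub: isolated per-task inputs, explicit error propagation, and findings that retain their origin for synthesis.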
4

Developer Productivity with Claude

Build developer tools using the Claude Agent SDK with built-in tools and MCP servers. Tests tool selection, codebase exploration, and code generation workflows.

Key Skills Tested

Built-in tool selection (Read, Write, Bash, Grep, Glob)
MCP server integration
Codebase exploration strategies
Tool distribution across agents

5

Claude Code for CI/CD

Integrate Claude Code into continuous integration and delivery pipelines. Tests -p flag usage, structured output, batch API, and multi-pass code review.

Key Skills Tested

-p flag for non-interactive mode
Structured output with --output-format json
Batch API with Message Batches
Session isolation for generator vs reviewer

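As a rough illustration of the flags this scenario names, a pipeline step might invoke Claude Code non-interactively and capture structured output. The surrounding workflow (checkout, auth, step name) is assumed, not part of the exam material:

```yaml
# Hypothetical CI step; only the claude invocation reflects the flags above.
- name: Automated review pass
  run: |
    claude -p "Review this PR's diff for bugs and style issues" \
      --output-format json > review.json
```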
6

Structured Data Extraction

Build a structured data extraction pipeline from unstructured documents. Tests JSON schemas, tool_use, validation-retry loops, and few-shot prompting.

Key Skills Tested

JSON schema design for tool_use
Validation-retry loop implementation
Few-shot prompting for format consistency
Field-level confidence and human review
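The validation-retry loop tested here can be sketched as below. The extractor is stubbed (a real one would call the Messages API with a tool_use schema); the required-field schema and function names are illustrative assumptions.

```python
# Validation-retry sketch: validate extracted records against a schema
# and feed validation errors back into the next extraction attempt.

REQUIRED = {"invoice_number": str, "total": float}  # hypothetical schema

def validate(record):
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"{field} should be {ftype.__name__}")
    return errors

def extract_with_retry(extract, max_retries=2):
    feedback = None
    for attempt in range(max_retries + 1):
        record = extract(feedback)       # stub for a model call
        errors = validate(record)
        if not errors:
            return record, attempt
        # The error list becomes corrective feedback for the next attempt.
        feedback = "; ".join(errors)
    raise ValueError(f"extraction failed after retries: {feedback}")

attempts = iter([{"invoice_number": "INV-1"},                  # missing total
                 {"invoice_number": "INV-1", "total": 41.5}])  # valid
record, tries = extract_with_retry(lambda fb: next(attempts))
print(tries)  # 1
```

Feeding the specific validation errors back, rather than simply retrying the same prompt, is what makes the second attempt likely to converge.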

Key Anti-Patterns (Common Wrong Answers)

These anti-patterns frequently appear as distractor answers on the exam. Recognizing them is critical for choosing the correct answer.

Parsing natural language for loop termination

✓ Instead: Check stop_reason ('tool_use' vs 'end_turn')
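In code, the correct pattern looks roughly like this. The client is a stub whose responses mirror the Messages API shape (`stop_reason` of 'tool_use' vs 'end_turn'); `run_tool` and `FakeClient` are illustrative, not real SDK names.

```python
# Agentic loop driven by stop_reason, not by parsing the model's text.

def run_tool(name, args):
    # Hypothetical tool dispatcher.
    return {"ok": True}

class FakeClient:
    """Stub: returns one tool_use turn, then a final end_turn."""
    def __init__(self):
        self.calls = 0
    def create(self, messages):
        self.calls += 1
        if self.calls == 1:
            return {"stop_reason": "tool_use",
                    "content": [{"type": "tool_use", "name": "lookup", "input": {}}]}
        return {"stop_reason": "end_turn",
                "content": [{"type": "text", "text": "done"}]}

def agent_loop(client, messages):
    while True:
        response = client.create(messages)
        if response["stop_reason"] != "tool_use":
            # Natural termination: the model chose to stop.
            return response
        # Otherwise execute each requested tool and feed results back.
        for block in response["content"]:
            if block["type"] == "tool_use":
                result = run_tool(block["name"], block["input"])
                messages.append({"role": "user", "content": [
                    {"type": "tool_result", "content": str(result)}]})

final = agent_loop(FakeClient(), [{"role": "user", "content": "look it up"}])
print(final["stop_reason"])  # end_turn
```

Note the loop never inspects the model's prose for phrases like "I'm done"; the `stop_reason` field alone controls termination.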

Arbitrary iteration caps as the primary stopping condition

✓ Instead: Let the agentic loop terminate naturally via stop_reason

Prompt-based enforcement for critical business rules

✓ Instead: Use programmatic hooks for deterministic enforcement
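The idea is that a critical rule runs as code before the tool executes, so it cannot be bypassed by prompt drift. A minimal sketch, with hypothetical names (`pre_tool_use_hook`, the tool names, and the policy limit are all assumptions, not SDK API):

```python
# Deterministic pre-tool-use hook: business rules enforced in code,
# not via instructions in the prompt.

BLOCKED_TOOLS = {"issue_refund"}   # business rule: refunds need a human
MAX_CREDIT = 100.0                 # hypothetical policy limit

def pre_tool_use_hook(tool_name, tool_input):
    """Return (allowed, reason); runs before every tool call."""
    if tool_name in BLOCKED_TOOLS:
        return False, "refunds require human approval"
    if tool_name == "apply_credit" and tool_input.get("amount", 0) > MAX_CREDIT:
        return False, "credit exceeds policy limit"
    return True, ""

allowed, reason = pre_tool_use_hook("issue_refund", {"amount": 20})
print(allowed)  # False
```

Because the check is programmatic, it is deterministic: the same call is blocked every time, regardless of how the model was prompted.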

Self-reported confidence scores for escalation

✓ Instead: Use structured criteria and programmatic checks

Sentiment-based escalation

✓ Instead: Escalate based on task complexity and policy gaps, not sentiment

Generic error messages ('Operation failed')

✓ Instead: Include isError, errorCategory, isRetryable, and context

Silently suppressing errors (empty results as success)

✓ Instead: Distinguish access failures from genuinely empty results
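Both error-handling points above can be shown in one sketch. The field names follow those listed (`isError`, `errorCategory`, `isRetryable`); the tool itself and its data shape are illustrative assumptions.

```python
# Structured tool result: rich error metadata, and an explicit distinction
# between an access failure and a genuinely empty result.

def search_orders(customer_id, db):
    if customer_id not in db["accessible"]:
        # Access failure: report it as an error, never as an empty success.
        return {"isError": True,
                "errorCategory": "permission_denied",
                "isRetryable": False,
                "context": f"no access to customer {customer_id}"}
    orders = db["orders"].get(customer_id, [])
    # An empty list here is a legitimate, successful result.
    return {"isError": False, "orders": orders}

db = {"accessible": {"c1"}, "orders": {}}
print(search_orders("c2", db)["isError"])  # True  (access failure)
print(search_orders("c1", db)["orders"])   # []    (genuinely empty)
```

A generic "Operation failed" would give the agent nothing to act on; the category and retryability fields let it decide whether to retry, reroute, or escalate.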

Too many tools per agent (18+)

✓ Instead: Keep to 4-5 tools per agent for optimal selection

Same-session self-review

✓ Instead: Use separate sessions to avoid reasoning context bias

Aggregate accuracy metrics only

✓ Instead: Track accuracy per document type to catch masked failures
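A small worked example (with made-up numbers) shows how an aggregate metric can mask a failing document type:

```python
# Per-type accuracy vs aggregate accuracy, on hypothetical eval results.
from collections import defaultdict

results = ([("invoice", True)] * 90 + [("invoice", False)] * 2
           + [("contract", True)] * 2 + [("contract", False)] * 6)

by_type = defaultdict(lambda: [0, 0])   # type -> [correct, total]
for doc_type, correct in results:
    by_type[doc_type][1] += 1
    if correct:
        by_type[doc_type][0] += 1

overall = sum(c for c, _ in by_type.values()) / len(results)
print(round(overall, 2))                 # 0.92 overall looks healthy...
for doc_type, (c, t) in by_type.items():
    print(doc_type, round(c / t, 2))     # ...but contracts sit at 0.25
```

Invoices dominate the sample, so the 92% aggregate hides a 25% contract accuracy; only the per-type breakdown surfaces it.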

Official Resources

Ready to Start Studying?

Follow our structured 12-week plan or dive into individual domains.