MCP Is Not an API. Stop Treating It Like One.
Here's the mistake I keep seeing. Engineers preparing for the Claude Certified Architect – Foundations (CCA-F) spend two weeks memorizing how to write MCP tool definitions, then walk into the exam and get destroyed on the scenario questions. Why? Because they learned the syntax of MCP without understanding its contract.
MCP isn't just a way to give Claude access to functions. It's a stateful, bidirectional transport layer for agentic intelligence. The distinction matters — a lot — and the CCA-F tests it hard.
The Three Primitives: Know When to Use Each One
The exam's "Tool Design & MCP Integration" domain (18% of your score) is entirely structured around choosing the right primitive. Get this wrong and you'll be picking the wrong answer on scenario after scenario without knowing why.
Tools — "The Doers"
Tools are executable functions with side effects. They write to databases, call external APIs, send emails. The exam trap: candidates reach for a Tool whenever they need data. That's wrong. Tools imply action, and every action costs reasoning tokens and introduces failure modes.
Resources — "The Knowers"
Resources are read-only data sources — logs, database snapshots, documentation. If your agent needs to look something up, that's a Resource, not a Tool. Using a Tool for a read-only operation is the architectural equivalent of calling a POST endpoint to do a GET. It works, but it's wrong, and the exam will mark you down for it.
Sampling — "The Collaborators"
This is the 2026 addition that most study guides haven't caught up with yet. Sampling flips the communication direction. Instead of the LLM calling your server, your server calls the LLM. The canonical use case: your SQL tool intercepts a destructive query, samples Claude back with "this DELETE has no WHERE clause — should I proceed?", and waits for confirmation before executing.
If you've never thought about your MCP server as a peer collaborator rather than a passive endpoint, this will be a conceptual shift. But once it clicks, a whole class of exam questions becomes obvious.
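To make the destructive-query scenario concrete, here is a minimal sketch of the server side of that flow. The guard function and message wording are illustrative assumptions; the `messages` / `maxTokens` shape follows the MCP `sampling/createMessage` request, but the transport plumbing that actually sends it is elided.

```typescript
// Hypothetical guard for the scenario above: detect a DELETE with no WHERE clause.
function isUnguardedDelete(sql: string): boolean {
  const normalized = sql.trim().toUpperCase();
  return normalized.startsWith("DELETE") && !normalized.includes("WHERE");
}

// Sketch of the params the server would send back to the client via the MCP
// "sampling/createMessage" method, asking Claude to confirm before executing.
function buildConfirmationRequest(sql: string) {
  return {
    messages: [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `This DELETE has no WHERE clause and will remove every row:\n\n${sql}\n\nShould I proceed? Answer "yes" or "no".`,
        },
      },
    ],
    maxTokens: 10, // we only need a yes/no back
  };
}
```

The key design point: the server refuses to act on its own judgment and instead hands the decision back to the model, which is exactly the "peer collaborator" posture Sampling enables.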
The 18-Tool Ceiling Nobody Talks About
I want to flag something that the official documentation glosses over and that I've seen sink a lot of candidates in practice environments.
Claude's reasoning performance degrades when it's presented with more than ~18 tools in a single context. This isn't a soft preference or a stylistic concern — it's a measurable drop in tool-selection accuracy. The model starts hedging. It picks the wrong tool. It loops.
The exam leans into this with scenario questions that describe an "Enterprise AI platform" with 40+ capabilities and ask you to architect it. The wrong answer is always some variation of "put all the tools in one MCP server." The right answer is always a Router-Subagent pattern.
The structure: a primary orchestrator agent handles intent classification and task routing. It spins up domain-specific subagents — a db-agent that only sees SQL tools, a comms-agent that only sees email and Slack tools, a billing-agent that only sees payment APIs. Each subagent operates with a clean, minimal tool set. No cognitive overload.
This pattern shows up in multiple exam domains. Learn it once, apply it everywhere.
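The routing layer itself can be very small. Here is a minimal sketch of the orchestrator's classification step, assuming naive keyword matching; the subagent names, tool lists, and keywords are all illustrative, and a production router would use the model itself for intent classification.

```typescript
// Illustrative subagent registry: each subagent sees only its own tools.
const subagents: Record<string, { tools: string[]; keywords: string[] }> = {
  "db-agent":      { tools: ["query_sql", "run_migration"],  keywords: ["sql", "table", "query", "database"] },
  "comms-agent":   { tools: ["send_email", "post_slack"],    keywords: ["email", "slack", "notify"] },
  "billing-agent": { tools: ["charge_card", "issue_refund"], keywords: ["invoice", "refund", "payment", "charge"] },
};

// The orchestrator classifies intent, then hands the task to a subagent
// with a clean, minimal tool set rather than exposing the full catalog.
function route(task: string): { agent: string; tools: string[] } | null {
  const lower = task.toLowerCase();
  for (const [agent, def] of Object.entries(subagents)) {
    if (def.keywords.some((k) => lower.includes(k))) {
      return { agent, tools: def.tools };
    }
  }
  return null; // no match: the orchestrator handles it or asks for clarification
}
```

Note that no subagent ever sees more than a handful of tools, which is the whole point of the pattern: each context stays well under the degradation threshold.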
Structured Error Handling: The Part That Separates Engineers from Beginners
On the exam, a scenario gives you a database timeout in the middle of an agentic workflow. Four answer choices. Most candidates pick the one that throws a generic error and assumes the retry is someone else's problem. That answer is always wrong.
The CCA-F blueprint requires you to return structured domain errors that tell Claude exactly how to respond. The isError flag isn't a boolean checkbox — it's a semantic signal that drives the agent's next decision.
Three error categories the exam will test you on:
- Transient errors (rate limits, timeouts): isRetryable: true. Claude waits and retries with backoff.
- Validation errors (bad arguments, malformed input): tell Claude to fix its arguments and resubmit.
- Permanent errors (permission denied, resource not found): isRetryable: false. Claude escalates or stops.
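The three categories above map naturally onto a small classification function. This is a hedged sketch: the error codes are illustrative, and the errorCategory / isRetryable fields follow this article's convention rather than a formal MCP spec field.

```typescript
type ErrorCategory = "transient" | "validation" | "permanent";

interface StructuredError {
  errorCategory: ErrorCategory;
  isRetryable: boolean;
  hint: string; // tells Claude what to do next
}

// Map raw error codes (illustrative names) to the structured shape above.
function classifyError(code: string): StructuredError {
  switch (code) {
    case "ETIMEOUT":
    case "RATE_LIMITED":
      return { errorCategory: "transient", isRetryable: true, hint: "Wait and retry with exponential backoff." };
    case "EVALIDATION":
      return { errorCategory: "validation", isRetryable: false, hint: "Fix the tool arguments and resubmit." };
    default:
      // Permission denied, resource not found, anything unrecognized.
      return { errorCategory: "permanent", isRetryable: false, hint: "Escalate to a human or stop." };
  }
}
```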
If your error handling doesn't give Claude enough information to make that decision, your agent will loop, hallucinate a recovery strategy, or silently fail. The exam tests whether you understand this distinction at the architecture level.
What Correct TypeScript Looks Like
Here's a production-ready MCP tool registration pattern using Zod for schema-first validation. Note the structured error response — this is exactly what the CCA-F graders are looking for.
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "inventory-service", version: "2.1.0" });

server.registerTool(
  "query_inventory",
  {
    description: "Look up stock levels for a product SKU",
    inputSchema: {
      sku: z.string().describe("Unique product SKU identifier"),
      warehouse: z.enum(["US-EAST", "EU-WEST"]).optional()
    }
  },
  async ({ sku, warehouse }) => {
    try {
      const data = await db.fetch(sku, warehouse);
      return {
        content: [{ type: "text", text: JSON.stringify(data) }],
        isError: false
      };
    } catch (e: any) {
      // Structured error — Claude knows exactly what happened and what to do next
      return {
        content: [{ type: "text", text: e.message }],
        isError: true,
        error: {
          errorCategory: e.code === "ETIMEOUT" ? "transient" : "permanent",
          isRetryable: e.code === "ETIMEOUT"
        }
      };
    }
  }
);
```
Two things worth paying attention to. First, Zod validation on inputs — the exam will have a scenario about an agent passing malformed arguments to a tool and ask you which layer should catch it. Schema-first validation on the server is the correct answer. Second, the errorCategory field in the catch block. This is what lets the orchestrating agent make a decision rather than just crashing.
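For the consuming side, here is an illustrative sketch of how an orchestrator might turn that errorCategory field into its next action. The function and type names are assumptions for illustration, not SDK API.

```typescript
type NextAction = "continue" | "retry" | "fix_arguments" | "escalate";

interface ToolResult {
  isError: boolean;
  error?: { errorCategory: string; isRetryable: boolean };
}

// Decision logic driven entirely by the structured error, never by parsing
// free-text messages: this is what "enough signal to recover" means in practice.
function decideNextAction(result: ToolResult): NextAction {
  if (!result.isError) return "continue";
  if (result.error?.isRetryable) return "retry";
  if (result.error?.errorCategory === "validation") return "fix_arguments";
  return "escalate";
}
```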
The Bottom Line on MCP
MCP is still young. The documentation is good but assumes you're already thinking in agentic systems. Most tutorials are written by people who've used it for toy demos, not people who've run into the edge cases at scale.
The CCA-F rewards you for understanding the why behind each primitive, not just the how. Tools vs. Resources is a judgment call about side effects. The 18-tool ceiling is about cognitive load on the model. Structured errors are about giving your orchestrator enough signal to recover autonomously.
Get these three right and you're in good shape for 18% of your exam score.
Test Yourself Before the Real Thing
Our Claude Architect simulator includes a dedicated MCP domain track — scenario questions specifically built around Tool vs. Resource selection, the Router-Subagent pattern, and structured error handling under real exam time pressure.
Try the Free Claude Architect Simulator →
If you're scoring above 75% on domain 2 in our simulator, you're ready. Below that, come back and work the error handling scenarios until the structured response pattern is automatic.