If you have ever defined a YANG model for a Cisco service, or written an Ansible module with a typed argument spec, you already know 80% of what makes the Model Context Protocol work. MCP is a wire protocol that lets an LLM call your code through named, schema-validated tools. That is it. The protocol is the boring part. The interesting part is what you do once your code is reachable.
This post is a guided tour of a working Python MCP server I wrote called mtg-mcp. It exposes two tools to a Claude-compatible client: a `health.check` and an `mtg.card_lookup` that calls the Scryfall API for Magic: The Gathering card data. The MTG domain is incidental. The walk-through pattern is what transfers — replace card lookup with inventory lookup, customer record, internal documentation search, or a Netbox query, and the shape stays identical.
By the end of this post, you will have cloned the repo, run a working server with two tools, watched a Claude desktop client call those tools, and seen the Day-1 fake response swap out for a real Scryfall API call without changing the tool’s contract. Plan on about 30 minutes if you are following along and have Python installed.
What is MCP, and why a network engineer should care
MCP is the wire protocol Anthropic shipped in late 2024 to standardize how LLM clients call external tools. Each tool has a name, a typed input schema, an implementation, and a return value. The client (Claude Desktop, Cursor, Continue, an SDK-built agent) speaks the protocol; the server (your code) speaks the protocol. Same contract on both sides, so any client can talk to any server.
If you have spent time with OpenConfig YANG models, that sentence should sound familiar. OpenConfig defined a vendor-neutral schema so any controller could push to any vendor’s box. MCP is the same instinct, applied to LLM tooling. Anthropic owned the spec at first; today the protocol is open and there are MCP-compatible clients across the major IDEs and chat surfaces.
The reason to care now is timing. The protocol is stable enough to build against, the SDKs are out of beta, and the early-adopter window is open. If you ship one MCP server for your domain, it works in every MCP-compatible client your team uses. That is the payoff. The same way one Ansible module covers every Ansible playbook in your org, one MCP server covers every MCP-aware client.
What we are building
Two tools, exposed by one Python process:
- `health.check` — returns the server version, current UTC time, and a tiny “learning streak” counter persisted to a JSON file. The simplest possible tool. No network calls, no external deps. The point of `health.check` is to prove the wiring works before you debug anything harder.
- `mtg.card_lookup` — takes a card name and returns the card’s type line, oracle text, mana value, and image URL. We start with a fake hardcoded response (Day 1), then swap in a real call to the Scryfall fuzzy-named endpoint (Day 2). The schema stays identical across the swap.
That fake-then-real split is the most underrated pattern in this kind of work. You write the tool’s contract first, you wire it up to the client with a hardcoded response, you confirm the LLM can find and call the tool, and only then do you worry about whether your HTTP client is configured correctly. If something is broken at the protocol layer, you find out when the surface area is one return statement, not 60 lines of API plumbing.
The repo is public at github.com/jabelk/mtg-mcp. Everything in the post is pinned to commit 720aad9, so the code blocks below match what you will see when you clone.
Prerequisites
You need:
- Python 3.11 or newer.
- An MCP-compatible client. Claude Desktop is the easiest if you are on macOS or Windows. On Linux, or on a corporate machine where Claude Desktop is blocked, Cursor, Continue, and the `mcp` CLI all work too.
- A terminal you trust to run the install commands cleanly. The IDE-integrated terminal is fine; just confirm your Python and `pip` resolve where you expect.
Clone the repo and set up a fresh virtualenv:
```shell
git clone https://github.com/jabelk/mtg-mcp.git
cd mtg-mcp
git checkout 720aad9d4760ecfc5eba8fcb0b15db7a4f88fabd
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install --upgrade pip
pip install -r requirements.txt
```
The `requirements.txt` lists three packages (versions are not pinned in this snapshot of the repo, so you will pick up whatever the SDK has shipped recently):

```
mcp
uvloop
httpx
```

`mcp` is the official Python SDK. `uvloop` is a faster event loop (skip it on Windows; the code already falls back gracefully). `httpx` is for the Day-2 Scryfall call.
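That graceful fallback is worth seeing; a minimal sketch of the standard guard (the repo's exact wording may differ):

```python
import asyncio

# Prefer uvloop's faster event loop where it is available (Linux/macOS);
# fall back silently to the stdlib loop on Windows or when uvloop is
# not installed at all.
try:
    import uvloop
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    pass  # stdlib asyncio loop works everywhere, just slower under load
```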
Day 1: the minimum viable server with fake responses
Two files do all the work: src/server.py registers the tools with the SDK, and src/tools/card_lookup.py is where the implementation lives.
The server file is short. The interesting part is the two `@server.tool()` decorators, each of which exposes a Python coroutine to the LLM as a callable tool:
```python
# src/server.py — Day 1 sketch (full file in the repo)
import asyncio
import json
import logging
from datetime import datetime, timezone
from pathlib import Path

from mcp.server import Server
from mcp.types import TextContent

from tools.card_lookup import lookup as mtg_lookup, FakeCardLookupError

APP_VERSION = "0.1.0"

server = Server("mcp-mtg-week1")


@server.tool()
async def health_check() -> dict:
    """Return server version, current time (UTC), and learning streak days."""
    now = datetime.now(timezone.utc).isoformat()
    return {"version": APP_VERSION, "now": now}


@server.tool()
async def mtg_card_lookup(name: str) -> dict:
    """Fuzzy-lookup a Magic card by name. Day-1: fake response to prove wiring."""
    try:
        return await mtg_lookup(name=name)
    except FakeCardLookupError as e:
        return {
            "error": "not_found",
            "message": str(e),
            "suggestion": "Try a different name or check spelling.",
        }


async def amain() -> None:
    await server.run_stdio_async()


if __name__ == "__main__":
    asyncio.run(amain())
```
Every tool is a Python async function. Its type-annotated arguments become the input schema the client sees. Its docstring becomes the description the LLM uses to decide when to call it. Its return value is what the client gets back. There is no XML, no protobuf, no YAML config. The Python signature is the contract.
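To make “the signature is the contract” concrete, here is a hand-written sketch of roughly what a client sees for `mtg_card_lookup` in the protocol's tool listing. The SDK generates this from the annotations and docstring; the exact formatting it emits may differ, but the mapping is the point — annotation becomes JSON Schema type, docstring becomes description:

```python
# Hand-written sketch of an MCP tool descriptor (not captured from a live
# session): name + description + input schema, derived from the Python
# signature of mtg_card_lookup.
tool_descriptor = {
    "name": "mtg_card_lookup",
    "description": "Fuzzy-lookup a Magic card by name. "
                   "Day-1: fake response to prove wiring.",
    "inputSchema": {
        "type": "object",
        "properties": {"name": {"type": "string"}},  # from `name: str`
        "required": ["name"],                        # no default => required
    },
}
```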
Here is the Day-1 fake card_lookup:
```python
# src/tools/card_lookup.py — Day 1 (fake)
import asyncio
from dataclasses import dataclass


@dataclass
class FakeCardLookupError(Exception):
    query: str

    def __str__(self) -> str:
        return f"No fake match for query: {self.query}"


async def lookup(name: str) -> dict:
    """Day-1 fake. Matches 'atraxa' (case-insensitive); raises otherwise."""
    if not isinstance(name, str) or not name.strip():
        raise FakeCardLookupError(query=name)
    if "atraxa" in name.lower():
        return {
            "name": "Atraxa, Grand Unifier",
            "type_line": "Legendary Creature — Phyrexian Angel",
            "oracle_text": "Flying, vigilance, deathtouch, lifelink ...",
            "mana_value": 7,
            "image_small": "https://cards.scryfall.io/small/front/5/9/59....jpg",
            "rulings_uri": "https://api.scryfall.com/cards/xxxx/rulings",
        }
    raise FakeCardLookupError(query=name)
```
Six fields in the response: `name`, `type_line`, `oracle_text`, `mana_value`, `image_small`, `rulings_uri`. That set is the schema. The fake implementation hardcodes one card, but the shape it returns is the same shape the real Scryfall implementation will return tomorrow. That is the contract.
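One cheap way to keep that contract honest across the Day-2 swap is a shape check you can point at either implementation. A self-contained sketch (the `fake_lookup` stand-in below abbreviates the real Day-1 values):

```python
import asyncio

# The six-field contract both implementations must satisfy.
EXPECTED_KEYS = {"name", "type_line", "oracle_text",
                 "mana_value", "image_small", "rulings_uri"}

async def check_contract(lookup) -> None:
    """Call any lookup() implementation and assert it honors the shape."""
    card = await lookup(name="Atraxa, Grand Unifier")
    missing_or_extra = set(card) ^ EXPECTED_KEYS
    assert not missing_or_extra, f"contract drift: {missing_or_extra}"
    assert isinstance(card["mana_value"], (int, float))

# A stand-in that mimics the Day-1 fake (values abbreviated):
async def fake_lookup(name: str) -> dict:
    return {"name": "Atraxa, Grand Unifier", "type_line": "...",
            "oracle_text": "...", "mana_value": 7,
            "image_small": "...", "rulings_uri": "..."}

asyncio.run(check_contract(fake_lookup))
```

Run the same `check_contract` against the real implementation on Day 2 and any drift in the six fields fails loudly instead of silently confusing the LLM.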
The demo below shows the contract on top and three sample responses underneath. Same shape, different implementations.

mtg.card_lookup — same response schema, three different implementations

The response schema, identical regardless of fake vs. real implementation:

```
name: string         # canonical card name
type_line: string    # e.g. "Legendary Creature — Phyrexian Angel"
oracle_text: string  # rules text
mana_value: integer  # converted mana cost
image_small: string  # URL to small card image
rulings_uri: string  # Scryfall rulings endpoint URL
```

Day-1 implementation: hardcoded match for any input containing 'atraxa'. The body is a literal dict; no network call. The contract is satisfied by construction.

```json
{
  "name": "Atraxa, Grand Unifier",
  "type_line": "Legendary Creature — Phyrexian Angel",
  "oracle_text": "Flying, vigilance, deathtouch, lifelink ...",
  "mana_value": 7,
  "image_small": "https://cards.scryfall.io/small/front/5/9/59....jpg",
  "rulings_uri": "https://api.scryfall.com/cards/xxxx/rulings"
}
```

Day-2 implementation: the same function calls Scryfall's fuzzy-named endpoint. The response is mapped from Scryfall's payload into the contract above. Note the contract did not change.

```json
{
  "name": "Black Lotus",
  "type_line": "Artifact",
  "oracle_text": "{T}, Sacrifice Black Lotus: Add three mana of any one color.",
  "mana_value": 0,
  "image_small": "https://cards.scryfall.io/small/front/b/d/bd8fa3...jpg",
  "rulings_uri": "https://api.scryfall.com/cards/bd8fa3.../rulings"
}
```

Same Day-2 implementation, called with a name Scryfall cannot match: it returns a friendly error envelope rather than crashing the LLM call, so the client sees a structured failure it can react to.

```json
{
  "error": "not_found",
  "message": "Card not found. Check spelling or try a different name.",
  "query": "asdfqwerty"
}
```
Run the server once to make sure the imports resolve:
```shell
python src/server.py
```
It will block on stdio waiting for an MCP client to talk to it. That is correct. Hit Ctrl+C to stop. We will wire it up to a real client next.
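If you want proof of life without a full client, you can hand-feed the server the opening message of the handshake. MCP frames messages as JSON-RPC 2.0; the sketch below only builds the `initialize` request (the `protocolVersion` string tracks a spec revision, so check which one your SDK expects, and `make_init.py` is a hypothetical filename for this snippet):

```python
import json

# Minimal MCP handshake opener: a JSON-RPC 2.0 "initialize" request.
# Pipe it into the server's stdin, e.g.:
#   python make_init.py | python src/server.py
init_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # spec revision; match your SDK
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.0.1"},
    },
}
print(json.dumps(init_request))
```

A healthy server should answer with its own capabilities and server info; silence or a stack trace points at an import or transport problem.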
Wiring it to Claude Desktop
Claude Desktop reads its MCP server list from a JSON config file. The location depends on your OS:
| OS | Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
| Linux | not officially supported; try Continue or Cursor instead |
Open that file (create it if it does not exist) and add an mcpServers entry pointing at the Python interpreter inside your virtualenv:
```json
{
  "mcpServers": {
    "mtg": {
      "command": "/absolute/path/to/mtg-mcp/.venv/bin/python",
      "args": ["/absolute/path/to/mtg-mcp/src/server.py"]
    }
  }
}
```
Use absolute paths. Claude Desktop launches the server itself, so a ~ or a relative path will not resolve. After saving, fully quit Claude Desktop (not just close the window) and reopen it.
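Both failure modes, relative paths and paths that do not exist, are easy to lint before you restart the app. A small checker, assuming the config shape shown above (`lint_mcp_config` is a helper written for this post, not part of the repo):

```python
import json
from pathlib import Path

def lint_mcp_config(config_path: str) -> list[str]:
    """Flag relative or missing command/args paths in a Claude Desktop config."""
    problems = []
    cfg = json.loads(Path(config_path).expanduser().read_text())
    for name, entry in cfg.get("mcpServers", {}).items():
        paths = [entry.get("command", "")] + entry.get("args", [])
        for p in paths:
            # Accept POSIX absolute ("/...") or Windows drive ("C:\...") paths.
            if not p.startswith("/") and ":" not in p[:3]:
                problems.append(f"{name}: not absolute: {p}")
            elif not Path(p).exists():
                problems.append(f"{name}: does not exist: {p}")
    return problems

# Example (macOS path from the table above):
# print(lint_mcp_config(
#     "~/Library/Application Support/Claude/claude_desktop_config.json"))
```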
The first time you ask Claude something that nudges it toward a tool — “look up the Magic card Atraxa, Grand Unifier” — you should see Claude announce it is calling mtg_card_lookup and return the JSON response inline. If you ask for the server’s health, it should call health_check and tell you the version. If neither happens, scroll Claude Desktop’s developer log; an unresolved command path is the most common failure.
Day 2: real Scryfall integration
The Day-1 fake proves the wiring. Day 2 swaps in real data without touching the tool’s contract. Here is the actual card_lookup.py from the pinned commit:
```python
# src/tools/card_lookup.py — Day 2 (real Scryfall)
import httpx
from dataclasses import dataclass
from typing import Literal

SCRYFALL_API = "https://api.scryfall.com/cards/named"


@dataclass
class CardLookupError(Exception):
    error_type: Literal["not_found", "service_unavailable"]
    query: str
    message: str

    def __str__(self) -> str:
        return self.message


async def lookup(name: str) -> dict:
    if not isinstance(name, str) or not name.strip():
        raise CardLookupError(
            error_type="not_found",
            query=str(name),
            message="Card name cannot be empty",
        )
    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            response = await client.get(SCRYFALL_API, params={"fuzzy": name.strip()})
    except httpx.RequestError as e:
        raise CardLookupError(
            error_type="service_unavailable",
            query=name,
            message=f"Could not reach Scryfall API: {e}",
        )
    if response.status_code == 404:
        raise CardLookupError(
            error_type="not_found",
            query=name,
            message="Card not found. Check spelling or try a different name.",
        )
    if response.status_code >= 400:
        raise CardLookupError(
            error_type="service_unavailable",
            query=name,
            message=f"Scryfall API error: {response.status_code}",
        )
    card = response.json()
    # Double-faced cards (werewolves, MDFCs) carry oracle_text on each face.
    # Use the front face only.
    if "card_faces" in card:
        face = card["card_faces"][0]
        oracle_text = face.get("oracle_text", "")
        type_line = face.get("type_line", card.get("type_line", ""))
        image_small = face.get("image_uris", {}).get("small", "")
    else:
        oracle_text = card.get("oracle_text", "")
        type_line = card.get("type_line", "")
        image_small = card.get("image_uris", {}).get("small", "")
    return {
        "name": card.get("name", ""),
        "type_line": type_line,
        "oracle_text": oracle_text,
        "mana_value": card.get("cmc", 0),
        "image_small": image_small,
        "rulings_uri": card.get("rulings_uri", ""),
    }
```
Diff vs. Day 1: about 40 lines of new error handling and double-faced-card logic, but the function signature is identical and the return shape is identical. The only thing the caller in server.py needed was to catch the renamed `CardLookupError` instead of `FakeCardLookupError`. The contract held.
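The `card_faces` branch is the piece most worth a unit check, since single-faced test cards never exercise it. Here is a standalone sketch of the same front-face mapping, fed a minimal payload in Scryfall's shape (the field names match the real API; the card itself is invented):

```python
def front_face_fields(card: dict) -> dict:
    """Mirror of the Day-2 mapping: prefer the front face on double-faced cards."""
    if "card_faces" in card:
        face = card["card_faces"][0]
        return {
            "oracle_text": face.get("oracle_text", ""),
            "type_line": face.get("type_line", card.get("type_line", "")),
            "image_small": face.get("image_uris", {}).get("small", ""),
        }
    return {
        "oracle_text": card.get("oracle_text", ""),
        "type_line": card.get("type_line", ""),
        "image_small": card.get("image_uris", {}).get("small", ""),
    }

# Minimal double-faced payload in Scryfall's shape (values invented):
mdfc = {
    "name": "Example // Card",
    "type_line": "Creature // Land",
    "card_faces": [
        {"type_line": "Creature", "oracle_text": "Front text.",
         "image_uris": {"small": "https://example.invalid/front.jpg"}},
        {"type_line": "Land", "oracle_text": "Back text."},
    ],
}
assert front_face_fields(mdfc)["oracle_text"] == "Front text."
assert front_face_fields(mdfc)["type_line"] == "Creature"
```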
The full spec-driven write-up of this swap lives in docs/01-spec-scryfall-integration.md in the repo. It is a worked example of writing the success criteria first, the implementation second.
Restart Claude Desktop to pick up the new code, ask it for any real card by name (“look up Lightning Bolt”), and you should see real Scryfall data come back in the same response shape the fake one used.
What you have actually built
You wrote a typed, schema-validated, named-tool-with-implementation that any MCP-compatible LLM client can call. That is everything an MCP server is. It is not a framework. It is not a vendor-specific SDK. It is a Python module with two decorated functions and a stdio loop.
If you have shipped an Ansible module, you have shipped this pattern. The argument spec is the schema, the module body is the implementation, the return dict is the contract. MCP is the same shape with an LLM as the orchestrator instead of ansible-playbook.
If you read What Jinja2 Templates Taught Me About AI Agents, the same template-first instinct applies inside an MCP tool’s body. The tool’s input schema is the contract; the body is the rendering. Validate before you commit, render from a reusable structure, do not let the LLM generate things that should be deterministic.
Where to take this from here
Three obvious next moves once your first server is running:
- Replace the MTG domain with yours. Card lookup becomes inventory lookup, customer record, ticket query, internal docs search, a Netbox device fetch, an NSO service template render. The protocol does not care; rename the tool, change the body, keep the schema-shape discipline.
- Move beyond stdio. The SDK supports HTTP and Server-Sent Events transports. Once you outgrow your laptop, host the server on Cloudflare Workers, Railway, or Fly and let a hosted client reach it over HTTPS. Authentication patterns are still settling — start with a static bearer token and revisit when the SDK lands a recommendation.
- Stack multiple servers. Most MCP clients let you register more than one server. One for your inventory, one for your ticketing system, one for your docs. The LLM picks the right tool by name and schema. This is how cross-domain agent workflows actually compose.
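In Claude Desktop, stacking servers is just additional entries in the same `mcpServers` map; the `inventory` server below is a placeholder for whatever you build next:

```json
{
  "mcpServers": {
    "mtg": {
      "command": "/absolute/path/to/mtg-mcp/.venv/bin/python",
      "args": ["/absolute/path/to/mtg-mcp/src/server.py"]
    },
    "inventory": {
      "command": "/absolute/path/to/inventory-mcp/.venv/bin/python",
      "args": ["/absolute/path/to/inventory-mcp/src/server.py"]
    }
  }
}
```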
If you want the career-arc version of this — what it looks like to go from Ansible playbooks to MCP-driven agent workflows for a network team — that is the next post in this series, and it will land soon.
FAQ
Why MCP and not just direct API calls from the LLM? Standardization. Any MCP-compatible client (Claude Desktop, Cursor, Continue, the Anthropic Agent SDK, future agents) can call your tools without per-client integration. Per-vendor function calling locks you to one vendor’s surface. MCP is “write once, call from anywhere.”
Do I need to host this myself? For a hobby project or a single team, run it locally over stdio and you are done. For shared use, you will want it hosted. Anthropic, Cloudflare, and Railway have all been publishing MCP server hosting patterns through 2025–2026; pick whichever matches your existing stack rather than introducing new infrastructure.
What languages can I write MCP servers in? Python and TypeScript have first-class SDK support today. Go, Rust, and others are at varying stages of community SDKs. If your team already has a preferred language and it is not Python or TypeScript, check the SDK landscape before committing — the protocol itself is language-agnostic, but mature tooling is not yet universal.
What is the difference between MCP and OpenAI’s function calling or Claude’s tool use? Function calling and tool use are model-side capabilities — the LLM emits a structured request to call a tool, and your application code is responsible for executing it and returning the result. MCP is the wire protocol that standardizes how the client and the server exchange those requests and responses. You can build an MCP server that is consumed by a Claude tool-use loop, or by a third-party client that has never seen your code. The decoupling is the whole point.
Can I integrate this with Cisco NSO, Netbox, or other network-automation tooling? Yes. An MCP server is just a Python (or TypeScript) process. It can wrap an NSO REST API client, a Netbox query, a Catalyst Center call, or a junos-eznc session. The same fake-then-real pedagogy applies: stub the response, prove the LLM can call it, then drop in the real backend. The Ansible-to-MCP post coming next walks one of these end-to-end.
If you are scoping AI training for an engineering team — or trying to figure out where MCP fits in your team’s roadmap — book a free 20-minute call. First conversation is free, and I will tell you honestly if MCP is the right fit for what you are trying to do, or if a smaller integration would land the same value with less protocol overhead. More on what that engagement looks like at the training page.