Getting Started with Model Context Protocol

LLM, GenAI, MCP — the acronyms keep coming, and they’re hard to keep up with. But here’s the thing — it doesn’t have to be that confusing, and you can actually get hands-on with these technologies faster than you think.

AI is reshaping how we interact with the world around us. It’s showing up in the tools developers use every day — from code editors to CI pipelines to the CLI. With that kind of reach, understanding the basics isn’t just for tech enthusiasts anymore — it’s becoming essential for everyone.

So let’s start from the basics. By the end of this article, you’ll understand what MCP is, how it connects to AI agents like Claude, and how to set up your own environment to start experimenting. No computer science degree required.

Let’s build something.

What are GenAI and LLMs?

Generative AI — or GenAI for short — is a branch of Artificial Intelligence. Unlike older AI systems that simply analyzed data or made predictions, GenAI can actually create new content — think text, images, code, and even audio — by learning patterns from massive amounts of existing data.

At the heart of most GenAI tools you use today are Large Language Models, or LLMs. These are a type of AI model that has been trained on enormous amounts of text, allowing them to understand and generate human language in a surprisingly natural way. The AI we’ll be working with in this article — Claude, built by Anthropic — is one such LLM.

Think of an LLM like an incredibly well-read assistant. It hasn’t experienced the world, but it has processed so much written human knowledge that it can hold a conversation, answer questions, write code, and much more.

What is MCP?

MCP stands for Model Context Protocol. It’s an open standard, created by Anthropic, that defines how AI models like Claude communicate with external tools and data sources.

Here’s the problem it solves: LLMs are powerful, but on their own they’re isolated. By default, Claude doesn’t know what’s on your filesystem, can’t query your database, and can’t call your internal APIs. MCP is the bridge that changes that.

Anthropic describes MCP as USB-C for AI. Before USB-C, every device had its own proprietary cable — a mess of incompatible connectors. USB-C gave us one universal standard, and suddenly any device could plug into any peripheral. MCP does the same thing for AI. Instead of every tool building a custom, one-off integration with every AI model, MCP gives everyone a shared language. Build an MCP Server once, and any MCP-compatible client can connect to it.

The architecture follows a simple client-server model:

  • An MCP Server exposes capabilities — tools the AI can call, data it can read, or predefined prompts
  • An MCP Client connects to those servers and hands those capabilities to the LLM
  • The LLM decides when and how to use them based on context

graph LR
    A([You]) -->|sends a message| B[MCP Client\ne.g. Claude]
    B <-->|MCP Protocol| C[MCP Server]
    C --> D[Tool: read_file]
    C --> E[Tool: query_db]
    C --> F[Tool: call_api]

How MCP connects to Claude

Claude acts as the MCP Client. When you configure an MCP Server, Claude can see the tools it exposes and will call them automatically when it determines they’re relevant to what you’re asking.

For example, imagine an MCP Server that can read files from your local machine. You ask Claude “summarize my project README.” Claude recognizes it needs to read a file, calls the MCP tool, gets the content back, and incorporates it into its response — all in one turn.
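Under the hood, that tool call travels as a JSON-RPC 2.0 message between client and server. As a rough sketch (the field values here are illustrative, not captured from a real session), the client's request for the file-reading example might look something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "README.md" }
  }
}
```

You never write these messages yourself — the client and server libraries handle the wire format — but it's useful to know there's nothing magic underneath.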

This turns Claude from a text-in, text-out model into something that can actually act on your environment.

Project: Build a simple MCP Server

We’re going to build a weather MCP server that lets Claude look up current conditions for any city. We’ll use Open-Meteo — a free weather API that requires no account or API key.

Prerequisites

  • Python 3.10+
  • uv installed
  • Claude Desktop installed

Create the directory and install the dependencies:

uv init weather-mcp
cd weather-mcp
uv add fastmcp httpx

The Server

Create a file called weather_server.py:

import httpx
from fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Step 1: Convert city name to coordinates
    geo = httpx.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": city, "count": 1}
    ).json()

    if not geo.get("results"):
        return f"Could not find location: {city}"

    location = geo["results"][0]
    lat, lon = location["latitude"], location["longitude"]
    name, country = location["name"], location["country"]

    # Step 2: Fetch current weather
    weather = httpx.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": lat, "longitude": lon, "current_weather": True}
    ).json()

    current = weather["current_weather"]
    temp_c = current["temperature"]
    temp_f = round(temp_c * 9 / 5 + 32, 1)
    return (
        f"Current weather in {name}, {country}: "
        f"{temp_c}°C ({temp_f}°F), wind {current['windspeed']} km/h"
    )

if __name__ == "__main__":
    mcp.run()

That’s the entire server. The @mcp.tool() decorator registers get_weather as a tool that any MCP client can discover and call. FastMCP handles all the protocol wiring — you just write a normal Python function.

Testing Locally

Before connecting to Claude Desktop, you can test your server using the MCP Inspector, a web UI that FastMCP can launch for you:

uv run fastmcp dev weather_server.py

This spins up a local interface where you can call your tools directly, inspect inputs and outputs, and confirm everything works before touching any Claude configuration.
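Because MCP tools are plain Python functions, the parts that don't touch the network are also easy to unit-test. As a sketch, you could factor the response formatting out of `get_weather` into its own helper (a hypothetical refactor, not part of the server above) and test it directly:

```python
def format_weather(name: str, country: str, temp_c: float, windspeed: float) -> str:
    """Format a weather summary line; pure function, no network needed."""
    temp_f = round(temp_c * 9 / 5 + 32, 1)
    return (
        f"Current weather in {name}, {country}: "
        f"{temp_c}°C ({temp_f}°F), wind {windspeed} km/h"
    )

print(format_weather("Tokyo", "Japan", 20.0, 12.3))
# → Current weather in Tokyo, Japan: 20.0°C (68.0°F), wind 12.3 km/h
```

The HTTP calls then become a thin layer around testable logic — a pattern that pays off as your tools grow.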

Connect it to Claude

Claude Desktop reads its MCP configuration from a JSON file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Open that file and add your server:

{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": ["run", "--directory", "/absolute/path/to/weather-mcp", "weather_server.py"]
    }
  }
}

Restart Claude Desktop. You should see a hammer icon in the chat interface indicating tools are available. Now ask Claude:

“What’s the weather like in Tokyo?”

Claude will call your get_weather tool, get the response back from Open-Meteo, and answer you — without you having to paste any API output yourself.

The Future of MCP

The weather server you just built is a toy, but the pattern scales surprisingly far.

In production environments, MCP opens up some genuinely powerful workflows. Imagine asking Claude “how many unresolved incidents do we have right now?” and having it query your database directly. Or “what’s the state of our ECS services in us-east-1?” and having it call the AWS SDK on your behalf. Or “did the last Terraform apply succeed?” and having it check your state file. These aren’t hypothetical — they’re just more @mcp.tool() decorators on functions you probably already have.

@mcp.tool()
def get_rds_instance_status(instance_id: str) -> str:
    """Check the status of an RDS instance."""
    ...

@mcp.tool()
def get_terraform_state_summary(stack: str) -> str:
    """Return the resource summary for a Terraform stack."""
    ...

That’s the point. If you have an existing function that fetches data from somewhere, wrapping it in MCP is a one-line change.

Warning: Don’t overload Claude with MCP Servers. Every server you add contributes its tool definitions to Claude’s context window. Add too many and you’re consuming tokens before the conversation even starts — leaving less room for your actual work. Worse, a large pool of loosely related tools increases the chance that Claude picks the wrong one for the job. Start with a small, focused set of servers relevant to what you’re doing, and add more only as needed.
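A quick back-of-the-envelope calculation makes the cost concrete. The numbers below are assumptions — real tool definitions vary in size — but the shape of the math is the point:

```python
# Assumed figures: actual token costs depend on schema size and descriptions.
TOKENS_PER_TOOL = 400   # rough guess for one tool's schema + description
tools_per_server = 5
servers = 8

overhead = TOKENS_PER_TOOL * tools_per_server * servers
print(overhead)  # → 16000 tokens spent before you type a single word
```

Eight modest servers can eat a meaningful slice of the context window before the conversation even begins.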

A word of caution on mutations. At this stage, it’s wise to focus your MCP tools on read operations — fetching data, checking status, summarizing state. Write operations (creating resources, deleting records, applying changes) introduce real risk because AI models can misinterpret intent or act on ambiguous instructions. The cost of a wrong read is a bad answer. The cost of a wrong write could be a dropped table or an unintended AWS resource change. Build confidence with reads first, and layer in writes carefully with explicit confirmation steps.
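When you do graduate to writes, one simple pattern is to gate them behind an explicit confirmation argument, so the model has to make its intent unambiguous. This is a hypothetical sketch of that pattern, not an MCP feature:

```python
def require_confirmation(fn):
    """Wrap a write operation so it only runs when confirm=True is passed."""
    def wrapper(*args, confirm: bool = False, **kwargs):
        if not confirm:
            return f"Refusing to run {fn.__name__}: call again with confirm=True."
        return fn(*args, **kwargs)
    return wrapper

@require_confirmation
def delete_record(record_id: int) -> str:
    # Stand-in for a real destructive operation.
    return f"Deleted record {record_id}"

print(delete_record(42))                # refused: confirm not set
print(delete_record(42, confirm=True))  # → Deleted record 42
```

In an MCP server you'd apply this beneath `@mcp.tool()`, and the refusal message itself becomes the tool's response — prompting the model (and you) to confirm deliberately.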

What’s next?

You’ve gone from “what is MCP?” to a working server connected to Claude in one sitting. That’s the whole point of the protocol — the barrier to extending AI with real-world capabilities is now just writing a Python function.

From here, a few directions worth exploring:

  • Add more tools — extend the weather server with a get_forecast tool that returns a 7-day outlook
  • Browse the MCP ecosystem — there are already servers for GitHub, Slack, databases, and more at modelcontextprotocol.io
  • Use Claude Code — Claude Code has MCP support built in, so you can add servers and use them directly in your terminal workflow
