Preliminary Note
Throughout this article, you'll see that I primarily refer to Claude and Claude Code (if you're not familiar with it, take a look) as examples of MCP clients. But what I describe here is valid for any large language model (LLM) that can be integrated into real workflows: whether it's ChatGPT, Gemini, Mistral, or any other.
The concept discussed here and its architecture don't depend on the model, but on the MCP protocol and its implementation.
Introduction
There's a clear limitation in the "standard" interaction with any LLM: it can read your code, but it cannot directly interact with the rest of your tools.
For example, you read specifications in Notion, manage tasks in Linear, deploy to Vercel... but Claude remains confined to a chat window.
The result? Copy, paste, repeat.
MCP (Model Context Protocol) solves that bottleneck.
What is MCP?
MCP is not a new tool. It's the protocol that allows an LLM (like Claude) to stop being a passive spectator and become an active part of your workflow.
It's a standard protocol that allows AI applications to connect with external tools in a structured, secure, and modular way.
Simplified Architecture
graph LR
Client[MCP Client]
Server[MCP Server]
Tool[External Tool]
Client --> Server --> Tool
Components
- MCP Client: the application that wants to access data or execute external functions (Claude Desktop, Claude Code, Cursor, etc.).
- MCP Server: the intermediary that translates client requests into concrete actions on a tool.
Types of Capabilities
- Resources: allow reading information (files, documents, API responses).
- Tools: allow executing actions (create issues, send emails, query databases).
- Prompts: predefined instruction templates exposed as reusable commands (see the sketch below).
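To make these three capability types less abstract, here is a minimal, hypothetical sketch built with the official TypeScript SDK (@modelcontextprotocol/sdk) and zod; the server name, URI, and tool/prompt names are invented for illustration and don't correspond to any real server.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Resource: read-only information exposed under a URI.
server.resource("readme", "docs://readme", async (uri) => ({
  contents: [{ uri: uri.href, text: "Project documentation goes here." }],
}));

// Tool: an action the client can invoke with validated input.
server.tool("create_issue", { title: z.string() }, async ({ title }) => ({
  content: [{ type: "text", text: `Issue created: ${title}` }],
}));

// Prompt: a predefined instruction exposed as a reusable command.
server.prompt("review_pr", { prNumber: z.string() }, ({ prNumber }) => ({
  messages: [
    { role: "user", content: { type: "text", text: `Review PR #${prNumber}` } },
  ],
}));

// Expose everything over stdio so an MCP client can connect to it.
await server.connect(new StdioServerTransport());
```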
Of course, there's much more to learn about MCP, so I recommend you review its essential concepts.
MCP vs APIs: The Difference That Matters
This is the most common point of confusion, and one I've had myself. To be clear: MCP is not an API. It's a protocol that standardizes how an LLM interacts with multiple APIs without having to learn them one by one.
Let me give you an example to show the differences.
Without MCP
flowchart LR
Claude[LLM]
Linear[Linear API Token + Custom JSON]
GitHub[GitHub API Token + Pagination]
Notion[Notion API OAuth + Nested Structures]
Claude --> Linear
Claude --> GitHub
Claude --> Notion
subgraph Result
direction TB
Error[Chaos of specific integrations]
end
Linear --> Error
GitHub --> Error
Notion --> Error
Each API has its own authentication, format, and structure, so the LLM is forced to know the implementation details of each one. In practice, that means a different context every time you interact with a service:
- Linear → token + custom JSON.
- GitHub → token + pagination.
- Notion → OAuth + nested structures.
Result: specific integrations that are complicated to maintain and can end up in chaos.
With MCP
flowchart LR
Claude[LLM]
MCPServer[MCP Server as adapter]
API1[Linear API]
API2[Jira API]
API3[Notion API]
Claude --> MCPServer
MCPServer --> API1
MCPServer --> API2
MCPServer --> API3
subgraph Result
direction TB
Uniformity[Same protocol - Reusable integration]
end
API1 --> Uniformity
API2 --> Uniformity
API3 --> Uniformity
- The LLM talks to MCP.
- The MCP server translates between MCP and the specific API.
- Switch from Linear to Jira → same protocol, different server.
Result: MCP acts as a universal adapter.
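To see what "same protocol, different server" looks like from the client side, here is a hedged sketch using the official TypeScript SDK; the server package name and the create_issue tool are placeholders, not a real Linear or Jira integration.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder server command: pointing this at a Jira MCP server instead of a
// Linear one changes nothing in the calls below.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "some-linear-mcp-server"],
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// Discovery and invocation are uniform, whatever API sits behind the server.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

const result = await client.callTool({
  name: "create_issue", // hypothetical tool exposed by the server
  arguments: { title: "Fix login redirect bug" },
});
console.log(result);
```

In practice the MCP client is Claude Desktop, Claude Code, or your editor rather than code you write yourself, but the flow underneath is exactly this: connect, list capabilities, call them.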
Where to Find MCP Servers
Awesome MCP (https://mcpservers.org/)
- A fairly comprehensive directory of MCP servers of every kind.
- Each entry links to a repository with everything you need to integrate the server into any MCP client, such as Claude, Cline, ChatGPT, Cursor, Windsurf, etc.
Claude MCP (https://www.claudemcp.com/)
- A directory of servers (for any MCP client, not just Claude) created by the community.
- Databases: PostgreSQL, MongoDB, SQLite.
- Development: GitHub, Git, Docker.
- Productivity: Notion, Obsidian, Gmail.
- Automation: Puppeteer, web browsers.
- Installation instructions are all very similar.
- For example, to add the GitHub and PostgreSQL MCP servers with Claude Code:
claude mcp add github /path/to/server -e GITHUB_TOKEN=your_token
claude mcp add postgres /path/to/server -e DATABASE_URL=your_connection
VSCode MCP (https://code.visualstudio.com/mcp)
- Direct installation from the VS Code interface.
- Once installed, they are accessible from the agent window.
- Full details on how to use them are on that page.
Real Case: MCP Server for FrontendLeap
Nothing beats practice for making sure you've truly understood a concept, and with something as seemingly abstract as the MCP protocol it's almost an obligation.
I also firmly believe in the "scratch your own itch" approach to problem solving: start by solving your own.
The Problem
As a Frontend developer trainer, I regularly face this:
- Student: "I don't understand Array.reduce" (just one example).
- Result: a search for examples (almost always generic) that don't connect with their context.
So I kept turning the problem over until the idea presented itself.
What if Claude could generate personalized exercises based on the user's specific context?
To test it, I needed:
- Connection to the FrontendLeap challenges API.
- Content generation adapted to the user (through an LLM).
- Return a URL with the created challenge, ready to solve.
The Solution
A very simple MCP server with a single exposed tool, create_challenge, built with the TypeScript SDK:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "frontendleap-challenges", version: "1.0.0" });

// Register the tool: name, description, input schema (abridged here; the full
// schema is shown below) and the handler that creates and publishes the challenge.
server.tool(
  "create_challenge",
  "Creates a personalized FrontendLeap challenge",
  { title: z.string(), description: z.string() /* ...plus the fields listed below */ },
  async (params) => {
    const challenge = await createCustomChallenge(params);
    return { content: [{ type: "text", text: `Challenge created: ${challenge.url}` }] };
  }
);
This is the shape of the challenge entity, as described by the tool's input schema:
tools: [
{
name: "create_challenge",
description:
"Create a complete coding challenge with all content generated by Claude and save it to FrontendLeap",
inputSchema: {
type: "object",
properties: {
title: {
type: "string",
description:
"The challenge title (e.g., 'Advanced CSS Flexbox Centering Challenge')",
},
description: {
type: "string",
description: "Brief description of what the challenge teaches",
},
explanation: {
type: "string",
description:
"Detailed markdown explanation of the concept, including examples and learning objectives",
},
starter_code: {
type: "string",
description:
"The initial code template that users start with - should be relevant to the challenge",
},
test_code: {
type: "string",
description:
"JavaScript test code (using Jasmine) that validates the user's solution",
},
solution: {
type: "string",
description:
"Optional markdown explanation of the solution approach and key concepts",
},
language: {
type: "string",
enum: ["javascript", "html", "css", "typescript"],
description: "Programming language for the challenge",
},
difficulty: {
type: "string",
enum: ["beginner", "intermediate", "advanced"],
description: "Challenge difficulty level",
},
},
required: [
"title",
"description",
"explanation",
"starter_code",
"test_code",
"language",
"difficulty",
],
},
},
];
If you want to take a look at the code, here's the repo.
What This MCP Server Does
- Receives user context conversationally (no more static exercises).
- With the context, generates code, unit tests, explanations, and the solution.
- Queries the challenges API on FrontendLeap.
- Publishes the challenge and returns the URL to the user (a rough sketch of this flow follows the list).
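As a purely illustrative sketch of that flow: the endpoint, payload shape, and response below are assumptions (the real implementation lives in the repo), but createCustomChallenge essentially posts the Claude-generated content to the challenges API and returns the resulting URL.

```typescript
// Hypothetical helper behind the create_challenge tool; the endpoint and
// response shape are assumptions made for illustration only.
interface ChallengeParams {
  title: string;
  description: string;
  explanation: string;
  starter_code: string;
  test_code: string;
  solution?: string;
  language: "javascript" | "html" | "css" | "typescript";
  difficulty: "beginner" | "intermediate" | "advanced";
}

async function createCustomChallenge(params: ChallengeParams): Promise<{ url: string }> {
  // Publish the Claude-generated content through the challenges API.
  const response = await fetch("https://fl.test/api/challenges", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });

  if (!response.ok) {
    throw new Error(`Challenge creation failed with status ${response.status}`);
  }

  // The API is assumed to answer with the public URL of the new challenge.
  const challenge = (await response.json()) as { url: string };
  return { url: challenge.url };
}
```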
Real Example
I'm a junior Frontend developer and I struggle with TypeScript generics. Can you generate a challenge for that?
Claude interprets the context, calls the MCP server, and returns something like:
https://fl.test/challenges/custom-hooks-uselocalstorage-hook-challenge
(as you can see, it's a local URL since this feature isn't in production).
When to Create Your Own MCP Server?
Not all automations require an MCP server. But when the model needs to interact with external systems in a structured way, the difference, as you can see, is radical.
When It IS Worth It
- When you need to manage secure authentication (tokens, OAuth, per-user permissions).
- When there's non-trivial logic: multiple steps, conditions, transformations, calculations.
- When it requires writing or modifying external resources (creating files, uploading content, updating records).
- When the flow demands responses adapted to the user or current context.
When It's Probably NOT
- If it's enough to read local files or query static JSONs.
- If the task is so simple that it's solved with a custom command, as we saw with Claude Code.
- If the logic is so specific that it won't be reused in other contexts.
The Decisive Test
Does your model need to leave the local environment and act on an external system using its own judgment?
If the answer is yes, then you need MCP.
Conclusion
MCP Is the Missing Glue
It transforms a passive model into an operational agent within your stack. Without repetitive integrations or manual hacks.
The Ecosystem Is Already Mature
Before building from scratch, explore. MCP servers exist for most common cases.
You Don't Always Need to Build
The key is to discern: does what you're going to do justify the implementation and maintenance effort?
MCP Doesn't Replace APIs, It Abstracts Them
It's the bridge between your LLM and any service, without forcing the model to learn each API in detail.
The Future Is Integration
Useful AI doesn't live in an isolated tab. It lives in your real environment, executing tasks with context. MCP makes it possible.