MCP Servers: how to extend LLMs' capabilities with real tools

November 30, 2025
Anonymous
Disclaimer: This text was translated from Polish using AI and has not been human-verified. Differences between the Polish and English versions may exist.
1. AI work revolution: how to extend the capabilities of language models



Language models, such as Claude or ChatGPT, have revolutionized the way we work with computers. However, despite their impressive capabilities, these tools have a significant limitation — they are isolated from the real-world data and systems we use every day. They cannot autonomously search your drive, update databases, or interact with external service APIs. At least until now.

In November 2024, Anthropic introduced the Model Context Protocol (MCP) — an open standard that changes the game. MCP acts as a universal interface that lets AI models connect to external data sources and tools in a standardized, secure way. It's like giving your AI assistant real hands to perform specific tasks.

As early as March 2025, OpenAI officially adopted MCP into its ecosystem, and Google DeepMind announced support for MCP in upcoming Gemini versions. Today, MCP is becoming an industry standard — its adoption by the biggest players in the AI market confirms that this is not a fleeting trend, but the future of interaction with intelligent assistants.

In this article, I will show what MCP servers are, why it's worth using them, and how to configure them in the most popular AI tools. I will also present the most interesting servers for developers and non-technical users.

2. What are MCP servers?



Let's start with the basics. The Model Context Protocol is an open communication protocol that standardizes how AI models connect with external data resources and tools. We can compare it to USB-C for artificial intelligence — a universal connector that works everywhere.

2.1. The N×M problem that MCP solves



Before MCP, every AI integration with an external tool required writing dedicated code. If you had N AI models and M tools, you had to create N×M different integrations. This was a nightmare for both developers and companies wanting to use AI in their systems.

MCP solves this problem by creating a universal standard. Now, it's enough to implement an MCP server once for a given tool, and all AI models supporting this protocol can use it. The N×M problem turns into an N+M problem — a dramatic reduction in complexity.
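The reduction is easy to quantify. A toy calculation (the numbers are illustrative only):

```python
def integrations_without_mcp(n_models: int, m_tools: int) -> int:
    # Each model needs a bespoke integration for each tool.
    return n_models * m_tools

def integrations_with_mcp(n_models: int, m_tools: int) -> int:
    # Each model implements the MCP client side once,
    # and each tool exposes a single MCP server.
    return n_models + m_tools

# Example: 5 models and 20 tools.
print(integrations_without_mcp(5, 20))  # 100 bespoke integrations
print(integrations_with_mcp(5, 20))     # 25 protocol implementations
```

With realistic fleet sizes the gap only widens: each new tool costs one server instead of one integration per model.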

2.2. How does it work in practice?



MCP servers are lightweight programs that act as bridges between an AI model and an external data source or tool. They can be run locally on your computer (for accessing files, databases, or applications) or remotely in the cloud (for web services).

The protocol defines three main components:
  • Resources: data that the server exposes to the AI model — these can be files, database entries, or documentation

  • Tools: functions that the AI model can invoke — e.g., saving a file, sending a message, or executing an SQL query

  • Prompts: predefined prompt templates that make working with a given tool easier


Communication takes place via JSON-RPC 2.0, which keeps the protocol lightweight and universal. MCP was also designed with security in mind: it supports OAuth 2.0 authentication and granular permission control.
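To make the wire format concrete, here is a minimal sketch of a JSON-RPC 2.0 request a client might send to an MCP server. The method name `tools/call` comes from the MCP specification; the tool name `query_database` and its arguments are invented for illustration:

```python
import json

# A JSON-RPC 2.0 request asking an MCP server to invoke a tool.
# The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT COUNT(*) FROM users"},
    },
}

wire_message = json.dumps(request)
print(wire_message)

# The server replies with a response carrying the same id,
# so the client can match responses to requests.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1,'
    ' "result": {"content": [{"type": "text", "text": "42"}]}}'
)
assert response["id"] == request["id"]
```

Because every message is plain JSON, the same request works over a local stdio connection or a remote HTTP transport.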

3. Why use MCP servers?



The benefits of using MCP are enormous, both for individual users and for entire teams.

3.1. Elimination of hallucinations and access to current data



One of the biggest problems with language models is hallucination — situations where the AI invents information. MCP radically reduces this phenomenon by giving models access to real, up-to-date data. Instead of relying solely on knowledge from its training data, the model can query your database, documentation, or API directly.

3.2. Automating complex workflows



With MCP, your AI assistant stops being a passive chatbot and becomes an active agent that can perform multi-step tasks. Examples:
  • Analyzing server logs, identifying a problem, and automatically creating a ticket in the ticketing system

  • Retrieving data from a database, generating a report, and sending it via Slack

  • Searching a code repository, finding security vulnerabilities, and creating pull requests with fixes


3.3. Standardization and reduction of development costs



Thanks to the open standard, organizations can build integrations once and use them with various AI models. This means:
  • Lower development costs

  • Easier system maintenance

  • Ability to change AI provider without rewriting integrations

  • Faster deployment of new functionalities


3.4. Security and access control



MCP was designed with corporate security in mind. The protocol supports:
  • Granular permission control — you can specify which resources the model has access to

  • OAuth 2.0 and other authentication mechanisms

  • Resource Indicators (RFC 8707), which prevent misuse of access tokens

  • Auditing of all actions performed by AI


It is worth remembering, however, that MCP security is still evolving. In 2025, researchers discovered several vulnerabilities, including issues with prompt injection and potential data exfiltration. It is important to use only trusted MCP servers and to regularly update their versions.
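One practical mitigation is a client-side allow-list: before a model-requested tool call is executed, it is checked against an explicit policy. A minimal sketch, with invented server and tool names:

```python
# Hypothetical client-side guard: only allow-listed (server, tool)
# pairs may be executed on the model's behalf.
ALLOWED_TOOLS = {
    ("filesystem", "read_file"),
    ("database", "run_query"),
}

def authorize(server: str, tool: str) -> bool:
    # Deny by default; grant only what is explicitly listed.
    return (server, tool) in ALLOWED_TOOLS

assert authorize("filesystem", "read_file")
assert not authorize("filesystem", "delete_file")  # not granted
```

Deny-by-default policies like this limit the blast radius of a prompt-injection attack, since even a manipulated model can only invoke tools you have explicitly granted.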

4. How to add MCP servers to LLMs?



Now we move on to practice — I'll show you how to configure MCP servers in the most popular AI tools.

4.1. Claude Desktop — simplest configuration



Claude Desktop currently offers the most refined support for MCP. Since September 2025, Desktop Extensions have been introduced — .mcpb packages that you install with one click, without the need to edit configuration files.

Method 1: Installation via extension catalog (recommended)

For users of paid plans (Pro, Max, Team, Enterprise):
  1. Open Claude Desktop and go to Settings → Extensions

  2. Click Browse extensions to browse the official catalog

  3. Select the extension you are interested in and click Install

  4. Configure the required settings (e.g., API keys) in a user-friendly interface

  5. The extension is immediately available in conversations


Method 2: Manual configuration (for advanced users)

For users of the free plan or their own MCP servers:
  1. Open Claude Desktop and go to Settings → Developer

  2. Click Edit Config — the claude_desktop_config.json file will open

  3. Add the MCP server configuration in JSON format:


```json
{