5 Best Practices for Building, Testing, and Packaging MCP Servers 

We recently launched a new, reimagined Docker MCP Catalog with improved discovery and a new submission process. Containerized MCP servers offer a secure way to run and scale agentic applications and minimize risks tied to host access and secret management. Developers can submit servers in two ways: Docker-built servers, which include our full security suite (signatures, SBOMs, attestations, and continuous scanning), or community-built servers, which are built and maintained by developers using their own Docker images.

In this blog, we’ll share five best practices for designing, testing, and packaging MCP servers for submission. These recommendations are based on our experience building, and helping developers build, over 100 MCP servers for the Docker MCP Catalog. They’ll help you streamline the submission process, reach over 20 million Docker developers, and deliver real utility to both agents and the developers who use them.

1. Manage your agent’s tool budget intentionally

“Tool Budget” is our internal term for the number of tools an agent can handle effectively. Like any budget, managing it well is key to a good user experience. As the creator of an MCP server, consider that offering too many tools can make your server more complex and costly to use, potentially turning users away. Some AI agents now allow users to selectively enable tools, helping keep the experience streamlined. But the better strategy is to design your toolset around clear use cases and avoid mapping every API endpoint to a separate tool.

For example, when creating an MCP server to access your API, you might be tempted to make one tool for each of the API’s endpoints. While that’s a quick way to get started, it often results in an overloaded toolset that discourages adoption.

So, if one tool per endpoint isn’t ideal, how do you design a better MCP server?

This is where MCP prompts come in. Think of them as macros: instead of requiring users to call multiple tools, you can create a single prompt that chains multiple tools or endpoint calls behind the scenes. That way, a user can simply ask the agent to “fetch my user’s invoices,” and the agent can handle the complexity internally, calling two or three tools without exposing the overhead.
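
To make this concrete, here’s a minimal sketch using the official MCP Python SDK (FastMCP). The billing server, tools, and prompt below are hypothetical stand-ins for a real API, not a definitive implementation:

```python
# Minimal sketch with the official MCP Python SDK; the "billing" server,
# tools, and data are hypothetical stand-ins for a real API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

@mcp.tool()
def get_user(email: str) -> dict:
    """Look up a user by email and return their profile, including user_id."""
    return {"user_id": "u-123", "email": email}  # stubbed for illustration

@mcp.tool()
def list_invoices(user_id: str) -> list[dict]:
    """Return all invoices for the given user_id."""
    return [{"id": "inv-1", "amount": 42.0}]  # stubbed for illustration

# One prompt stands in for the whole workflow: the agent reads it and
# chains get_user -> list_invoices without the user orchestrating anything.
@mcp.prompt()
def fetch_user_invoices(email: str) -> str:
    """Fetch all invoices for the user with the given email."""
    return (
        f"Resolve the user with email {email} via the get_user tool, "
        "then call list_invoices with their user_id and summarize the results."
    )

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Two focused tools plus one prompt covers the “fetch my user’s invoices” use case without exposing a tool for every endpoint.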

2. The end user of the tool is the agent/LLM

One important point often overlooked: it’s the agent or LLM, not the end user, that actually uses the tool. The user enables the tool, but the agent is the one calling it. Why does this matter? When you’re building an MCP server, you’re not interfacing directly with users. You’re building for the agent that acts on their behalf.

Error handling is one area where we’ve consistently seen developers run into issues. If your tool returns error messages meant for humans, you might not provide the user experience you think. The agent, not the user, is the one calling your tool, and there’s no guarantee it will pass the error message back to the user.

Agents are designed to complete tasks. When something fails, they’ll often try a different approach. That’s why your error handling should help the agent decide what to do next, not just flag what went wrong. Instead of “You don’t have access to this system,” return something along the lines of “To access this system, the MCP server needs to be configured with a valid API_TOKEN; the current API_TOKEN is not valid.”

What you’re doing here is informing the agent that access to the third-party system isn’t possible due to a misconfiguration, not because access is denied outright. The distinction matters: the lack of access is a result of the user not properly configuring the MCP server, not a hard permission issue.
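
Here’s a sketch of what that looks like in practice. The API_TOKEN variable and the example.com endpoint are hypothetical; the point is that the tool hands the agent an actionable message instead of a bare refusal:

```python
import os
import urllib.error
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example")

@mcp.tool()
def get_account() -> str:
    """Fetch account details from the (hypothetical) example.com API."""
    token = os.environ.get("API_TOKEN")
    if not token:
        # Tell the agent why the call failed and what would fix it,
        # instead of a human-oriented "you don't have access".
        return (
            "Error: this MCP server is not configured with an API_TOKEN "
            "environment variable. Ask the user to set API_TOKEN and "
            "restart the server; retrying now will not help."
        )
    req = urllib.request.Request(
        "https://api.example.com/v1/account",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            return (
                f"Error: the configured API_TOKEN was rejected (HTTP {err.code}). "
                "It may be expired or missing permissions; ask the user for a "
                "valid token."
            )
        return f"Error: request failed with HTTP {err.code}."
```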

3. Document for humans and agents! 

This brings us to an equally important point: documentation!

When writing documentation for your MCP server, remember you’re serving two audiences: end users and the AI agent. As we saw with error handling, it’s critical to understand the needs of both.

Your documentation should address each audience clearly. End users want to know why they should use your MCP server, what problems it solves and how it fits into their workflow. Agents, on the other hand, rely on well-written tool names and descriptions to decide whether your server is the right fit for a given task.

Keep in mind: the agent is the one actually using the MCP server, but it’s the end user who decides which tools the agent has access to. Your documentation needs to support both!
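
For the agent’s side of that equation, tool names and descriptions carry most of the weight. With the MCP Python SDK, the function’s docstring becomes the tool description the agent reads when it lists tools. A sketch, using the same hypothetical billing tool as above:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

# The docstring becomes the tool description the agent sees, so write it
# for the agent: when to use the tool, how parameters behave, what returns.
@mcp.tool()
def list_invoices(user_id: str, status: str = "open") -> list[dict]:
    """List invoices for a user.

    Call get_user first to resolve user_id from an email address.
    `status` filters results: "open", "paid", or "all".
    Returns invoices sorted newest-first.
    """
    return []  # stubbed for illustration
```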

4. Don’t just test functionality, test user interactions

One of the best ways to validate your documentation is to test your own MCP server. By far the easiest way to interact with your server during development is the MCP Inspector (run `npx @modelcontextprotocol/inspector` in your terminal and off you go!).

While it’s common to test whether your MCP server works, the inspector also helps you think from the end user’s perspective. It gives you a clearer sense of how users will interact with your server and whether your documentation supports that experience.

There are three key steps to testing a server:

  1. Connecting to the MCP Server: This step will help you validate that your server is capturing all the necessary configuration to run properly.
  2. List Tools: This is what AI agents see when they initialize your MCP server.
  3. Tool Calling: Make sure the tool behaves as expected. This is where you can validate the failure modes.

One important design consideration is the MCP server lifecycle. Ask: What does the MCP client need in order to connect to the MCP server? How should tools be listed and discovered? And what’s the process for invoking a specific tool?

For example, consider an MCP server for your database. In a typical API server, you’d establish the database connection when the server starts. When writing an MCP server, however, you should aim to make each tool call as self-contained as possible. This means creating a connection for every tool call, not on server start. By doing this, you allow users to connect and list tools even if the server isn’t configured correctly.

While this might feel like an anti-pattern at first, it actually makes sense in this context: you’re trading a bit of latency for improved usability and reliability. In reality, the only moment your MCP server needs a connection to a database (or any third-party system) is when a tool is invoked. The MCP Inspector is a great way to see this in action and to better understand how both users and agents will interact with your server.
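
Here’s a minimal sketch of that pattern using Python’s built-in sqlite3 module; the DATABASE_PATH variable and the query tool are illustrative assumptions:

```python
import os
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("database")

@mcp.tool()
def run_query(sql: str) -> list[tuple]:
    """Run a read-only SQL query against the configured database."""
    path = os.environ.get("DATABASE_PATH")
    if not path:
        # Actionable error for the agent, per the earlier best practice.
        return [("Error: set the DATABASE_PATH environment variable, "
                 "then call this tool again.",)]
    # Connect inside the tool call, not at server start, so clients can
    # still connect and list tools even when the server is misconfigured.
    conn = sqlite3.connect(path)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # starts cleanly without ever touching the database
```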

If you are using the Docker MCP Toolkit, there are several ways to test whether your MCP server is behaving as expected. 

Run the following command to call your tool using the configuration you defined in Docker Desktop:

`docker mcp tools call my-tool`

To test what the MCP clients see, you can run the following command:

`docker mcp gateway run --verbose --dry-run`

This command simulates the call from an MCP client to your MCP server, assuming it’s enabled in the Docker MCP Catalog.

5. Packaging your MCP servers with containers

Excellent! We’ve written and tested our MCP server. What’s next? Packaging!

Packaging an MCP server is not so much about creating the artifact as about thinking through how that artifact will be used. We might be a bit biased here, but we truly believe that packaging your MCP server as a Docker image is the way to go.

MCP servers come in many different flavors: Python, TypeScript, Java… Packaging as a Docker image makes your server truly portable: you can ensure that the end user will be able to run your MCP server regardless of how their system is configured. Using Docker containers is the easiest way to avoid dealing with dependencies on other people’s machines. If they can run Docker, they can run your MCP server.
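
As a starting point, here’s a minimal sketch of a Dockerfile for a Python MCP server; the file names (requirements.txt, server.py) are assumptions about your project layout:

```dockerfile
# Minimal sketch for a Python MCP server; file names are illustrative.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY server.py .

# Catalog submissions currently require the stdio transport, so the
# container simply runs the server process and speaks over stdin/stdout.
CMD ["python", "server.py"]
```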

There are many resources available on how to create a good Dockerfile, but if you’re not sure you’ve done it right, you can always use Gordon, the Docker AI agent, via the `docker ai` command. Just type `docker ai improve my Dockerfile` and Gordon will help you optimize the Dockerfile for your MCP server.

How to submit your MCP server 

Once you have a Dockerfile in your repository, we invite you to submit your MCP server to the Docker MCP Catalog! At the time of this writing, all submitted MCP servers must use the stdio transport mechanism, so be sure your server supports it when running as a container. We look forward to your submission!

Conclusion

The new Docker MCP Catalog makes it easier than ever to discover and scale MCP servers securely. Whether you’re submitting a Docker-built server with the full security suite or maintaining your own as a community contributor, following these five best practices (managing your tool budget, designing for the agent, documenting for both humans and agents, testing user interactions, and packaging with containers) will help you create MCP servers that are reliable, easy to use, and ready for real-world agentic workloads.

Ready to share yours with the Docker community? Submit it to the Docker MCP Catalog and get it in front of millions of developers! 
