With the release of Docker Compose v2.36.0, we’re excited to introduce a powerful new feature: provider services. This extension point opens up Docker Compose to interact not only with containers but also with any kind of external system, all while keeping the familiar Compose file at the center of the workflow.
In this blog post, we’ll walk through what provider services are, how developers can use them to streamline their workflows, how the provider system works behind the scenes, and how you can build your own provider to extend Compose for your platform needs.
Why Provider Services Are a Game-Changer
Docker Compose has long been a favorite among developers for orchestrating multi-container applications in a simple and declarative way. But as development environments have become more complex, the need to integrate non-container dependencies has become a common challenge. Applications often rely on managed databases, SaaS APIs, cloud-hosted message queues, VPN tunnels, or LLM inference engines — all of which traditionally sit outside the scope of Compose.
Developers have had to resort to shell scripts, Makefiles, or wrapper CLIs to manage these external components, fragmenting the developer experience and making it harder to onboard new contributors or maintain consistent workflows across teams.
Provider services change that. By introducing a native extension point into Compose, developers can now define and manage external resources directly in their compose.yaml.
Compose delegates their lifecycle to the provider binary, coordinating with it as part of its own service lifecycle.
This makes Docker Compose a more complete solution for full-stack, platform-aware development — from local environments to hybrid or remote setups.
Using a Provider Service in Your Compose File
Provider services are declared like any other Compose service, but instead of specifying an `image`, you specify a `provider` with a `type`, and optionally some `options`. The `type` must correspond to the name of a binary available in your `$PATH` that implements the Compose provider specification.
As an example, we will use the Telepresence provider plugin, which routes Kubernetes traffic to a local service for live cloud debugging. This is especially useful for testing how a local service behaves when integrated into a real cluster:

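A compose.yaml for this setup might look roughly like the following (a sketch: the `app` service is illustrative, and the exact option keys under `options` are defined by the compose-telepresence plugin itself, so check its documentation):

```yaml
services:
  app:
    build: .
    depends_on:
      - dev-api

  dev-api:
    provider:
      type: compose-telepresence
      options:
        name: api
        port: "5732:api-80"
        namespace: avatars
        service: api
```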
In this setup, when you run `docker compose up`, Compose will call the `compose-telepresence` plugin binary. The plugin performs the following actions:
Up Action:
- Check if the Telepresence traffic manager is installed in the Kubernetes cluster, and install it if needed.
- Establish an intercept to re-route traffic from the specified Kubernetes service to the local service.
Down Action:
- Remove the previously established intercept.
- Uninstall the Telepresence traffic manager from the cluster.
- Quit the active Telepresence session.
⚠️ The structure and content of the `options` field are specific to each provider. It is up to the plugin author to define and document the expected keys and values.
If you’re unsure how to properly configure your provider service in your Compose file, the Compose Language Server (LSP) can guide you step by step with inline suggestions and validation.
You can find more usage examples and supported workflows in the official documentation: https://docs.docker.com/compose/how-tos/provider-services/
How Provider Services Work Behind the Scenes
Under the hood, when Compose encounters a service using the `provider` key, it looks for an executable in the user’s `$PATH` matching the provider `type` name (e.g. the `docker-model` CLI plugin or `compose-telepresence`). Compose then spawns the binary and passes the service `options` as flags, allowing the provider to receive all required configuration via command-line arguments.
The binary must respond to JSON-formatted requests on stdin and return structured JSON responses on stdout.
Here’s a diagram illustrating the interaction:

Communication with Compose
Compose sends all the necessary information to the provider binary by turning each of the `options` attributes into a flag. It also passes the project and the service name. If we look at the `compose-telepresence` provider example, on the `up` command Compose will execute the following:

$ compose-telepresence compose --project-name my-project up --name api --port 5732:api-80 --namespace avatars --service api dev-api
On the other side, providers can also send runtime messages to Compose:
- `info`: Reports status updates. Displayed in Compose’s logs.
- `error`: Reports an error. Displayed as the failure reason.
- `setenv`: Exposes environment variables to dependent services.
- `debug`: Debug messages displayed only when running Compose with `--verbose`.
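As a sketch of what these messages can look like on the wire, assuming newline-delimited JSON objects with `type` and `message` fields (refer to the protocol spec for the authoritative schema), a provider might emit them like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// message is one runtime message a provider sends to Compose on stdout.
type message struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

// encodeMessage renders a message as a single-line JSON object.
func encodeMessage(m message) string {
	b, _ := json.Marshal(m)
	return string(b)
}

// emit prints one newline-delimited JSON message for Compose to read.
func emit(msgType, text string) {
	fmt.Println(encodeMessage(message{Type: msgType, Message: text}))
}

func main() {
	emit("info", "Installing Telepresence traffic manager")
	emit("setenv", "API_URL=http://localhost:5732")
	emit("debug", "intercept established for service api")
}
```

The variable in the `setenv` example is illustrative; dependent services would see it in their environment once `up` completes.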
This flexible protocol makes it easy to add new types and build rich provider integrations.
Refer to the official protocol spec for detailed structure and examples.
Building Your Own Provider Plugin
The real power of provider services lies in their extensibility. You can write your own plugin, in any language, as long as it adheres to the protocol.
A typical provider binary implements logic to handle a `compose` command with `up` and `down` subcommands.
The source code of the compose-telepresence plugin is a good starting point. This plugin is implemented in Go and wraps the Telepresence CLI to bridge a local dev container with a remote Kubernetes service.
Here’s a snippet from its `up` implementation:


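The shape of such an `up` handler can be sketched in Go as follows (the `options` struct, helper names, and CLI flags here are hypothetical simplifications, not the plugin's actual code): it translates the options received from Compose into a `telepresence intercept` invocation and shells out to the CLI.

```go
package main

import (
	"fmt"
	"os/exec"
)

// options carries the provider flags Compose passed on the command line.
type options struct {
	Name      string // intercept name, e.g. "api"
	Port      string // local:remote port mapping, e.g. "5732:api-80"
	Namespace string // Kubernetes namespace of the target service
	Service   string // Kubernetes service to intercept
}

// interceptArgs builds the telepresence CLI arguments for an intercept.
func interceptArgs(o options) []string {
	return []string{
		"intercept", o.Name,
		"--port", o.Port,
		"--namespace", o.Namespace,
		"--service", o.Service,
	}
}

// up establishes the intercept by shelling out to the telepresence CLI.
func up(o options) error {
	cmd := exec.Command("telepresence", interceptArgs(o)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("telepresence intercept failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = up // wiring into the compose up subcommand is omitted in this sketch
}
```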
This method is triggered when `docker compose up` is run, and it starts the service by calling the Telepresence CLI based on the received options.
To build your own provider:
- Read the full extension protocol spec
- Parse all the options passed as flags to collect the configuration the provider needs
- Implement the expected JSON message handling over stdout
- Don’t forget to add `debug` messages to have as many details as possible during your implementation phase
- Compile your binary and place it in your `$PATH`
- Reference it in your Compose file using `provider.type`
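Putting these steps together, a bare-bones provider skeleton in Go might look like this (a sketch under the assumptions above: the exact invocation shape and message schema should be taken from the protocol spec, and the `--name` option and `EXAMPLE_URL` variable are purely illustrative):

```go
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"os"
)

// msg mirrors the runtime messages Compose reads from the provider's stdout.
type msg struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

// send emits one newline-delimited JSON message for Compose.
func send(t, m string) {
	json.NewEncoder(os.Stdout).Encode(msg{Type: t, Message: m})
}

// run dispatches the invocation; args is os.Args[1:], e.g.
// ["compose", "--project-name", "demo", "up", "--name", "api", "dev-api"].
func run(args []string) error {
	if len(args) < 1 || args[0] != "compose" {
		return fmt.Errorf("expected 'compose' command")
	}
	top := flag.NewFlagSet("compose", flag.ContinueOnError)
	project := top.String("project-name", "", "Compose project name")
	if err := top.Parse(args[1:]); err != nil {
		return err
	}
	rest := top.Args() // flag parsing stops at the first non-flag: the subcommand
	if len(rest) < 1 {
		return fmt.Errorf("expected a subcommand: up or down")
	}
	sub := flag.NewFlagSet(rest[0], flag.ContinueOnError)
	name := sub.String("name", "", "resource name (example option)")
	if err := sub.Parse(rest[1:]); err != nil {
		return err
	}
	service := "unknown"
	if sub.NArg() > 0 {
		service = sub.Arg(0) // trailing positional: the Compose service name
	}
	switch rest[0] {
	case "up":
		send("info", fmt.Sprintf("provisioning %s for service %s in project %s", *name, service, *project))
		send("setenv", "EXAMPLE_URL=http://localhost:8080")
	case "down":
		send("info", fmt.Sprintf("tearing down %s for service %s", *name, service))
	default:
		return fmt.Errorf("unknown subcommand %q", rest[0])
	}
	return nil
}

func main() {
	if err := run(os.Args[1:]); err != nil {
		send("error", err.Error())
		os.Exit(1)
	}
}
```

Compiled and placed in `$PATH`, a binary like this can be referenced from `provider.type` and will report progress back to Compose as it runs.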
You can build anything from service emulators to remote cloud service starters. Compose will automatically invoke your binary as needed.
What’s Next?
Provider services will continue to evolve, and future enhancements will be guided by real-world feedback from users to ensure they grow in the most useful and impactful directions.
Looking forward, we envision a future where Compose can serve as a declarative hub for full-stack dev environments, including containers, local tooling, remote services, and AI runtimes.
Whether you’re connecting to a cloud-hosted database, launching a tunnel, or orchestrating machine learning inference, Compose provider services give you a native way to extend your dev environment: no wrappers, no hacks.
Let us know what kind of providers you’d like to build or see added. We can’t wait to see how the community takes this further.
Stay tuned and happy coding!