Back in October, we showed how Docker Model Runner on the NVIDIA DGX Spark makes it remarkably easy to run large AI models locally with the same familiar Docker experience developers already trust. That post struck a chord: hundreds of developers discovered that a compact desktop system paired with Docker Model Runner could replace complex GPU setups and cloud API calls.
At NVIDIA GTC 2026, NVIDIA raised the bar with the NVIDIA DGX Station, and we’re excited to add support for it in Docker Model Runner! The new DGX Station brings serious performance, and Model Runner helps make it practical to use day to day. With Model Runner, you can run and iterate on larger models on a DGX Station, using the same intuitive Docker experience you already know and trust.
From NVIDIA DGX Spark to DGX Station: what changed, and why does it matter?
NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, gave developers 128GB of unified memory and petaflop-class AI performance in a compact form factor. A fantastic entry point for running models.
NVIDIA DGX Station is a different beast entirely. Built around the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, it connects a 72-core NVIDIA Grace CPU and NVIDIA Blackwell Ultra GPU through NVIDIA NVLink-C2C, creating a unified, high-bandwidth architecture built for frontier AI workloads. It brings data-center-class performance to a deskside form factor. Here are the headline specs:
| | DGX Spark (GB10) | DGX Station (GB300) |
|---|---|---|
| GPU Memory | 128 GB unified | 252 GB |
| GPU Memory Bandwidth | 273 GB/s | 7.1 TB/s |
| Total Coherent Memory | 128 GB | 748 GB |
| Networking | 200 Gb/s | 800 Gb/s |
| GPU Architecture | Blackwell (5th-gen Tensor Cores, FP4) | Blackwell Ultra (5th-gen Tensor Cores, FP4) |
With 252GB of GPU memory, 7.1 TB/s of bandwidth, and 748GB of total coherent memory, the DGX Station doesn’t just let you run frontier models: it lets you run trillion-parameter models, fine-tune massive architectures, and serve multiple models simultaneously, all from your desk.
Here’s what 748GB of coherent memory and 7.1 TB/s of bandwidth unlock in practice:
- Run massive open models. Many large open models fit entirely in memory without quantization, and with quantization the DGX Station can even run the largest open 1T-parameter models.
- Serve a team, not just yourself. NVIDIA Multi-Instance GPU (MIG) technology lets you partition NVIDIA Blackwell Ultra GPUs into up to seven isolated instances. Combined with Docker Model Runner’s containerized architecture, a single DGX Station can serve as a shared AI development node for an entire team — each member getting their own sandboxed model endpoint.
- Faster iteration on agentic workflows. Agentic AI pipelines often require multiple models running concurrently — a reasoning model, a code generation model, a vision model. With 7.1 TB/s of memory bandwidth, switching between and serving these models is dramatically faster than anything a desktop system has offered before.
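As a sketch of what the MIG partitioning mentioned above looks like in practice, the standard `nvidia-smi` MIG workflow applies; the commands below assume root access on the DGX Station host, and the profile ID shown is a placeholder, since available profiles vary by GPU.

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports; note the profile IDs.
sudo nvidia-smi mig -lgip

# Create two GPU instances from a profile ID (placeholder "19" here),
# and create a compute instance inside each (-C).
sudo nvidia-smi mig -cgi 19,19 -C

# Verify the resulting MIG devices; each gets its own UUID that
# containers can target for isolated, sandboxed workloads.
nvidia-smi -L
```

Each MIG device UUID can then be handed to a different container, which is what makes the shared-team-node pattern described above work.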
Bottom line: The DGX Spark made local AI development fast. The DGX Station makes it transformative. And raw hardware is only half the story. With Docker Model Runner, the setup stays effortless and the developer experience stays smooth, no matter how powerful the machine underneath becomes.
Getting Started: It’s the Same Docker Experience
For the full step-by-step walkthrough, check out our guide for DGX Spark. Every instruction applies to the DGX Station as well.
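For reference, the day-to-day workflow comes down to the same handful of `docker model` commands used on DGX Spark. The model name below is an example from Docker Hub’s `ai/` namespace; substitute whatever model fits your memory budget.

```shell
# Pull a model from Docker Hub's ai/ namespace (example model name).
docker model pull ai/llama3.3

# Chat with it interactively from the terminal.
docker model run ai/llama3.3

# See which models are available locally and check the runner's status.
docker model list
docker model status
```

The same commands work whether the machine underneath is a laptop, a DGX Spark, or a DGX Station; only the size of model you can comfortably pull changes.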
NVIDIA’s new DGX Station puts data-center-class AI on your desk with 252GB of GPU memory, 7.1 TB/s bandwidth, and 748GB of total coherent memory. Docker Model Runner makes all of that power accessible with the same familiar commands developers already use on the DGX Spark. Pull a trillion-parameter model, serve a whole team, and iterate on agentic workflows. No cloud required, no new tools to learn.
How to Get Involved
Docker Model Runner’s strength is its community, and there’s always room to grow. To get involved:
- Star the repository: Show your support by starring the Docker Model Runner repo.
- Share your ideas: Open an issue or submit a pull request. We look forward to your ideas!
- Spread the word: Tell friends and colleagues who are interested in running AI models with Docker.
Learn More
- Read our original post on Docker Model Runner + DGX Spark
- Check out the Docker Model Runner General Availability announcement
- Explore our Model Runner GitHub repository
- Get started with a simple hello GenAI application