By combining Docker’s containerization with Cerebras’ inference speed, teams can evaluate new code quickly while keeping experiments isolated, repeatable, and far, far away from a production database.

Try it yourself

In this tutorial we will build DevDuck, a multi-agent system that combines Cerebras' fast inference with Docker Compose for isolated AI development environments.
1

Install Docker

First, make sure that you have Docker installed; you can download it from the official website and verify the installation by running docker --version in a terminal.
2

Clone the Repository

Next, open a terminal and run the following commands to clone the repository and move into it:
git clone https://github.com/shelajev/docker-cerebras-demo
cd docker-cerebras-demo
3

Setup Environment

Once you’ve cloned the repository, the next step is to configure your environment. DevDuck uses two models: one local model and one Cerebras inference model. To use the system, add your Cerebras API key, which you can obtain from the Cerebras platform, to the .env file.
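A minimal .env needs little more than the key itself. As a sketch (the exact variable name here is an assumption; check the repository's example env file for the name it actually expects):
CEREBRAS_API_KEY=your-api-key-here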
4

Running DevDuck

All that’s left to do is run the program. To build and start DevDuck, run:
docker compose up --build
The Compose setup spins up our agents along with Docker’s MCP Gateway, which in this example manages the MCP tools for working with the Node sandbox containers.
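As a rough sketch of what such a Compose file can look like, with one service for the agents and one for the gateway (the service names, image, and port below are illustrative assumptions, not the repository's actual configuration; see the repo's compose.yaml for the real one):
services:
  devduck:
    build: .
    env_file: .env
    ports:
      - "8000:8000"
  mcp-gateway:
    image: docker/mcp-gateway
Running docker compose up --build builds the agent image and starts both services together, which is what keeps the sandbox tooling isolated from everything else on your machine.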
5

Using DevDuck

You can initialize the sandbox with a single prompt. DevDuck has three separate agents, but the Cerebras agent does most of the heavy lifting and tool calling. To initialize the sandbox, or to use any tool, simply describe the task; the program automatically routes it to the correct agent and takes care of everything else. Here, the DevDuck agent hands off to Cerebras, which then sets up the Docker Compose sandbox in seconds. You can say:
“Initialize the sandbox”
“Hey Cerebras, init the sandbox”
“Please initialize my container. Thank you!”
Because Compose hosts multiple containers, the agents can hand off to one another and call tools with no extra work on your part. And with Cerebras inference, expect sandbox initialization, file creation, and code generation to finish in seconds.