Prerequisites

Before you begin, ensure you have:
  • Cerebras API Key - Get a free API key here.
  • Braintrust API Key - Visit Braintrust and create an account or log in.
    • Go to Settings > AI Providers and add your Cerebras API key. Then generate a Braintrust API key.
  • Python 3.7 or higher

Configure Braintrust

1. Install required dependencies

Run the following:
pip install autoevals braintrust openai
2. Configure environment variables

Create a .env file in your project directory:
CEREBRAS_API_KEY=your-cerebras-api-key-here
BRAINTRUST_API_KEY=your-braintrust-api-key-here

# Optional: If self-hosting Braintrust
# BRAINTRUST_API_URL=your-braintrust-api-url
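
To make these variables visible to Python, either export them in your shell or load the .env file at startup. A minimal sketch using the python-dotenv package (an extra dependency, not part of the install step above):

# Requires: pip install python-dotenv (not included in step 1)
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

assert os.getenv("CEREBRAS_API_KEY"), "CEREBRAS_API_KEY is not set"
assert os.getenv("BRAINTRUST_API_KEY"), "BRAINTRUST_API_KEY is not set"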
3. Initialize the client

Set up the Cerebras client with Braintrust wrapping:
import os
import openai
import braintrust

# Wrap the OpenAI client to enable automatic tracing
client = braintrust.wrap_openai(
    openai.OpenAI(
        api_key=os.getenv("CEREBRAS_API_KEY"),
        base_url="https://api.cerebras.ai/v1",
    )
)
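
If your application is async, the same wrapper should also work with the async OpenAI client. A minimal sketch, assuming the same environment variables:

import os
import openai
import braintrust

# wrap_openai also accepts the async client, so traced calls can be awaited
async_client = braintrust.wrap_openai(
    openai.AsyncOpenAI(
        api_key=os.getenv("CEREBRAS_API_KEY"),
        base_url="https://api.cerebras.ai/v1",
    )
)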
4. Start logging

Initialize a logger to automatically track all your model calls:

# Initialize logger for your project
braintrust.init_logger("My Cerebras Project")

# Make a simple completion call
response = client.chat.completions.create(
    model="llama3.1-8b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of Nevada?"},
    ],
)

print(response.choices[0].message.content)

All calls are automatically logged to Braintrust with metrics like latency, token usage, and time to first token.
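
To group several related calls into a single trace, you can wrap your own functions with Braintrust's traced decorator. A minimal sketch (the helper function name is illustrative, and it reuses the client from step 3):

from braintrust import traced

@traced  # the nested completion calls appear as one span in the Braintrust UI
def answer_and_summarize(question: str) -> str:
    answer = client.chat.completions.create(
        model="llama3.1-8b",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    summary = client.chat.completions.create(
        model="llama3.1-8b",
        messages=[{"role": "user", "content": f"Summarize in one sentence: {answer}"}],
    ).choices[0].message.content
    return summary

print(answer_and_summarize("What is the capital of Nevada?"))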
5. View your logs

  • Open the Braintrust dashboard
  • Navigate to your project
  • View detailed logs with metrics
  • Reproduce and tweak prompts directly in the UI
6. Run evaluations

Create evaluations to test your model’s performance:
import os

from braintrust import Eval
from autoevals import Factuality

# Reuses the wrapped `client` defined in step 3
Eval(
    "My Cerebras Project",
    data=[
        {"input": "What is 100-94?", "expected": "6"},
        {"input": "What is the square root of 16?", "expected": "4"},
    ],
    task=lambda input: client.chat.completions.create(
        model="llama3.1-8b",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant. Provide only the answer.",
            },
            {"role": "user", "content": input},
        ],
    ).choices[0].message.content,
    scores=[
        Factuality(
            model="llama3.3-70b",
            api_key=os.environ["CEREBRAS_API_KEY"]
        )
    ],
)

You can check these evaluations in the Braintrust dashboard.
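
Factuality is only one option; scores also accepts plain Python functions, which is handy for deterministic checks. A minimal sketch of a handwritten exact-match scorer (the function name is illustrative; it reuses the Eval import and client from above):

def exact_match(input, output, expected):
    # Braintrust passes these as keyword arguments; return a value between 0 and 1
    return 1.0 if str(output).strip() == str(expected).strip() else 0.0

Eval(
    "My Cerebras Project",
    data=[{"input": "What is 100-94?", "expected": "6"}],
    task=lambda input: client.chat.completions.create(
        model="llama3.1-8b",
        messages=[{"role": "user", "content": input}],
    ).choices[0].message.content,
    scores=[exact_match],
)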

Next Steps

  • Explore the Braintrust documentation
  • Try out different Cerebras models
  • Set up custom evaluation metrics
  • Build production monitoring dashboards

Troubleshooting

API Key Issues

  • Verify your keys are correctly set in environment variables (see the quick check below)
  • Check that your Cerebras key is added to Braintrust’s AI Providers
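
A quick check that both keys are actually visible to your Python process:

import os

for key in ("CEREBRAS_API_KEY", "BRAINTRUST_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")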

Import Errors

  • Ensure all packages are installed: pip install autoevals braintrust openai
  • Use a virtual environment to avoid conflicts

Connection Issues