Prerequisites
Before you begin, ensure you have:
- Cerebras API Key - Get a free API key here
- Python 3.10 to 3.13 - Aider requires Python to run
- Git - Aider works with Git repositories to track changes
- A code editor - While Aider runs in the terminal, you’ll want an editor to view your files
Installation and Setup
Install Aider
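Assuming pipx is already installed, the install step looks like this (aider-chat is Aider's package name on PyPI):

```shell
pipx install aider-chat
```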
pipx is recommended for isolated environments, as it installs Aider in its own virtual environment.
Set up your Cerebras API key
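Export the key as an environment variable so Aider can authenticate with Cerebras (the value below is a placeholder):

```shell
export CEREBRAS_API_KEY="your-api-key-here"
```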
To make the key persistent across sessions, add the export line to your shell profile (~/.bashrc, ~/.zshrc, etc.).
Initialize your project directory
Aider tracks changes with Git, so run it from inside a Git repository.
Create an Aider configuration file
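One way to create the file from the terminal; the model: key is an assumption based on Aider's config-file format, which mirrors its command-line options:

```shell
# Write a minimal Aider config into the project root.
cat > .aider.conf.yml <<'EOF'
model: cerebras/llama-3.3-70b
EOF
```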
Create a .aider.conf.yml file in your project root for easier usage. This eliminates the need to specify connection details every time you launch Aider. You can create the file via a terminal command, after which you can start Aider by simply running aider without additional flags.
Using Aider with Cerebras
Start Aider with a Cerebras model
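Without the configuration file, a launch command looks roughly like this (flags per the explanation that follows):

```shell
aider --model cerebras/llama-3.3-70b --openai-api-key $CEREBRAS_API_KEY
```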
If you created the configuration file above, you can simply run aider. Otherwise, use the full command to load the llama-3.3-70b model from Cerebras and connect to Cerebras's API endpoint. The cerebras/ prefix tells Aider to use LiteLLM routing to Cerebras, which automatically configures the correct API endpoint. The --openai-api-key flag is used because Aider expects this parameter name for OpenAI-compatible providers.
Create a new file from scratch
Describe the file you want, and Aider will create fibonacci.py with the implementation. You don't need to manually create the file first.
Refine the code
Next, ask Aider to rewrite the function iteratively. Aider will:
- Analyze the recursive implementation
- Rewrite it to use iteration
- Add the main execution block
- Commit the changes to Git
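The resulting fibonacci.py might look something like this sketch (the exact code Aider generates will vary):

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number using iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


if __name__ == "__main__":
    # Print the first ten Fibonacci numbers.
    for i in range(10):
        print(fibonacci(i))
```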
Run the code
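Run the script directly with your Python interpreter:

```shell
python fibonacci.py
```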
Review and undo
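Inside the session, /undo reverts the last commit Aider made; outside it, you can inspect what changed with ordinary Git commands, for example:

```shell
git log --oneline    # review the commits Aider created
git diff HEAD~1      # see the most recent change
```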
Recommended Models
Cerebras offers several models that work well with Aider. Choose based on your needs for speed versus capability:
- llama-3.3-70b: Best for complex reasoning, long-form content, and tasks requiring deep understanding
- qwen-3-32b: Balanced performance for general-purpose applications
- llama3.1-8b: Fastest option for simple tasks and high-throughput scenarios
- gpt-oss-120b: Largest model for the most demanding tasks
- zai-glm-4.6: Advanced 357B parameter model with strong reasoning capabilities
For most coding tasks, we recommend cerebras/llama-3.3-70b, as it provides excellent code understanding and generation capabilities while maintaining fast inference speeds.
Advanced Usage
Using Different Edit Formats
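To request whole-file editing explicitly, the flag looks like this (using Aider's --edit-format option):

```shell
aider --model cerebras/llama-3.3-70b --edit-format whole
```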
Aider supports multiple edit formats. For Cerebras models, the default "whole" format works well, but you can experiment with others; whole-file editing is recommended for Cerebras.
Architect Mode
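A sketch, assuming Aider's --architect flag:

```shell
aider --model cerebras/llama-3.3-70b --architect
```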
For high-level design discussions before coding, use architect mode. This is useful for planning changes, discussing architecture, or exploring different approaches.
Streaming Responses
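A sketch, assuming the --stream flag mentioned in the troubleshooting section:

```shell
aider --model cerebras/llama-3.3-70b --stream
```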
Enable streaming to see responses as they're generated, providing immediate feedback.
Working with Multiple Models
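For example, Aider's --weak-model option (used for lighter tasks such as commit messages) can point at a smaller Cerebras model; a sketch:

```shell
aider --model cerebras/llama-3.3-70b --weak-model cerebras/llama3.1-8b
```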
You can use different models for different tasks. For example, use a larger model for complex edits and a smaller one for simple changes.
Example Session
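A session might begin like this (the prompt shown is illustrative):

```shell
$ aider --model cerebras/llama-3.3-70b
> /add fibonacci.py
> Add input validation so fibonacci() raises ValueError for negative n
```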
Here's a complete example of using Aider with Cerebras to add a new feature to a Python project: start Aider from your terminal, /add the relevant files, and describe the change in plain English; Aider applies the edits and commits them.
Troubleshooting
Aider can't connect to Cerebras API
Verify that your CEREBRAS_API_KEY is set correctly and that the API base URL is https://api.cerebras.ai/v1. If you're still having issues, try running Aider with verbose logging:
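A sketch, assuming Aider's --verbose flag:

```shell
aider --model cerebras/llama-3.3-70b --verbose
```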
Model isn't editing files properly
If the model struggles to apply edits, try:
- Using a more capable model like llama-3.3-70b
- Switching edit formats: --edit-format whole
- Being more specific in your requests, with examples
- Breaking complex changes into smaller, incremental steps
- Adding more context files with /add
Changes aren't being committed to Git
Make sure that:
- You're in a Git repository: git status
- You have an initial commit: git log
- Your Git configuration is set up: git config user.name and git config user.email
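The three checks above can be run (and, for a fresh project, fixed) like this; the directory, name, and email are placeholders:

```shell
mkdir -p my-project && cd my-project
git init -q                                        # ensure you're in a Git repository
git config user.name "Your Name"                   # set the identity used for commits
git config user.email "you@example.com"
git commit --allow-empty -q -m "initial commit"    # guarantees git log has an entry
git status
git log --oneline
```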
Rate limiting or quota errors
- Check your Cerebras dashboard for current usage
- Consider upgrading your plan for higher limits
- Use a smaller model like llama3.1-8b for simple tasks
- Add delays between requests if using Aider programmatically
- Break large changes into smaller sessions
Aider is slow or unresponsive
- Ensure you’re using a Cerebras model (they’re optimized for speed)
- Reduce the number of files in context with /drop filename
- Use streaming mode: --stream
- Check your internet connection
- Try a smaller model like llama3.1-8b for faster responses
FAQ
Which Cerebras model should I use with Aider?
For most coding tasks, we recommend llama-3.3-70b. It provides excellent code understanding and generation capabilities while maintaining fast inference speeds. For faster responses on simpler tasks, try llama3.1-8b. For the most complex architectural decisions, consider gpt-oss-120b. Check Aider's LLM leaderboards to see how different models perform on code editing tasks.
Can I use Aider with multiple files?
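For example, inside a session (the file names are illustrative):

```shell
/add src/main.py src/utils.py tests/test_main.py
```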
Yes. Use the /add command to include multiple files in the conversation, and /drop filename to remove files from context if needed.
Does Aider work with non-Python projects?
Yes. Aider works with most mainstream programming languages, and Cerebras models like llama-3.3-70b have strong multi-language capabilities.
How do I undo changes made by Aider?
Use /undo within an Aider session to revert the most recent change Aider made; because Aider commits to Git, earlier changes can be reverted with standard Git commands.
Can I use Aider in CI/CD pipelines?
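A non-interactive invocation might look like this sketch, assuming Aider's --yes and --message flags (the prompt and file are illustrative):

```shell
aider --yes --model cerebras/llama-3.3-70b --message "Fix the failing lint errors" src/app.py
```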
Yes. Aider can run non-interactively; use the yes=True setting to auto-approve changes.
How much does it cost to use Aider with Cerebras?
Aider itself is free and open source; costs come from your Cerebras API usage, which depends on:
- The model you choose (larger models cost more per token)
- The number of tokens processed (input + output)
- Your usage volume
Next Steps
- Explore Aider’s documentation for advanced features like voice coding and custom commands
- Try different Cerebras models to find the best fit for your workflow
- Check out Aider’s LLM leaderboards to see performance comparisons
- Read the Cerebras documentation to learn more about API capabilities
- Want to try the latest model? Check out the GLM4.6 migration guide

