What is Arize Phoenix?
Arize Phoenix is an open-source AI observability platform that helps you monitor, evaluate, and debug your LLM applications. Phoenix provides detailed tracing, evaluation capabilities, and debugging tools to help you understand and improve your AI systems in production. Learn more at https://arize.com/phoenix/.

With Phoenix, you can:
- Capture detailed traces of your LLM interactions
- Evaluate model outputs with custom or pre-built evaluators
- Debug retrieval and generation pipelines
- Monitor performance and quality metrics
- Analyze embeddings and vector search results
Prerequisites
Before you begin, ensure you have:
- Cerebras API Key - Get a free API key here.
- Python 3.11 or higher - Phoenix requires Python 3.11+. Check your version with python --version.
- Phoenix Account (Optional) - While you can use Phoenix locally, creating a free account at Phoenix Cloud enables cloud-based tracing and team collaboration.
Configure Arize Phoenix
Configure environment variables
Create a .env file in your project directory:
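A sketch of the .env contents, based on the variables referenced in this guide (all values are placeholders):

```
# Phoenix Cloud settings
PHOENIX_API_KEY=your-phoenix-api-key
PHOENIX_COLLECTOR_ENDPOINT=https://app.phoenix.arize.com/s/your-workspace-name

# Only needed for Phoenix Cloud instances created before June 24, 2025
# PHOENIX_CLIENT_HEADERS=api_key=your-phoenix-api-key

# Cerebras settings
CEREBRAS_API_KEY=your-cerebras-api-key
```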
Replace your-workspace-name with your actual Phoenix Cloud workspace name (e.g., sebastian-duerr). You can find your Phoenix API key and workspace name in your Phoenix Cloud dashboard.

Note: If your Phoenix Cloud instance was created before June 24, 2025, you must set PHOENIX_CLIENT_HEADERS with the api_key= prefix for authentication to work correctly.

Initialize Phoenix tracing
Set up Phoenix Cloud tracing with automatic instrumentation:
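A minimal sketch, assuming the arize-phoenix and python-dotenv packages are installed and the .env file above is in place (the project name is a placeholder):

```python
from dotenv import load_dotenv
from phoenix.otel import register

load_dotenv()  # load PHOENIX_API_KEY and PHOENIX_COLLECTOR_ENDPOINT from .env

# Register Phoenix tracing; auto_instrument=True patches the OpenAI SDK so every
# chat completion call is captured as a trace.
tracer_provider = register(
    project_name="cerebras-phoenix-demo",  # placeholder project name
    auto_instrument=True,
)
```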
The auto_instrument=True flag automatically instruments the OpenAI SDK to capture all API calls (including Cerebras) and sends detailed traces to Phoenix Cloud.

Make your first traced request
Make a request to Cerebras. Phoenix will automatically capture the full trace:
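A minimal sketch of a traced request. Cerebras exposes an OpenAI-compatible API, so the instrumented OpenAI client is used directly; the model name and prompt are placeholders:

```python
import os
from openai import OpenAI

# Point the OpenAI SDK at Cerebras's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # placeholder: any Cerebras-hosted model
    messages=[{"role": "user", "content": "In one sentence, what is AI observability?"}],
)
print(response.choices[0].message.content)
```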
After running this code, visit Phoenix Cloud to see your traces. You’ll see detailed information including conversation history, token usage, response latency, model parameters, and any errors or warnings.
Advanced Features
Streaming Responses
Phoenix fully supports streaming responses from Cerebras. Traces will capture the complete streamed output:
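A sketch of a streamed, traced request, reusing the client created above (model name and prompt are placeholders):

```python
stream = client.chat.completions.create(
    model="llama3.1-8b",  # placeholder model
    messages=[{"role": "user", "content": "Write a haiku about request tracing."}],
    stream=True,
)

# The instrumentation assembles the streamed chunks into a single trace.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```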
Using Phoenix Evaluations
Phoenix includes a powerful evaluation library that can use Cerebras models to evaluate your LLM outputs:
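A sketch using phoenix.evals, assuming its OpenAIModel wrapper accepts base_url and api_key overrides so the evaluator runs against Cerebras's OpenAI-compatible endpoint; the DataFrame contents and model name are placeholders:

```python
import os
import pandas as pd
from phoenix.evals import (
    TOXICITY_PROMPT_RAILS_MAP,
    TOXICITY_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Assumption: base_url/api_key point the evaluator at the Cerebras endpoint.
eval_model = OpenAIModel(
    model="llama3.1-8b",  # placeholder evaluator model
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)

# The toxicity template reads the text to judge from an "input" column.
df = pd.DataFrame({"input": ["What a lovely day to learn about observability!"]})

results = llm_classify(
    dataframe=df,
    model=eval_model,
    template=TOXICITY_PROMPT_TEMPLATE,
    rails=list(TOXICITY_PROMPT_RAILS_MAP.values()),
)
print(results)
```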
Multi-Turn Conversations
Phoenix traces multi-turn conversations, making it easy to debug complex interactions:
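A sketch of a two-turn exchange, reusing the instrumented client (model name and prompts are placeholders); each call produces a trace, and carrying the message history forward keeps the turns connected:

```python
messages = [{"role": "user", "content": "Hi! Can you help me plan a small Python project?"}]

first = client.chat.completions.create(model="llama3.1-8b", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

messages.append({"role": "user", "content": "Great - what should the folder structure look like?"})
second = client.chat.completions.create(model="llama3.1-8b", messages=messages)
print(second.choices[0].message.content)
```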
Complete Example
Here’s a complete example showing all the setup and a traced request:
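A self-contained sketch, assuming the arize-phoenix and openinference-instrumentation-openai packages listed in Troubleshooting are installed along with openai and python-dotenv, and that the .env file from earlier exists; the model and project names are placeholders:

```python
import os

from dotenv import load_dotenv
from openai import OpenAI
from phoenix.otel import register

# Load PHOENIX_* and CEREBRAS_API_KEY from the .env file created earlier.
load_dotenv()

# Register Phoenix tracing before creating the client so the OpenAI SDK is instrumented.
tracer_provider = register(
    project_name="cerebras-phoenix-demo",  # placeholder project name
    auto_instrument=True,
)

# Cerebras is OpenAI-compatible, so the instrumented OpenAI client works as-is.
client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # placeholder: any Cerebras-hosted model
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize what Phoenix tracing captures for an LLM call."},
    ],
)
print(response.choices[0].message.content)
```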
Troubleshooting
Traces not appearing in Phoenix Cloud
If you don’t see traces in Phoenix Cloud:
- For instances created before June 24, 2025: Ensure you set PHOENIX_CLIENT_HEADERS=api_key=your-api-key in your .env file
- Verify your PHOENIX_API_KEY environment variable is set correctly (see the quick check below)
- Check that PHOENIX_COLLECTOR_ENDPOINT is set to https://app.phoenix.arize.com/s/your-workspace-name
- Ensure you called register() with auto_instrument=True before making any API requests
- Look for any error messages in your Python console (especially “401 Unauthorized”)
- Confirm the arize-phoenix and openinference-instrumentation-openai packages are installed
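A quick sanity check for the Phoenix-related environment variables this guide uses (a minimal sketch; it only prints what is currently set):

```python
import os

# Print the Phoenix-related settings so you can spot missing or malformed values.
for var in ("PHOENIX_API_KEY", "PHOENIX_COLLECTOR_ENDPOINT", "PHOENIX_CLIENT_HEADERS"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```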
401 Unauthorized errors
Connection errors to Cerebras API
If you’re getting connection errors:
- Verify your CEREBRAS_API_KEY environment variable is set correctly
- Ensure you’re using the correct base URL: https://api.cerebras.ai/v1
- Check your internet connection and firewall settings
- Try making a simple request without Phoenix to isolate the issue, as in the sketch below
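A minimal sketch of an uninstrumented request (no register() call), useful for telling a Cerebras connection problem apart from a tracing problem; the model name is a placeholder:

```python
import os
from openai import OpenAI

# No Phoenix registration here - if this call also fails, the issue is with the
# Cerebras API key, endpoint, or network rather than with the tracing setup.
client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)
response = client.chat.completions.create(
    model="llama3.1-8b",  # placeholder model
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```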
High memory usage with large traces
If Phoenix is consuming too much memory:
- Consider using Phoenix Cloud instead of running locally for production workloads
- Limit the number of traces stored locally by restarting Phoenix periodically
- Use trace sampling for high-volume applications
- Review the performance optimization guide in Phoenix docs
Next Steps
- Explore the Phoenix documentation to learn about advanced features like custom evaluators and embedding analysis
- Try different Cerebras models to compare performance and quality
- Set up custom evaluators to monitor specific quality metrics
- Integrate Phoenix with your production applications for continuous monitoring
- Join the Phoenix community on Slack to get help and share feedback

