Resources
Examples
Run and fork these examples to start building with Cerebras
Getting started with Cerebras Inference API
Learn how to get started with the Cerebras Inference API for your AI projects.
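If you just want to see the shape of a request first, here is a minimal sketch assuming the `cerebras_cloud_sdk` Python package and a `CEREBRAS_API_KEY` environment variable; the model name is illustrative.

```python
import os
from cerebras.cloud.sdk import Cerebras

# The client reads the API key from the environment (assumed variable name).
client = Cerebras(api_key=os.environ.get("CEREBRAS_API_KEY"))

# Request a chat completion; the model name is illustrative.
response = client.chat.completions.create(
    model="llama3.1-8b",
    messages=[{"role": "user", "content": "Why does inference speed matter?"}],
)
print(response.choices[0].message.content)
```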
Conversational Memory for LLMs with LangChain
Explore how to build conversational memory for LLMs using LangChain.
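As a rough sketch of the idea, the snippet below pairs a buffer memory with a chain so earlier turns are replayed on every call. It assumes Cerebras' OpenAI-compatible endpoint (`https://api.cerebras.ai/v1`), a `CEREBRAS_API_KEY` environment variable, and an illustrative model name; LangChain offers several other memory patterns.

```python
import os
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Cerebras exposes an OpenAI-compatible endpoint, so ChatOpenAI can point at it.
llm = ChatOpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
    model="llama3.1-8b",
)

# ConversationBufferMemory stores the full transcript and feeds it back
# into every call, giving the model short-term conversational memory.
chain = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(chain.predict(input="Hi, my name is Ada."))
print(chain.predict(input="What is my name?"))  # answered from memory
```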
RAG with Pinecone + Docker
Implement Retrieval-Augmented Generation (RAG) using Pinecone and Docker.
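Setting the Docker packaging aside, the core retrieve-then-generate loop might look like the sketch below; the index name, embedding model, metadata field, and model name are all assumptions.

```python
import os
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer
from cerebras.cloud.sdk import Cerebras

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("docs-index")                        # assumed index name
embedder = SentenceTransformer("all-MiniLM-L6-v2")    # assumed embedding model
llm = Cerebras(api_key=os.environ["CEREBRAS_API_KEY"])

question = "How do I rotate my API key?"

# Retrieve: embed the question and fetch the closest chunks from Pinecone.
results = index.query(
    vector=embedder.encode(question).tolist(), top_k=3, include_metadata=True
)
context = "\n".join(m.metadata["text"] for m in results.matches)

# Generate: answer grounded in the retrieved context.
answer = llm.chat.completions.create(
    model="llama3.1-8b",
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```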
RAG with Weaviate + HuggingFace
Implement Retrieval-Augmented Generation (RAG) using Weaviate and HuggingFace.
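The retrieval side with Weaviate's v4 Python client might look like this sketch; the generation step mirrors the Pinecone example above. The collection name, `text` property, and embedding model are assumptions.

```python
import weaviate
from sentence_transformers import SentenceTransformer

# Connect to a local Weaviate instance; swap in connect_to_weaviate_cloud(...) as needed.
client = weaviate.connect_to_local()
docs = client.collections.get("Docs")                 # assumed collection name
embedder = SentenceTransformer("all-MiniLM-L6-v2")    # Hugging Face embedding model

question = "What does the retry policy look like?"

# Vector search: embed the question locally and fetch the closest chunks.
results = docs.query.near_vector(
    near_vector=embedder.encode(question).tolist(), limit=3
)
context = "\n".join(obj.properties["text"] for obj in results.objects)
print(context)

client.close()
```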
Getting started with Cerebras + Streamlit
Learn how to integrate Cerebras with Streamlit to build interactive applications.
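A minimal chat app could look like the sketch below, assuming the `cerebras_cloud_sdk` package and a `CEREBRAS_API_KEY` environment variable; save it as `app.py` and launch it with `streamlit run app.py`.

```python
import os
import streamlit as st
from cerebras.cloud.sdk import Cerebras

client = Cerebras(api_key=os.environ["CEREBRAS_API_KEY"])

st.title("Chat with Cerebras")

# Keep the conversation in session state so it survives Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    response = client.chat.completions.create(
        model="llama3.1-8b",                # illustrative model name
        messages=st.session_state.messages,
    )
    reply = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": reply})
    st.chat_message("assistant").write(reply)
```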
AI Agentic Workflow Example with LlamaIndex
Build an AI agentic workflow using LlamaIndex.
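A small sketch of the pattern: wrap a plain Python function as a tool and hand it to a ReAct agent. The OpenAI-compatible endpoint URL, model name, and `multiply` tool are assumptions; LlamaIndex also ships dedicated LLM integrations you may prefer.

```python
import os
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai_like import OpenAILike

# Point an OpenAI-compatible LLM wrapper at the Cerebras endpoint.
llm = OpenAILike(
    model="llama3.1-70b",
    api_base="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
    is_chat_model=True,
)

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# Expose the function as a tool the agent can call.
tools = [FunctionTool.from_defaults(fn=multiply)]

# A ReAct-style agent reasons step by step and invokes tools as needed.
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
print(agent.chat("What is 21.7 times 3?"))
```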
AI Agentic Workflow Example with LangChain
Build an AI agentic workflow using LangChain.
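Sketching the same idea with LangChain's tool-calling agent; the endpoint, model name, and the `get_word_length` tool are illustrative assumptions.

```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import AgentExecutor, create_tool_calling_agent

llm = ChatOpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
    model="llama3.1-70b",
)

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# The model decides when to call the tool; AgentExecutor runs the loop.
agent = create_tool_calling_agent(llm, [get_word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[get_word_length])
result = executor.invoke({"input": "How many letters are in 'Cerebras'?"})
print(result["output"])
```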
Multi-Agent AI Workflow Example with LangGraph + LangSmith
Create a multi-agent AI workflow with LangGraph and LangSmith.
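A compact sketch of the pattern: each LangGraph node acts as one agent over a shared state, and LangSmith tracing is switched on through environment variables. The node names, state fields, endpoint, and model are assumptions.

```python
import os
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI

# LangSmith tracing is enabled via environment variables (requires LANGCHAIN_API_KEY).
os.environ["LANGCHAIN_TRACING_V2"] = "true"

llm = ChatOpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
    model="llama3.1-70b",
)

class State(TypedDict):
    topic: str
    outline: str
    draft: str

# Each node is one "agent": a function that reads and updates the shared state.
def planner(state: State) -> dict:
    outline = llm.invoke(f"Write a 3-point outline about {state['topic']}").content
    return {"outline": outline}

def writer(state: State) -> dict:
    draft = llm.invoke(f"Write a short paragraph from this outline:\n{state['outline']}").content
    return {"draft": draft}

graph = StateGraph(State)
graph.add_node("planner", planner)
graph.add_node("writer", writer)
graph.add_edge(START, "planner")
graph.add_edge("planner", "writer")
graph.add_edge("writer", END)

app = graph.compile()
print(app.invoke({"topic": "wafer-scale inference"})["draft"])
```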
Track and Log your Cerebras LLM calls with Weights & Biases
Use Weights & Biases Weave with the Cerebras Cloud SDK for automatic tracking and logging of LLM calls.
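In its simplest explicit form, the pattern might look like this sketch: `weave.init` names the project, and any function wrapped in `@weave.op` is logged with its inputs and outputs in the Weave UI. The project name and model are assumptions.

```python
import os
import weave
from cerebras.cloud.sdk import Cerebras

# Initialize a Weave project; calls made inside @weave.op functions are traced.
weave.init("cerebras-demo")   # assumed project name

client = Cerebras(api_key=os.environ["CEREBRAS_API_KEY"])

@weave.op()
def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="llama3.1-8b",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("Summarize what Weave logs for this call."))
```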