Try out the full deep research chat here.
For this workshop, you'll need an Exa API key and a Cerebras API key.
Step 1: Environment Setup
First, let's install all the necessary libraries, import everything we need, and configure our API credentials.
pip install exa-py cerebras-cloud-sdk
from exa_py import Exa
from cerebras.cloud.sdk import Cerebras

# Add your API keys here
EXA_API_KEY = ""
CEREBRAS_API_KEY = ""

client = Cerebras(api_key=CEREBRAS_API_KEY)
exa = Exa(api_key=EXA_API_KEY)

print("✅ Setup complete")
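If you'd rather not paste keys directly into the notebook, one option is to read them from environment variables instead; this sketch assumes you have exported EXA_API_KEY and CEREBRAS_API_KEY in your shell:

import os

# Read the keys from the environment rather than hardcoding them in the cell
EXA_API_KEY = os.environ.get("EXA_API_KEY", "")
CEREBRAS_API_KEY = os.environ.get("CEREBRAS_API_KEY", "")

client = Cerebras(api_key=CEREBRAS_API_KEY)
exa = Exa(api_key=EXA_API_KEY)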
Step 2: Web Search Function
Our first core function handles web searching with Exa's auto search. Auto search is advantageous because it blends keyword and neural search, finding both exact matches and semantically similar results, and it returns the scraped text of each result.
def search_web(query, num=5):
    """Search the web using Exa's auto search"""
    result = exa.search_and_contents(
        query,
        type="auto",
        num_results=num,
        text={"max_characters": 1000}
    )
    return result.results

print("✅ Search function ready")
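To sanity-check the function, you can run a quick query and print what comes back; the query below is just an arbitrary example, and it assumes your Exa key is set:

# Example query (any topic works); prints title, URL, and a snippet of the scraped text
results = search_web("latest developments in AI agents", num=3)
for r in results:
    print(r.title, "-", r.url)
    print((r.text or "")[:200], "...\n")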
Step 3: AI Analysis Function
This function leverages Cerebras' fast inference to analyze content and generate insights. We'll use it for both structured JSON responses and regular text analysis throughout our research process.
def ask_ai(prompt):
    """Get AI response from Cerebras"""
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],
        model="llama-4-scout-17b-16e-instruct",
        max_tokens=600,
        temperature=0.2
    )
    return chat_completion.choices[0].message.content

print("✅ AI function ready")
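Here is a quick, illustrative check of both usage modes mentioned above. The prompts are arbitrary examples, and the JSON branch assumes the model actually follows the formatting instruction, which is not guaranteed, so it falls back to the raw text:

import json

# Free-form text analysis
print(ask_ai("In one sentence, what is retrieval-augmented generation?"))

# Structured output: request JSON and parse it, falling back to the raw text if parsing fails
raw = ask_ai('Return only JSON of the form {"topic": "...", "key_points": ["...", "..."]} about solar energy.')
try:
    parsed = json.loads(raw)
    print(parsed["key_points"])
except (json.JSONDecodeError, KeyError):
    print(raw)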
Step 4: Research Function
Now we'll build our research methodology. The first part of the cell below performs the initial search and gathers our first batch of sources, much like your first query to Perplexity.
Then the AI is asked to generate a conclusion based on the source data.
def research_topic(query):
    """Main research function that orchestrates the entire process"""
    print(f"🔍 Researching: {query}")

    # Search for sources
    results = search_web(query, 5)
    print(f"📄 Found {len(results)} sources")

    # Get content from sources
    sources = []
    for result in results:
        content = result.text
        title = result.title
        if content and len(content) > 200:
            sources.append({
                "title": title,
                "content": content
            })
    print(f"📄 Scraped {len(sources)} sources")

    if not sources:
        return {"summary": "No sources found", "insights": []}

    # Create context for AI analysis
    context = f"Research query: {query}\n\nSources:\n"
    for i, source in enumerate(sources[:4], 1):
        context += f"{i}. {source['title']}: {source['content'][:400]}...\n\n"
        # ^^ get rid of this manual truncation and control length via the API's text params instead
        # best practices - https://www.anthropic.com/engineering/built-multi-agent-research-system

    # Ask AI to analyze and synthesize
    prompt = f"""{context}
Based on these sources, provide:
1. A comprehensive summary (2-3 sentences)
2. Three key insights as bullet points
Format your response exactly like this:
SUMMARY: [your summary here]
INSIGHTS:
- [insight 1]
- [insight 2]
- [insight 3]"""

    response = ask_ai(prompt)
    print("🧠 Analysis complete")

    return {"query": query, "sources": len(sources), "response": response}

print("✅ Research function ready")
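As a quick test, and to show how the fixed SUMMARY/INSIGHTS format can be split apart downstream, here is an illustrative run; the query is an arbitrary example and the parsing assumes the model followed the requested format:

# Run a single-layer research pass (example topic for illustration)
result = research_topic("latest breakthroughs in battery technology")
print(f"Query: {result.get('query', '')} | Sources: {result.get('sources', 0)}")

# Split the fixed response format back into its parts, if the model followed it
response = result.get("response", "")
if "INSIGHTS:" in response:
    summary_part, insights_part = response.split("INSIGHTS:", 1)
    print("Summary:", summary_part.replace("SUMMARY:", "").strip())
    insights = [line.strip("- ").strip() for line in insights_part.splitlines() if line.strip().startswith("-")]
    print("Insights:", insights)
else:
    print(response)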
Step 5: Add Research Depth
Now let's make our research intelligent instead of just searching once and hoping for the best.
Here's the problem with basic search: you ask "What are the latest AI breakthroughs?" and get random articles. But what if those articles all focus on ChatGPT and miss robotics? You'd never know what you're missing.
Here is a more advanced research flow.
You ask: "What's driving the new wave of AI agents?"
↓
Layer 1: Broad search finds 6 sources about agent frameworks like AutoGPT, LangChain, and OpenAI's GPT-4o
↓
AI reads them and thinks: "These all focus on software… but what's enabling real-time speed?"
↓
Layer 2: Targeted search for "AI hardware for real-time agents" and "fast inference for LLMs"
↓
Final synthesis: Combines insights on agent software + breakthroughs in inference speed (Cerebras, NVIDIA, etc.) =
a complete picture of what powers real-time AI agents today
def deeper_research_topic(query):
    """Two-layer research for better depth"""
    print(f"🔍 Researching: {query}")

    # Layer 1: Initial search
    results = search_web(query, 6)
    sources = []
    for result in results:
        if result.text and len(result.text) > 200:
            sources.append({"title": result.title, "content": result.text})
    print(f"Layer 1: Found {len(sources)} sources")

    if not sources:
        return {"summary": "No sources found", "insights": []}

    # Get initial analysis and identify follow-up topic
    context1 = f"Research query: {query}\n\nSources:\n"
    for i, source in enumerate(sources[:4], 1):
        context1 += f"{i}. {source['title']}: {source['content'][:300]}...\n\n"

    follow_up_prompt = f"""{context1}
Based on these sources, what's the most important follow-up question that would deepen our understanding of "{query}"?
Respond with just a specific search query (no explanation):"""

    follow_up_query = ask_ai(follow_up_prompt).strip().strip('"')

    # Layer 2: Follow-up search
    print(f"Layer 2: Investigating '{follow_up_query}'")
    follow_results = search_web(follow_up_query, 4)
    for result in follow_results:
        if result.text and len(result.text) > 200:
            sources.append({"title": f"[Follow-up] {result.title}", "content": result.text})
    print(f"Total sources: {len(sources)}")

    # Final synthesis
    all_context = f"Research query: {query}\nFollow-up: {follow_up_query}\n\nAll Sources:\n"
    for i, source in enumerate(sources[:7], 1):
        all_context += f"{i}. {source['title']}: {source['content'][:300]}...\n\n"

    final_prompt = f"""{all_context}
Provide a comprehensive analysis:
SUMMARY: [3-4 sentences covering key findings from both research layers]
INSIGHTS:
- [insight 1]
- [insight 2]
- [insight 3]
- [insight 4]
DEPTH GAINED: [1 sentence on how the follow-up search enhanced understanding]"""

    response = ask_ai(final_prompt)
    return {"query": query, "sources": len(sources), "response": response}

print("✅ Enhanced research function ready")
# Test the enhanced research system
result = deeper_research_topic("climate change solutions 2025")
# Display results
print("\n" + "="*50)
print("ENHANCED RESEARCH RESULTS")
print("="*50)
print(f"Query: {result['query']}")
print(f"Sources analyzed: {result['sources']}")
print(f"\n{result['response']}")
print("="*50)
# Try more topics
print("\nTry these:")
print("deeper_research_topic('quantum computing advances')")
print("deeper_research_topic('space exploration news')")
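If you want to go deeper than two layers, the same pattern generalizes: keep asking the model for the next follow-up query and stop after a fixed budget. Below is a minimal sketch along those lines; the deep_research name and its parameters are illustrative, not part of the workshop code:

def deep_research(query, layers=3, per_layer=4):
    """Illustrative sketch: repeat the search -> follow-up loop for a fixed number of layers."""
    sources = []
    current_query = query
    for layer in range(1, layers + 1):
        for r in search_web(current_query, per_layer):
            if r.text and len(r.text) > 200:
                sources.append({"title": r.title, "content": r.text})
        print(f"Layer {layer}: {len(sources)} sources so far (query: {current_query})")
        if layer == layers or not sources:
            break
        # Ask the model for the next angle to investigate, based on the newest sources
        context = "\n".join(f"- {s['title']}: {s['content'][:200]}" for s in sources[-per_layer:])
        follow_up_prompt = (
            f"{context}\n\n"
            f'Given the research goal "{query}", suggest the single best follow-up search query. '
            "Respond with the query only."
        )
        current_query = ask_ai(follow_up_prompt).strip().strip('"')
    return sources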
(Optional) Step 6: Anthropic Multi-Agent Research
What makes Anthropic's approach special?
It uses intelligent orchestration with specialized agents working in parallel.
Usually, one AI does everything sequentially:
- Search → analyze → search → analyze (slow, limited)
Anthropic's approach lets a lead agent delegate to specialized subagents:
- The lead agent breaks down "AI safety research" into:
- Subagent 1: Current AI safety techniques
- Subagent 2: Recent regulatory developments
- Subagent 3: Industry implementation challenges
- All agents work simultaneously = 3x faster, better coverage
The result is parallel intelligence that scales to complex research tasks.
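The demo implementation below runs its subagent searches one after another to keep the code easy to follow. If you want the searches to genuinely run in parallel, here is a minimal sketch using Python's ThreadPoolExecutor and the search_web helper from Step 2; the run_subagents_in_parallel name and the example search terms are illustrative:

from concurrent.futures import ThreadPoolExecutor

def run_subagents_in_parallel(search_terms, num_results=2):
    """Illustrative sketch: run each subagent's web search concurrently."""
    def run_one(term):
        hits = search_web(term, num_results)
        return {
            "search_focus": term,
            "sources": [
                {"title": r.title, "content": r.text[:300]}
                for r in hits
                if r.text and len(r.text) > 200
            ],
        }
    # The searches are I/O-bound, so one worker thread per subtask is enough
    with ThreadPoolExecutor(max_workers=len(search_terms)) as pool:
        return list(pool.map(run_one, search_terms))

# Example: three subtask angles derived from one query
findings = run_subagents_in_parallel([
    "AI safety fundamentals principles",
    "AI safety latest developments",
    "AI safety applications real world",
])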
def anthropic_multiagent_research(query):
    """
    Simple implementation of Anthropic's multi-agent approach:
    1. Lead agent plans and delegates
    2. Specialized subagents work in parallel
    3. Lead agent synthesizes results
    """
    print(f"🤖 Anthropic Multi-Agent Research: {query}")
    print("-" * 50)

    # Step 1: Lead Agent - Task Decomposition & Delegation
    print("👨‍💼 LEAD AGENT: Planning and delegating...")

    delegation_prompt = f"""You are a Lead Research Agent. Break down this complex query into 3 specialized subtasks for parallel execution: "{query}"
For each subtask, provide:
- Clear objective
- Specific search focus
- Expected output
SUBTASK 1: [Core/foundational aspects]
SUBTASK 2: [Recent developments/trends]
SUBTASK 3: [Applications/implications]
Make each subtask distinct to avoid overlap."""

    plan = ask_ai(delegation_prompt)  # The plan is generated for illustration; the demo uses fixed search angles below
    print("   ✓ Subtasks defined and delegated")

    # Step 2: Simulate Parallel Subagents (simplified for demo)
    print("\n🔄 SUBAGENTS: Working in parallel...")

    # Create targeted searches for three fixed angles (a full implementation would parse them from the plan)
    subtask_searches = [
        f"{query} fundamentals principles",  # Core aspects
        f"{query} latest developments",      # Recent trends
        f"{query} applications real world"   # Implementation
    ]

    subagent_results = []
    for i, search_term in enumerate(subtask_searches, 1):
        print(f"   🤖 Subagent {i}: Researching {search_term}")
        results = search_web(search_term, 2)
        sources = []
        for result in results:
            if result.text and len(result.text) > 200:
                sources.append({
                    "title": result.title,
                    "content": result.text[:300]
                })
        subagent_results.append({
            "subtask": i,
            "search_focus": search_term,
            "sources": sources
        })

    total_sources = sum(len(r["sources"]) for r in subagent_results)
    print(f"   📊 Combined: {total_sources} sources from {len(subagent_results)} agents")

    # Step 3: Lead Agent - Synthesis
    print("\n👨‍💼 LEAD AGENT: Synthesizing parallel findings...")

    # Combine all subagent findings
    synthesis_context = f"ORIGINAL QUERY: {query}\n\nSUBAGENT FINDINGS:\n"
    for result in subagent_results:
        synthesis_context += f"\nSubagent {result['subtask']} ({result['search_focus']}):\n"
        for source in result['sources'][:2]:  # Limit for brevity
            synthesis_context += f"- {source['title']}: {source['content']}...\n"

    synthesis_prompt = f"""{synthesis_context}
As the Lead Agent, synthesize these parallel findings into a comprehensive report:
EXECUTIVE SUMMARY:
[2-3 sentences covering the most important insights across all subagents]
INTEGRATED FINDINGS:
• [Key finding from foundational research]
• [Key finding from recent developments]
• [Key finding from applications research]
• [Cross-cutting insight that emerged]
RESEARCH QUALITY:
- Sources analyzed: {total_sources} across {len(subagent_results)} specialized agents
- Coverage: [How well the subtasks covered the topic]"""

    final_synthesis = ask_ai(synthesis_prompt)

    print("\n" + "=" * 50)
    print("🎯 MULTI-AGENT RESEARCH COMPLETE")
    print("=" * 50)
    print(final_synthesis)

    return {
        "query": query,
        "subagents": len(subagent_results),
        "total_sources": total_sources,
        "synthesis": final_synthesis
    }

print("✅ Anthropic multi-agent system ready!")
# Test the Anthropic multi-agent research system
result = anthropic_multiagent_research("current climate change solutions")
print("\n" + "🤖" * 30)
print("ANTHROPIC MULTI-AGENT DEMO")
print("🤖" * 30)
print(f"Query: {result['query']}")
print(f"Subagents deployed: {result['subagents']}")
print(f"Total sources: {result['total_sources']}")
print("\n💡 Key Innovation: Parallel specialized agents + intelligent orchestration")
print("\n🎯 Try other complex topics:")
print("anthropic_multiagent_research('quantum computing commercial applications')")
print("anthropic_multiagent_research('artificial intelligence safety frameworks')")
print("anthropic_multiagent_research('renewable energy policy implementation')")