The Cerebras Inference SDK supports tool use, letting a model call functions you define so it can perform tasks, such as calculations, that it cannot reliably do on its own. This guide walks you through a detailed example of tool use with the Cerebras Inference SDK.

For a more detailed conceptual guide to tool use and function calling, please visit our AI Agent Bootcamp section on the topic.

Initial Setup

To begin, we need to import the necessary libraries and set up our Cerebras client.

If you haven’t set up your Cerebras API key yet, please visit our QuickStart guide for detailed instructions on how to obtain and configure your API key.
import os
import json
import re
from cerebras.cloud.sdk import Cerebras

# Initialize Cerebras client
client = Cerebras(
    api_key=os.environ.get("CEREBRAS_API_KEY"),
)
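
As an optional sanity check (not something the SDK requires), you can fail fast with a clear message if the environment variable is missing:

# Optional: fail fast if the key isn't configured
if not os.environ.get("CEREBRAS_API_KEY"):
    raise RuntimeError("CEREBRAS_API_KEY is not set; see the QuickStart guide")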

Setting Up the Tool and Tool Schema

Our first step is to define the tool that our AI will use. In this example, we’re creating a simple calculator function that can perform basic arithmetic operations.

def calculate(expression):
    # Strip anything that isn't a digit, basic operator, parenthesis, or dot
    expression = re.sub(r'[^0-9+\-*/().]', '', expression)
    
    try:
        # eval is acceptable here only because the input has been sanitized;
        # avoid eval on untrusted input in production code
        result = eval(expression)
        return str(result)
    except (SyntaxError, ZeroDivisionError, NameError, TypeError, OverflowError):
        return "Error: Invalid expression"

Next, we define the tool schema. This schema acts as a blueprint for the AI, describing the tool’s functionality, when to use it, and what parameters it expects. It helps the AI understand how to interact with our custom tool effectively.

tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "A calculator tool that can perform basic arithmetic operations. Use this when you need to compute mathematical expressions or solve numerical problems.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    }
]
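
Because the model returns its arguments as a JSON string shaped by this schema, you can optionally validate them before invoking the tool. Here is a minimal sketch; the parse_arguments helper is specific to this guide, not part of the SDK:

def parse_arguments(raw_arguments, tool_schema):
    """Parse a tool call's JSON argument string and check required keys."""
    args = json.loads(raw_arguments)
    required = tool_schema["function"]["parameters"].get("required", [])
    missing = [key for key in required if key not in args]
    if missing:
        raise ValueError(f"Missing required arguments: {missing}")
    return args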

Making the API Call

With our tool and its schema defined, we can now set up the conversation for our AI. We will prompt the LLM in natural language to perform a simple calculation, then make the API call.

This call sends our messages and tool schema to the LLM, allowing it to generate a response that may include tool use.

messages = [
    {"role": "system", "content": "You are a helpful assistant with access to a calculator. Use the calculator tool to compute mathematical expressions when needed."},
    {"role": "user", "content": "What's the result of 15 multiplied by 7?"},
]

response = client.chat.completions.create(
    model="llama3.1-8b",
    messages=messages,
    tools=tools,
)
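
Before handling the response, it can be instructive to inspect what came back. When the model opts to use the tool, the assistant message typically carries a tool_calls list instead of text content (the commented output below is illustrative):

msg = response.choices[0].message
print(msg.content)     # often None when the model makes a tool call
print(msg.tool_calls)  # e.g. a list with function name 'calculate' and
                       # arguments '{"expression": "15*7"}'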

Handling Tool Calls

Now that we’ve made the API call, we need to process the response and handle any tool calls the LLM made. The LLM decides, based on the prompt and the tool schema, whether to rely on a tool, so we check the response for tool calls, execute the tool locally, append the result to the conversation, and ask the model for a final answer.

choice = response.choices[0].message

if choice.tool_calls:
    tool_call = choice.tool_calls[0]
    if tool_call.function.name == "calculate":
        # Run the tool with the arguments the model supplied
        arguments = json.loads(tool_call.function.arguments)
        result = calculate(arguments["expression"])
        print(f"Calculation result: {result}")
        
        # Send the assistant's tool call and the tool's result back to the
        # model so it can compose a final natural-language answer
        messages.append(choice)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })
        
        final_response = client.chat.completions.create(
            model="llama3.1-8b",
            messages=messages,
        )
        
        if final_response:
            print(final_response.choices[0].message.content)
        else:
            print("No final response received")
else:
    print("Unexpected response from the model")

In this case, the LLM determined that a tool call was appropriate to answer the user’s question about the result of 15 multiplied by 7. See the output below.

Calculation result: 105
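
The same pattern generalizes to multiple tools and to multiple tool calls in one response. Here is a minimal sketch, assuming a TOOL_FUNCTIONS registry of our own (a convention of this guide, not an SDK feature) that maps each schema name to its implementation:

# Hypothetical registry mapping schema names to local implementations
TOOL_FUNCTIONS = {"calculate": calculate}

def handle_tool_calls(assistant_message, messages):
    """Execute each tool call and append its result to the conversation."""
    messages.append(assistant_message)  # keep the tool-call turn in history
    for tool_call in assistant_message.tool_calls:
        func = TOOL_FUNCTIONS[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": func(**args),
        })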

Conclusion

Tool use is an important feature that extends the capabilities of LLMs by allowing them to call pre-defined tools. To continue learning about tool use with the Cerebras Inference SDK, see the AI Agent Bootcamp section on tool use and function calling mentioned at the start of this guide.