Tool Use and Function Calling
Introduction
As we explored in the previous section, large language model (LLM) applications can be significantly improved using the various components of agentic workflows. The first of these components that we’ll explore is tool use, which enables LLMs to perform more complex tasks than just text processing.
In the context of AI agents, a tool is any external resource or capability that augments the core functionality of the LLM. Most often, you’ll encounter tools through a method called function calling: a subset of tool use that allows the LLM to invoke predefined functions with specific parameters. These functions can perform calculations, retrieve data, or execute actions that the model cannot directly carry out itself.
To illustrate the value of tool use and function calling, let’s consider a financial analyst tasked with comparing the moving averages of two companies’ stock prices.
Without function calling capabilities, an LLM offers limited value to an analyst performing detailed analysis. LLMs lack access to real-time or historical stock price data, making it difficult to work with up-to-date information. While they can explain concepts like moving averages and guide users through calculations, they aren’t reliable for precise mathematical operations. And because of their probabilistic nature, the results an LLM provides for complex calculations can be inconsistent or inaccurate.
Tool use and function calling address these limitations by allowing the LLM to:
- Request specific stock data for the companies in question.
- Invoke a dedicated function that accurately calculates the moving average.
- Consistently produce precise results based on the provided data and specified parameters.
By utilizing a tool that performs an exact calculation of the moving average, the LLM can provide more reliable answers to the analyst. Using this very example, let’s build an AI agent with tool use capabilities to better understand the concept.
Initial Setup
Before diving into building our tools, let’s begin by initializing the Cerebras Inference SDK. Note: if this is your first time using our SDK, please visit our QuickStart guide for details on installation and obtaining API keys.
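A minimal initialization, assuming the SDK package is installed and your API key is exported as the `CEREBRAS_API_KEY` environment variable, might look like this:

```python
import os

from cerebras.cloud.sdk import Cerebras

# Initialize the Cerebras Inference client. The API key is read from the
# CEREBRAS_API_KEY environment variable.
client = Cerebras(api_key=os.environ.get("CEREBRAS_API_KEY"))
```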
For the sake of simplicity, in this section we will only build a tool for calculating the moving average of stocks, and we will handle the data loading step with local JSON files. Note that in a production environment, this step could instead be handled by a tool that fetches real-time stock data from a financial API.
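As a sketch, suppose the historical prices live in a local file named `stock_data.json` (a hypothetical name and layout for this example) that maps each ticker symbol to a list of date/price records:

```python
import json

# Load historical prices from a local JSON file (hypothetical layout:
# each ticker maps to a list of {"date": ..., "price": ...} records).
with open("stock_data.json") as f:
    stock_data = json.load(f)
```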
Creating a Moving Average Calculator Tool
Now that we have initialized our client and have some data to work with, let’s build our first tool: a function that our LLM can call to calculate the moving average of a stock.
We’ll name our function `calculate_moving_average`. It computes the moving average of stock prices over a specified period. It first validates the input parameters and retrieves the relevant stock data. The function then iterates through the data, maintaining a sliding window of stock prices. For each day after the initial window, it calculates the average price within the window, rounds it to two decimal places, and stores the result along with the corresponding date. This process continues until it has processed the specified number of days, resulting in a list of moving averages.
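Here is one plausible implementation, assuming the JSON layout above; the parameter names `ticker`, `window`, and `num_days` are illustrative choices rather than a fixed API:

```python
def calculate_moving_average(ticker: str, window: int, num_days: int) -> list[dict]:
    """Compute the moving average of a stock's prices over a sliding window.

    Returns one {"date", "moving_average"} entry per requested day.
    """
    # Validate the inputs before doing any work.
    if ticker not in stock_data:
        raise ValueError(f"No data available for ticker {ticker!r}")
    if window <= 0 or num_days <= 0:
        raise ValueError("window and num_days must be positive integers")
    prices = stock_data[ticker]
    if window + num_days - 1 > len(prices):
        raise ValueError("Not enough price history for the requested range")

    results = []
    # Slide a fixed-size window across the price series, one day at a time.
    for i in range(num_days):
        window_slice = prices[i : i + window]
        average = round(sum(p["price"] for p in window_slice) / window, 2)
        # Record the average alongside the date of the window's final day.
        results.append({"date": window_slice[-1]["date"], "moving_average": average})
    return results
```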
Tool Schema
In addition to our `calculate_moving_average` tool, we need to create a schema which provides context on when and how it can be used. You can think of the tool schema as a user manual for your AI agent. The more precise and informative your schema, the better equipped the AI becomes at determining when to utilize your tool and how to construct appropriate arguments. You can provide the schema to the Cerebras Inference API through the `tools` parameter, as described in the Tool Use section of the API documentation.
The schema is composed of three components: the `name` of the tool, a `description` of the tool, and the `parameters` it accepts.
For our schema, we’ll use Pydantic to ensure type safety and to validate inputs before they are passed to the function. This approach reduces the risk of errors that can arise from manually writing JSON schemas and keeps parameter definitions closely aligned with their usage in the code.
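A sketch of this approach, reusing the parameter names from our function above and wrapping the generated schema in the OpenAI-compatible tool format that the `tools` parameter expects:

```python
from pydantic import BaseModel, Field

class MovingAverageParams(BaseModel):
    """Input parameters for calculate_moving_average."""
    ticker: str = Field(description="Stock ticker symbol, e.g. 'AAPL'")
    window: int = Field(description="Size of the moving-average window, in days")
    num_days: int = Field(description="Number of recent days to compute averages for")

# Wrap the generated JSON schema in the tool format expected by the
# `tools` parameter of the chat completion request.
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_moving_average",
            "description": (
                "Calculate the moving average of a stock's price over a "
                "specified window for a given number of recent days."
            ),
            "parameters": MovingAverageParams.model_json_schema(),
        },
    }
]
```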
Integrating Tools Into the Workflow
Now that we have defined our `calculate_moving_average` function and created the corresponding tool schema, we need to integrate these components into our AI agent’s workflow. The next step is to set up the messaging structure and create a chat completion request using the Cerebras Inference SDK. This involves all of the standard components that comprise a chat completion request, including crafting an initial system message and the user’s query. We’ll then pass these messages along with our defined tools to the chat completion method. This setup allows the AI to understand the context, interpret the user’s request, and determine when and how to utilize the `calculate_moving_average` function to provide accurate responses.
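Putting this together, a request might look like the following; the model name and prompts here are illustrative:

```python
messages = [
    {
        "role": "system",
        "content": "You are a financial analysis assistant. Use the "
                   "provided tools when a calculation is required.",
    },
    {
        "role": "user",
        "content": "What is the 5-day moving average of AAPL over the last 3 days?",
    },
]

# The model name is illustrative; use any model available on
# Cerebras Inference that supports tool use.
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=messages,
    tools=tools,
)
```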
Once the LLM receives the request, it determines whether it can answer the query without additional data or computations. If so, it provides a text-based reply, just as it would with any other request. This is typically the case for general knowledge questions or queries that don’t require tools. In our code, we first check whether the model responded in this way, and if so, print out the content.
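In code, that check might look like this, assuming the OpenAI-compatible response shape returned by the SDK:

```python
choice = response.choices[0]

# If the model answered directly, the reply arrives as ordinary content.
if choice.message.content:
    print(choice.message.content)
```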
In cases where the LLM recognizes that answering the query requires specific data or calculations beyond its inherent capabilities, it opts for a function call.
We handle this by checking for a function call, similar to how we checked for a text response. When a function call is detected, the code extracts the function arguments, parses them, and executes the corresponding function.
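A minimal sketch of that branch, reusing the `choice` object from above:

```python
import json

# Otherwise, look for a function call and dispatch it ourselves.
if choice.message.tool_calls:
    tool_call = choice.message.tool_calls[0]
    if tool_call.function.name == "calculate_moving_average":
        # The arguments arrive as a JSON string; parse before calling.
        arguments = json.loads(tool_call.function.arguments)
        result = calculate_moving_average(**arguments)
        print(result)
```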
Conclusion
Tool use and function calling significantly enhance the capabilities of large language models, enabling them to perform complex tasks beyond their core text processing abilities. As demonstrated in our financial analysis example, integrating tools allows LLMs to perform precise calculations and provide more reliable and accurate responses to user queries.
Our workflow in this example was a simple one, but it can be applied to most tool use scenarios:
- Define the tool function (in our case, `calculate_moving_average`) and create a corresponding tool schema that clearly outlines its purpose and parameters.
- Make a chat completion request, including the defined tools alongside the user’s query and any system messages.
- Handle the LLM’s response, which may be either a text-based reply or a function call.
- If a function call is made, execute the specified function with the provided arguments and process the results.
We now know how tool use and function calling extend the capabilities of AI agents. In subsequent sections, we’ll explore how we can do even more with tool use, such as:
- Multistep tool use, where the LLM chains together multiple function calls to solve more complex problems.
- Parallel tool use, allowing the model to decide for itself which functions are appropriate for a given task, for increased flexibility.