Building Deterministic AI Agents with Function Calling
Introduction
Large language models (LLMs) have transformed many industries by offering sophisticated language understanding and generation capabilities. However, when integrating these models into downstream processes — such as healthcare diagnostics, financial data analysis, or customer support systems — deterministic, structured outputs are often essential. Traditional prompt engineering can be used to control model responses, but it may not always yield reliable or consistent results. Function calling offers a robust alternative, enabling the creation of deterministic AI agents that can interact with external tools to perform specific tasks.
What is Function Calling?
Function calling is a technique that extends the capabilities of an LLM by defining and invoking external tools or functions. These tools can be pre-defined functions that the model can call to retrieve structured data or perform computations beyond its inherent abilities. This approach helps in ensuring that the output is not only relevant but also formatted in a deterministic way, which is critical for many downstream applications.
How Function Calling Works
- Input Provision: The language model is provided with both a user prompt and a set of tools.
- Decision Making: Based on the prompt, the LLM determines whether to use one of the provided tools.
- Parameter Extraction: If a tool is selected, the LLM extracts the necessary parameters from the prompt.
- Tool Invocation: The extracted parameters are passed to the external tool or API, which executes the desired function.
- Response Processing: The LLM receives and processes the tool’s response, integrating it into the final output (the sketch after this list shows the whole loop in code).
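In code, this decision-and-dispatch loop reduces to a small pattern. The sketch below is schematic rather than tied to any particular backend: client stands for an LLM client with an Anthropic-style Messages API, and dispatch is a hypothetical mapping from tool names to Python functions. The rest of this article builds a concrete version step by step.

def run_agent(client, prompt, tools, dispatch):
    """Schematic function-calling loop; 'dispatch' maps tool names to callables."""
    # Steps 1-2: provide the prompt and the tools; the model decides whether to call one.
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1000,
        tools=tools,
        messages=[{"role": "user", "content": prompt}],
    )
    if response.stop_reason == "tool_use":
        tool_use = response.content[-1]
        # Steps 3-4: extract the parameters and invoke the matching function.
        return dispatch[tool_use.name](**tool_use.input)
    # Step 5, feeding the result back to the model, is shown at the end of this article.
    return response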
Implementation Example: Nutritional Information Lookup
Step 1: Creating the Mock Function
First, we create a Python function that simulates looking up nutritional information from a database. This function returns random nutritional facts for known food items and handles cases where the food item is not available.
import random

def get_nutrition_info(food_item):
    # Simulated "database" of known food items.
    items = [
        "apple", "banana", "orange", "strawberry", "blueberry",
        "spinach", "kale", "lettuce", "broccoli", "cauliflower",
        "chicken", "beef", "pork", "salmon", "tuna",
        "rice", "pasta", "quinoa", "bread", "oats",
        "milk", "cheese", "yogurt", "butter", "cream",
        "olive oil", "coconut oil", "avocado", "almonds", "walnuts",
        "tomato", "cucumber", "carrot", "onion", "garlic",
        "potato", "sweet potato", "pumpkin", "corn", "peas",
        "lemon", "lime", "grapefruit", "mango", "pineapple",
        "chocolate", "honey", "maple syrup", "sugar", "salt",
        "pepper", "cinnamon", "cumin", "paprika", "oregano",
        "eggs", "tofu", "tempeh", "lentils", "chickpeas",
        "mushroom", "zucchini", "eggplant", "bell pepper", "jalapeno",
        "coconut", "peanut butter", "almond butter", "jam", "jelly",
        "soy sauce", "vinegar", "mustard", "ketchup", "mayonnaise",
        "bacon", "ham", "sausage", "turkey", "duck",
        "shrimp", "crab", "lobster", "squid", "octopus",
        "basil", "rosemary", "thyme", "sage", "parsley",
        "ginger", "turmeric", "cardamom", "nutmeg", "cloves",
        "watermelon", "cantaloupe", "honeydew", "grapes", "peach",
    ]
    if food_item.lower() in items:
        # Return randomized nutrition facts to mimic a real database lookup.
        return {
            "item_name": food_item,
            "calories": random.randint(50, 300),
            "carbohydrates": random.randint(0, 30),
            "fiber": random.randint(0, 5),
            "sugar": random.randint(0, 20),
            "protein": round(random.uniform(0, 25), 1),
            "fat": round(random.uniform(0, 15), 1),
        }
    else:
        return {"response": "Food Item Not Available"}
Step 2: Defining the Tool Schema
Next, we define a tool schema that tells the LLM how to use this function. The schema includes a name, a description, and an input schema that specifies the expected parameters.
nutrition_extraction_tool = {
    "name": "nutritional_info",
    "description": "Search for a food item by the item name and return the nutritional information of that item",
    "input_schema": {
        "type": "object",
        "properties": {
            "item": {
                "type": "string",
                "description": "The name of the food item to be extracted",
            }
        },
        # Require the item parameter so the model never calls the tool without it.
        "required": ["item"],
    },
}
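Since input_schema is plain JSON Schema, it can be sanity-checked against a sample input before being wired into the model. This optional step uses the third-party jsonschema package, not the Anthropic SDK:

from jsonschema import validate

# Raises jsonschema.exceptions.ValidationError if the sample input violates the schema.
validate(instance={"item": "apple"}, schema=nutrition_extraction_tool["input_schema"])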
Step 3: Integrating with the Language Model
We now simulate a conversation where the LLM uses our defined tool. For example, if a user asks, “How many calories does an apple have?”, the LLM will decide to use the nutritional_info tool to extract the food item from the prompt.
import anthropic

# Initialize the client. The msg_vrtx_ id in the sample output below suggests it was
# produced via Vertex AI, but the standard Anthropic client works the same way.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-haiku-20240307",
    messages=[
        {
            "role": "user",
            "content": "How many calories does an Apple have?",
        }
    ],
    max_tokens=1000,
    tools=[nutrition_extraction_tool],
)
response
Message(id='msg_vrtx_01AFNha3XWcfmxv1pwEA59EN', content=[ToolUseBlock(id='toolu_vrtx_017iPLTE3wTD1YmNzHNzPrCK', input={'item': 'Apple'}, name='nutritional_info', type='tool_use')], model='claude-3-haiku-20240307', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=Usage(input_tokens=352, output_tokens=54))
Check whether the model decided to use the tool (stop_reason == 'tool_use'), then pass the extracted parameters to our function:
if response.stop_reason == "tool_use":
    # The tool-use request is the last content block in the response.
    tool_use = response.content[-1]
    tool_name = tool_use.name
    tool_input = tool_use.input  # e.g. {'item': 'Apple'}
    # Pass the extracted parameter (not the whole dict) to our mock lookup function.
    result = get_nutrition_info(tool_input["item"])
    print(result)
{'item_name': 'Apple', 'calories': 164, 'carbohydrates': 9, 'fiber': 1, 'sugar': 15, 'protein': 23.2, 'fat': 14.3}
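The loop’s final step, response processing, can be completed by sending the tool’s output back to the model as a tool_result block so it can phrase a natural-language answer. Below is a minimal sketch that reuses response, tool_use, and result from above and assumes the Anthropic Messages API’s tool_result format:

import json

follow_up = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=1000,
    tools=[nutrition_extraction_tool],
    messages=[
        {"role": "user", "content": "How many calories does an Apple have?"},
        # Echo back the assistant turn that requested the tool call.
        {"role": "assistant", "content": response.content},
        # Return the tool's output, keyed to the originating tool_use id.
        {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": tool_use.id,
                    "content": json.dumps(result),
                }
            ],
        },
    ],
)
print(follow_up.content[0].text)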
Conclusion
Function calling provides a powerful way to extend the capabilities of large language models with deterministic, structured output. By defining clear tool schemas and integrating them into the model’s workflow, developers can build AI agents that reliably interact with external systems. Whether you’re designing a nutritional information lookup, a financial data retrieval system, or a customer support assistant, function calling offers a flexible and robust solution for modern applications.