Tools

What is a Tool?

In the context of ADK, a Tool represents a specific capability provided to an AI agent, enabling it to perform actions and interact with the world beyond its core text generation and reasoning abilities. What distinguishes capable agents from basic language models is often their effective use of tools.

Technically, a tool is typically a modular code component—like a Python function, a class method, or even another specialized agent—designed to execute a distinct, predefined task. These tasks often involve interacting with external systems or data.

[Diagram: Agent tool call]

Key Characteristics

Action-Oriented: Tools perform specific actions, such as:

  • Querying databases
  • Making API requests (e.g., fetching weather data, booking systems)
  • Searching the web
  • Executing code snippets
  • Retrieving information from documents (RAG)
  • Interacting with other software or services

Extends Agent capabilities: They empower agents to access real-time information, affect external systems, and overcome the knowledge limitations inherent in their training data.

Execute predefined logic: Crucially, tools execute specific, developer-defined logic. They do not possess their own independent reasoning capabilities like the agent's core Large Language Model (LLM). The LLM reasons about which tool to use, when, and with what inputs, but the tool itself just executes its designated function.

How Agents Use Tools

Agents leverage tools dynamically through mechanisms often involving function calling. The process generally follows these steps:

  1. Reasoning: The agent's LLM analyzes its system instruction, conversation history, and user request.
  2. Selection: Based on this analysis, the LLM decides which tool, if any, to execute, drawing on the tools available to the agent and the docstring that describes each one.
  3. Invocation: The LLM generates the required arguments (inputs) for the selected tool and triggers its execution.
  4. Observation: The agent receives the output (result) returned by the tool.
  5. Finalization: The agent incorporates the tool's output into its ongoing reasoning process to formulate the next response, decide the subsequent step, or determine if the goal has been achieved.

Think of the tools as a specialized toolkit that the agent's intelligent core (the LLM) can access and utilize as needed to accomplish complex tasks.

Tool Types in ADK

ADK offers flexibility by supporting several types of tools:

  1. Function Tools: Tools created by you, tailored to your specific application's needs.
    • Functions/Methods: Define standard synchronous functions or methods in your code (e.g., Python def).
    • Agents-as-Tools: Use another, potentially specialized, agent as a tool for a parent agent.
    • Long Running Function Tools: Support for tools that perform asynchronous operations or take significant time to complete.
  2. Built-in Tools: Ready-to-use tools provided by the framework for common tasks. Examples: Google Search, Code Execution, Retrieval-Augmented Generation (RAG).
  3. Third-Party Tools: Integrate tools seamlessly from popular external libraries. Examples: LangChain Tools, CrewAI Tools.

Navigate to the respective documentation pages linked above for detailed information and examples for each tool type.

Referencing Tools in an Agent's Instructions

Within an agent's instructions, you can directly reference a tool by using its function name. If the tool's function name and docstring are sufficiently descriptive, your instructions can primarily focus on when the Large Language Model (LLM) should utilize the tool. This promotes clarity and helps the model understand the intended use of each tool.

It is crucial to clearly instruct the agent on how to handle different return values that a tool might produce. For example, if a tool returns an error message, your instructions should specify whether the agent should retry the operation, give up on the task, or request additional information from the user.

Furthermore, ADK supports the sequential use of tools, where the output of one tool can serve as the input for another. When implementing such workflows, it's important to describe the intended sequence of tool usage within the agent's instructions to guide the model through the necessary steps.

Example

The following example showcases how an agent can use tools by referencing their function names in its instructions. It also demonstrates how to guide the agent to handle different return values from tools, such as success or error messages, and how to orchestrate the sequential use of multiple tools to accomplish a task.

/**
 * TypeScript port of the weather_sentiment.py example from the Python ADK library
 * 
 * This example demonstrates how to use function tools for weather reports and
 * sentiment analysis in ADK TypeScript.
 * 
 * NOTE: This is a template file that demonstrates how to use the ADK TypeScript library.
 * You'll see TypeScript errors in your IDE until you install the actual 'adk-typescript' package.
 * The structure and patterns shown here match how you would use the library in a real project.
 */

import { 
  Agent, 
  Runner,
  Content, 
  InMemorySessionService,
  FunctionTool
} from 'adk-typescript';

// Constants for the app
const APP_NAME = "weather_sentiment_agent";
const USER_ID = "user1234";
const SESSION_ID = "1234";
const MODEL_ID = "gemini-2.0-flash";

// Configure logging (simplified version for TypeScript)
const logger = {
  info: (message: string, ...args: any[]) => console.info(message, ...args),
  error: (message: string, ...args: any[]) => console.error(message, ...args)
};

// Tool 1: Get Weather Report
function getWeatherReport(city: string): Record<string, string | Record<string, string>> {
  /**
   * Retrieves the current weather report for a specified city.
   * 
   * @param city The name of the city to get weather for
   * @returns A dictionary with status and either a report or error message
   */
  if (city.toLowerCase() === "london") {
    return { 
      "status": "success", 
      "report": "The current weather in London is cloudy with a temperature of 18 degrees Celsius and a chance of rain." 
    };
  } else if (city.toLowerCase() === "paris") {
    return { 
      "status": "success", 
      "report": "The weather in Paris is sunny with a temperature of 25 degrees Celsius." 
    };
  } else {
    return { 
      "status": "error", 
      "error_message": `Weather information for '${city}' is not available.` 
    };
  }
}

// Create weather function tool
const weatherTool = new FunctionTool(getWeatherReport);

// Tool 2: Analyze Sentiment
function analyzeSentiment(text: string): Record<string, string | number> {
  /**
   * Analyzes the sentiment of the given text.
   * 
   * @param text The text to analyze
   * @returns A dictionary with sentiment type and confidence score
   */
  if (text.toLowerCase().includes("good") || text.toLowerCase().includes("sunny")) {
    return { "sentiment": "positive", "confidence": 0.8 };
  } else if (text.toLowerCase().includes("rain") || text.toLowerCase().includes("bad")) {
    return { "sentiment": "negative", "confidence": 0.7 };
  } else {
    return { "sentiment": "neutral", "confidence": 0.6 };
  }
}

// Create sentiment function tool
const sentimentTool = new FunctionTool(analyzeSentiment);

// Create the agent with both tools
const weatherSentimentAgent = new Agent("weather_sentiment_agent", {
  model: MODEL_ID,
  instruction: `You are a helpful assistant that provides weather information and analyzes the sentiment of user feedback.
**If the user asks about the weather in a specific city, use the 'get_weather_report' tool to retrieve the weather details.**
**If the 'get_weather_report' tool returns a 'success' status, provide the weather report to the user.**
**If the 'get_weather_report' tool returns an 'error' status, inform the user that the weather information for the specified city is not available and ask if they have another city in mind.**
**After providing a weather report, if the user gives feedback on the weather (e.g., 'That's good' or 'I don't like rain'), use the 'analyze_sentiment' tool to understand their sentiment.** Then, briefly acknowledge their sentiment.
You can handle these tasks sequentially if needed.`,
  tools: [weatherTool, sentimentTool]
});

// Create Session and Runner
const sessionService = new InMemorySessionService();
const session = sessionService.createSession({
  appName: APP_NAME, 
  userId: USER_ID, 
  sessionId: SESSION_ID
});

const runner = new Runner({
  agent: weatherSentimentAgent, 
  appName: APP_NAME, 
  sessionService: sessionService
});

// Agent Interaction function
function callAgent(query: string): void {
  // Create content for the request
  const content: Content = {
    role: 'user',
    parts: [{ text: query }]
  };

  // Run the agent and collect results
  (async () => {
    try {
      const events = runner.run({
        userId: USER_ID, 
        sessionId: SESSION_ID, 
        newMessage: content
      });

      for await (const event of events) {
        if (event.isFinalResponse && event.content && event.content.parts && event.content.parts[0].text) {
          const finalResponse = event.content.parts[0].text;
          console.log("Agent Response: ", finalResponse);
        }
      }
    } catch (error) {
      console.error("Error running agent:", error);
    }
  })();
}

// Execute with a sample query
if (require.main === module) {
  callAgent("weather in london?");
}

// Export for external use
export const agent = weatherSentimentAgent;
export function runWeatherSentimentDemo(query: string): void {
  callAgent(query);
} 

Tool Context

For more advanced scenarios, ADK allows you to access additional contextual information within your tool function by adding the special parameter tool_context: ToolContext to its signature. When this parameter is present, ADK automatically provides an instance of the ToolContext class whenever your tool is called during agent execution.

The ToolContext provides access to several key pieces of information and control levers:

  • state: State: Read and modify the current session's state. Changes made here are tracked and persisted.

  • actions: EventActions: Influence the agent's subsequent actions after the tool runs (e.g., skip summarization, transfer to another agent).

  • function_call_id: str: The unique identifier assigned by the framework to this specific invocation of the tool. Useful for tracking and correlating with authentication responses. This can also be helpful when multiple tools are called within a single model response.

  • function_call_event_id: str: This attribute provides the unique identifier of the event that triggered the current tool call. This can be useful for tracking and logging purposes.

  • auth_response: Any: Contains the authentication response/credentials if an authentication flow was completed before this tool call.

  • Access to Services: Methods to interact with configured services like Artifacts and Memory.
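
The sketch below shows the basic injection pattern before the following sections cover each capability in detail. It is a minimal, illustrative example: the tool name and state key are invented, and the state access mirrors the .get() and bracket-assignment usage shown in the examples later on this page.

import { ToolContext, FunctionTool } from 'adk-typescript';

/**
 * Minimal sketch: counts how many times the user has asked for help.
 * Because the function declares a toolContext parameter, ADK injects a
 * ToolContext instance automatically when the tool runs; the LLM only
 * supplies the other arguments.
 *
 * @param topic The topic the user asked for help with
 * @param toolContext The injected tool context
 * @returns A status object including the running count
 */
function countHelpRequest(topic: string, toolContext: ToolContext): Record<string, string | number> {
  // Read the current count from session state (with a default), then update it.
  const count = toolContext.state.get("help_request_count", 0) + 1;
  toolContext.state["help_request_count"] = count;

  return { "status": "success", "topic": topic, "help_requests_so_far": count };
}

export const helpCounterTool = new FunctionTool(countHelpRequest);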

State Management

The tool_context.state attribute provides direct read and write access to the state associated with the current session. It behaves like a dictionary but ensures that any modifications are tracked as deltas and persisted by the session service. This enables tools to maintain and share information across different interactions and agent steps.

  • Reading State: Use standard dictionary access (tool_context.state['my_key']) or the .get() method (tool_context.state.get('my_key', default_value)).

  • Writing State: Assign values directly (tool_context.state['new_key'] = 'new_value'). These changes are recorded in the state_delta of the resulting event.

  • State Prefixes: Remember the standard state prefixes:

    • app:*: Shared across all users of the application.

    • user:*: Specific to the current user across all their sessions.

    • (No prefix): Specific to the current session.

    • temp:*: Temporary, not persisted across invocations (useful for passing data within a single run call but generally less useful inside a tool context which operates between LLM calls).
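
For instance, a single tool could write keys at each of these scopes (a minimal sketch; the key names are invented for illustration, and assignment uses the bracket syntax shown in the example that follows):

import { ToolContext } from 'adk-typescript';

// Sketch: writing state keys with different scope prefixes from inside a tool.
function recordSignupStep(step: string, toolContext: ToolContext): Record<string, string> {
  toolContext.state["app:signup_flow_version"] = "v2"; // shared across all users of the app
  toolContext.state["user:last_signup_step"] = step;   // this user, across all their sessions
  toolContext.state["current_step"] = step;            // this session only
  toolContext.state["temp:raw_form_input"] = step;     // temporary, not persisted across invocations
  return { "status": "success", "recorded_step": step };
}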

/**
 * TypeScript port of the user_preference.py example from the Python ADK library
 * 
 * This example demonstrates how to use ToolContext to update user-specific preferences
 * in the session state when a tool is invoked.
 * 
 * NOTE: This is a template file that demonstrates how to use the ADK TypeScript library.
 * You'll see TypeScript errors in your IDE until you install the actual 'adk-typescript' package.
 * The structure and patterns shown here match how you would use the library in a real project.
 */

import { ToolContext, FunctionTool } from 'adk-typescript';

/**
 * Updates a user-specific preference in the session state.
 * 
 * @param preference The preference name to update
 * @param value The value to set for the preference
 * @param toolContext The context for the tool execution, providing access to state
 * @returns A status object indicating success and which preference was updated
 */
function updateUserPreference(
  preference: string, 
  value: string, 
  toolContext: ToolContext
): Record<string, string> {
  const userPrefsKey = "user:preferences";

  // Get current preferences or initialize if none exist
  const preferences = toolContext.state.get(userPrefsKey, {});

  // Update the specific preference
  preferences[preference] = value;

  // Write the updated dictionary back to the state
  toolContext.state[userPrefsKey] = preferences;

  console.log(`Tool: Updated user preference '${preference}' to '${value}'`);

  return { 
    "status": "success", 
    "updated_preference": preference 
  };
}

// Create the function tool
const prefTool = new FunctionTool(updateUserPreference);

// Export for use in an Agent
export const userPreferenceTool = prefTool;

/**
 * Usage example in an Agent:
 * 
 * ```typescript
 * import { Agent } from 'adk-typescript';
 * import { userPreferenceTool } from './user-preference';
 * 
 * const myAgent = new Agent("preference_agent", {
 *   model: "gemini-2.0-flash",
 *   instruction: "You can update user preferences when asked.",
 *   tools: [userPreferenceTool]
 * });
 * ```
 * 
 * When the LLM calls updateUserPreference(preference='theme', value='dark', ...):
 * - The toolContext.state will be updated with {'user:preferences': {'theme': 'dark'}}
 * - The change will be part of the resulting tool response event's actions.state_delta
 */ 

Controlling Agent Flow

The tool_context.actions attribute holds an EventActions object. Modifying attributes on this object allows your tool to influence what the agent or framework does after the tool finishes execution.

  • skip_summarization: bool: (Default: False) If set to True, instructs the ADK to bypass the LLM call that typically summarizes the tool's output. This is useful if your tool's return value is already a user-ready message.

  • transfer_to_agent: str: Set this to the name of another agent. The framework will halt the current agent's execution and transfer control of the conversation to the specified agent. This allows tools to dynamically hand off tasks to more specialized agents.

  • escalate: bool: (Default: False) Setting this to True signals that the current agent cannot handle the request and should pass control up to its parent agent (if in a hierarchy). In a LoopAgent, setting escalate=True in a sub-agent's tool will terminate the loop.
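
As a rough sketch of setting these flags from a tool (assuming the camelCase spellings skipSummarization and escalate, by analogy with the transferToAgent usage in the example below; the tool names are invented):

import { ToolContext, FunctionTool } from 'adk-typescript';

// Sketch: the return value is already user-ready, so ask the framework to
// skip the usual LLM summarization of the tool output.
function closeTicket(ticketId: string, toolContext: ToolContext): Record<string, string> {
  toolContext.actions.skipSummarization = true; // assumed camelCase equivalent of skip_summarization
  return { "status": "success", "message": `Ticket ${ticketId} has been closed.` };
}

// Sketch: signal that this agent cannot handle the request; a parent agent
// takes over, and inside a LoopAgent this terminates the loop.
function escalateIssue(reason: string, toolContext: ToolContext): Record<string, string> {
  toolContext.actions.escalate = true; // assumed camelCase equivalent of escalate
  return { "status": "escalated", "reason": reason };
}

export const closeTicketTool = new FunctionTool(closeTicket);
export const escalateIssueTool = new FunctionTool(escalateIssue);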

Example

/**
 * TypeScript port of the customer_support_agent.py example from the Python ADK library
 * 
 * This example demonstrates how to transfer control between agents using the
 * ToolContext's actions.transfer_to_agent mechanism.
 * 
 * NOTE: This is a template file that demonstrates how to use the ADK TypeScript library.
 * You'll see TypeScript errors in your IDE until you install the actual 'adk-typescript' package.
 * The structure and patterns shown here match how you would use the library in a real project.
 */

import { 
  Agent, 
  Runner,
  Content, 
  InMemorySessionService,
  FunctionTool,
  ToolContext
} from 'adk-typescript';

// Constants for the app
const APP_NAME = "customer_support_agent";
const USER_ID = "user1234";
const SESSION_ID = "1234";

// Configure logging (simplified version for TypeScript)
const logger = {
  info: (message: string, ...args: any[]) => console.info(message, ...args),
  error: (message: string, ...args: any[]) => console.error(message, ...args)
};

/**
 * Checks if a query requires escalation and transfers to another agent if needed.
 * 
 * @param query The user's query to check for urgency
 * @param toolContext The context for the tool execution
 * @returns A message indicating if transfer occurred
 */
function checkAndTransfer(query: string, toolContext: ToolContext): string {
  if (query.toLowerCase().includes("urgent")) {
    console.log("Tool: Detected urgency, transferring to the support agent.");
    toolContext.actions.transferToAgent = "support_agent";
    return "Transferring to the support agent...";
  } else {
    return `Processed query: '${query}'. No further action needed.`;
  }
}

// Create the escalation tool
const escalationTool = new FunctionTool(checkAndTransfer);

// Create the main agent
const mainAgent = new Agent("main_agent", {
  model: "gemini-2.0-flash",
  instruction: "You are the first point of contact for customer support of an analytics tool. Answer general queries. If the user indicates urgency, use the 'check_and_transfer' tool.",
  tools: [escalationTool]
});

// Create the support agent
const supportAgent = new Agent("support_agent", {
  model: "gemini-2.0-flash",
  instruction: "You are the dedicated support agent. Mentioned you are a support handler and please help the user with their urgent issue."
});

// Add the support agent as a sub-agent of the main agent
mainAgent.subAgents = [supportAgent];

// Create Session and Runner
const sessionService = new InMemorySessionService();
const session = sessionService.createSession({
  appName: APP_NAME, 
  userId: USER_ID, 
  sessionId: SESSION_ID
});

const runner = new Runner({
  agent: mainAgent, 
  appName: APP_NAME, 
  sessionService: sessionService
});

// Agent Interaction function
function callAgent(query: string): void {
  // Create content for the request
  const content: Content = {
    role: 'user',
    parts: [{ text: query }]
  };

  // Run the agent and collect results
  (async () => {
    try {
      const events = runner.run({
        userId: USER_ID, 
        sessionId: SESSION_ID, 
        newMessage: content
      });

      for await (const event of events) {
        if (event.isFinalResponse && event.content && event.content.parts && event.content.parts[0].text) {
          const finalResponse = event.content.parts[0].text;
          console.log("Agent Response: ", finalResponse);
        }
      }
    } catch (error) {
      console.error("Error running agent:", error);
    }
  })();
}

// Execute with a sample query
if (require.main === module) {
  callAgent("this is urgent, i cant login");
}

// Export for external use
export const agent = mainAgent;
export function runCustomerSupportDemo(query: string): void {
  callAgent(query);
} 

Explanation

  • We define two agents: main_agent and support_agent. The main_agent is designed to be the initial point of contact.
  • The check_and_transfer tool, when called by main_agent, examines the user's query.
  • If the query contains the word "urgent", the tool accesses the tool_context, specifically tool_context.actions, and sets the transfer_to_agent attribute to support_agent.
  • This action signals the framework to transfer control of the conversation to the agent named support_agent.
  • When the main_agent processes the urgent query, the check_and_transfer tool triggers the transfer. The subsequent response would ideally come from the support_agent.
  • For a normal query without urgency, the tool simply processes it without triggering a transfer.

This example illustrates how a tool, through EventActions in its ToolContext, can dynamically influence the flow of the conversation by transferring control to another specialized agent.

Authentication

ToolContext provides mechanisms for tools interacting with authenticated APIs. If your tool needs to handle authentication, you might use the following:

  • auth_response: Contains credentials (e.g., a token) if authentication was already handled by the framework before your tool was called (common with RestApiTool and OpenAPI security schemes).

  • request_credential(auth_config: dict): Call this method if your tool determines authentication is needed but credentials aren't available. This signals the framework to start an authentication flow based on the provided auth_config.

  • get_auth_response(): Call this in a subsequent invocation (after request_credential was successfully handled) to retrieve the credentials the user provided.

For detailed explanations of authentication flows, configuration, and examples, please refer to the dedicated Tool Authentication documentation page.
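
A hedged sketch of how these pieces might fit together inside a tool is shown below. The camelCase names authResponse and requestCredential are assumed TypeScript equivalents of the members described above, and the auth configuration object and external API call are placeholders.

import { ToolContext, FunctionTool } from 'adk-typescript';

// Illustrative auth configuration; the real shape depends on your security scheme.
const calendarAuthConfig = { /* provider-specific settings (placeholder) */ };

/**
 * Sketch: lists calendar events, requesting credentials if none are available yet.
 *
 * @param date The date to list events for
 * @param toolContext The injected tool context
 * @returns A status object, either pending authentication or a (placeholder) report
 */
function listCalendarEvents(date: string, toolContext: ToolContext): Record<string, string> {
  const credentials = toolContext.authResponse; // assumed camelCase equivalent of auth_response

  if (!credentials) {
    // No credentials yet: ask the framework to start an authentication flow.
    toolContext.requestCredential(calendarAuthConfig); // assumed equivalent of request_credential
    return { "status": "pending", "message": "Authentication required before listing events." };
  }

  // Credentials are available; call the external calendar API here (placeholder).
  return { "status": "success", "report": `Events for ${date}: [placeholder result]` };
}

export const calendarTool = new FunctionTool(listCalendarEvents);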

Context-Aware Data Access Methods

These methods provide convenient ways for your tool to interact with persistent data associated with the session or user, managed by configured services.

  • list_artifacts(): Returns a list of filenames (or keys) for all artifacts currently stored for the session via the artifact_service. Artifacts are typically files (images, documents, etc.) uploaded by the user or generated by tools/agents.

  • load_artifact(filename: str): Retrieves a specific artifact by its filename from the artifact_service. You can optionally specify a version; if omitted, the latest version is returned. Returns a google.genai.types.Part object containing the artifact data and mime type, or None if not found.

  • save_artifact(filename: str, artifact: types.Part): Saves a new version of an artifact to the artifact_service. Returns the new version number (starting from 0).

  • search_memory(query: str): Queries the user's long-term memory using the configured memory_service. This is useful for retrieving relevant information from past interactions or stored knowledge. The structure of the SearchMemoryResponse depends on the specific memory service implementation but typically contains relevant text snippets or conversation excerpts.
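
The example below exercises load_artifact, search_memory, and save_artifact; list_artifacts follows the same pattern, as in this small sketch (listArtifacts is assumed to be the camelCase equivalent, matching the loadArtifact and saveArtifact spellings used in the example):

import { ToolContext, FunctionTool } from 'adk-typescript';

// Sketch: enumerate the artifacts currently stored for the session.
function listSessionFiles(toolContext: ToolContext): Record<string, string | string[]> {
  const filenames = toolContext.listArtifacts(); // assumed camelCase equivalent of list_artifacts
  return { "status": "success", "files": filenames };
}

export const listFilesTool = new FunctionTool(listSessionFiles);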

Example

/**
 * TypeScript port of the doc_analysis.py example from the Python ADK library
 * 
 * This example demonstrates how to use ToolContext with artifacts and memory services
 * to analyze documents and save the results.
 * 
 * NOTE: This is a template file that demonstrates how to use the ADK TypeScript library.
 * You'll see TypeScript errors in your IDE until you install the actual 'adk-typescript' package.
 * The structure and patterns shown here match how you would use the library in a real project.
 */

import { ToolContext, FunctionTool, Part } from 'adk-typescript';

/**
 * Analyzes a document using context from memory.
 * 
 * @param documentName The name of the document to analyze
 * @param analysisQuery The query to guide the analysis
 * @param toolContext The context for the tool execution with access to artifacts and memory
 * @returns A status object with analysis results information
 */
function processDocument(
  documentName: string, 
  analysisQuery: string, 
  toolContext: ToolContext
): Record<string, string | number> {
  // 1. Load the artifact
  console.log(`Tool: Attempting to load artifact: ${documentName}`);
  const documentPart = toolContext.loadArtifact(documentName);

  if (!documentPart) {
    return { 
      "status": "error", 
      "message": `Document '${documentName}' not found.` 
    };
  }

  // Assuming it's text for simplicity
  const documentText = documentPart.text || "";
  console.log(`Tool: Loaded document '${documentName}' (${documentText.length} chars).`);

  // 2. Search memory for related context
  console.log(`Tool: Searching memory for context related to: '${analysisQuery}'`);
  const memoryResponse = toolContext.searchMemory(`Context for analyzing document about ${analysisQuery}`);

  // Simplified extraction from memory response
  const memoryContext = memoryResponse.memories
    .filter(m => m.events && m.events.length > 0 && m.events[0].content)
    .map(m => m.events[0].content.parts[0].text)
    .join("\n");

  console.log(`Tool: Found memory context: ${memoryContext.substring(0, 100)}...`);

  // 3. Perform analysis (placeholder)
  const analysisResult = `Analysis of '${documentName}' regarding '${analysisQuery}' using memory context: [Placeholder Analysis Result]`;
  console.log("Tool: Performed analysis.");

  // 4. Save the analysis result as a new artifact
  const analysisPart = Part.fromText(analysisResult);
  const newArtifactName = `analysis_${documentName}`;
  const version = toolContext.saveArtifact(newArtifactName, analysisPart);
  console.log(`Tool: Saved analysis result as '${newArtifactName}' version ${version}.`);

  return { 
    "status": "success", 
    "analysis_artifact": newArtifactName, 
    "version": version 
  };
}

// Create the function tool
const docAnalysisTool = new FunctionTool(processDocument);

// Export for use in an Agent
export const documentAnalysisTool = docAnalysisTool;

/**
 * Usage example in an Agent:
 * 
 * ```typescript
 * import { Agent } from 'adk-typescript';
 * import { documentAnalysisTool } from './doc-analysis';
 * 
 * const myAgent = new Agent("analysis_agent", {
 *   model: "gemini-2.0-flash",
 *   instruction: "You can analyze documents when asked.",
 *   tools: [documentAnalysisTool]
 * });
 * ```
 * 
 * Notes:
 * - Assume artifact 'report.txt' was previously saved.
 * - Assume memory service is configured and has relevant past data.
 * - The agent must be configured with appropriate artifact and memory services.
 */ 

By leveraging the ToolContext, developers can create more sophisticated and context-aware custom tools that seamlessly integrate with ADK's architecture and enhance the overall capabilities of their agents.

Defining Effective Tool Functions

When using a standard function as an ADK Tool, how you define it significantly impacts the agent's ability to use it correctly. The agent's Large Language Model (LLM) relies heavily on the function's name, parameters (arguments), type hints, and docstring to understand its purpose and generate the correct call.

Here are key guidelines for defining effective tool functions:

  • Function Name:

    • Use descriptive, verb-noun based names that clearly indicate the action (e.g., get_weather, search_documents, schedule_meeting).
    • Avoid generic names like run, process, handle_data, or overly ambiguous names like do_stuff. Even with a good description, a name like do_stuff might confuse the model about when to use the tool versus, for example, cancel_flight.
    • The LLM uses the function name as a primary identifier during tool selection.
  • Parameters (Arguments):

    • Your function can have any number of parameters.
    • Use clear and descriptive names (e.g., city instead of c, search_query instead of q).
    • Provide type hints for all parameters (e.g., city: str, user_id: int, items: list[str]). This is essential for ADK to generate the correct schema for the LLM.
    • Ensure all parameter types are JSON serializable. Standard Python types like str, int, float, bool, list, dict, and their combinations are generally safe. Avoid complex custom class instances as direct parameters unless they have a clear JSON representation.
    • Do not set default values for parameters (e.g., avoid def my_func(param1: str = "default")). Default values are not reliably supported or used by the underlying models during function call generation. All necessary information should be derived by the LLM from the context or explicitly requested if missing.
  • Return Type:

    • The function's return value must be a dictionary (dict).
    • If your function returns a non-dictionary type (e.g., a string, number, list), the ADK framework will automatically wrap it into a dictionary like {'result': your_original_return_value} before passing the result back to the model.
    • Design the dictionary keys and values to be descriptive and easily understood by the LLM. Remember, the model reads this output to decide its next step.
    • Include meaningful keys. For example, instead of returning just an error code like 500, return {'status': 'error', 'error_message': 'Database connection failed'}.
    • It's a highly recommended practice to include a status key (e.g., 'success', 'error', 'pending', 'ambiguous') to clearly indicate the outcome of the tool execution for the model.
  • Docstring:

    • This is critical. The docstring is the primary source of descriptive information for the LLM.
    • Clearly state what the tool does. Be specific about its purpose and limitations.
    • Explain when the tool should be used. Provide context or example scenarios to guide the LLM's decision-making.
    • Describe each parameter clearly. Explain what information the LLM needs to provide for that argument.
    • Describe the structure and meaning of the expected dict return value, especially the different status values and associated data keys.

    Example of a good definition:

    /**
     * Fetches the current status of a customer's order using its ID.
     *
     * Use this tool ONLY when a user explicitly asks for the status of
     * a specific order and provides the order ID. Do not use it for
     * general inquiries.
     *
     * @param orderId The unique identifier of the order to look up.
     * @returns A dictionary containing the order status.
     *          Possible statuses: 'shipped', 'processing', 'pending', 'error'.
     *          Example success: {'status': 'shipped', 'tracking_number': '1Z9...'}
     *          Example error: {'status': 'error', 'error_message': 'Order ID not found.'}
     */
    function lookupOrderStatus(orderId: string): Record<string, string> {
      // ... function implementation to fetch status ...
      const status = fetchStatusFromBackend(orderId); // placeholder: your backend lookup
      if (status) {
        return { "status": status.state, "tracking_number": status.tracking }; // example structure
      } else {
        return { "status": "error", "error_message": `Order ID ${orderId} not found.` };
      }
    }
    
  • Simplicity and Focus:

    • Keep Tools Focused: Each tool should ideally perform one well-defined task.
    • Fewer Parameters are Better: Models generally handle tools with fewer, clearly defined parameters more reliably than those with many optional or complex ones.
    • Use Simple Data Types: Prefer basic types (str, int, bool, float, List[str], etc.) over complex custom classes or deeply nested structures as parameters when possible.
    • Decompose Complex Tasks: Break down functions that perform multiple distinct logical steps into smaller, more focused tools. For instance, instead of a single update_user_profile(profile: ProfileObject) tool, consider separate tools like update_user_name(name: str), update_user_address(address: str), update_user_preferences(preferences: list[str]), etc. This makes it easier for the LLM to select and use the correct capability, as sketched below.
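
A sketch of that decomposition (the tool and field names are illustrative, not part of the library):

import { FunctionTool } from 'adk-typescript';

// Sketch: several focused tools with simple string parameters instead of one
// update_user_profile tool that takes a complex profile object.
function updateUserName(name: string): Record<string, string> {
  // ... persist the new display name here (placeholder) ...
  return { "status": "success", "updated_field": "name", "new_value": name };
}

function updateUserAddress(address: string): Record<string, string> {
  // ... persist the new address here (placeholder) ...
  return { "status": "success", "updated_field": "address", "new_value": address };
}

export const userProfileTools = [
  new FunctionTool(updateUserName),
  new FunctionTool(updateUserAddress),
];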

By adhering to these guidelines, you provide the LLM with the clarity and structure it needs to effectively utilize your custom function tools, leading to more capable and reliable agent behavior.