Tool Use is the capability that transforms a language model from a text generator into an agent. By providing the model with descriptions of available tools—their names, parameters, expected behaviours—the model can be prompted or fine-tuned to generate structured tool calls (typically JSON) that are then executed by the host system. The tool's output is fed back to the model, which then decides whether to take further actions or produce a final response.
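This call-and-execute cycle can be sketched in a few lines. The tool registry, schema format, and `get_weather` tool below are all illustrative assumptions, not any particular vendor's API; the parameter schema loosely follows JSON Schema, as many real tool-calling interfaces do.

```python
import json

# Hypothetical tool registry: name -> description, parameter schema, implementation.
# The schema shape loosely mirrors JSON Schema, a common convention for tool specs.
TOOLS = {
    "get_weather": {
        "description": "Return the current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "impl": lambda city: {"city": city, "temp_c": 21},  # stubbed result
    }
}

def execute_tool_call(call_json: str) -> str:
    """Parse a model-emitted tool call, run it, and return the result as JSON.

    In a real system this output would be appended to the model's context,
    and the model would then decide whether to call another tool or answer.
    """
    call = json.loads(call_json)
    tool = TOOLS[call["name"]]
    result = tool["impl"](**call["arguments"])
    return json.dumps(result)

# A model might emit a structured call like this:
model_output = '{"name": "get_weather", "arguments": {"city": "Lagos"}}'
print(execute_tool_call(model_output))
```

The key design point is that the model only ever produces and consumes text: the host system parses the call, performs the side effect, and serialises the result back into the conversation.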
Common tools include web search (for accessing current information the model was not trained on), code execution (calculations, data analysis, plotting), file operations, database queries, and REST API calls. More specialised tools might include mathematical solvers, theorem provers, molecular simulators, or bespoke enterprise APIs. The model's ability to select the appropriate tool, formulate correct parameters, interpret results, and chain multiple tool calls together is what enables complex multi-step task completion.
Tool use addresses several fundamental limitations of LLMs. It gives them access to current information beyond their training cutoff, grounds them in verifiable sources, provides them with reliable computation (LLMs are famously bad at multi-digit arithmetic but can call a calculator), and allows them to take actions in the world. Frameworks like ReAct (Reasoning and Acting) interleave reasoning steps with tool calls. The Model Context Protocol (MCP) and similar standards are emerging to standardise how tools are described to and invoked by LLMs. Tool use is the core mechanism behind the agentic AI systems of the mid-2020s and represents one of the most significant extensions of LLM capability since the introduction of in-context learning.
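A ReAct-style loop can be illustrated with the LLM stubbed out. Everything here is a hypothetical sketch: `model_step` is a hard-coded stand-in for a model call, and `calculator` plays the role of the reliable-computation tool mentioned above.

```python
def model_step(transcript: str) -> dict:
    """Stand-in for an LLM call: returns either an action or a final answer.

    A real model would read the transcript and generate this decision;
    here it is scripted purely for illustration.
    """
    if "Observation:" not in transcript:
        return {"type": "action", "tool": "calculator", "input": "37 * 113"}
    return {"type": "answer", "text": "37 * 113 = 4181"}

def calculator(expression: str) -> str:
    # The reliable computation an LLM cannot be trusted to do unaided.
    return str(eval(expression, {"__builtins__": {}}))

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model_step(transcript)
        if step["type"] == "answer":
            return step["text"]
        # Execute the chosen tool and append the observation, interleaving
        # reasoning and acting as in the ReAct pattern.
        result = calculator(step["input"])
        transcript += (
            f"Action: {step['tool']}[{step['input']}]\n"
            f"Observation: {result}\n"
        )
    return "No answer within step budget."

print(react_loop("What is 37 * 113?"))  # → 37 * 113 = 4181
```

The step budget matters in practice: because each observation can trigger further actions, agent loops need an explicit cap to guarantee termination.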
Related terms: Agent, Large Language Model, Retrieval-Augmented Generation
Discussed in:
- Chapter 15: Modern AI — AI Agents
Also defined in: Textbook of AI