How to pass run time values to a tool
This guide assumes familiarity with the following concepts:

- Chat models
- LangChain Tools
- How to create tools
- How to use a model to call tools
This how-to guide uses models with native tool calling capability. You can find a list of all models that support tool calling.
You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.
Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.
Instead, the LLM should control only the parameters that are meant to be controlled by it, while other parameters (such as the user ID) should be fixed by the application logic.
This how-to guide shows a simple design pattern that creates the tools dynamically at run time and binds the appropriate values to them.
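For example, the user ID can be captured in a closure when the tool is created, so that it never appears in the tool's schema and the model can neither see nor set it. Here is a minimal sketch of the idea (the getPetsForUser helper is hypothetical, and this assumes a recent version of @langchain/core that exports the tool helper):

import { z } from "zod";
import { tool } from "@langchain/core/tools";

// Hypothetical application-side lookup, shown only for illustration.
declare function getPetsForUser(userId: string): string[];

function listPetsToolForUser(userId: string) {
  // `userId` comes from the application, not from the model: it is
  // captured here and never appears in the schema the model sees.
  return tool(async () => JSON.stringify(getPetsForUser(userId)), {
    name: "list_favorite_pets",
    description: "List the current user's favorite pets, if any.",
    schema: z.object({}),
  });
}

The rest of this guide builds out this pattern with a full set of tools.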
First, we'll need a chat model to bind the tools to.
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-3.5-turbo",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/firefunction-v1",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-pro",
temperature: 0
});
Passing request time information
The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example, this information may be the user ID as resolved from the request itself.
import { z } from "zod";
import { StructuredTool } from "@langchain/core/tools";

// In-memory store mapping user IDs to their favorite pets.
const userToPets: Record<string, string[]> = {};

function generateToolsForUser(userId: string): StructuredTool[] {
  class UpdateFavoritePets extends StructuredTool {
    name = "update_favorite_pets";
    description = "Add the list of favorite pets.";
    schema = z.object({
      pets: z.array(z.string()),
    });

    async _call(input: { pets: string[] }): Promise<string> {
      // `userId` is captured from the enclosing scope, not taken from the LLM.
      userToPets[userId] = input.pets;
      return "update_favorite_pets called.";
    }
  }

  class DeleteFavoritePets extends StructuredTool {
    name = "delete_favorite_pets";
    description = "Delete the list of favorite pets.";
    schema = z.object({
      no_op: z.boolean().optional().describe("No operation."),
    });

    async _call(_input: { no_op?: boolean }): Promise<string> {
      if (userId in userToPets) {
        delete userToPets[userId];
      }
      return "delete_favorite_pets called.";
    }
  }

  class ListFavoritePets extends StructuredTool {
    name = "list_favorite_pets";
    description = "List favorite pets if any.";
    schema = z.object({
      no_op: z.boolean().optional().describe("No operation."),
    });

    async _call(_input: { no_op?: boolean }): Promise<string> {
      // JSON.stringify(undefined) returns undefined, so fall back to an empty list.
      return JSON.stringify(userToPets[userId]) || JSON.stringify([]);
    }
  }

  return [
    new UpdateFavoritePets(),
    new DeleteFavoritePets(),
    new ListFavoritePets(),
  ];
}
Verify that the tools work correctly
const [updatePets, deletePets, listPets] = generateToolsForUser("brace");
await updatePets.invoke({ pets: ["cat", "dog"] });
console.log(userToPets);
console.log(await listPets.invoke({}));
{ brace: [ 'cat', 'dog' ] }
["cat","dog"]
Now we can write a function that takes a user ID and a query, generates the tools for that user, binds them to the model, and invokes it:

import { BaseChatModel } from "@langchain/core/language_models/chat_models";

async function handleRunTimeRequest(
  userId: string,
  query: string,
  llm: BaseChatModel
): Promise<any> {
  if (!llm.bindTools) {
    throw new Error("Language model does not support tools.");
  }
  // Generate the tools with the caller's user ID baked in...
  const tools = generateToolsForUser(userId);
  // ...and bind them to the model for this request only.
  const llmWithTools = llm.bindTools(tools);
  return llmWithTools.invoke(query);
}
This code allows the LLM to invoke the tools, but the LLM is unaware that a user ID even exists!
const aiMessage = await handleRunTimeRequest(
"brace",
"my favorite animals are cats and parrots.",
llm
);
console.log(aiMessage.tool_calls[0]);
{
name: 'update_favorite_pets',
args: { pets: [ 'cats', 'parrots' ] },
id: 'call_to1cbIVqMNuahHCdFO9oQzpN'
}
Click here to see the LangSmith trace for the above run.
Chat models only output requests to invoke tools; they don't actually invoke the underlying tools themselves.
To see how to invoke the tools, please refer to how to use a model to call tools.
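As a quick sketch of that last step (reusing the per-user tools generated above; the toolsByName lookup below is just illustrative), the application can route each requested tool call to the matching generated tool and invoke it with the model-supplied arguments:

const tools = generateToolsForUser("brace");
const toolsByName = Object.fromEntries(tools.map((t) => [t.name, t]));

for (const toolCall of aiMessage.tool_calls ?? []) {
  // Look up the tool the model asked for and run it with the arguments
  // the model supplied. The user ID is already baked into the tool itself.
  const selectedTool = toolsByName[toolCall.name];
  if (selectedTool) {
    console.log(await selectedTool.invoke(toolCall.args));
  }
}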