# How to use Runnables as Tools
Here we will demonstrate how to convert a LangChain Runnable into a tool that can be used by agents, chains, or chat models.
## Dependencies

**Note**: this guide requires `@langchain/core` >= 0.2.16. We will also use OpenAI for embeddings, but any LangChain embeddings should suffice. We will use a simple LangGraph agent for demonstration purposes.

```bash
npm install @langchain/core @langchain/langgraph @langchain/openai zod
```
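Since the examples below use OpenAI for both the chat model and the embeddings, an API key needs to be available. The snippet below is a small sketch, not part of the original guide; it assumes the default `OPENAI_API_KEY` environment variable read by the `@langchain/openai` integrations:

```typescript
// Assumption: @langchain/openai reads the OPENAI_API_KEY environment variable.
// Prefer setting it in your shell; assigning it in code is for quick experiments only.
process.env.OPENAI_API_KEY = "YOUR_API_KEY";
```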
LangChain [tools](/docs/concepts#tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.
LangChain tools (instances of [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html)) are [Runnables](/docs/concepts/#runnable-interface) with additional constraints that enable them to be invoked effectively by language models:
- Their inputs are constrained to be serializable, specifically strings and objects;
- They contain names and descriptions indicating how and when they should be used;
- They contain a detailed `schema` property for their arguments. That is, while a tool (as a `Runnable`) might accept a single object input, the specific keys and type information needed to populate an object should be specified in the `schema` field.
Runnables that accept string or object inputs can be converted to tools using the [`asTool`](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#asTool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.
## Basic usage
With object input:
```typescript
import { RunnableLambda } from "@langchain/core/runnables";
import { z } from "zod";

const schema = z.object({
  a: z.number(),
  b: z.array(z.number()),
});

const runnable = RunnableLambda.from<z.infer<typeof schema>, number>(
  (input) => {
    return input.a * Math.max(...input.b);
  }
);

const asTool = runnable.asTool({
  name: "My tool",
  description: "Explanation of when to use tool.",
  schema,
});
```

```typescript
console.log(asTool.description);
```

```text
Explanation of when to use tool.
```

```typescript
await asTool.invoke({ a: 3, b: [1, 2] });
```

```text
6
```
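To see how a chat model would populate these arguments, we can bind the tool to a tool-calling model and pass the generated arguments back into the tool. This is a minimal sketch, not part of the original guide: it assumes an OpenAI chat model, assumes `bindTools` accepts the converted tool directly, and re-creates the tool under the hypothetical name `multiply_by_max`, since some providers reject tool names containing spaces.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical variant of the tool above with a provider-friendly name.
const multiplyByMax = runnable.asTool({
  name: "multiply_by_max",
  description: "Multiply a by the maximum of b.",
  schema,
});

const model = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });
const modelWithTool = model.bindTools([multiplyByMax]);

const aiMessage = await modelWithTool.invoke(
  "What is 3 multiplied by the maximum of 1 and 2?"
);

// The generated arguments match the tool's schema, so they can be passed
// straight back into the tool.
for (const toolCall of aiMessage.tool_calls ?? []) {
  console.log(toolCall.args);
  console.log(
    await multiplyByMax.invoke(toolCall.args as { a: number; b: number[] })
  );
}
```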
String input is also supported:
```typescript
const firstRunnable = RunnableLambda.from<string, string>((input) => {
  return input + "a";
});

const secondRunnable = RunnableLambda.from<string, string>((input) => {
  return input + "z";
});

const runnable = firstRunnable.pipe(secondRunnable);

const asTool = runnable.asTool({
  schema: z.string(),
});
```

```typescript
await asTool.invoke("b");
```

```text
baz
```
## In agents
Below we will incorporate LangChain Runnables as tools in an agent application. We will demonstrate with:

- a document retriever;
- a simple RAG chain, allowing a model to dispatch relevant queries to it.
We first instantiate a chat model that supports tool calling:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });
```
Following the RAG tutorial, let's first construct a retriever:
```typescript
import { Document } from "@langchain/core/documents";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const documents = [
  new Document({
    pageContent:
      "Dogs are great companions, known for their loyalty and friendliness.",
  }),
  new Document({
    pageContent: "Cats are independent pets that often enjoy their own space.",
  }),
];

const vectorstore = await MemoryVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever({
  k: 1,
  searchType: "similarity",
});
```
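As a quick sanity check (not part of the original guide), we can invoke the retriever directly; since `k: 1`, it should return the single most similar document:

```typescript
// Retrieve the most similar document for a query.
const docs = await retriever.invoke("dogs");

console.log(docs.map((doc) => doc.pageContent));
```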
We next use a simple pre-built LangGraph agent and provide it our tool:
```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const tools = [
  retriever.asTool({
    name: "pet_info_retriever",
    description: "Get information about pets.",
    schema: z.string(),
  }),
];

const agent = createReactAgent({ llm, tools });
```
We can now invoke the agent:

```typescript
const stream = await agent.stream({
  messages: [["human", "What are dogs known for?"]],
});
for await (const chunk of stream) {
  console.log(chunk);
  console.log("----");
}
```

```text
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD', 'function': {'arguments': '{"__arg1":"dogs"}', 'name': 'pet_info_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 60, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d7f81de9-1fb7-4caf-81ed-16dcdb0b2ab4-0', tool_calls=[{'name': 'pet_info_retriever', 'args': {'__arg1': 'dogs'}, 'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD'}], usage_metadata={'input_tokens': 60, 'output_tokens': 19, 'total_tokens': 79})]}}
----
{'tools': {'messages': [ToolMessage(content="[Document(id='86f835fe-4bbe-4ec6-aeb4-489a8b541707', page_content='Dogs are great companions, known for their loyalty and friendliness.')]", name='pet_info_retriever', tool_call_id='call_W8cnfOjwqEn4cFcg19LN9mYD')]}}
----
{'agent': {'messages': [AIMessage(content='Dogs are known for being great companions, known for their loyalty and friendliness.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ca5847a-a5eb-44c0-a774-84cc2c5bbc5b-0', usage_metadata={'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152})]}}
----
```
See the LangSmith trace for the above run.
Going further, we can create a simple RAG chain that takes an additional parameter: here, the "style" of the answer.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence, type RunnableConfig } from "@langchain/core/runnables";

const systemPrompt = `
You are an assistant for question-answering tasks.
Use the below context to answer the question. If
you don't know the answer, say you don't know.
Use three sentences maximum and keep the answer
concise.

Answer in the style of {answer_style}.

Question: {question}

Context: {context}
`;

const prompt = ChatPromptTemplate.fromMessages([["system", systemPrompt]]);

const ragChain = RunnableSequence.from([
  {
    context: (input: { question: string }, config?: RunnableConfig) =>
      retriever.invoke(input.question, config),
    question: (input: { question: string }) => input.question,
    answer_style: (input: { answer_style: string }) => input.answer_style,
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
```
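Before converting the chain into a tool, we can invoke it directly to confirm the shape of its input. This is a quick sketch, not part of the original guide, and the exact wording of the answer will vary:

```typescript
await ragChain.invoke({
  question: "What are dogs known for?",
  answer_style: "a pirate",
});
```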
Note that the chain requires a `question` and an `answer_style` argument. We describe these required arguments with a `zod` schema when converting the chain into a tool:

```typescript
const ragTool = ragChain.asTool({
  name: "pet_expert",
  description: "Get information about pets.",
  schema: z.object({
    question: z.string(),
    answer_style: z.string(),
  }),
});
```
Below we again invoke the agent. Note that the agent populates the required parameters in its `tool_calls`:
```typescript
const agent = createReactAgent({ llm, tools: [ragTool] });

const stream = await agent.stream({
  messages: [["human", "What would a pirate say dogs are known for?"]],
});
for await (const chunk of stream) {
  console.log(chunk);
  console.log("----");
}
```

```text
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_17iLPWvOD23zqwd1QVQ00Y63', 'function': {'arguments': '{"question":"What are dogs known for according to pirates?","answer_style":"quote"}', 'name': 'pet_expert'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 59, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7fef44f3-7bba-4e63-8c51-2ad9c5e65e2e-0', tool_calls=[{'name': 'pet_expert', 'args': {'question': 'What are dogs known for according to pirates?', 'answer_style': 'quote'}, 'id': 'call_17iLPWvOD23zqwd1QVQ00Y63'}], usage_metadata={'input_tokens': 59, 'output_tokens': 28, 'total_tokens': 87})]}}
----
{'tools': {'messages': [ToolMessage(content='"Dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages."', name='pet_expert', tool_call_id='call_17iLPWvOD23zqwd1QVQ00Y63')]}}
----
{'agent': {'messages': [AIMessage(content='According to pirates, dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 119, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5a30edc3-7be0-4743-b980-ca2f8cad9b8d-0', usage_metadata={'input_tokens': 119, 'output_tokens': 27, 'total_tokens': 146})]}}
----
```
See the LangSmith trace for the above run.