How to add values to a chain's state
An alternate way of passing data through steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The RunnablePassthrough.assign() static method takes an input value and returns it with extra fields merged in, one computed by each of the functions passed to assign(). This is useful in the common LangChain Expression Language pattern of additively building up a dictionary to use as input to a later step.
Here's an example:
import {
  RunnableParallel,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const runnable = RunnableParallel.from({
  extra: RunnablePassthrough.assign({
    mult: (input: { num: number }) => input.num * 3,
    modified: (input: { num: number }) => input.num + 1,
  }),
});

await runnable.invoke({ num: 1 });
{ extra: { num: 1, mult: 3, modified: 2 } }
Let's break down what's happening here.
- The input to the chain is { num: 1 }. This is passed into a RunnableParallel, which invokes the runnables it is passed in parallel with that input.
- The value under the extra key is invoked. RunnablePassthrough.assign() keeps the original keys in the input object ({ num: 1 }) and assigns two new keys: mult, whose value (input) => input.num * 3 evaluates to 3, and modified, whose value (input) => input.num + 1 evaluates to 2, since it reads the num key from its input and adds one.
- The result, { num: 1, mult: 3, modified: 2 }, is returned to the RunnableParallel call and set as the value of the extra key. Thus, the final output is { extra: { num: 1, mult: 3, modified: 2 } }.
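To make the passthrough behavior concrete, here is a minimal sketch of calling RunnablePassthrough.assign() on its own, without the surrounding RunnableParallel (the variable name is illustrative):

import { RunnablePassthrough } from "@langchain/core/runnables";

const assignOnly = RunnablePassthrough.assign({
  mult: (input: { num: number }) => input.num * 3,
});

// The original `num` key passes through unchanged alongside the new `mult` key.
await assignOnly.invoke({ num: 1 });
// => { num: 1, mult: 3 }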
Streaming
One convenient feature of this method is that it allows values to pass
through as soon as they are available. To show this off, we'll use
RunnablePassthrough.assign()
to immediately return source docs in a
retrieval chain:
- npm: npm i @langchain/openai
- yarn: yarn add @langchain/openai
- pnpm: pnpm add @langchain/openai
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  [{ pageContent: "harrison worked at kensho", metadata: {} }],
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever();

const template = `Answer the question based only on the following context:
{context}

Question: {question}
`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({ model: "gpt-4o" });

const generationChain = prompt.pipe(model).pipe(new StringOutputParser());

const retrievalChain = RunnableSequence.from([
  {
    context: retriever.pipe((docs) => docs[0].pageContent),
    question: new RunnablePassthrough(),
  },
  RunnablePassthrough.assign({ output: generationChain }),
]);

const stream = await retrievalChain.stream("where did harrison work?");

for await (const chunk of stream) {
  console.log(chunk);
}
{ question: "where did harrison work?" }
{ context: "harrison worked at kensho" }
{ output: "" }
{ output: "H" }
{ output: "arrison" }
{ output: " worked" }
{ output: " at" }
{ output: " Kens" }
{ output: "ho" }
{ output: "." }
{ output: "" }
We can see that the first chunk contains the original "question", since that is immediately available. The second chunk contains "context", since the retriever finishes second. Finally, the output from the generationChain streams in chunks as soon as it is available.
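As a rough sketch of how you might consume such a stream in an application (reusing the retrievalChain defined above; the handling logic is illustrative, not part of the LangChain API), you can surface the context as soon as it arrives and accumulate the streamed output chunks into a final answer:

const answerStream = await retrievalChain.stream("where did harrison work?");

let answer = "";
for await (const chunk of answerStream) {
  if (chunk.context !== undefined) {
    // The retrieved context arrives before generation finishes,
    // so sources can be shown to the user right away.
    console.log("Context ready:", chunk.context);
  }
  if (chunk.output !== undefined) {
    // Generation chunks arrive incrementally; concatenate them.
    answer += chunk.output;
  }
}
console.log("Final answer:", answer);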
Next steps
Now you've learned how to pass data through your chains to help format the data flowing through them.
To learn more, see the other how-to guides on runnables in this section.