Conversational RAG
In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of “memory” of past questions and answers, and some logic for incorporating those into its current thinking.
In this guide we focus on adding logic for incorporating historical messages. Further details on chat history management are covered here.
We will cover two approaches:
- Chains, in which we always execute a retrieval step;
- Agents, in which we give an LLM discretion over whether and how to execute a retrieval step (or multiple steps).
For the external knowledge source, we will use the same LLM Powered Autonomous Agents blog post by Lilian Weng from the RAG tutorial.
Setup
Dependencies
We’ll use an OpenAI chat model and embeddings and an in-memory vector store in this walkthrough, but everything shown here works with any ChatModel or LLM, Embeddings, and VectorStore or Retriever.
We’ll use the following packages:
npm install --save langchain @langchain/openai cheerio
We need to set the environment variable OPENAI_API_KEY:
export OPENAI_API_KEY=YOUR_KEY
LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
Initial setup
import "cheerio";
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
const loader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/"
);
const docs = await loader.load();
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splits = await textSplitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(
  splits,
  new OpenAIEmbeddings()
);
// Retrieve and generate using the relevant snippets of the blog.
const retriever = vectorStore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const ragChain = await createStuffDocumentsChain({
  llm,
  prompt,
  outputParser: new StringOutputParser(),
});
Let’s see what this prompt actually looks like:
console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n"));
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
await ragChain.invoke({
  context: await retriever.invoke("What is Task Decomposition?"),
  question: "What is Task Decomposition?",
});
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 208 more characters
Contextualizing the question
First we’ll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it references any information in the historical messages.
We’ll use a prompt that includes a MessagesPlaceholder variable under the name “chat_history”. This allows us to pass in a list of Messages to the prompt using the “chat_history” input key, and these messages will be inserted after the system message and before the human message containing the latest question.
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
const contextualizeQSystemPrompt = `Given a chat history and the latest user question
which might reference context in the chat history, formulate a standalone question
which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is.`;
const contextualizeQPrompt = ChatPromptTemplate.fromMessages([
  ["system", contextualizeQSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);
const contextualizeQChain = contextualizeQPrompt
  .pipe(llm)
  .pipe(new StringOutputParser());
Using this chain we can ask follow-up questions that reference past messages and have them reformulated into standalone questions:
import { AIMessage, HumanMessage } from "@langchain/core/messages";
await contextualizeQChain.invoke({
  chat_history: [
    new HumanMessage("What does LLM stand for?"),
    new AIMessage("Large language model"),
  ],
  question: "What is meant by large",
});
'What is the definition of "large" in the context of a language model?'
Chain with chat history
And now we can build our full QA chain.
Notice we add some routing functionality to only run the “condense question chain” when our chat history isn’t empty. Here we’re taking advantage of the fact that if a function in an LCEL chain returns another chain, that chain will itself be invoked.
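As a rough standalone illustration of that coercion (the names below are hypothetical and not part of this walkthrough), a function wrapped in a RunnableLambda that returns another runnable will have that runnable invoked with the same input:
import { RunnableLambda } from "@langchain/core/runnables";

// A tiny chain that upper-cases its input.
const shout = RunnableLambda.from((input: { text: string }) => input.text.toUpperCase());

// A function that returns either a plain value or the `shout` chain.
// When it returns the chain, LCEL invokes that chain with the same input.
const maybeShout = RunnableLambda.from((input: { text: string }) =>
  input.text.length > 0 ? shout : ""
);

await maybeShout.invoke({ text: "hello" }); // "HELLO"
await maybeShout.invoke({ text: "" }); // ""
The full chain below uses the same mechanism inside RunnablePassthrough.assign.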
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";
const qaSystemPrompt = `You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.
{context}`;
const qaPrompt = ChatPromptTemplate.fromMessages([
  ["system", qaSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);
const contextualizedQuestion = (input: Record<string, unknown>) => {
  // If chat history is present, return the chain that condenses the question;
  // otherwise pass the latest question through unchanged.
  if ("chat_history" in input) {
    return contextualizeQChain;
  }
  return input.question;
};

const ragChain = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: (input: Record<string, unknown>) => {
      if ("chat_history" in input) {
        // The returned chain is itself invoked with the same input, so the
        // condensed question is what gets passed to the retriever.
        const chain = contextualizedQuestion(input);
        return chain.pipe(retriever).pipe(formatDocumentsAsString);
      }
      return "";
    },
  }),
  qaPrompt,
  llm,
]);
import { BaseMessage } from "@langchain/core/messages";

let chat_history: BaseMessage[] = [];

const question = "What is task decomposition?";
const aiMsg = await ragChain.invoke({ question, chat_history });
console.log(aiMsg);

// Record both the user question and the model's answer so that
// follow-up questions can be contextualized against the full exchange.
chat_history = chat_history.concat([new HumanMessage(question), aiMsg]);

const secondQuestion = "What are common ways of doing it?";
await ragChain.invoke({ question: secondQuestion, chat_history });
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Task decomposition is a technique used to break down complex tasks into smaller and more manageable "... 278 more characters,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Task decomposition is a technique used to break down complex tasks into smaller and more manageable "... 278 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Common ways of task decomposition include using prompting techniques like Chain of Thought (CoT) or "... 332 more characters,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Common ways of task decomposition include using prompting techniques like Chain of Thought (CoT) or "... 332 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
See the first LangSmith trace here and the second trace here.
Here we’ve gone over how to add application logic for incorporating historical messages, but we’re still manually updating the chat history and inserting it into each input. In a real Q&A application we’ll want some way of persisting chat history and some way of automatically inserting and updating it.
For this we can use:
- BaseChatMessageHistory: Store chat history.
- RunnableWithMessageHistory: Wrapper for an LCEL chain and a BaseChatMessageHistory that handles injecting chat history into inputs and updating it after each invocation (a minimal sketch follows below).
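As a minimal sketch of how these pieces fit together (the in-memory session store and session id below are assumptions for illustration, not part of this guide), we can wrap the ragChain above in a RunnableWithMessageHistory backed by a ChatMessageHistory:
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// Assumed for illustration: one in-memory history per session id.
const sessionHistories: Record<string, ChatMessageHistory> = {};

const conversationalRagChain = new RunnableWithMessageHistory({
  runnable: ragChain,
  getMessageHistory: (sessionId: string) => {
    if (!(sessionId in sessionHistories)) {
      sessionHistories[sessionId] = new ChatMessageHistory();
    }
    return sessionHistories[sessionId];
  },
  inputMessagesKey: "question",
  historyMessagesKey: "chat_history",
});

// The wrapper injects the stored messages as "chat_history" and appends the
// new question and answer to the history after each invocation.
await conversationalRagChain.invoke(
  { question: "What is task decomposition?" },
  { configurable: { sessionId: "example-session" } }
);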
For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the How to add message history (memory) LCEL page.