How to handle multiple queries
This guide assumes familiarity with query analysis.
Sometimes a query analysis technique will generate multiple queries. In these cases, we need to run all of the queries and then combine the results. Below is a simple example (using mock data) of how to do that.
Setup
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community @langchain/openai zod chromadb
yarn add @langchain/community @langchain/openai zod chromadb
pnpm add @langchain/community @langchain/openai zod chromadb
Set environment variables
OPENAI_API_KEY=your-api-key
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
Create Index
We will create a vectorstore over fake information.
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";
const texts = ["Harrison worked at Kensho", "Ankush worked at Facebook"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
collectionName: "multi_query",
});
const retriever = vectorstore.asRetriever(1);
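As a quick sanity check (assuming a Chroma instance is reachable and your OpenAI key is set), you can invoke the retriever directly; since we configured it to return a single document, a query mentioning Harrison should surface the Kensho text:
// Returns the single most similar document (k = 1 as configured above)
await retriever.invoke("Harrison");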
Query analysis
We will use function calling to structure the output, and will allow the model to return multiple queries.
import { z } from "zod";
const searchSchema = z
.object({
queries: z.array(z.string()).describe("Distinct queries to search for"),
})
.describe("Search over a database of job records.");
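As a quick illustration of the shape this schema enforces (the sample values here are made up), zod can validate an object against it at runtime, and the corresponding TypeScript type can be inferred from it:
// Throws a ZodError if the object does not match the schema
searchSchema.parse({ queries: ["Harrison work history", "Ankush employer"] });
// Inferred type: { queries: string[] }
type Search = z.infer<typeof searchSchema>;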
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-3.5-turbo",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/firefunction-v1",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-pro",
temperature: 0
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnableSequence,
RunnablePassthrough,
} from "@langchain/core/runnables";
const system = `You have the ability to issue search queries to get information to help answer user questions.
If you need to look up two distinct pieces of information, you are allowed to do that!`;
const prompt = ChatPromptTemplate.fromMessages([
["system", system],
["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
{
question: new RunnablePassthrough(),
},
prompt,
llmWithTools,
]);
We can see that this allows for creating multiple queries:
await queryAnalyzer.invoke("where did Harrison Work");
{ queries: [ "Harrison" ] }
await queryAnalyzer.invoke("where did Harrison and ankush Work");
{ queries: [ "Harrison work", "Ankush work" ] }
Retrieval with query analysis
So how would we include this in a chain? One thing that will make this a lot easier is to call our retriever asynchronously, so that we can loop over the queries without getting blocked on each response.
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";
const chain = async (question: string, config?: RunnableConfig) => {
const response = await queryAnalyzer.invoke(question, config);
const docs = [];
for (const query of response.queries) {
const newDocs = await retriever.invoke(query, config);
docs.push(...newDocs);
}
// You probably want to think about reranking or deduplicating documents here
// But that is a separate topic
return docs;
};
const customChain = new RunnableLambda({ func: chain });
await customChain.invoke("where did Harrison Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
await customChain.invoke("where did Harrison and ankush Work");
[
Document { pageContent: "Harrison worked at Kensho", metadata: {} },
Document { pageContent: "Ankush worked at Facebook", metadata: {} }
]
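Note that the chain above still awaits each query in turn. If you want the retrievals to actually run concurrently, and want to drop duplicate documents when queries overlap, one possible variation (an illustrative sketch, not part of the original chain) is to fan the queries out with Promise.all and deduplicate by page content:
const parallelChain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  // Fire off all retrievals at once instead of awaiting them one by one
  const results = await Promise.all(
    response.queries.map((query) => retriever.invoke(query, config))
  );
  const docs = results.flat();
  // Naive deduplication: keep the first document seen for each page content
  const seen = new Set<string>();
  return docs.filter((doc) => {
    if (seen.has(doc.pageContent)) {
      return false;
    }
    seen.add(doc.pageContent);
    return true;
  });
};
You can wrap this in a RunnableLambda exactly as before to get an invocable chain.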
Next steps
You've now learned some techniques for handling multiple queries in a query analysis system.
Next, check out some of the other query analysis guides in this section, like how to deal with cases where no query is generated.