Getting Started with LangchainJS: Build a Flexible AI Prompt Service
A Step-by-Step Guide to Using Langchain to Handle JSON Outputs and Dynamic Prompts
LangchainJS is a library for building AI applications with LLMs, from simple chatbots to complex use cases! I have used it many times, both professionally and in personal projects. It’s a double-edged sword: a lot of people say Langchain is complex because it abstracts too much, which I agree with. On top of that, the documentation is all over the place and constantly changing, making it difficult to know what to use.
But at the same time, it’s a nice, widely used library for starting to work with AI and learning about prompt engineering.
In this article, I’m gonna teach you the technical bits of Langchain, and at the end we’ll build a nice reusable class that you can use as a base for your AI-powered projects.
Goal: Create a reusable AI prompt service that you can use for multiple scenarios in your application.
Requirements:
Separate System Prompt and User Prompt
Return structured JSON data as a response
Prompts should have dynamic variables for flexibility
It should allow selecting a different model and other parameters
Given the goal and requirements, let’s build this. But first, I need to quickly explain some Langchain concepts:
📋 Langchain Concepts
ChatOpenAI
ChatOpenAI is a wrapper class around OpenAI LLMs that use the chat model (the chat completions endpoint). Chat models are language models that take a sequence of messages as input and return chat messages as output.
To use it, you need to have the OPENAI_API_KEY environment variable set.
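If you keep the key in a .env file, one option is loading it with the dotenv package before constructing the model. Most versions also let you pass the key directly as a constructor option, though the option name has changed between releases, so treat this as a sketch:

// Load environment variables (including OPENAI_API_KEY) from a .env file
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

// Alternatively, pass the key explicitly; the option is called `apiKey` in
// recent versions of @langchain/openai (older ones used `openAIApiKey`)
const model = new ChatOpenAI({ apiKey: process.env.OPENAI_API_KEY });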
You use it like so:
import { ChatOpenAI } from "@langchain/openai";

// Create a new instance of ChatOpenAI with a specific temperature and model name
const model = new ChatOpenAI({
  temperature: 0.5,
  model: "gpt-4o",
});

// Invoke the model with a message
const message = await model.invoke("What is Langchain?");
console.log(message);
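One thing to note: invoking a chat model returns an AIMessage object rather than a plain string, so the reply text lives on its content property:

// The reply text is on the AIMessage's content property
console.log(message.content);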
Prompt Templates
Prompt templates help translate user input into instructions for a language model. They can be used to guide a model's response, helping it understand the context and generate relevant output. We’re gonna use this to fulfill one of our requirements: “Prompts should have dynamic variables”.
PromptTemplate
Used for simple use cases, with plain string input.
import { PromptTemplate } from "@langchain/core/prompts";
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

await promptTemplate.invoke({ topic: "cats" });
ChatPromptTemplate
Instead of a simple string, it receives a list of messages, each with a specific role.
import { ChatPromptTemplate } from "@langchain/core/prompts";
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

await promptTemplate.invoke({ topic: "cats" });
🏗️ Structured Output
There are many ways to return JSON responses with Langchain and OpenAI. I have tried a few, but my favorite is using withStructuredOutput + Zod. Zod is a library for creating schemas, similar to TypeScript types, except the schemas are also used at runtime to validate data.
By calling withStructuredOutput and passing it the Zod schema, the model abstracts away whatever parsers are necessary to return structured output matching the schema.
import { z } from "zod";

const schema = z.object({
  dish: z.string(),
  ingredients: z.array(z.string()),
});

const structuredLlm = model.withStructuredOutput(schema);

await structuredLlm.invoke("Give me a recipe using potatoes");
This returns:
{
  "dish": "Garlic Parmesan Roasted Potatoes",
  "ingredients": ["potatoes", "olive oil", "garlic", "grated Parmesan cheese", "salt", "black pepper", "dried oregano", "dried thyme", "fresh parsley"]
}
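As an aside, since Zod schemas mirror TypeScript types, you can derive the response type straight from the schema with z.infer instead of maintaining a separate interface; a quick sketch:

import { z } from "zod";

const schema = z.object({
  dish: z.string(),
  ingredients: z.array(z.string()),
});

// Derive the TypeScript type from the schema so the two never drift apart
type Recipe = z.infer<typeof schema>;

// The generic pins the return type of invoke to our derived type
const structuredLlm = model.withStructuredOutput<Recipe>(schema);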
🛠️ Creating the class
Now, let’s create that class combining the concepts above so that we can reuse it in our app.
First, we have a class called OpenAIService. In the constructor, we define the model with the config parameters, such as which model we want to use, the temperature, and the max concurrency.
export class OpenAIService {
  private model: ChatOpenAI

  constructor(config?: Config) {
    this.model = new ChatOpenAI({
      model: config?.model || 'gpt-4o',
      temperature: config?.temperature ?? 0.5,
      maxConcurrency: 5
    })
  }
}
Then we define a method called promptAI. In this method, we create the new structured model using withStructuredOutput, which returns a Runnable, meaning it can be invoked and/or piped, basically forming a chain. We pass the Zod schema to the structured output model via the input parameters.
We also create our prompt template with ChatPromptTemplate, defining both the system and the user prompt. Both prompts are passed in via the input parameters.
async promptAI<T extends Record<string, any>>(
  input: InputParameters
): Promise<T> {
  try {
    const structuredModel = this.model.withStructuredOutput<T>(input.schema)

    const promptTemplate = ChatPromptTemplate.fromMessages([
      ['system', input.systemPrompt],
      ['user', input.userPrompt]
    ])

    // (we'll pipe these together and invoke the chain in the next step)
  } catch (error) {
    console.error((error as Error).message)
    throw error
  }
}
We now have the model and the prompt, but we need to invoke the prompt passing in the variables, like we saw in the ChatPromptTemplate section.
We can’t just invoke the template directly, though; we need to go through our structured model. So first we create a chain using the pipe method.
const chain = promptTemplate.pipe(structuredModel)
const result = await chain.invoke(input.params ?? {})
The result will contain the JSON data matching the Zod schema you passed in. input.params is optional, and only needed when there are dynamic variables in the prompt.
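For a prompt without any variables, we can simply invoke the chain with an empty object, since the template has nothing to substitute:

// No dynamic variables in the prompt, so invoke with an empty object
const result = await chain.invoke({})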
We don’t need to create the chain and then invoke it separately; let’s do both in one go:
const result = await promptTemplate
  .pipe(structuredModel)
  .invoke(input.params ?? {})
Final code
import { ChatOpenAI } from '@langchain/openai'
import { ChatPromptTemplate } from '@langchain/core/prompts'

export class OpenAIService {
  private model: ChatOpenAI

  constructor(config?: Config) {
    this.model = new ChatOpenAI({
      model: config?.model || 'gpt-4o',
      temperature: config?.temperature ?? 0.5,
      maxConcurrency: 5
    })
  }

  async promptAI<T extends Record<string, any>>(
    input: InputParameters
  ): Promise<T> {
    try {
      const promptTemplate = ChatPromptTemplate.fromMessages([
        ['system', input.systemPrompt],
        ['user', input.userPrompt]
      ])

      const structuredModel = this.model.withStructuredOutput<T>(input.schema)

      const result = await promptTemplate
        .pipe(structuredModel)
        .invoke(input.params ?? {})

      return result
    } catch (error) {
      console.error((error as Error).message)
      throw error
    }
  }
}
Let’s not forget the types we used:
import { z } from 'zod'

export type InputParameters = {
  systemPrompt: string
  userPrompt: string
  schema: z.ZodTypeAny
  params?: Record<string, any>
}

export type Config = {
  temperature?: number
  model?: string
}
🔥 Using the class
Let’s say I need to generate recipes based on the user input of ingredients.
System Prompt: “You are a skilled chef AI, specialized in creating unique recipes. When the user provides a list of ingredients, generate a creative recipe using those ingredients. Keep instructions clear and concise, and include only common kitchen techniques to make it easy to follow”
User Prompt: “Create a recipe using the following ingredients: {ingredients}.”
Zod schema:
const recipeSchema = z.object({
  recipe: z.string().describe('Recipe title'),
  ingredients: z.array(z.string()).describe('List of ingredients'),
  instructions: z.string()
})
Let’s now use our class with the parameters we defined above.
interface Recipe {
  recipe: string
  ingredients: string[]
  instructions: string
}
const userIngredients = ['pasta', 'eggs', 'bacon']

const openAiService = new OpenAIService({
  temperature: 0.6,
  model: 'gpt-4o'
})

const aiResponse = await openAiService.promptAI<Recipe>({
  systemPrompt: 'You are a skilled chef AI, specialized in creating unique recipes. When the user provides a list of ingredients, generate a creative recipe using those ingredients. Keep instructions clear and concise, and include only common kitchen techniques to make it easy to follow',
  userPrompt: 'Create a recipe using the following ingredients: {ingredients}.',
  params: {
    ingredients: userIngredients.join(', ')
  },
  schema: recipeSchema
})
Perfect! aiResponse will be the JSON response we need. userIngredients, of course, could come from anywhere, such as a database, the UI, etc.
As you can see, we pass the ingredients param because we use the {ingredients} variable in the userPrompt.
That’s all 🎉 This service covers most simple use cases, but it can definitely be improved, for example to support RAG using Document Loaders and Retrievers.
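To give a flavor of that direction, here’s a minimal sketch of how retrieval could plug into the service we just built, using Langchain’s in-memory vector store. Exact import paths and retriever methods vary across Langchain versions, so take this as an illustration rather than a drop-in implementation (it reuses the recipeSchema, Recipe, userIngredients, and openAiService defined above):

import { MemoryVectorStore } from 'langchain/vectorstores/memory'
import { OpenAIEmbeddings } from '@langchain/openai'

// Index a couple of example documents in an in-memory vector store
const vectorStore = await MemoryVectorStore.fromTexts(
  [
    'Carbonara is made with eggs, pecorino, guanciale, and black pepper.',
    'Cacio e pepe uses only pecorino, black pepper, and pasta water.'
  ],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
)

// Fetch the documents most relevant to the user's ingredients
const retriever = vectorStore.asRetriever()
const docs = await retriever.invoke('pasta recipes with eggs and bacon')

// Inject the retrieved context into the prompt as just another variable
const aiResponse = await openAiService.promptAI<Recipe>({
  systemPrompt: 'You are a skilled chef AI. Base your recipe on this context: {context}',
  userPrompt: 'Create a recipe using the following ingredients: {ingredients}.',
  params: {
    context: docs.map((doc) => doc.pageContent).join('\n'),
    ingredients: userIngredients.join(', ')
  },
  schema: recipeSchema
})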
Thank you for reading! Make sure to keep following me so you know when I post more content; I really appreciate it. (I would love it if you shared this article with someone who wants to learn more about AI implementation.)
Check out my other posts too!