In the world of modern technology, artificial intelligence (AI) is revolutionizing the way we interact with computer systems. One of the most interesting applications of AI is the creation of advanced chatbots, capable of understanding and answering questions in a natural and contextualized way. In this article we will see how to create a chatbot using LangChain and OpenAI (GPT).
In the previous article of our guide, we saw how to use LangChain with Ollama to create a simple application capable of responding to user prompts. Now we will build on that topic and create a chatbot that provides contextualized responses based on the entire conversation, improving interaction and the user experience.
Implementing a ChatBot with LangChain
The first thing to do is install the LangChain module for integration with OpenAI. This module provides the functionality needed to connect and interact with OpenAI GPT models:
pip install -qU langchain-openai
As in the previous tutorial, we will use a ChatPromptTemplate to define the system prompt. Our goal is to build a chatbot that helps users plan a trip.
Below are the key steps:
- Instead of using Ollama, this time we will use ChatOpenAI, the specific connector for OpenAI.
- We will insert a MessagesPlaceholder, which will later be replaced by the list of messages that make up the conversation. This placeholder is essential for maintaining the context of the conversation.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOpenAI(model="gpt-3.5-turbo")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an assistant who helps users plan trips."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = prompt | llm
To manage the history of the conversation, we will use the ChatMessageHistory object, which represents a list of messages. This will allow us to keep track of the messages exchanged during the chat session.
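To see what ChatMessageHistory provides conceptually, here is a minimal stand-in written in plain Python (not part of LangChain): an ordered list of messages with helpers to append user and AI turns. The class name `SimpleMessageHistory` is purely illustrative.

```python
# Minimal sketch (plain Python, not LangChain) of what a message history
# object does: keep an ordered list of messages with helpers to append
# user and AI turns.

class SimpleMessageHistory:
    def __init__(self):
        self.messages = []

    def add_user_message(self, content):
        self.messages.append({"role": "user", "content": content})

    def add_ai_message(self, content):
        self.messages.append({"role": "assistant", "content": content})


history = SimpleMessageHistory()
history.add_user_message("Hi, I need help planning a trip")
history.add_ai_message("Of course! Where are you thinking of going?")
print(len(history.messages))  # 2
```

The real ChatMessageHistory works the same way at this level: `add_user_message` and `add_ai_message` append to the `messages` list, which we will later pass to the chain.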
Furthermore, to simulate the workflow of a chat, we will create a loop that will end only when the user enters the “exit” command. This loop will allow the chatbot to continue responding until the user decides to end the conversation.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import ChatMessageHistory

llm = ChatOpenAI(model="gpt-3.5-turbo")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an assistant who helps users plan trips."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = prompt | llm

history = ChatMessageHistory()

while True:
    message = input("User: ")
    if message.lower() == "exit":
        break
    history.add_user_message(message)
    # Pass the full conversation so the model can answer in context
    response = chain.invoke({"messages": history.messages})
    history.add_ai_message(response.content)
    print("AI: " + response.content)
The heart of the program is the invoke method. It takes the message history as input and substitutes it into the MessagesPlaceholder defined earlier. In this way, the GPT model can generate responses that take the entire conversation into account.
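Conceptually, the placeholder substitution can be sketched in plain Python (no LangChain involved): the final message list sent to the model is the system message followed by every message in the history. The `build_prompt` helper below is purely illustrative.

```python
# Conceptual sketch (plain Python, not LangChain) of what happens when
# invoke fills the MessagesPlaceholder: the history is spliced into the
# prompt after the system message, so the model sees the whole conversation.

def build_prompt(system_prompt, history):
    # System message first, then every message exchanged so far
    return [{"role": "system", "content": system_prompt}] + history


history = [
    {"role": "user", "content": "Hi, I need help planning a trip"},
    {"role": "assistant", "content": "Where would you like to go?"},
    {"role": "user", "content": "I would like to go to Mexico!"},
]

messages = build_prompt("You are an assistant who helps users plan trips.", history)
print(len(messages))        # 4: one system message plus three history messages
print(messages[0]["role"])  # system
```

Because the full history is resent on every turn, each new response can reference earlier messages, which is exactly what gives the chatbot its context.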
So all that remains is to test our chatbot with a few requests:
User: Hi, I need help planning a trip
AI: Of course! I'd be happy to help you plan your trip. Where are you thinking of going and what kind of trip are you interested in?
User: I would like to go to Mexico!
AI: That's great! Mexico is a beautiful country with so much to offer. What specific cities or regions are you interested in visiting in Mexico? And do you have any particular activities or attractions in mind that you'd like to include in your itinerary?
...
User: exit
In this tutorial, we explored how to use LangChain with OpenAI and prompt templates to create dynamic, custom language chains. LangChain offers great flexibility and power, allowing you to build complex and scalable natural language processing applications.