
Introduction to LangChain with Ollama and llama3


LangChain is an open-source Python framework designed to facilitate the development of applications based on large language models (LLMs). This framework offers a set of tools, components, and interfaces that make it easier to build AI-powered applications. In this tutorial we will see how to create an elementary application integrated with the llama3 model.

What is LangChain?

Launched by Harrison Chase in 2022, LangChain saw a rapid rise to fame, becoming one of the fastest-growing open-source projects on GitHub. The framework acts as a generic interface for almost all LLMs, offering a centralized environment for developing LLM applications and integrating them with external data sources.

Unlike Spring AI, which we presented in previous articles and which is still in beta, LangChain is a stable and complete solution with an ecosystem of integrated tools. The core library of the framework is in fact distributed together with other components:

  • LangSmith: a monitoring tool that provides observability of the "chains" at runtime and helps identify potential improvements.
  • LangServe: a server for deploying your LLM-based applications, turning them into APIs.
  • Third-party libraries: integrations that connect LangChain with external tools such as OpenAI or Ollama.

Installing LangChain

As a prerequisite for this guide, we invite you to read our article explaining how to run llama3 on Ollama. You also need Python installed on your device, along with the LangChain library and the community integrations package (which provides the Ollama connector). Install both by running the following command in the console:

pip install langchain langchain-community

Once the download is complete we can create an elementary Python script for a first test:

Python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")
out = llm.invoke("what is LangChain?")

print(out)

The script consists of two simple instructions:

  • On line 3 we instantiated the Ollama client and specified that we want to use the “llama3” model.
  • On the next line we invoked the LLM by asking the question “what is LangChain?”

Before running the program, make sure that Ollama is running locally and that the llama3 model has been downloaded. The output of the script will contain the answer to the question “what is LangChain?”.
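If you are unsure whether the server is up, a quick check like the following can be run first. This is a sketch using only the Python standard library; it assumes Ollama's default port 11434, and the helper name `is_ollama_running` is our own invention, not part of any library:

```python
import urllib.request
import urllib.error


def is_ollama_running(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server responds on the given host."""
    try:
        with urllib.request.urlopen(host, timeout=2) as response:
            # The Ollama root endpoint replies with a plain "Ollama is running".
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


print(is_ollama_running())
```

If this prints False, start the server (for example with `ollama serve`) before running the script above.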

The use of chains

LangChain takes a modular approach, representing complex processes as composable building blocks. These components can be combined and reused to create custom LLM applications.
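The pipe (`|`) syntax used below builds on this idea: each component receives the output of the previous one. As a conceptual illustration only (a toy model, not LangChain's actual implementation), the composition mechanism can be sketched in plain Python:

```python
class Step:
    """Toy stand-in for a LangChain component: a wrapped function."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # "self | other" builds a new step that runs self, then other.
        return Step(lambda value: other.invoke(self.invoke(value)))


# Two toy steps: build a prompt string, then "answer" it.
prompt = Step(lambda topic: f"Write a post about {topic}")
fake_llm = Step(lambda text: text.upper())

chain = prompt | fake_llm
print(chain.invoke("LangChain"))  # WRITE A POST ABOUT LANGCHAIN
```

Python calls `__or__` when it evaluates `prompt | fake_llm`, which is exactly how LangChain overloads the `|` operator to wire components together.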

Chains are the core of LangChain workflows. A chain is a sequence of steps executed in order, from start to finish, to produce a result. As an example, we will define a second script, this time also using a ChatPromptTemplate.

Python
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate

llm = Ollama(model="llama3")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a social media manager who writes social media posts on the topic provided as input."),
    ("user", "{input}")
])

chain = prompt | llm 

out = chain.invoke({"input": "what is LangChain?"})
print(out)

Let’s walk through the script in order:

  • First, we defined the Ollama connector, as in the previous example.
  • Next, on line 6 we created a ChatPromptTemplate, which defines the structure of a generic conversation. Here we used a “system” message to set the behavior of the LLM: it must impersonate a “social media manager” and respond to the input with a post to publish on social media.
  • Finally, we combined the prompt template with the LLM using the pipe (|) operator, obtaining a chain that first formats the prompt and then sends it to the model.

Below is the result obtained:

Ready to unlock new linguistic possibilities? Try LangChain today and start speaking like a native in no time! #LangChain #LanguageLearning #Travel #Business

Depending on the model class used, the content of the “out” variable may be a message object rather than a plain string (chat models, for instance, return an AIMessage). To guarantee a string output, we can add a third block to our chain using a StrOutputParser:

Python
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

...

output_parser = StrOutputParser()

chain = prompt | llm | output_parser
out = chain.invoke({"input": "what is LangChain?"})

print(out)
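Conceptually, an output parser is just one more step at the end of the chain that turns the model's raw output into a string. Here is a self-contained plain-Python sketch of that idea (an illustration only; the class and function names are invented and are not LangChain's real internals):

```python
class FakeMessage:
    """Toy stand-in for a chat-model message object."""

    def __init__(self, content):
        self.content = content


def fake_llm(prompt_text):
    # A chat model typically wraps its answer in a message object.
    return FakeMessage(f"Answer to: {prompt_text}")


def str_output_parser(output):
    # Extract the plain string, whether we got a message or a str.
    return output.content if hasattr(output, "content") else output


out = str_output_parser(fake_llm("what is LangChain?"))
print(out)  # Answer to: what is LangChain?
```

Because the parser passes plain strings through unchanged, appending it to a chain is harmless even when the model already returns a string.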

This concludes our first tutorial on LangChain. We hope you found it useful and invite you to read our next articles on the same topic.