
LangChain 101: Building a Chatbot from Scratch

Learn the basics of LangChain and LLMs to build a chatbot with Groq and Streamlit for FREE.

Today, we'll explore the fundamentals of this powerful framework and build a functional chatbot along the way. This is intended to be a comprehensive guide on LangChain basics. Watch the video for a more detailed version of this article.

What is LangChain?

Let's start with a proper definition, shall we?

LangChain: An open-source framework designed to facilitate the development of applications using Large Language Models (LLMs). It provides a standardized interface for chains, prompts, and integration of external data sources and tools with LLMs.

In simpler terms, LangChain is like a Swiss Army knife for AI applications. You reach for it when you want your LLM to do more than generate witty comebacks. In my opinion, the best feature is that you can plug almost any LLM into your application and swap it out just as easily.
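For instance, swapping one provider for another is usually a one-line change. Here's a quick sketch (assuming the relevant provider packages are installed):

from langchain_groq import ChatGroq
# from langchain_openai import ChatOpenAI  # a different provider, installed separately

llm = ChatGroq(model='llama3-8b-8192')
# llm = ChatOpenAI(model='gpt-4o-mini')    # swap the model; the rest of your code stays the same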

The Building Blocks of LangChain

LangChain operates on three main components. Let's break them down:

  1. LLMs (Large Language Models): Artificial intelligence models trained on vast amounts of text data to understand and generate human-like text. They are the brains of the operation. We'll be using Groq because, apparently, good things can be free.

  2. Prompts: Structured input provided to an LLM to guide its output in a specific direction. Think of them as conversation starters: the small talk of the AI world, but hopefully more enjoyable.

  3. Output Parsers: Components that process and structure the raw output from an LLM into a more usable format.

As you explore LangChain, you will also encounter many other components. One example is Agents: advanced constructs that use an LLM to decide which actions to take and which tools to use. Agents can make your applications far more versatile, but since they are a bit more advanced, we won't be covering them today.

Check out the video if you want a more detailed look at how it all works, with a proper code breakdown in Jupyter.

Setting Up Your Development Environment

Before we dive into coding, let's prepare our workspace. After all, a poorly set-up environment is as helpful as a chocolate teapot. First, create and activate a virtual environment; a quick sketch (the activation command varies by OS):
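python -m venv .venv
source .venv/bin/activate   # macOS/Linux; on Windows: .venv\Scripts\activate

Once inside the virtual environment: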

  1. Install the required packages:

pip install langchain langchain-groq streamlit python-dotenv
  2. Get your API keys:

    • For LangSmith (optional): LangSmith is used for testing and debugging LLM applications, but we can also use it to monitor how our chains work to better understand them. Visit LangSmith Settings to get your API key.

    • For Groq (our LLM provider): Head to Groq Console and grab your FREE API key; you'll get access to multiple open-source LLMs.

    Pro tip: Keep these keys secret. Treat them like your diary from middle school – not for public consumption.

  3. Set up your environment variables:

from dotenv import load_dotenv

load_dotenv()

This loads your API keys from a .env file.
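For reference, a minimal .env file for this project might look like this (the values are placeholders; the two LangSmith lines are optional):

GROQ_API_KEY=your-groq-api-key
LANGCHAIN_API_KEY=your-langsmith-api-key
LANGCHAIN_TRACING_V2=true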

LangChain's Core Components

Let's examine the three primary components we'll be working with, starting with the LLM itself. In our project, we'll use Groq's chat model implementation:

from langchain_groq import ChatGroq

llm = ChatGroq(model='llama3-8b-8192')

LangChain provides the ChatPromptTemplate for creating dynamic prompts:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with a slight sarcastic edge."),
    ("user", "{user_input}")
])
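To see what a template produces, you can invoke it on its own, which is handy for debugging (a quick sketch; the input is just an example):

messages = prompt.invoke({"user_input": "Hello there!"})
print(messages)  # a ChatPromptValue wrapping the system and user messages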

We'll use the StrOutputParser for simplicity:

from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()
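Why bother with a parser? Because a chat model returns a message object, not plain text. StrOutputParser simply extracts the string content; a quick sketch:

raw = llm.invoke("Say hi in five words.")  # returns an AIMessage object
text = output_parser.invoke(raw)           # pulls out the plain string content
print(text)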

A chain in LangChain combines these components to create a processing pipeline. Let's construct a basic chain:

chain = prompt | llm | output_parser
response = chain.invoke({"user_input": "Explain quantum computing in simple terms."})
print(response)

This chain takes user input, formats it with the prompt, processes it through the LLM, and then parses the output.
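A nice bonus: chains built this way also support streaming out of the box, so you can print tokens as they arrive instead of waiting for the full answer. A small sketch using the same chain:

for chunk in chain.stream({"user_input": "Explain quantum computing in simple terms."}):
    print(chunk, end="", flush=True)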

Crafting Your First Chain

Now, let's see how these components work together to create a basic chain.

from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize the LLM
llm = ChatGroq(model='llama3-8b-8192')

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with a slightly sarcastic sense of humor."),
    ("user", "{user_input}")
])

# Construct the chain
chain = prompt | llm | StrOutputParser()

# Invoke the chain
response = chain.invoke({"user_input": "Explain quantum computing like I'm five."})

print(response)

Let's break this down one more time:

1. We initialize our LLM (ChatGroq) with a specific model.

2. We create a ChatPromptTemplate, which structures our input to the LLM.

3. We build our chain using the `|` operator, which is LangChain's way of saying "pipe this into that".

4. Finally, we invoke the chain with our input.
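Because everything here is a Runnable, the composed chain also gets conveniences like batching for free. A quick sketch with example questions:

responses = chain.batch([
    {"user_input": "What is an API?"},
    {"user_input": "What is a GPU?"},
])
print(responses)  # a list with one string response per input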

Adding Memory: Because Even AIs Need to Remember

Now, let's give our chatbot a memory. After all, what's the point of artificial intelligence if it can't remember that you hate pineapple on pizza?

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

# Initialize chat history
msgs = ChatMessageHistory()

# Create a new prompt template with chat history
prompt_with_history = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with a slightly sarcastic sense of humor."),
    ("placeholder", "{chat_history}"),
    ("user", "{query}")
])

# Construct the chain with history
chain_with_history = RunnableWithMessageHistory(
    prompt_with_history | llm | StrOutputParser(),
    lambda session_id: msgs,  # always return the same history object (single-session demo)
    input_messages_key="query",
    history_messages_key="chat_history"
)

# Have a conversation
config = {"configurable": {"session_id": "sarcastic_chat"}}
response1 = chain_with_history.invoke({"query": "Tell me a joke about programming."}, config=config)

print(response1)

response2 = chain_with_history.invoke({"query": "Now explain the joke."}, config=config)

print(response2)

Here's what's happening:

1. We create a ChatMessageHistory to store our conversation.

2. We update our prompt template to include a placeholder for chat history.

3. We use RunnableWithMessageHistory to create a chain that maintains a conversation state.

4. We invoke the chain multiple times, and it remembers previous interactions.
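To verify the memory is working, you can peek at what the history object has accumulated after the two calls above (a quick sketch):

for m in msgs.messages:
    print(f"{m.type}: {m.content[:60]}")  # alternating 'human' and 'ai' entries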

Now, it can maintain context across multiple interactions. I'd recommend watching the video, since it drives these concepts home in a more practical and fun way.

Building a Streamlit App: Bringing Your Chatbot to Life

Finally, let's wrap our chatbot in a Streamlit app for a more user-friendly interface. After all, not everyone appreciates the raw beauty of a command line:

import streamlit as st
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

st.title("🤖 Sarcastic AI Assistant")

# Model selection and temperature setting
models = ["llama3-70b-8192", "llama3-8b-8192", "mixtral-8x7b-32768", "gemma-7b-it"]

model = st.selectbox("Choose your LLM model:", models)
temperature = st.slider("Set the sarcasm level:", 0.0, 2.0, 1.0, 0.1)

# Initialize LLM
llm = ChatGroq(model=model, temperature=temperature)

# Set up prompt and chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with a slightly sarcastic sense of humor."),
    ("placeholder", "{chat_history}"),
    ("user", "{query}")
])

chain = prompt | llm | StrOutputParser()

# Set up chat history (backed by st.session_state, so it survives reruns)
msgs = StreamlitChatMessageHistory()

# Create chain with history
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: msgs,
    input_messages_key="query",
    history_messages_key="chat_history"
)

# Chat interface
if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Note: we use user_query rather than reusing the name `prompt`,
# which already refers to our ChatPromptTemplate above.
if user_query := st.chat_input("Ask me anything, if you dare..."):
    st.session_state.messages.append({"role": "user", "content": user_query})

    with st.chat_message("user"):
        st.markdown(user_query)

    with st.chat_message("assistant"):
        config = {"configurable": {"session_id": "sarcastic_chat"}}
        response = chain_with_history.invoke({"query": user_query}, config=config)
        st.markdown(response)

    st.session_state.messages.append({"role": "assistant", "content": response})

st.sidebar.write(f"Current model: {llm.model_name}")

This Streamlit app does the following:

1. Provides a user interface for selecting the LLM model and adjusting the "sarcasm level" (temperature).

2. Initializes the LLM and sets up the chain with history.

3. Creates a chat interface where users can interact with the AI.

4. Displays the conversation history and the current model in use.
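One design note: StreamlitChatMessageHistory already persists the conversation in st.session_state, so the manual st.session_state.messages list above exists purely to re-render the chat on each rerun. If you prefer, you could render straight from the history object instead; a quick sketch:

for msg in msgs.messages:
    st.chat_message(msg.type).markdown(msg.content)  # msg.type is 'human' or 'ai'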

To run this masterpiece, save it as a .py file and run:

streamlit run your_app_name.py

And voilà! You now have a web app to chat with a slightly sarcastic AI. It's like talking to a tech support rep with fewer sighs and eye-rolls.

Conclusion

Congratulations! You've successfully navigated the basics of LangChain and created a functional, slightly sarcastic chatbot. You've also learned about LLMs, prompts, chains, and how to implement memory in your AI applications. Plus, you've wrapped it all in a Streamlit app.

Remember, this is just scratching the surface of what LangChain can do. There's a whole world of advanced features waiting for you to explore, like connecting to databases, using different LLMs, and even giving your AI the ability to use external tools. We will cover some of them in the future.

Use your newfound LangChain skills wisely. Maybe don't use them to automate your social media presence or to write your wedding vows—unless, of course, you want your significant other to question your life choices.

Happy coding, and may your error messages be few! 💻


That's it from me! I hope this exploration was helpful in some way! What are your thoughts on LangChain? What topics would you like me to cover next?

If you found value in this article, please share it with someone who might also benefit from it. Your support helps spread knowledge and inspires more content like this. Don't forget to like this article and share your thoughts and experiences below! :)
