LangChain 01 – Build a Simple LLM Application with LangChain

Overview

We’re going to explore a number of practical applications of LLMs.

We’ll start with LangChain and build a simple application that translates text from English into another language.

We’ll touch briefly on LangChain’s logging and tracing features via LangSmith.

And then we’ll deploy the application locally with LangServe.

This is adapted from LangChain’s Build a Simple LLM App and llm_chain.ipynb.

More specifically, you’ll get a high-level overview of:

  1. Using language models
  2. Using prompt templates and output parsers
  3. Chaining components together with the LangChain Expression Language (LCEL)
  4. Tracing your application with LangSmith
  5. Deploying your application with LangServe

Setup

Installation

To install LangChain with pip, run:

pip install langchain langchain-openai

Or, if you use conda:

conda install langchain langchain-openai -c conda-forge

For more details, see the Installation guide.

Example Application

We’re going to use this Jupyter notebook to build the application.

You can clone this repo and run everything locally.

You can also run it in Google Colab, but you’ll need to install the dependencies there, and run the cell to set the environment variables.
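If you set the keys by hand, that setup cell might look like the following minimal sketch (it assumes you’re using an OpenAI model; the LangSmith variables are optional and only needed if you want tracing):

import getpass
import os

# Required by ChatOpenAI
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")

# Optional: enable LangSmith tracing for this session
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API key: ")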

Serving with LangServe

In the previous section we wrote a script that programmatically invokes our simple chain.
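As a quick recap, that direct invocation looks roughly like this (a condensed sketch of the notebook code, using the same chain we build in serve.py below):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt -> model -> string output, chained with LCEL's | operator
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "Translate the following into {language}:"),
    ("user", "{text}"),
])
chain = prompt_template | ChatOpenAI() | StrOutputParser()

print(chain.invoke({"language": "italian", "text": "hi"}))  # e.g. "Ciao"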

In this section we’ll see how to serve the application with LangServe.

LangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we’ll show how you can deploy your app with LangServe.

Unlike the previous sections, this part is run as a Python script from the command line rather than in the notebook.

Install with:

pip install "langserve[all]"

Server

To create a server for our application we’ll make a serve.py file.

This will contain our logic for serving our application. It consists of three things:

  1. The definition of our chain that we just built above
  2. A FastAPI app
  3. A definition of a route from which to serve the chain, which is done with langserve.add_routes
serve.py
#!/usr/bin/env python
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langserve import add_routes

# 1. Create prompt template
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ('system', system_template),
    ('user', '{text}')
])

# 2. Create model
model = ChatOpenAI()

# 3. Create parser
parser = StrOutputParser()

# 4. Create chain
chain = prompt_template | model | parser

# 5. App definition
app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)

# 6. Adding chain route
add_routes(
    app,
    chain,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)

And that’s it! If we execute this file:

python serve.py

we should see our chain being served at http://localhost:8000.
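add_routes also exposes plain REST endpoints for the chain (invoke, batch, stream, and more), so you can call the server without any LangChain code on the client. As a minimal sketch with the requests library (assuming the server above is running):

import requests

# LangServe expects the chain's input nested under an "input" key
response = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"language": "italian", "text": "hi"}},
)
print(response.json()["output"])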

Playground

Every LangServe service comes with a simple built-in UI for configuring and invoking the application with streaming output and visibility into intermediate steps.

Head to http://localhost:8000/chain/playground/ to try it out!

Pass in the same inputs as before - {"language": "italian", "text": "hi"} - and you should get the same response.

Client

Now let’s set up a client for programmatically interacting with our service. We can easily do this with langserve.RemoteRunnable. Using this, we can interact with the served chain as if it were running client-side.

client.py
from langserve import RemoteRunnable

# Connect to the served chain and use it like a local Runnable
remote_chain = RemoteRunnable("http://localhost:8000/chain/")
result = remote_chain.invoke({"language": "italian", "text": "hi"})

print(type(result))
print(result)

To learn more about the many other features of LangServe, head here.

Conclusion

For further reading on the core concepts of LangChain, see Conceptual Guides.

See the more detailed how-to guides on the individual components (prompt templates, chat models, output parsers, LangServe), and the LangSmith docs for tracing.
