Quick start guide to get LangChain running in under 5 minutes.
Using uv (recommended for speed):

```shell
uv pip install langchain
```

Using pip:

```shell
pip install langchain
```
With provider extras (the extra already pulls in `langchain`, so it does not need to be listed separately):

```shell
# OpenAI
uv pip install "langchain[openai]"
# Anthropic
uv pip install "langchain[anthropic]"
# Multiple providers
uv pip install "langchain[openai,anthropic]"
```
Set your API key:

```shell
export OPENAI_API_KEY="sk-your-openai-api-key"
```

Or create a .env file:

```shell
OPENAI_API_KEY=sk-your-key
ANTHROPIC_API_KEY=sk-ant-key
```
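LangChain's chat-model clients read these keys from the process environment. A minimal stdlib sketch for failing fast when one is missing (`require_env` is a hypothetical helper, not part of LangChain):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable, or raise a clear error."""
    value = os.environ.get(name, "")
    if not value:
        raise RuntimeError(f"{name} is not set; export it or add it to your .env file")
    return value
```

Calling it once at startup, e.g. `require_env("OPENAI_API_KEY")`, turns a missing key into a readable error instead of a provider 401 later.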
Create agent.py:

```python
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
response = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
print(response)
```
Run it:

```shell
python agent.py
```
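`invoke` returns a dict whose `"messages"` list holds the whole conversation, so the model's answer is the last entry. A small sketch of pulling its text out, using a stand-in dict (real LangChain messages are objects with a `.content` attribute, which the helper also handles):

```python
def last_message_content(response: dict) -> str:
    """Return the text of the final message in an invoke() result."""
    msg = response["messages"][-1]
    # Real LangChain messages expose .content; plain dicts use the "content" key.
    return msg.content if hasattr(msg, "content") else msg["content"]

# Stand-in for an agent.invoke(...) result:
demo = {"messages": [{"role": "assistant", "content": "It's always sunny in sf!"}]}
print(last_message_content(demo))  # It's always sunny in sf!
```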
Create rag_app.py (the Chroma vector store also requires `pip install chromadb`):

```python
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Load documents
loader = TextLoader("documents.txt")
documents = loader.load()

# Split text
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)

# Create retriever
retriever = vectorstore.as_retriever()

# Create chain
def format_docs(docs):
    """Join retrieved documents into a single context string."""
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer based on context: {context}"),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o")
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Query
response = chain.invoke("What is the main topic?")
print(response)
```
Run it:

```shell
python rag_app.py
```
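To see what `chunk_size` and `chunk_overlap` control, here is a deliberately simplified character-window splitter. Note this is only an illustration of the two parameters: the real `CharacterTextSplitter` splits on separators such as `"\n\n"` rather than slicing at fixed offsets.

```python
def split_chars(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Slide a chunk_size window over text; consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_chars("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Overlap trades storage for context: each boundary sentence appears in two chunks, so retrieval is less likely to cut an answer in half.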
```shell
# Main package (recommended)
pip install langchain
# Core only (minimal)
pip install langchain-core
# Community integrations
pip install langchain-community
```

Provider extras:

```shell
# OpenAI
pip install "langchain[openai]"
# Anthropic
pip install "langchain[anthropic]"
# Google
pip install "langchain[google-genai]"
# AWS
pip install "langchain[aws]"
# Azure
pip install "langchain[azure-ai]"
# Ollama (local)
pip install "langchain[ollama]"
# Hugging Face
pip install "langchain[huggingface]"
```
```shell
# Initialize project
uv init my-langchain-app
cd my-langchain-app

# Add LangChain (the extra pulls in langchain itself)
uv add "langchain[openai]"

# Run
uv run python app.py
```
Using Poetry:

```shell
poetry add langchain langchain-openai
poetry add --group dev langchain-community
```
| Package | Purpose |
|---|---|
| langchain | Main framework |
| langchain-core | Base abstractions |
| langchain-community | Community integrations |
| langchain-openai | OpenAI integration |
| langchain-anthropic | Anthropic integration |
| langchain-google-genai | Google AI integration |
| langchain-aws | AWS Bedrock integration |
| langchain-azure-ai | Azure AI integration |
| langchain-ollama | Ollama (local LLM) |
| langchain-huggingface | Hugging Face models |
Enable tracing and monitoring with LangSmith:

```shell
export LANGCHAIN_API_KEY="your-langsmith-key"
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_PROJECT="my-project"
export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
```
In code:

```python
from langchain_core.tracers.langchain import LangChainTracer

tracer = LangChainTracer(project_name="my-project")
# Attach it per call, e.g. chain.invoke(input, config={"callbacks": [tracer]})
```
For complex workflows, use LangGraph:

```shell
pip install langgraph
```

```python
from typing import TypedDict
from langgraph.graph import StateGraph

class State(TypedDict):
    topic: str

# StateGraph requires a state schema
workflow = StateGraph(State)
# ... define nodes and edges
app = workflow.compile()
```