The tooling for building production-ready AI agents has evolved quickly. Two key tools in this space are LangChain v1 and LangGraph v1—and while they’re closely related, they solve different layers of the problem.
This guide breaks down both, how they work, and when to use each.
🚀 LangChain v1 Overview
LangChain v1 is designed for building AI agents quickly and reliably in production. It abstracts away complexity and gives you a clean, standardized interface.
1. The create_agent API
The biggest shift in v1 is the introduction of a single, unified API for agents.
Instead of stitching together chains and tools manually, you can now define everything in one place:
```python
from langchain.agents import create_agent

agent = create_agent(
    model="claude-sonnet-4-6",
    tools=[search_web, analyze_data],
    system_prompt="You are a helpful research assistant.",
)
```
Why it matters
- Eliminates boilerplate
- Standardizes agent patterns
- Built on top of LangGraph (so you get durability for free)
- Supports middleware for customization
2. Standard Content Blocks
LangChain v1 introduces a provider-agnostic output format via content_blocks.
```python
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(block["reasoning"])
    elif block["type"] == "text":
        print(block["text"])
```
Key benefits
- Works across OpenAI, Anthropic, Google, AWS, Ollama
- Structured outputs: reasoning, text, citations, tool calls
- Strong typing + backward compatibility
👉 This removes one of the biggest pains: provider-specific parsing logic
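As a concrete illustration of the shape involved, the loop below handles a provider-agnostic block list. The hand-built `content_blocks` list and the `render` helper are made up for this example—they stand in for a real model response, which would arrive in the same shape from any provider:

```python
# Illustrative only: a hand-built list standing in for response.content_blocks.
# Real blocks arrive in this shape from OpenAI, Anthropic, Google, etc.
content_blocks = [
    {"type": "reasoning", "reasoning": "The user wants a summary."},
    {"type": "text", "text": "Here is your summary."},
]

def render(blocks):
    """Collect printable text from a provider-agnostic block list."""
    out = []
    for block in blocks:
        if block["type"] == "reasoning":
            out.append(f"[thinking] {block['reasoning']}")
        elif block["type"] == "text":
            out.append(block["text"])
    return out

print(render(content_blocks))
```

The point is that the dispatch on `block["type"]` never changes when you swap providers—only the upstream model does.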
3. Middleware System
Middleware gives you fine-grained control over agent execution.
Built-in middleware
- HumanInTheLoopMiddleware → approval before sensitive actions
- SummarizationMiddleware → compress long context
- PIIMiddleware → redact sensitive data
Lifecycle hooks
| Hook | Purpose |
|---|---|
| before_agent | Validate input, load memory |
| before_model | Modify prompts |
| wrap_model_call | Intercept LLM calls |
| wrap_tool_call | Control tool execution |
| after_model | Apply guardrails |
| after_agent | Persist results |
👉 Think of middleware as Express.js for AI agents
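The Express.js analogy can be made concrete with a framework-free sketch: each middleware wraps the next handler and runs its hook logic around the model call. Everything here—`call_model`, the middleware functions—is a hypothetical stand-in, not LangChain's actual middleware classes:

```python
# A minimal middleware chain, sketched without LangChain: each middleware
# receives the next handler and decides what happens before/after it runs.
def call_model(prompt):
    # Stand-in for the real LLM call.
    return f"model output for: {prompt}"

def logging_middleware(next_handler):
    def handler(prompt):
        print("before_model:", prompt)   # e.g. modify prompts
        result = next_handler(prompt)
        print("after_model:", result)    # e.g. apply guardrails
        return result
    return handler

def redaction_middleware(next_handler):
    def handler(prompt):
        # PIIMiddleware-style input scrubbing, crudely simulated
        return next_handler(prompt.replace("secret", "[redacted]"))
    return handler

# Compose like Express: the outermost middleware runs first.
pipeline = logging_middleware(redaction_middleware(call_model))
print(pipeline("my secret plan"))
```

The ordering matters exactly as it does in Express: wrapping order determines which hook sees the request first and the response last.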
4. Built on LangGraph (Important!)
Every agent created with create_agent automatically inherits:
- Persistence → resume conversations
- Streaming → real-time tokens + tool calls
- Human-in-the-loop → pause for approval
- Time travel → replay and branch execution
You don’t see LangGraph—but it’s doing the heavy lifting underneath.
5. Simplified Package Design
LangChain v1 is now agent-focused and minimal:
| Module | Responsibility |
|---|---|
| langchain.agents | Agent creation |
| langchain.messages | Messages + content blocks |
| langchain.tools | Tooling system |
| langchain.chat_models | Model abstraction |
| langchain.embeddings | Embeddings |
Legacy features moved to:
`pip install langchain-classic`
6. Better Structured Output
Structured output is now:
- Cheaper (fewer LLM calls)
- Native to agent loop
- More reliable
With features like:
- ToolStrategy
- Built-in error handling (handle_errors)
- Multi-tool coordination
7. Multimodal Support
LangChain supports mixed inputs:
- Text
- Images
- Video (depending on provider)
All unified through the same message + content block system.
✅ When to Use LangChain
Use LangChain if you want to:
- Build agents fast
- Focus on business logic, not infrastructure
- Use standard patterns
- Avoid low-level orchestration
⚙️ LangGraph v1 Overview
LangGraph is a low-level orchestration engine for building stateful, long-running agents.
If LangChain is the “framework,” LangGraph is the runtime system.
1. Graph-Based Architecture
LangGraph models execution as a graph:
| Component | Role |
|---|---|
| State | Shared data |
| Nodes | Functions (LLM, tools, logic) |
| Edges | Execution flow |
Key concepts
- Super-steps → execution ticks
- Message passing → state flows between nodes
- Command → control flow + updates
- Send → parallel fan-out
👉 This enables deterministic, debuggable workflows
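To make the state/nodes/edges model concrete, here is a toy executor in plain Python—not LangGraph's API. Nodes are functions over a shared state dict, edges map each node to its successor, and each loop iteration corresponds to one super-step:

```python
# Toy graph runtime: shared state dict, nodes as functions, explicit edges.
def generate(state):
    state["draft"] = f"draft about {state['topic']}"
    return state

def review(state):
    state["approved"] = "draft" in state
    return state

nodes = {"generate": generate, "review": review}
edges = {"generate": "review", "review": "END"}

def run(state, entry="generate"):
    current = entry
    while current != "END":          # each iteration = one super-step
        state = nodes[current](state)
        current = edges[current]     # message passing: state flows onward
    return state

print(run({"topic": "agents"}))
```

Because the flow is just data (the `edges` dict), it is inspectable and deterministic—which is what makes real LangGraph workflows debuggable.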
2. Durable Execution
LangGraph is built for failure-resistant systems:
- Checkpoint every step
- Resume after crashes
- Avoid re-running side effects (@task)
- Configurable durability modes (sync, async, exit)
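The checkpointing idea can be sketched without the library: persist state after every step, and on restart skip the steps that already completed. The in-memory `checkpoints` list here is a stand-in for a real durable backend:

```python
# Checkpoint-every-step execution: after a crash, resume from the last
# saved step instead of re-running completed work (and its side effects).
checkpoints = []  # stand-in for a durable store (SQLite, Postgres, ...)

def run_steps(steps, state, resume_from=0):
    for i, step in enumerate(steps):
        if i < resume_from:
            continue                          # already done: skip side effects
        state = step(state)
        checkpoints.append((i, dict(state)))  # durability: save after each step
    return state

steps = [
    lambda s: {**s, "fetched": True},
    lambda s: {**s, "summarized": True},
]
state = run_steps(steps, {"fetched": False, "summarized": False})

# Simulate a crash-and-restart that resumes after step 0:
resumed = run_steps(steps, dict(checkpoints[0][1]), resume_from=1)
print(resumed)
```

The durability modes listed above control when that save happens: synchronously per step, in the background, or only on exit.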
3. Persistence & State Management
- Thread-based execution (thread_id)
- Full state history tracking
- Manual state updates (update_state)
- Multiple backends (SQLite, Postgres, Redis, etc.)
4. Human-in-the-Loop (HITL)
Pause execution anywhere:
`interrupt()`
Resume later:
`Command(resume=...)`
Use cases
- Approval workflows
- Manual review
- Tool validation
- Multi-step human feedback
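Mechanically, interrupt-and-resume behaves like a generator that yields a question and waits for an answer. The sketch below is only an analogy in plain Python—`interrupt()` and `Command(resume=...)` are not generators—but the control flow is the same:

```python
# Model interrupt()/Command(resume=...) with a generator: execution pauses
# at yield and continues when a human sends a decision back in.
def approval_flow(action):
    decision = yield f"Approve '{action}'?"   # ~ interrupt(): pause here
    if decision == "yes":
        yield f"{action}: executed"
    else:
        yield f"{action}: cancelled"

flow = approval_flow("delete records")
question = next(flow)        # runs until the pause point
print(question)
result = flow.send("yes")    # ~ Command(resume="yes"): continue from the pause
print(result)
```

The crucial property—shared with LangGraph—is that the paused flow keeps its local state, so resumption picks up exactly where execution stopped.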
5. Memory Store
- Cross-session memory
- Semantic search via embeddings
- Namespaced storage (e.g. per user)
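At its simplest, a namespaced store is a key-value map keyed by a tuple like `(user_id, category)`. This sketch shows only the namespacing idea—it is not LangGraph's Store interface, and it omits embeddings, so there is no semantic search:

```python
# Minimal namespaced memory store: namespaces isolate memories per user.
store = {}

def put(namespace, key, value):
    store.setdefault(namespace, {})[key] = value

def get(namespace, key):
    return store.get(namespace, {}).get(key)

put(("user-1", "preferences"), "tone", "formal")
put(("user-2", "preferences"), "tone", "casual")

print(get(("user-1", "preferences"), "tone"))
print(get(("user-2", "preferences"), "tone"))
```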
6. Time Travel
One of the most powerful features:
- Replay from checkpoints
- Fork execution paths
- Debug past states
👉 This is Git for agent execution
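The "Git for agent execution" idea boils down to keeping every checkpoint and branching from any of them. A list of state snapshots is enough to show the mechanics (the `history` data and `fork` helper are invented for this illustration):

```python
# Time travel over checkpoints: inspect a past state, then fork a new branch.
history = [
    {"step": 0, "messages": ["hi"]},
    {"step": 1, "messages": ["hi", "draft A"]},
    {"step": 2, "messages": ["hi", "draft A", "final A"]},
]

def fork(history, at_step, new_message):
    """Branch from the checkpoint at `at_step` with an alternative update."""
    base = history[at_step]
    return {"step": at_step + 1,
            "messages": base["messages"] + [new_message]}

branch = fork(history, 1, "draft B")   # replay step 1, take a different path
print(branch)
```

Note that forking never mutates `history`—old checkpoints stay intact for debugging, exactly like commits on an abandoned branch.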
7. Streaming
Flexible streaming modes:
- values
- updates
- messages
- Combined streams
Supports nested graphs and HITL flows.
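The difference between the modes can be sketched with generators: "updates" yields only what each node changed, while "values" yields the full state after every super-step. This is a pure-Python illustration of the distinction, not the actual streaming API:

```python
# Two streaming views over the same execution: per-node deltas vs full state.
def execute_steps(state):
    for name, update in [("generate", {"draft": "v1"}),
                         ("review", {"approved": True})]:
        state = {**state, **update}
        yield name, update, state

def stream(mode):
    for name, update, state in execute_steps({}):
        if mode == "updates":
            yield {name: update}      # just what this node changed
        elif mode == "values":
            yield dict(state)         # full state after the super-step
        # a "messages" mode would yield LLM tokens as they arrive

print(list(stream("updates")))
print(list(stream("values")))
```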
8. Advanced Capabilities
- Subgraphs (modular workflows)
- Functional API (simpler alternative)
- Node caching (TTL-based)
- Dependency injection (context_schema)
- Built-in visualization
- Optional encryption
✅ When to Use LangGraph
Use LangGraph if you need:
- Full control over execution
- Complex workflows (multi-step, branching)
- Reliability for long-running agents
- Debugging + observability
- Human-in-the-loop systems
🔥 LangChain vs LangGraph: What’s the Difference?
This is where most people get confused.
🧠 Mental Model
- LangChain = “What the agent does”
- LangGraph = “How the agent runs”
⚔️ Side-by-Side Comparison
| Category | LangChain v1 | LangGraph v1 |
|---|---|---|
| Level | High-level | Low-level |
| Goal | Build agents fast | Control execution |
| Abstraction | Prebuilt patterns | Graph primitives |
| Learning curve | Easy | Steeper |
| Flexibility | Moderate | Very high |
| Control flow | Implicit | Explicit (nodes + edges) |
| State management | Hidden | Fully controlled |
| Durability | Built-in (via LangGraph) | Core feature |
| Human-in-loop | Middleware | Native (interrupt) |
| Debugging | Limited | Full replay + time travel |
🧩 How They Work Together
They are not competitors.
👉 LangChain is built on top of LangGraph
```
Your App
   ↓
LangChain (create_agent)
   ↓
LangGraph (execution engine)
```
🧭 When to Choose What
Choose LangChain if:
- You want to ship quickly
- Your workflow is relatively standard
- You don’t need deep control
Choose LangGraph if:
- You need custom orchestration
- You care about reliability and recovery
- You’re building complex agent systems
