LangChain for agents
LangChain supplies the primitives: message types, model wrappers, runnables, retrievers, and structured tool schemas (@tool). You compose directed pipelines for preprocessing, deterministic routing, context packing, single-hop tool calls, or nodes that feed into LangGraph for loops.
When you need durable loops and branching, move up to LangGraph for agents.
Process — compose then invoke
LCEL intuition: A | B | C means "the output schema of A must satisfy the input expectations of B" (and likewise B into C). Fail fast with typed contracts.
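A toy sketch of that pipe intuition in plain Python (the `Step` class here is hypothetical, not LangChain's actual Runnable): each step checks its input before running, so a contract mismatch fails at the boundary rather than deep inside the chain.

```python
# Toy sketch of LCEL-style composition: each step validates its input
# before running, so a schema mismatch fails fast at the boundary.
# Illustrative plain Python, NOT langchain_core's Runnable implementation.

class Step:
    def __init__(self, fn, expects=object):
        self.fn = fn
        self.expects = expects  # input type this step will accept

    def __or__(self, other):
        # A | B returns a new Step that runs A, then feeds its output to B
        return Step(lambda x: other.invoke(self.invoke(x)), self.expects)

    def invoke(self, x):
        if not isinstance(x, self.expects):
            raise TypeError(f"expected {self.expects.__name__}, got {type(x).__name__}")
        return self.fn(x)

to_upper = Step(str.upper, expects=str)
exclaim = Step(lambda s: s + "!", expects=str)
chain = to_upper | exclaim
print(chain.invoke("hello"))  # HELLO!
```

Calling `chain.invoke(3)` raises a TypeError immediately at the first step, which is the fail-fast behavior typed contracts buy you.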
Tool binding pipeline
Tool docstrings become the model-facing descriptions: keep them factual and concise, because models read them verbatim.
Chain vs agent decision
Easy rule: LangChain is excellent for repeatable typed steps. Move to a graph when the next step depends on prior observations or when you need checkpoints and human interrupts.
Minimal example — bind tools without a manual loop
The example below illustrates composition around a chat model. In production loops, prefer LangGraph patterns (e.g. create_react_agent in the LangGraph tool tutorial); chains still excel for deterministic pre- and post-steps.
```python
# pip install langchain-core langchain-openai
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def unit_convert_c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
bound = llm.bind_tools([unit_convert_c_to_f])
msg = bound.invoke([HumanMessage("Convert 37C to Fahrenheit using the tool")])
print(msg.tool_calls or msg.content)
```

Senior bar: cite streaming chunks, cancellation tokens, and retries on transport layers (distinct from semantic repair).
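Without an agent loop, the tool_calls the model returns still have to be executed and their observations fed back. A minimal caller-side sketch, using plain dicts in LangChain's tool-call shape ({"name": ..., "args": ..., "id": ...}); the dispatch table and function names are illustrative:

```python
# Sketch of the caller-side dispatch step: the model returns tool calls as
# dicts like {"name": ..., "args": ..., "id": ...}; running them is your job
# unless an agent loop (e.g. LangGraph) does it for you. Plain Python here.

def unit_convert_c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

TOOLS = {"unit_convert_c_to_f": unit_convert_c_to_f}

def run_tool_calls(tool_calls):
    # Map each call to (call id, observation) for the follow-up ToolMessage.
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]  # KeyError here = model named a nonexistent tool
        results.append((call["id"], fn(**call["args"])))
    return results

calls = [{"name": "unit_convert_c_to_f", "args": {"celsius": 37.0}, "id": "call_1"}]
print(run_tool_calls(calls))  # [('call_1', 98.6)]
```

Keeping dispatch explicit like this is what makes the chain-vs-graph decision concrete: once this loop needs to repeat based on observations, you are writing an agent and should reach for LangGraph.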
Tool design standards
| Standard | Example |
|---|---|
| Small surface | get_order_status(order_id) beats query_database(sql). |
| Typed arguments | Prefer enums, ids, bounded strings, and numeric ranges. |
| Server-side authorization | Tool receives user/tenant context and checks access before work. |
| No hidden writes | Retrieval tools should not mutate production state. |
| Idempotency | Write tools require a request id or deterministic operation key. |
| Clear observation | Return concise facts, ids, and status codes; do not return secrets or raw stack traces. |
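The table's standards can be shown in a single write tool. A hedged sketch with in-memory stand-ins (ORDERS, refund_order, and the request-id set are all hypothetical): enum-typed argument, server-side tenant check before any work, idempotency via request id, and a concise observation string.

```python
# Sketch of the tool-design standards in one tool: enum-typed argument,
# server-side authorization, idempotency key, concise observation.
# All names here (ORDERS, refund_order, ...) are hypothetical.
from enum import Enum

class RefundReason(Enum):
    DAMAGED = "damaged"
    LATE = "late"

ORDERS = {"o-1": {"tenant": "acme", "refunded": False}}
_SEEN_REQUESTS: set = set()

def refund_order(order_id: str, reason: RefundReason, tenant: str, request_id: str) -> str:
    order = ORDERS.get(order_id)
    if order is None or order["tenant"] != tenant:
        return "denied"                    # authz checked before any work
    if request_id in _SEEN_REQUESTS:
        return "duplicate"                 # idempotent: a retried write is a no-op
    _SEEN_REQUESTS.add(request_id)
    order["refunded"] = True
    return f"refunded {order_id} ({reason.value})"  # concise fact, no secrets

print(refund_order("o-1", RefundReason.DAMAGED, tenant="acme", request_id="r-1"))
print(refund_order("o-1", RefundReason.DAMAGED, tenant="acme", request_id="r-1"))  # duplicate
```

Note how the tenant and request id arrive as server-supplied context, not as model-controlled free text; the model only chooses the order id and the enum.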
TypeScript tangent
LangChain.js mirrors the same primitives (bindTools, DynamicStructuredTool). Use JS when deploying on Node-heavy stacks; diagrams above still apply.
Interview questions — LangChain/LCEL
1. What is a Runnable in LangChain terms?
- An object exposing invoke/ainvoke, stream, and batch, composable with | and carrying schema expectations.
2. Why gate tool descriptions?
- They become part of prompt surface—overlong catalogs waste tokens & invite wrong routing.
3. Difference between PromptTemplate chaining vs Runnable passthrough.
- RunnablePassthrough carries keys forward unchanged so you can merge them (e.g. context + question); PromptTemplate only binds variables into a prompt.
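A toy illustration of that passthrough-merge pattern in plain Python (not langchain_core's actual RunnablePassthrough; fake_retriever is a hypothetical stand-in): a retriever and a passthrough fan out from the same input, and the merged dict feeds the prompt step.

```python
# Toy illustration of the passthrough-merge pattern: fan the input out to a
# retriever and a passthrough, then hand the merged dict to the prompt step.
# Plain Python sketch, NOT langchain_core's RunnablePassthrough.

def fake_retriever(question: str) -> list:
    # Hypothetical stand-in for a real retriever runnable.
    return ["Paris is the capital of France."]

def with_context(question: str) -> dict:
    # Merge: {"context": retrieved docs, "question": the input, untouched}
    return {"context": fake_retriever(question), "question": question}

prompt_input = with_context("What is the capital of France?")
print(prompt_input["question"])  # the question passed through unchanged
```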
4. Where would retrieval live?
- In a dedicated retriever runnable before the model call, or inside a LangGraph node; either way, keep ACL checks and dedupe at the retriever boundary.
5. How avoid silent JSON drift on outputs?
- Prefer structured output / Pydantic parsers with a capped repair loop; see the Gen AI structured outputs narrative on /gen-ai.
6. Why avoid a huge tool catalog?
- It increases token cost and wrong-tool routing. Expose only the tools relevant to the current task and user permissions.
7. Where do retries belong?
- Transport retries belong around providers/tools. Semantic retries belong in a capped repair path with trace visibility.
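The transport-vs-semantic split above can be made concrete with a small sketch (call_with_retry, TransportError, and the flaky function are all hypothetical): transient transport errors get capped retries with backoff, while schema failures would go to a separate repair path instead of being retried blindly here.

```python
# Sketch of transport-layer retries: retry only transient transport errors,
# with a capped attempt budget and exponential backoff. Semantic failures
# (bad schema, wrong tool) belong in a separate capped repair path.
# All names here are hypothetical.
import time

class TransportError(Exception):
    pass

def call_with_retry(fn, attempts=3, base_delay=0.01):
    for i in range(attempts):
        try:
            return fn()
        except TransportError:
            if i == attempts - 1:
                raise                        # budget exhausted: surface it
            time.sleep(base_delay * 2 ** i)  # exponential backoff

failures = {"left": 2}

def flaky():
    # Fails twice with a transient error, then succeeds.
    if failures["left"] > 0:
        failures["left"] -= 1
        raise TransportError("connection reset")
    return "ok"

print(call_with_retry(flaky))  # ok
```

Emit a trace event per attempt in real systems so retry storms are visible rather than silently absorbed.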