Agentic AI — Track overview
Agentic AI means the model does more than answer once: it can choose the next step in a controlled workflow, such as retrieving evidence, calling a tool, asking for approval, retrying, or stopping. The model proposes actions; the application enforces state, permissions, budgets, validation, and audit.
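The propose/enforce split can be sketched in a few lines. This is an illustrative skeleton, not any specific SDK's API: `ALLOWED_ACTIONS`, `MAX_STEPS`, and the function names are invented for the example. The model only recommends an action; the application decides what actually runs and records both for audit.

```python
from dataclasses import dataclass, field

# Application-owned policy (illustrative values, not from a real SDK)
ALLOWED_ACTIONS = {"retrieve", "call_tool", "ask_approval", "stop"}
MAX_STEPS = 5

@dataclass
class Run:
    steps: int = 0
    trace: list = field(default_factory=list)

def enforce(run: Run, proposed: str) -> str:
    """Application-owned gate: the model recommends, the app decides."""
    if run.steps >= MAX_STEPS:
        return "stop"              # budget exhausted: forced stop
    if proposed not in ALLOWED_ACTIONS:
        return "ask_approval"      # unknown action: escalate to a human
    return proposed

def step(run: Run, proposed: str) -> str:
    action = enforce(run, proposed)
    run.steps += 1
    run.trace.append((proposed, action))  # audit proposed vs executed
    return action
```

For example, a proposal of `"delete_database"` is not in the allow-list, so the gate downgrades it to `"ask_approval"` instead of executing it.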
Study this track top to bottom, or jump to one layer when you already know the rest.
2026 baseline
| Anchor | What it means in practice |
|---|---|
| NIST AI RMF + Generative AI Profile | Treat agents as risk-managed systems: map risks, measure behavior, govern changes, and keep human accountability. |
| OWASP LLM Top 10 2025 | Design against prompt injection, sensitive data disclosure, improper output handling, excessive agency, vector weaknesses, misinformation, and unbounded consumption. |
| OWASP Agentic security guidance | Agent risk is not only the model; it is also tool scope, memory, delegated autonomy, identity, and recoverability. |
| Modern agent SDKs | Prefer typed tools, guardrails, tracing, handoffs, and structured outputs over free-form "do anything" agents. |
| LangGraph-style runtimes | Use explicit state graphs, checkpoints, human interrupts, and durable execution for non-trivial loops. |
Simple rule: do not deploy an agent where a deterministic workflow is enough. Use agents when the task needs multi-step reasoning over changing context or tools, and then narrow the action space.
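"Narrow the action space" concretely means typed, validated tools rather than free-form execution. A minimal sketch, with invented names (`RefundArgs`, `refund_tool`, the `ord_` prefix, and the approval threshold are all assumptions for illustration): the agent can call only this one function, and its arguments are checked before anything irreversible happens.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RefundArgs:
    order_id: str
    amount_cents: int

REFUND_LIMIT_CENTS = 5_000  # assumption: larger refunds need human approval

def refund_tool(raw: dict) -> str:
    # Dataclass construction rejects missing or unexpected fields
    args = RefundArgs(**raw)
    if not args.order_id.startswith("ord_"):
        raise ValueError("malformed order id")
    if args.amount_cents > REFUND_LIMIT_CENTS:
        return "needs_approval"   # irreversible above threshold: escalate
    return f"refunded {args.amount_cents} cents on {args.order_id}"
```

The design point: even if a prompt injection convinces the model to request a huge refund, the tool boundary (not the prompt) is what blocks it.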
Topic map
| Page | Focus |
|---|---|
| Agentic architecture workflow | End-to-end request path: orchestration, RAG, tools, state, human review, observability |
| Agentic fundamentals | Agents, tools, loops vs one-shot prompts, state transitions, guardrails |
| Agent memory, state & storage | Checkpoints, scratchpad, long-term memory, vector memory, audit storage |
| LangChain for agents | Messages, runnables, typed tools, structured chains used inside agents |
| LangGraph for agents | State graphs, routing, checkpoints, human interrupts, durable execution |
| LangSmith observability | Tracing, datasets, evaluations, regression gates, release feedback |
| Agentic production | Serving path, security controls, failures, rollout discipline, on-call probes |
Companion: Generative AI hub · Security · Gen AI interview pacing.
How the pieces fit together
State-to-state lifecycle
How to read it: every box is a state owned by your application. The model may recommend the next step, but the orchestrator decides whether the transition is legal.
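The "orchestrator decides whether the transition is legal" idea can be sketched as an explicit transition table. The state names below are invented for illustration; real graphs (e.g. in LangGraph-style runtimes) would carry richer state, but the principle is the same: a model suggestion is only accepted if the corresponding edge exists.

```python
# Application-owned state graph: which state-to-state moves are legal.
# State names are illustrative, not from a specific runtime.
LEGAL = {
    "plan":         {"retrieve", "call_tool", "stop"},
    "retrieve":     {"plan", "call_tool"},
    "call_tool":    {"plan", "ask_approval", "stop"},
    "ask_approval": {"plan", "stop"},
}

def next_state(current: str, suggested: str) -> str:
    """Accept the model's suggestion only if that edge exists in the graph."""
    edges = LEGAL.get(current, set())
    if suggested in edges:
        return suggested
    # Illegal suggestion: escalate if possible, otherwise stop safely
    return "ask_approval" if "ask_approval" in edges else "stop"
```

So `next_state("plan", "retrieve")` follows a legal edge, while a suggestion like `"wire_money"` from `"plan"` falls through to a safe stop because that edge was never defined.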
Interview sound bite
Agents are state machines with stochastic transitions: define state, legal actions (tools), stop conditions, and observability before you chase model tweaks.
Quick interview drills
1. When is an agent justified?
- Use one when the path depends on intermediate observations. Do not use one for fixed extract-transform-answer flows.
2. What is "excessive agency"?
- Giving the model too much functionality, permission, or autonomy, so a malformed or injected instruction can cause real damage.
3. What makes an agent production-grade?
- Typed tools, least privilege, step budgets, human approval for irreversible actions, durable checkpoints, trace redaction, and regression evaluations.
4. Where should safety checks live?
- In prompts, schemas, tool gateways, policy engines, runtime budgets, and deployment gates. Prompt-only safety is not enough.
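Drills 3 and 4 both point at checks that live outside the prompt. One such layer, sketched with invented names (`GRANTED`, `IRREVERSIBLE`, and `gateway` are assumptions for this example): a tool gateway that enforces least privilege per agent and routes irreversible actions through explicit human approval.

```python
# Illustrative tool gateway: checks run outside the model and the prompt.
IRREVERSIBLE = {"send_email", "delete_record", "issue_refund"}
GRANTED = {"search_docs", "issue_refund"}   # this agent's permission scope

def gateway(tool: str, approved_by_human: bool = False) -> str:
    if tool not in GRANTED:
        return "denied"              # least privilege: tool not in scope
    if tool in IRREVERSIBLE and not approved_by_human:
        return "pending_approval"    # human-in-the-loop for irreversible acts
    return "execute"
```

Prompt-only safety fails open when an injected instruction slips through; a gateway like this fails closed, which is the property the drills are probing for.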