THN Interview Prep

Agentic fundamentals

An agent is a controlled loop: given state, policy, and typed tools, it chooses a next action until a termination condition is met. A production agent is not "an LLM with freedom"; it is a workflow where the model can suggest transitions and the orchestrator enforces them.

Pair with /security for injection and confused-deputy risks, and with Agentic production for rollout practice.



Core concepts

| Term | Meaning |
| --- | --- |
| Agent | Policy loop coupling model reasoning with explicit actions. |
| Tool | A bounded capability exposed by schema, description, authZ, rate limit, and server-side validation. |
| State | What persists across steps: messages, task plan, retrieved evidence, tool results, approvals, counters, tenant context. |
| Observation | The trusted result returned from retrieval or a tool, appended back into state. |
| Transition | Legal movement from one state to another: plan -> retrieve, retrieve -> decide, decide -> final. |
| Termination | Clear stop: final answer, refusal, escalation, step cap, token cap, timeout, or human handoff. |
| Guardrail | A check before or after model/tool execution: policy, schema, content, PII, spend, or risk threshold. |
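
The Tool and Guardrail rows can be made concrete with a schema check that runs before any implementation does. This is a hypothetical sketch: `TOOL_SCHEMAS`, `search_orders`, and `validate_call` are illustrative names, not a real framework API.

```python
# A tool is exposed "by schema": the registry checks argument names and
# types before the implementation ever runs, acting as a pre-execution
# guardrail.

TOOL_SCHEMAS = {
    "search_orders": {"customer_id": str, "limit": int},
}

def validate_call(name, args):
    """Return None if the call is legal, else a rejection reason."""
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        return f"unknown tool {name!r}"
    if set(args) != set(schema):
        return "argument names do not match schema"
    for key, expected in schema.items():
        if not isinstance(args[key], expected):
            return f"{key} must be {expected.__name__}"
    return None
```

The rejection reason goes back into state as an observation rather than crashing the loop, so the model can correct itself on the next step.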

Process — one-shot vs agentic


Agents pay a latency and variability toll; use them when multi-hop evidence gathering or side-effecting actions outweigh that cost.


Agent state machine


Interview explanation: the important word is legal. The model does not get to move from Decide to ToolRun directly; the orchestrator must pass schema, identity, tenant, rate, and risk checks first.
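
The "legal transitions" idea reduces to an edge set the orchestrator consults before moving. The states and edges below mirror the plan -> retrieve -> decide -> final flow described above; the names are illustrative.

```python
# The orchestrator, not the model, decides whether a proposed move is
# allowed: the edge must exist AND the guardrail checks must have passed.

LEGAL = {
    "plan": {"retrieve"},
    "retrieve": {"decide"},
    "decide": {"tool_run", "final"},
    "tool_run": {"decide"},
}

def next_state(current, proposed, checks_passed=True):
    """Advance only along a legal edge with passing checks; else stay put."""
    if proposed not in LEGAL.get(current, set()) or not checks_passed:
        return current      # illegal move: remain, let the orchestrator recover
    return proposed
```

This is why "Decide to ToolRun" is never a direct model decision: even a legal edge is blocked when schema, identity, tenant, rate, or risk checks fail.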


Observe → decide → act (detail)


Interview tip: stress validation before execution. The orchestrator rejects malformed IDs, over-broad queries, and rate-limit violations before any side effects run.
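
A pre-execution guard for those three rejection classes might look like the sketch below. All of it is illustrative under assumed conventions: the `TKT-\d+` ID format, the two-word query heuristic, and the `RateLimiter` class are invented for this example.

```python
import re
import time

# Sliding-window rate limiter: at most max_calls per window_s seconds.
class RateLimiter:
    def __init__(self, max_calls, window_s):
        self.max_calls, self.window_s, self.calls = max_calls, window_s, []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

def precheck(ticket_id, query, limiter, now=None):
    """Reject malformed IDs, over-broad queries, and rate abuse
    before any side effect runs."""
    if not re.fullmatch(r"TKT-\d+", ticket_id):
        return "reject: malformed id"
    if len(query.split()) < 2:          # crude over-broad heuristic
        return "reject: over-broad query"
    if not limiter.allow(now):
        return "reject: rate limited"
    return "ok"
```

Note the ordering: cheap, side-effect-free checks run first, and the limiter only charges a slot for calls that survive them.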


Best-practice checklist

| Layer | What good looks like | Common mistake |
| --- | --- | --- |
| Prompt | Clear role, domain boundaries, refusal rules, tool-use rules. | Treating the prompt as the only control. |
| Tools | Narrow schema, least privilege, idempotency key, tenant-aware implementation. | One broad tool like execute_sql(query) or run_command(cmd). |
| State | Minimal, typed, redacted, checkpointed when long-running. | Dumping full chat history and secrets into every model call. |
| Retrieval | ACL-filter before ranking, cite source ids, handle stale or missing evidence. | Letting retrieved text override system policy. |
| Loop control | Step, token, cost, time, and duplicate-action caps. | Infinite retries with "try harder" prompts. |
| Human review | Required for destructive, financial, legal, security, or external-message actions. | Letting the model self-approve high-risk work. |
| Evaluation | Test tool choices, args, refusal paths, citations, and recovery behavior. | Only testing final prose quality. |
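
The "Loop control" row is the cheapest layer to implement and the most common one to forget. A minimal sketch, with illustrative budget values (the function name and defaults are assumptions, not a known API):

```python
# Check step, cost, and duplicate-action caps before each loop iteration.
def loop_guard(steps, cost_usd, recent_actions, *,
               max_steps=10, max_cost=0.50, max_repeats=2):
    if steps >= max_steps:
        return "stop: step cap"
    if cost_usd >= max_cost:
        return "stop: cost cap"
    # Duplicate-action cap: the same action repeated too often is a loop smell.
    if recent_actions and recent_actions.count(recent_actions[-1]) > max_repeats:
        return "stop: duplicate-action cap"
    return "continue"
```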

When not to build an agent

  • A single intent classification with explicit schema outputs.
  • Stateless summarization/transform with deterministic prompts.
  • The retrieval flow is fixed and known up front (“always embed → top-k → pack → answer”). That is chain territory; see LangChain for agents.
  • Rules are legally or financially strict enough that a deterministic workflow with human approval is simpler and safer.
  • The tool surface cannot be narrowed. If the only possible tool is "full admin access," the design is not ready.

Interview questions — fundamentals

1. What differentiates agentic workflows from chained prompts?

  • Stateful loop: the model may revisit tools, and branches depend on intermediate observations rather than a fixed DAG.

2. Name three termination strategies.

  • Explicit “final answer”; max step or token budget; human escalation gate; deterministic DONE classifier.

3. Who owns correctness when tools hallucinate IDs?

  • Tool layer validates IDs against ACLs/schema; orchestration never trusts raw strings for privileged operations.

4. Explain “least privilege” for tools.

  • Narrow args, impersonation handled inside the implementation, and per-tenant rate limits; not just instructions in prose.

5. Sketch the failure mode “runaway loop.”

  • Missing step cap → the model keeps retrying tool-call variants; mitigation: step counter + backoff + escalation to a human.
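
The counter + backoff + escalate mitigation fits in a few lines. A minimal sketch with assumed defaults (the function name and the 3-attempt cap are illustrative):

```python
# Retries back off exponentially; past the attempt cap, hand off to a human
# instead of looping forever.
def retry_with_escalation(attempts, base_delay_s=1.0, max_attempts=3):
    """Return the action for this attempt number (0-indexed)."""
    if attempts >= max_attempts:
        return ("escalate_to_human", None)
    return ("retry", base_delay_s * (2 ** attempts))
```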

6. Explain direct vs indirect prompt injection for agents.

  • Direct injection comes from the user prompt. Indirect injection comes from retrieved docs, web pages, emails, tickets, tool output, or another agent. Indirect is often worse because it enters through "trusted" context.

7. What is the safest way to expose database access?

  • Prefer purpose-built tools like get_invoice_status(invoice_id) with server-side tenant checks. Avoid raw SQL tools except tightly scoped, read-only, audited internal workflows.
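
A hypothetical implementation of that purpose-built tool, with the tenant check on the server side. The invoice store, ID format, and error choices are all invented for this sketch.

```python
import re

_INVOICES = {  # stand-in for a tenant-scoped table
    ("acme", "INV-1001"): "paid",
    ("globex", "INV-2002"): "overdue",
}

def get_invoice_status(tenant: str, invoice_id: str) -> str:
    # Validate before touching data: never trust a raw model string.
    if not re.fullmatch(r"INV-\d{4}", invoice_id):
        raise ValueError("malformed invoice_id")
    key = (tenant, invoice_id)
    if key not in _INVOICES:
        # Same error for "missing" and "another tenant's row" avoids enumeration.
        raise LookupError("invoice not found")
    return _INVOICES[key]
```

Because the tenant is bound by the orchestrator rather than passed through from model output, the model cannot widen its own scope by inventing arguments.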

8. How do you make an agent easy to debug?

  • Trace every model call, tool call, guardrail decision, state transition, token/cost metric, and final outcome with redaction.
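
A sketch of that tracing with redaction applied before storage. The record shape and the email-only redaction are simplifying assumptions; production tracing would cover more PII classes and ship to a real trace store.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    # Mask obvious secrets before the record is persisted.
    return EMAIL.sub("<email>", text)

def trace_event(log, step, kind, payload, tokens=0, cost_usd=0.0):
    """Append one structured, redacted record per transition."""
    log.append({
        "step": step,
        "kind": kind,            # model_call | tool_call | guardrail | transition
        "payload": redact(payload),
        "tokens": tokens,
        "cost_usd": cost_usd,
    })
    return log
```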

Next: LangChain for agents · LangGraph for agents · LangSmith.
