What is Agentic AI? A reading list from the trenches

From chatbots that answer to systems that act. The sources that taught me what an agent really is, and how we use them every day at LTC.

The term agentic AI has gone from research jargon to boardroom buzzword in under two years. Stripped of hype, it describes a fairly concrete shift: instead of asking a language model for an answer, you give it a goal, a set of tools, and the freedom to decide what to do next. The model writes code, calls APIs, queries databases, opens browser tabs, checks its own output, and loops until the goal is met or it gives up.

That is a small change in interface and an enormous change in what the software is for. A chatbot answers your question. An agent does your work.

The problem is that most of what gets written about agents is either too abstract (philosophy of AGI) or too tactical (which framework to import). The sources below are the middle: the ones that taught me what an agent actually is, when it makes sense to build one, and how to keep it from falling apart in production. They are ordered from "mental model" to "hands on keyboard".

1. Foundations: getting the mental model right

Anthropic, Building Effective Agents. The post and accompanying talks by Erik Schluntz and Barry Zhang are now the de facto reference. The contribution that matters: a sharp distinction between workflows (LLM steps stitched together by code) and agents (LLM decides the next step). Most teams should build workflows, not agents. The patterns it catalogues (routing, parallelization, orchestrator-workers, evaluator-optimizer) cover roughly 80% of real use cases.
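The distinction is easier to see in code than in prose. Here is a minimal, dependency-free sketch: `call_model` is a canned stand-in for a real LLM call, and the tool, prompts, and return strings are all invented for illustration. The point is structural: in the workflow, the code fixes the steps; in the agent, the model picks the next step inside a bounded loop.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical, returns canned answers)."""
    if prompt.startswith("classify:"):
        return "refund"
    if prompt.startswith("next action:"):
        # the "model" asks for a tool until the order id is resolved
        return "lookup_order" if "order?" in prompt else "done"
    return "drafted reply"

# --- Workflow: the sequence of steps is hard-coded in code ---------
def workflow(ticket: str) -> str:
    category = call_model(f"classify: {ticket}")               # step 1, always runs
    return call_model(f"draft a {category} reply: {ticket}")   # step 2, always runs

# --- Agent: the model decides the next step, in a loop -------------
TOOLS = {"lookup_order": lambda t: t.replace("order?", "order #123")}

def agent(ticket: str, max_steps: int = 5) -> str:
    state = ticket
    for _ in range(max_steps):            # hard cap: the loop must be bounded
        action = call_model(f"next action: {state}")
        if action == "done":
            return call_model(f"draft reply: {state}")
        state = TOOLS[action](state)      # model-chosen tool call
    return "escalate to a human"          # the give-up path
```

Notice that the workflow is fully predictable and the agent is not; that asymmetry is the whole reason the post tells most teams to start with workflows.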

Andrew Ng, What's next for AI agentic workflows (Sequoia AI Ascent). Forty minutes. Four patterns: reflection, tool use, planning, and multi-agent. Ng frames "agentic" as a spectrum rather than a binary, which is the right framing. If you only watch one talk, this is the one.

2. Hands-on courses

DeepLearning.AI, AI Agents in LangGraph (with Harrison Chase). Free, code-first, well-paced. LangGraph forces you to think of an agent as a state machine, which is the right way to think about it whether or not you ship LangGraph to production.

DeepLearning.AI, Multi AI Agent Systems with crewAI (with João Moura). Useful as a contrast. LangGraph is explicit and graph-shaped; CrewAI is role-shaped, with Researcher, Writer, and Reviewer roles. Doing both courses tells you which mental model fits your brain.

3. Technical YouTube channels in English

LangChain (official channel). Watch the long-form tutorials by Lance Martin. The deep agents and ambient agents series are the clearest explanations I have seen of agents that run on their own schedule rather than in response to a user prompt.

AI Jason. End-to-end builds with real targets: sales, research, scraping. Less framework-shopping, more "here is a working thing".

IndyDevDan (disler). Agentic coding with Claude Code: sub-agents, hooks, parallel agents. If your work is making the codebase write more of itself, this is the channel.

Cole Medin. Production flavour: n8n combined with LLMs, agentic RAG, and self-hosted setups.

Matthew Berman. News-driven, useful for keeping up with releases like the OpenAI Agents SDK, AutoGen, and new models, without subscribing to twenty newsletters.

Sam Witteveen. Dense, code-heavy, function calling and tool use done well.

4. Long-form conversations

Latent Space podcast (swyx and Alessio). The episodes with Harrison Chase, Erik Schluntz from Anthropic, and the Cognition team behind Devin are unusually good. Listen for the parts where the guest disagrees with the host.

Dwarkesh Patel. For the deeper "why" behind agent capabilities, the interviews with Sholto Douglas and Trenton Bricken from Anthropic on reasoning and tool use are worth the runtime.

Sequoia AI Ascent 2025 playlist. Short talks from founders actually shipping agents at scale. Filters the signal from the framework chatter.

5. In Spanish

DotCSV, Carlos Santana. High-quality popular science. Has covered agents several times and is good for sending to non-technical colleagues.

La Hora Maker, César García. Practical and self-hosted. Agents on Ollama and open-source tooling. The right channel if you do not want to depend on a single LLM vendor.

Xavier Mitjana. n8n plus agents, very accessible for non-developers. Good for showing what is possible without writing Python.

6. Official documentation worth treating as a tutorial

Anthropic Cookbook (GitHub). Notebooks for tool use, computer use, and orchestration. Reading the code is faster than watching most videos.

OpenAI Agents SDK docs and DevDay keynote on agents. Whether or not you build on OpenAI, the SDK primitives (handoffs, guardrails, tracing) are a useful taxonomy.
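Those primitives read well even stripped of the SDK. The sketch below is NOT the OpenAI Agents SDK API; it is plain Python with invented names and a keyword-based triage stub, showing only the shape of the ideas: a guardrail rejects input before any agent runs, and a handoff means the triage step passes the whole conversation to a specialist rather than answering itself.

```python
def guardrail(text: str) -> str:
    # input guardrail: reject before any agent does work
    if "password" in text.lower():
        raise ValueError("blocked by guardrail")
    return text

def billing_agent(text: str) -> str:
    return "billing: invoice resent"

def support_agent(text: str) -> str:
    return "support: ticket opened"

def triage(text: str):
    # handoff: triage picks which specialist takes over entirely
    return billing_agent if "invoice" in text else support_agent

def run(text: str) -> str:
    text = guardrail(text)
    return triage(text)(text)   # hand the conversation off wholesale
```

Tracing, the third primitive, is just the discipline of recording each of these hops; in a real system every `run` call would emit a span per agent and tool.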

A note from LTC

We use these tools every day. The post you are reading was drafted inside Claude Code, the agentic coding environment Anthropic ships as a CLI. Our voice receptionist Maria runs on LiveKit, Claude, ElevenLabs, and a small set of custom Odoo tools; she answers our phone in Spanish and registers each call as an opportunity in the CRM, with the recording attached. Our internal co-pilot fer-pilot logs every meaningful work session as a task in Odoo and pastes the full terminal trace into the chatter, so we can replay any decision months later. None of this is hypothetical. It is the same software stack the sources above describe, applied at the scale of one small company.

If you want to see what an agent actually does in a small business, call us. The agent that picks up is one of them.

Closing comment

If I had to compress all of this into one paragraph of advice for someone starting today: read the Anthropic post twice, watch Andrew Ng's Sequoia talk once, build something small in LangGraph end-to-end, and then stop reading and start measuring. The hard problems in agentic AI are not which framework you pick. They are evaluations, observability, and the discipline to keep the agent's job small enough that it can actually finish it. None of the sources above can hand you those skills. They can only stop you from reinventing the patterns that already exist.
