AI Learning Digest

Daily curated insights from Twitter/X about AI, machine learning, and developer tools

The Rise of Agentic RAG: Why 2025 Is the Year AI Learns to Think, Not Just Retrieve

The Evolution Beyond RAG

The most striking theme emerging today is the death knell being sounded for traditional RAG (Retrieval-Augmented Generation) pipelines. As Connor Davis boldly declared:

"RAG is dead. Long live Agentic RAG. 2025 isn't about retrieval or generation anymore—it's about decision-making systems that can think, route, reflect, and correct themselves."

This sentiment is echoed by Victoria Slocum, who has been digging into the evolution of RAG systems:

"The progression from RAG → Agentic RAG → Agent Memory isn't about adding [complexity]—it's about systems that can actually learn and adapt."
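The progression Slocum describes can be pictured as a control loop: instead of one fixed retrieve-then-generate pass, an agentic system decides what to fetch, drafts, self-checks, and retries. A minimal, hypothetical sketch, where the `retrieve`, `generate`, and `grade` callables stand in for real components:

```python
def agentic_answer(question, retrieve, generate, grade, max_rounds=3):
    """Toy agentic-RAG loop: retrieve, draft, self-check, and retry,
    rather than a single fixed retrieve-then-generate pass.
    `retrieve`, `generate`, and `grade` are caller-supplied stand-ins."""
    context, answer = [], ""
    for _ in range(max_rounds):
        context.extend(retrieve(question, context))  # decide what else to fetch
        answer = generate(question, context)         # draft an answer
        if grade(question, answer):                  # reflect: good enough?
            break                                    # otherwise loop and correct
    return answer
```

The point is architectural, not the stubs: the system routes and reflects instead of executing one pass.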

The Missing Piece: Learning from Experience

Avi Chawla highlighted a key insight from Karpathy's recent podcast that crystallizes what's missing from current agent architectures:

"Tools help Agents connect to the external world, and memory helps them remember, but they still can't learn from experience."

This is the frontier everyone's racing toward. We have agents that can use tools, agents that can remember context, but truly autonomous systems require something more: the ability to improve through iteration without explicit retraining.
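One way to picture "improving through iteration without retraining" is an episodic score table the agent consults before acting. This is a toy illustration, not Karpathy's proposal; the class and method names are invented:

```python
from collections import defaultdict

class ExperienceMemory:
    """Toy sketch: record which strategy worked for each task type, then
    prefer the best-scoring one next time. The agent improves through
    iteration while the underlying model weights never change."""

    def __init__(self):
        self._scores = defaultdict(lambda: defaultdict(list))

    def record(self, task_type: str, strategy: str, success: bool) -> None:
        self._scores[task_type][strategy].append(1.0 if success else 0.0)

    def best_strategy(self, task_type: str, default: str) -> str:
        options = self._scores.get(task_type)
        if not options:
            return default
        return max(options, key=lambda s: sum(options[s]) / len(options[s]))
```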

The Practitioner's Perspective: Guardrails Matter

While the theoretical discussions continue, Santiago shared a practical win from the trenches:

"I got an additional 8%+ in my multi-agent application with guardrails. Basically, everything that comes out of an LLM is now being checked by a deterministic piece of code. Even things that aren't likely to break will eventually break."

This highlights a crucial reality of production AI systems: agentic architectures require robust safety nets. The more autonomous the system, the more important deterministic validation becomes.
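Santiago's "deterministic piece of code" can be as simple as a validator that rejects any LLM output failing hard checks before it reaches the rest of the pipeline. A hypothetical sketch; the action set and schema are made up:

```python
import json

ALLOWED_ACTIONS = {"search", "answer", "escalate"}  # hypothetical action set

def guardrail_check(llm_output: str) -> dict:
    """Deterministically validate raw LLM output before any agent acts on it.
    Raises ValueError instead of letting malformed output flow downstream."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {data.get('action')!r}")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
        raise ValueError(f"confidence out of range: {conf!r}")
    return data
```

The checks are boring on purpose: deterministic code never hallucinates, which is exactly why it belongs between the model and everything downstream.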

Context Engineering: The Real Meta

Spencer Baggins dropped an interesting perspective on what separates hobbyists from professionals:

"I just found out why OpenAI, Anthropic, and Google engineers never worry about prompts. They use context stacks. Context engineering is the real meta."

This aligns with the broader theme: the future isn't about clever prompts, it's about architectural patterns—how you structure context, route decisions, and orchestrate multiple agents.
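"Context stack" is not a formal API; one plausible reading is layered context assembled under a budget, with low-priority layers dropped first. A made-up sketch, using character counts as a stand-in for token counts:

```python
from dataclasses import dataclass

@dataclass
class ContextLayer:
    name: str
    content: str
    priority: int  # lower = more important, kept first when trimming

def build_context(layers: list[ContextLayer], budget_chars: int) -> str:
    """Assemble layered context (system rules, tools, retrieved docs,
    chat history) under a budget, dropping low-priority layers first."""
    kept, used = [], 0
    for layer in sorted(layers, key=lambda l: l.priority):
        if used + len(layer.content) <= budget_chars:
            kept.append(layer)
            used += len(layer.content)
    kept.sort(key=lambda l: layers.index(l))  # restore authored order
    return "\n\n".join(f"## {l.name}\n{l.content}" for l in kept)
```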

Resources Making Waves

Hayes shared that a senior Google engineer released a 424-page document on Agentic Design Patterns covering:

  • Prompt chaining and routing
  • MCP & multi-agent coordination
  • Guardrails, reasoning, and planning
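
Two of those patterns are simple enough to show in miniature: a chain passes each step's output to the next, and a router picks a handler from the input. A hypothetical sketch, with invented handler names and keywords:

```python
def chain(steps, initial):
    """Prompt chaining in miniature: each step consumes the previous output."""
    out = initial
    for step in steps:
        out = step(out)
    return out

def route(query: str) -> str:
    """Keyword routing in miniature: pick a handler per query type."""
    q = query.lower()
    if any(w in q for w in ("error", "bug", "traceback")):
        return "debugger"
    if q.endswith("?"):
        return "qa"
    return "chat"
```

In a real system each step would be an LLM call and the router might itself be a small model, but the control flow is the same.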

Pau Labarta Bajo also pointed developers toward structured output libraries—a foundational building block for reliable agent systems.
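The specific library isn't named in the summary, but the underlying idea is to parse model output into a typed record and fail loudly on a mismatch. A stdlib-only sketch; the `ToolCall` schema is invented for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    query: str

def parse_tool_call(raw: str) -> ToolCall:
    """Turn an LLM's JSON reply into a typed record, rejecting anything
    that doesn't match the expected shape."""
    data = json.loads(raw)
    for key in ("tool", "query"):
        if not isinstance(data.get(key), str):
            raise ValueError(f"expected string field {key!r}, got {data.get(key)!r}")
    return ToolCall(tool=data["tool"], query=data["query"])
```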

The Local AI Movement

Amidst all the cloud-focused agent discussion, Ahmad made a contrarian prediction:

"Opensource AI will win. AGI will run local, not on someone else's server. The real ones are already learning how it works."

Whether or not AGI runs locally, the sentiment captures something real: understanding these systems at a deep level—not just API calls—may prove to be the differentiating skill.

Analysis

Today's posts reveal an industry at an inflection point. The conversation has moved past "can AI retrieve relevant information?" to "can AI reason about what to do with that information?" The shift from RAG to Agentic RAG to full agent memory systems represents a fundamental change in how we architect AI applications.

The key challenges emerging:

1. Learning from experience without full retraining

2. Reliable guardrails for autonomous decision-making

3. Context engineering as a discipline unto itself

4. Multi-agent coordination at scale

The builders who master these patterns won't just be using AI—they'll be creating systems that can genuinely think for themselves.

Source Posts

Hayes @hayesdev_ ·
A senior Google engineer just dropped a 424-page doc called Agentic Design Patterns. Every chapter is code-backed and covers the frontier of AI systems: → Prompt chaining, routing, memory → MCP & multi-agent coordination → Guardrails, reasoning, planning This isn’t a blog… https://t.co/N0cYAyQapz
Santiago @svpino ·
I got an additional 8%+ in my multi-agent application with guardrails. Basically, everything that comes out of an LLM is now being checked by a deterministic piece of code. Even things that aren't likely to break will eventually break. The guardrail is there to stop that from… https://t.co/Grq7Xvkd7Q
Jafar Najafov @JafarNajafov ·
This prompt will make you rich: ----------------------------------- You are my personal wealth acceleration strategist, obsessed with turning attention into income at scale. Here's who you are: - You operate with razor-sharp market intelligence and spot money-making…
Connor Davis @connordavis_ai ·
🚨 RAG is dead. Long live Agentic RAG. 2025 isn’t about retrieval or generation anymore it’s about decision-making systems that can think, route, reflect, and correct themselves. Every serious AI builder I know is moving from basic RAG pipelines to Agentic RAG architectures… https://t.co/Omw9zkNBDY
Spencer Baggins @bigaiguy ·
Holy shit… I just found out why OpenAI, Anthropic, and Google engineers never worry about prompts. They use context stacks. Context engineering is the real meta. It’s what separates AI users from AI builders. Here's how to write prompts to get best results from LLMs:
Pyrate @CEOLandshark ·
I don't think people realize this but just like porn traffic (which is a serious % of the whole bandwidth of the internet) - roleplaying (adult or not) stuff are a huge % of what people use AI for. So, AI has a ton of roleplay-style data and since it's conversational, it gets…
Avi Chawla @_avichawla ·
First tools, then memory... ...and now there's another key layer for Agents. Karpathy talked about it in his recent podcast. Tools help Agents connect to the external world, and memory helps them remember, but they still can't learn from experience. He said that one key gap… https://t.co/gWg5y80UxI
Victoria Slocum @victorialslocum ·
Why do RAG systems feel like they hit a ceiling? I've been diving into @helloiamleonie's latest article on agent memory, and it provided so much clarity into the current evolution of RAG systems. The progression from RAG → Agentic RAG → Agent Memory isn't about adding… https://t.co/ecmKZqhCcK
Pau Labarta Bajo @paulabartabajo_ ·
Need to get structured output from a Language Model? Here's the best library I know for that ↓ https://t.co/RQCgQO9wPF
Ahmad @TheAhmadOsman ·
calling it now, bookmark this for later: - opensource AI will win - AGI will run local, not on someone else’s server - the real ones are already learning how it works > be early > Buy a GPU > get ur hands dirty > learn how it works > you’ll thank yourself it’s gonna be great