The Three Layers of AI Engineering: From Infrastructure to Agents
The Architecture of AI Engineering
A clarifying thread from @lochan_twt breaks AI engineering down into three distinct layers that aspiring practitioners should understand:
"1) Application layer: building ai products, fullstack + agents, agentic stuff. 2) Model layer: training and finetuning models, LLMs, CV. 3) Infrastructure layer: deploying..."
This framework helps newcomers navigate what has become an increasingly fragmented field. The application layer—where most developers will operate—focuses on integrating AI into products, while the model and infrastructure layers require deeper specialization.
Agentic Infrastructure Evolves
SQLite as the Agent Filesystem
Turso's @glcst makes a bold architectural claim:
"There is no better filesystem abstraction for the agentic era than SQLite. That is why we built agentfs: an entire filesystem backed by a sqlite file that can be moved anywhere."
This approach addresses a fundamental challenge: agents need portable, self-contained state management. SQLite's single-file database model makes agent environments trivially movable and reproducible.
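To make the idea concrete, here is a minimal sketch of a SQLite-backed file store. This is an assumed design for illustration, not Turso's actual agentfs schema or API: a single `files` table maps paths to blobs, and the whole "filesystem" lives in one movable `.db` file.

```python
import sqlite3
import time

class AgentFS:
    """Toy SQLite-backed file store (illustrative; not Turso's agentfs)."""

    def __init__(self, db_path=":memory:"):
        # One SQLite file holds the entire filesystem state.
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS files ("
            " path TEXT PRIMARY KEY, data BLOB, mtime REAL)"
        )

    def write(self, path, data: bytes):
        # Upsert: create the file or overwrite its contents.
        self.db.execute(
            "INSERT INTO files (path, data, mtime) VALUES (?, ?, ?) "
            "ON CONFLICT(path) DO UPDATE SET data=excluded.data, mtime=excluded.mtime",
            (path, data, time.time()),
        )
        self.db.commit()

    def read(self, path) -> bytes:
        row = self.db.execute(
            "SELECT data FROM files WHERE path = ?", (path,)
        ).fetchone()
        if row is None:
            raise FileNotFoundError(path)
        return row[0]

    def ls(self, prefix=""):
        # Prefix match stands in for directory listing.
        return [r[0] for r in self.db.execute(
            "SELECT path FROM files WHERE path LIKE ? ORDER BY path",
            (prefix + "%",))]
```

Because all state sits in one file, copying the database to another machine reproduces the agent's environment exactly, which is the portability property the tweet is pointing at.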
MCP-Use: Democratizing Agent Tool Access
@DailyDoseOfDS_ highlights MCP-Use, an open-source solution for connecting any LLM to any MCP server:
"Build custom agents that have tool access, without using closed source or application clients. Build 100% local MCP clients."
This represents a significant shift toward local-first, privacy-preserving agent development.
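Under the hood, MCP is JSON-RPC 2.0: a client discovers tools with `tools/list` and invokes one with `tools/call` (method names per the MCP specification). The sketch below only builds those request messages; the tool name and arguments are hypothetical, and a real client would send these over stdio or HTTP to an MCP server.

```python
import json

def mcp_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request string as used by the Model Context Protocol."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover tools, then call one (tool name/arguments are made up for illustration).
list_req = mcp_request(1, "tools/list")
call_req = mcp_request(2, "tools/call",
                       {"name": "read_file", "arguments": {"path": "README.md"}})
```

Libraries like MCP-Use wrap this handshake so any LLM can drive any server, but the wire format stays this simple.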
Agent0: Self-Evolving Agents Without Human Data
Stanford's Agent0 framework caught attention from @rryssf_:
"They just built an AI agent framework that evolves from zero data—no human labels, no curated tasks, no demonstrations—and it somehow gets better than every existing self-play method."
The implications for autonomous agent training are substantial—reducing dependence on expensive human-curated datasets.
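The core self-play intuition can be caricatured in a few lines. This toy is not Agent0's actual algorithm, just an illustration of the frontier-curriculum idea: a proposer keeps generating tasks near the solver's current ability, and only successes on hard-enough tasks move the solver forward, with no human-labeled data anywhere in the loop.

```python
import random

def toy_self_play(rounds=2000, seed=0):
    """Toy zero-data self-play loop (illustrative only; not Agent0)."""
    rng = random.Random(seed)
    skill = 1.0
    for _ in range(rounds):
        # Proposer: sample a task difficulty near the solver's frontier,
        # aiming for roughly 50% solve rate.
        difficulty = skill + rng.uniform(-0.5, 0.5)
        # Solver: success probability falls off as difficulty exceeds skill.
        solved = rng.random() < 1.0 / (1.0 + 2.0 ** (difficulty - skill))
        if solved and difficulty > skill - 0.1:
            skill += 0.01  # learn only from successes on frontier tasks
    return skill
```

Even this caricature shows why the approach is appealing: the curriculum is a byproduct of the loop itself rather than a curated dataset.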
Claude Code and Opus 4.5 Dominate Developer Mindshare
New Capabilities and Plugins
@kieranklaassen reports breakthrough results with Opus 4.5:
"Just shipped v2 of my compounding engineering plugin... This wouldn't have worked a week ago. Previous models would derail after the second parallel..."
The improvement in handling complex, multi-step tasks marks a significant capability jump.
@boringmarketer shares the frontend-design skill installation:
"1) /plugin marketplace add anthropics/claude-code 2) /plugin install frontend-design@claude-code-plugins"
Practical Tips
@donvito shares a useful Claude Code configuration for monitoring usage via statusline in ~/.claude/settings.json. Meanwhile, @RayFernando1337 points to what he calls "the best Claude Skills breakdown I've seen."
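Claude Code's settings file supports a `statusLine` entry that shells out to a command; the tweet's exact script isn't shown, so the script path below is a hypothetical placeholder:

```json
{
  "statusLine": {
    "type": "command",
    "command": "~/.claude/statusline.sh"
  }
}
```

Whatever the script prints becomes the status line, which is what makes usage monitoring possible from `~/.claude/settings.json`.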
Model Updates and Optimizations
Gemini 3 Pro System Instructions
@_philschmid from Google shares performance improvements:
"System Instructions for Gemini 3 Pro that improved performance on several agentic benchmarks by around 5%. We collaborated with the @GoogleDeepMind post-training research team to include some best practices in our docs."
A 5% improvement on agentic benchmarks from system prompt optimization alone demonstrates how much performance remains on the table through careful prompt engineering.
Gemini 3.0's Multimodal Capabilities
@0xROAS enumerates practical use cases:
"With gemini 3.0 you can literally: analyze videos, drop youtube links and extract full scripts, upload competitor ads and reverse engineer the psychology..."
From Prompt Engineering to Context Engineering
@svpino highlights a conceptual shift in how we think about LLM interaction:
"@karpathy said: 'Context engineering is the delicate art and science of filling the context window with just the right information for the next step.' This book will help you stop thinking about 'prompt engineering' and start focusing on 'context...'"
This reframing acknowledges that modern LLM work is less about crafting the perfect prompt and more about curating the right context window content.
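In practice, context engineering often reduces to a budgeted packing problem: given more candidate material than fits, select what goes into the window. A minimal greedy sketch, with a deliberately crude whitespace token counter standing in for the model's real tokenizer:

```python
def assemble_context(chunks, budget_tokens,
                     count_tokens=lambda s: len(s.split())):
    """Greedy context packing: highest-priority chunks that fit the budget.

    `chunks` is a list of (priority, text) pairs. The default token counter
    is a whitespace approximation; a real system would use the model's
    tokenizer and likely smarter selection (retrieval scores, recency, etc.).
    """
    picked, used = [], 0
    for priority, text in sorted(chunks, key=lambda c: -c[0]):
        cost = count_tokens(text)
        if used + cost <= budget_tokens:
            picked.append(text)
            used += cost
    return "\n\n".join(picked)
```

The point of the reframing is that decisions like these priorities and budgets, not the wording of a single prompt, dominate output quality.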
Niche Applications
Multi-Agent High-Frequency Trading
@tom_doerr shares research on multi-agent LLMs for HFT—an intersection that raises both technical and regulatory questions about AI in financial markets.
AI Coding Agent Workshop
Also from @tom_doerr: a workshop for building AI coding agents with Claude, signaling growing educational infrastructure around these tools.
Key Takeaways
1. AI engineering has matured into distinct layers—understanding where you want to operate (application, model, or infrastructure) helps focus learning efforts
2. SQLite is emerging as the state management solution for agents—portable, simple, battle-tested
3. Opus 4.5 shows meaningful capability gains in handling complex multi-step tasks that previous models couldn't sustain
4. The shift from "prompt engineering" to "context engineering" reflects a maturing understanding of how to work with LLMs effectively
5. Open-source tooling (MCP-Use, Agent0) continues to democratize what was recently cutting-edge research