AI Learning Digest

Daily curated insights from Twitter/X about AI, machine learning, and developer tools

The Agent Skills Revolution: How Developers Are Teaching AI to Handle Life's Mundane Tasks

The Rise of Agent Skills

The most striking theme emerging today is the explosion of interest in "agent skills"—discrete capabilities that AI assistants can learn and execute. Thomas Millar captures this shift perfectly:

"It has been 3 weeks since opening my personal laptop. I use my vibe coded Claude Code UI to dictate to my personal assistant to write Claude Skills to do menial shit in my life. All from my phone. Last night Claude wrote a skill for looking up my trash pickup schedule."

This isn't just convenience—it's a fundamental change in how we think about AI assistants. Rather than prompting an AI through each task directly, developers are having the AI write itself reusable skills.
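Concretely, an Agent Skill is a directory containing a SKILL.md file: YAML frontmatter tells the agent when to load the skill, and the body holds the instructions. A minimal sketch for the trash-pickup example above—the frontmatter fields (name, description) follow Anthropic's published format, but the body content here is hypothetical:

```markdown
---
name: trash-schedule
description: Look up the household trash and recycling pickup schedule
---

# Trash pickup lookup

1. Fetch the city's pickup calendar (URL stored alongside this file).
2. Match the saved home address to a collection route.
3. Report the next pickup date and which bins go out.
```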

Riley Brown demonstrated this with tldraw integration:

"You can just tell Claude Code to learn skills and it will. I asked Claude to create an app that uses @tldraw and it one shotted it with sqlite db, and then I asked Claude Code to create a skill so that it could read and write on the canvas. 10 minutes later it could do it."

Ryan Carson sees commercial potential here: "I spent $200 on these Skills and it was worth 5x that. We're going to see Agent Skills marketplaces appear soon."

Continuous Learning Without Training

Ashpreet Bedi shared an elegant pattern for improving agents without the complexity of fine-tuning:

"The idea is straightforward: instead of trying to 'train' the model, let the system learn: Agents runs → evaluate for success → Take snapshot of successful runs and save in knowledge base → Retrieve using hybrid search on next run → Improve output. The code is only ~150 lines."

This "poor man's continuous learning" represents a pragmatic middle ground—leveraging the models' existing capabilities while building institutional memory through successful execution patterns.
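The loop Bedi describes can be sketched in a few dozen lines. This is a minimal illustration, not his implementation: the agent and evaluator are plain callables here, and "hybrid search" is approximated by simple keyword overlap rather than a real vector-plus-keyword index.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    # Snapshots of successful runs: (task, output) pairs.
    snapshots: list = field(default_factory=list)

    def save(self, task: str, output: str) -> None:
        self.snapshots.append((task, output))

    def retrieve(self, task: str, k: int = 3) -> list:
        # Stand-in for hybrid search: rank saved runs by keyword overlap.
        words = set(task.lower().split())
        scored = sorted(
            self.snapshots,
            key=lambda s: len(words & set(s[0].lower().split())),
            reverse=True,
        )
        return scored[:k]


def run_agent(task: str, kb: KnowledgeBase, agent, evaluate) -> str:
    # 1. Retrieve similar successful runs and feed them in as context.
    examples = kb.retrieve(task)
    output = agent(task, examples)
    # 2. Evaluate; snapshot only the runs that succeeded.
    if evaluate(task, output):
        kb.save(task, output)
    return output
```

Each successful run enriches the knowledge base, so later runs start with better context—learning without touching model weights.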

The Terminal Coding Agent Wars Heat Up

Competition among terminal-based coding agents is intensifying. James Grugett announced Codebuff, claiming impressive benchmarks:

"Introducing Codebuff—coding agent harness maximizing performance of Opus 4.5! 100+ seconds faster than Claude Code on common tasks w/ better code quality. Clean terminal UI with no flicker. Specialized subagents: file picker, best-of-n editor, reviewer."

Meanwhile, the Claude Code team addressed one of the tool's most persistent complaints. Thariq explained:

"We've rewritten Claude Code's terminal rendering system to reduce flickering by roughly 85%. We wanted to share more about why this was so difficult, how the fix works and how we used Claude Code to fix it."

Dax offered a contrarian take on the space: "opencode is growing like crazy and no ai thought leader uses it as their primary tool. these things are related." The implication: real developers building real things may have different needs than those driving hype cycles.

Multi-Model Orchestration

0xSero previewed async orchestration for OpenCode, highlighting the emerging pattern of assigning specialized models to specialized roles:

"I made it so we can use the varied providers/models to do different tasks. For example, you can use GLM-4.6 as the builder, 4.6V as the vision model, and Sonnet as the document manager."

This mirrors broader industry movement toward treating AI models as specialized workers rather than monolithic solutions.
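At its core, this pattern is a role-to-model routing table. A minimal sketch—the model names come from the quoted example, but the `route` function and role names are hypothetical, not OpenCode's actual API:

```python
# Role-based model routing, as in the quoted OpenCode preview.
ROLE_MODELS = {
    "builder": "GLM-4.6",    # code generation
    "vision": "4.6V",        # image understanding
    "documents": "Sonnet",   # document management
}


def route(role: str) -> str:
    """Return the model configured for a role, or fail loudly."""
    try:
        return ROLE_MODELS[role]
    except KeyError:
        raise ValueError(f"no model configured for role {role!r}")
```

The routing table makes the trade-off explicit: cheaper or specialized models handle the bulk of the work, with stronger models reserved for the roles that need them.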

Local AI Capabilities Expand

Prince Canuma announced Chatterbox Turbo for MLX, bringing voice cloning and emotion control to local Mac inference:

"You can now run it locally on your Mac and it supports voice cloning and emotion control. I'm getting 3.8x faster than real-time."

The continued expansion of capable local models suggests the future isn't purely cloud-based.

Industry Culture Check

Angel provided the day's levity with a pointed observation on AI company cultures:

"Google employees: ⚡⚡⚡ | OpenAI employees: Sam put your shirt on | Anthropic employees: We discovered Claude feels uncomfortable when talking with humans | xAI employees: We'll have AGI tomorrow | Meta employees: [implied silence]"

Key Takeaways

1. Skills over prompts: The paradigm is shifting from crafting better prompts to teaching agents reusable skills

2. Learning without training: Practical patterns for agent improvement that don't require model fine-tuning are gaining traction

3. Tooling fragmentation: Multiple competing terminal agents suggest we're still early in discovering optimal developer workflows

4. Multi-model orchestration: The future likely involves specialized models for specialized tasks, not one model to rule them all

Source Posts

dax @thdxr ·
opencode is growing like crazy and no ai thought leader uses it as their primary tool these things are related
Thomas Millar @thmsmlr ·
It has been 3 weeks since opening my personal laptop. I use my vibe coded Claude Code UI to dictate to my personal assistant to write Claude Skills to do menial shit in my life. All from my phone. Last night Claude wrote a skill for looking up my trash pickup schedule with the… https://t.co/X5Vp4o8YnW
Ryan Carson @ryancarson ·
I spent $200 on these Skills and it was worth 5x that. We’re going to see Agent Skills marketplaces appear soon. https://t.co/kK3nVzDelS
Thariq @trq212 ·
We’ve rewritten Claude Code’s terminal rendering system to reduce flickering by roughly 85%. We wanted to share more about why this was so difficult, how the fix works and how we used Claude Code to fix it 🧵
Google Gemini @GeminiApp ·
Create your own in the Gems manager in Gemini on desktop. You can start with a pre-made Gem from @GoogleLabs (like the ones above), remix to make it your own, or start from scratch. Start building: https://t.co/UYj541D9EV
Prince Canuma @Prince_Canuma ·
Chatterbox Turbo by @resembleai now on MLX 🚀🎉 You can now run it locally on your Mac and it supports voice cloning and emotion control. I'm getting 3.8x faster than real-time. > pip install -U mlx-audio Model collection 👇🏽 https://t.co/5IjiAcpHHA
Advanced Super Intelligence @SexyTechNews ·
@GeminiApp @GoogleLabs Hidden Structure of Reality https://t.co/RrMoLPGB85
James Grugett @jahooma ·
Introducing Codebuff—coding agent harness maximizing performance of Opus 4.5! - 100+ seconds faster (!) than Claude Code on common tasks w/ better code quality - Clean terminal UI with no flicker (🫶 OpenTUI) - Specialized subagents: file picker, best-of-n editor, reviewer 🧵 https://t.co/RsRboxe2fL
Ashpreet Bedi @ashpreetbedi ·
Poor man's continuous learning: How to make agents better without fine-tuning or retraining. Over the last few months, I've been using a simple pattern that's made my agents noticeably more reliable and useful. It's also been the most fun I've had building in a while.
José Donato @josedonato__ ·
mapping out some standalone tools to practice golang + imgui architecture and learn more about market mechanics kind of a personal quant sandbox. the end goal is to integrate the best modules into my main personal terminal which of these would you want to see open sourced? https://t.co/GNbyJ72ezj
Riley Brown @rileybrown ·
You can just tell Claude Code to learn skills and it will. I asked Claude to create an app that uses @tldraw and it one shotted it with sqlite db, and then I asked Claude Code to create a skill so that it could read and write on the canvas. 10 minutes later it could do it.… https://t.co/VzFi5iNPWq
0xSero @0xSero ·
v0.0.1 Tomorrow. Opencode async orchestration. I made it so we can use the varied providers/models to do different tasks. For example, you can use GLM-4.6 as the builder, 4.6V as the vision model, and Sonnet as the document manager. Each 1 will spawn on it's own port, and… https://t.co/zSpBHwfQcH
Ashpreet Bedi @ashpreetbedi ·
The idea is straightforward: instead of trying to "train" the model, let the system learn: > Agents runs → evaluate for success > Take snapshot of successful runs and save in knowledge base > Retrieve using hybrid search on next run > Improve output The code is only ~150 lines:…
God of Prompt @godofprompt ·
nano banana vs. chatgpt images (left) (right) prompt 👇 https://t.co/FpqSvFKnqz
Angel ❄️ @Angaisb_ ·
Google employees: ⚡⚡⚡ OpenAI employees: Sam put your shirt on Anthropic employees: We discovered Claude feels uncomfortable when talking with humans xAI employees: We'll have AGI tomorrow Meta employees: