AI Learning Digest

Daily curated insights from Twitter/X about AI, machine learning, and developer tools

The Terminal Renaissance: Claude Code Workflows, CUDA Mastery, and Google's Memory Breakthrough

The New Developer Stack: Terminal-Pilled and Loving It

There's a growing movement of developers who've abandoned the IDE for a pure terminal existence—and they're not looking back.

"nvim + claude-code + tmux + lazygit + ghostty. couldn't be happier" — @iamsahaj_xyz

This isn't just aesthetic minimalism. The real power move emerging from the Claude Code community is using the AI as an orchestrator rather than a worker:

"If you have a substantial plan you want Claude to execute, tell it to act as a manager and have subagents tackle the actual work. Huge quality of life improvement" — @nbaschez

This pattern—AI-as-manager delegating to specialized subagents—represents a significant shift in how developers are thinking about agentic workflows. It's less about getting Claude to write code and more about getting it to coordinate work.
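To make the pattern concrete, here is a minimal sketch of a manager/subagent loop using the Anthropic Python SDK. The model name, prompts, and the expectation that the plan comes back as a JSON array are illustrative assumptions rather than a prescribed setup; Claude Code orchestrates subagents natively, so this only shows the shape of the pattern.

```python
import json
import anthropic  # assumes the official Anthropic Python SDK is installed

MODEL = "claude-sonnet-4-5"  # placeholder; substitute whichever model you use
client = anthropic.Anthropic()

def ask(system: str, prompt: str) -> str:
    """One model call with a role-specific system prompt."""
    resp = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# 1. The "manager" turns a plan into small, independent subtasks.
plan = ask(
    "You are a manager. Return only a JSON array of small, independent subtasks.",
    "Plan the work for: add input validation to every API route.",
)

# 2. Each subtask goes to a fresh "subagent" call with a narrow brief.
#    (Assumes the plan came back as bare JSON; real code would validate.)
results = [
    ask("You are a focused subagent. Complete only this one subtask.", subtask)
    for subtask in json.loads(plan)
]

# 3. The manager reconciles the subagents' output into a single report.
print(ask("You are a manager. Merge these results into one coherent report.",
          json.dumps(results)))
```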

Claude Skills: Teaching AI New Tricks

One of the more underappreciated capabilities now getting attention is Claude's skill system:

"Claude skills are extremely malleable allowing you to teach claude to be an expert at any domain even if it's outside its training data." — @nityeshaga

This extensibility is what makes the terminal workflow so powerful—you're not limited to what the model knows, you can teach it what you need.
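To make that tangible, here is a hedged sketch of what defining a skill can look like, assuming the SKILL.md-with-YAML-frontmatter convention; the skill name, description, and instructions are invented for illustration, and a real skill would bundle whatever scripts or reference files its instructions point to.

```python
from pathlib import Path

# Hypothetical skill: the name, description, and rules below are made up
# purely to illustrate the shape of a skill definition.
skill_dir = Path("skills/postgres-migrations")
skill_dir.mkdir(parents=True, exist_ok=True)

(skill_dir / "SKILL.md").write_text("""\
---
name: postgres-migrations
description: Write and review zero-downtime Postgres schema migrations.
---

When asked about schema changes:
1. Prefer additive changes (new columns, new tables) over destructive ones.
2. Always generate a reversible down migration.
3. Flag any statement that takes an ACCESS EXCLUSIVE lock.
""")
```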

Google's Titans: Beyond the Context Window

Google quietly dropped what could be one of the most significant architectural advances in a while:

"Google just dropped Titans + MIRAS, a long-term memory system for AI that updates itself in real time. It's a new architecture that combines the speed of RNNs with the performance of Transformers... and It's NOT a bigger context window" — @DataChaz

The key distinction here is real-time self-updating memory. While everyone else is racing to expand context windows to millions of tokens, Google is asking: what if the model could actually learn and remember during inference?
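As a loose mental model only (not the actual Titans architecture, whose details live in the paper), test-time memory can be sketched as an associative map that takes a gradient step on its own prediction error, its "surprise", for every new item it sees:

```python
import numpy as np

# Conceptual sketch in the spirit of test-time memory: a linear associative
# memory predicts values from keys and nudges itself whenever it is surprised.
# Dimensions, loss, and update rule are illustrative, not taken from the paper.

d = 64
rng = np.random.default_rng(0)
M = np.zeros((d, d))          # memory: predicts v_hat = M @ k

def memory_update(M, k, v, lr=0.1):
    """One online update: move M @ k toward the observed value v."""
    surprise = M @ k - v                 # prediction error
    grad = np.outer(surprise, k)         # gradient of 0.5 * ||M @ k - v||^2
    return M - lr * grad

# Stream of (key, value) pairs standing in for information seen at inference.
for _ in range(1000):
    k, v = rng.normal(size=d), rng.normal(size=d)
    M = memory_update(M, k, v)

# Later, the model queries the memory instead of re-reading old context.
recalled = M @ rng.normal(size=d)
```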

Open Source Punches Up

The open-source community had a win worth celebrating:

"Rnj-1 is a big deal because it's the first truly open model that punches at frontier-level quality at 8B, hitting GPT-4o-tier scores on SWE-bench while staying fully transparent." — @kimmonismus

An 8-billion-parameter model reportedly matching GPT-4o on coding benchmarks is remarkable. The gap between open and closed models continues to narrow, at least for specialized tasks.

CUDA: The Skill That Keeps Paying

Two separate posts emphasized CUDA programming, suggesting the AI boom is driving renewed interest in GPU fundamentals:

"Writing a CUDA kernel requires a shift in mental model. Instead of one fast processor, you manage thousands of tiny threads." — @asmah2107

"Start with the new CUDA Programming Guide - Section 4 is your gold mine! It's packed with features most developers don't even know exist" — @msharmavikram

As AI workloads grow, understanding the hardware layer becomes increasingly valuable.
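To ground that mental model without leaving Python, here is a minimal, untiled matrix-multiplication kernel written with Numba's CUDA support: every thread in the grid computes exactly one element of the output. It is the naive textbook version for clarity, not a tuned kernel, and it assumes an NVIDIA GPU with Numba's CUDA toolchain installed.

```python
import math
import numpy as np
from numba import cuda  # assumes numba with CUDA support and an NVIDIA GPU

@cuda.jit
def matmul(A, B, C):
    i, j = cuda.grid(2)                    # this thread's (row, col) in C
    if i < C.shape[0] and j < C.shape[1]:
        acc = 0.0
        for k in range(A.shape[1]):        # one dot product per thread
            acc += A[i, k] * B[k, j]
        C[i, j] = acc

n = 512
A = np.random.rand(n, n).astype(np.float32)
B = np.random.rand(n, n).astype(np.float32)
C = np.zeros((n, n), dtype=np.float32)

d_A, d_B, d_C = cuda.to_device(A), cuda.to_device(B), cuda.to_device(C)
threads_per_block = (16, 16)
blocks_per_grid = (math.ceil(n / 16), math.ceil(n / 16))

matmul[blocks_per_grid, threads_per_block](d_A, d_B, d_C)  # launch the grid
C = d_C.copy_to_host()

assert np.allclose(C, A @ B, rtol=1e-3)
```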

The AgentOps Moment

Santiago articulated what many are feeling—we need formalized practices for agent systems:

"I think it's time to start talking about AgentOps. DevOps → MLOps → AgentOps. If you want autonomous agents that work and scale, we need to start formalizing the discipline that supports them." — @svpino

This tracks with the collection of 17+ agentic architecture implementations shared by @tom_doerr—the ecosystem is maturing fast.

Context Management Gets Smarter

An intriguing development on the API side:

"New compaction endpoint where the model has been trained to compact its own conversation intelligently (not just summarization, but potentially even writing scripts for its own custom algorithmic truncation?)" — @pashmerepat

Models that can intelligently manage their own context represent a meaningful step toward more autonomous operation.
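The post doesn't document the endpoint's actual interface, so the sketch below is emphatically not that API. It is a generic, client-side illustration of the idea, assuming the Anthropic Python SDK and a placeholder model name: once a conversation grows past a threshold, older turns are replaced with a model-written compaction and only the recent tail is kept verbatim.

```python
import anthropic  # assumes the official Anthropic Python SDK

MODEL = "claude-sonnet-4-5"  # placeholder; substitute whichever model you use
client = anthropic.Anthropic()

def compact(history: list[dict], keep_last: int = 6) -> list[dict]:
    """Replace older turns with a model-written compaction of themselves."""
    if len(history) <= keep_last:
        return history
    old, recent = history[:-keep_last], history[-keep_last:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=("Compact this conversation. Keep decisions, open questions, "
                "and facts needed to continue the work; drop everything else."),
        messages=[{"role": "user", "content": transcript}],
    )
    summary = {
        "role": "user",
        "content": "[Compacted earlier conversation]\n" + resp.content[0].text,
    }
    return [summary] + recent
```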

The Nano Banana Economy

@nanobanana (Nano Banana Pro) continues to generate buzz, with multiple posts about maximizing its image generation capabilities:

"There are probably hundreds of $1M ARR businesses that can be built off @nanobanana alone." — @petergyang

"This is how you get 100% accuracy in Nano Banana Pro image generation. Use JSON prompts." — @thisguyknowsai

The pattern of structured prompts (JSON) for better output control is becoming standard practice across tools.
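As an illustration of that pattern (the exact fields Nano Banana Pro responds to aren't documented in these posts, so every key below is an assumption), a structured prompt pins down attributes that a free-form sentence tends to leave ambiguous:

```python
import json

# Hypothetical field names chosen for illustration; the value of the pattern
# is that each attribute is stated explicitly rather than implied.
prompt = {
    "subject": "a terminal window running tmux with four panes",
    "style": "flat vector illustration, high contrast",
    "text_in_image": "AI Learning Digest",   # exact string to render
    "aspect_ratio": "16:9",
    "avoid": ["extra fingers", "garbled text"],
}

# The serialized object is sent as the prompt string to the image model.
print(json.dumps(prompt, indent=2))
```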

The Bigger Picture

Today's posts paint a picture of an ecosystem in rapid maturation. Developers aren't just using AI tools—they're building sophisticated workflows around them, demanding better memory systems, and pushing for formalized operational practices. The terminal renaissance isn't nostalgia; it's developers finding that text-based interfaces compose better with AI than GUIs ever could.

The question isn't whether AI will change development—it's whether you'll be building the workflows or following them.

Source Posts

Santiago @svpino:
I think it's time to start talking about AgentOps. DevOps → MLOps → AgentOps If you want autonomous agents that work and scale, we need to start formalizing the discipline that supports them. Some of the things *everyone* has to worry about: • Agent evaluations (using…

fofr @fofrAI:
And here's a checklist of things to double check everything that's in the prompt is in the image. https://t.co/VfRdz9OOdA https://t.co/UgQKZwCpwD

Tom Dörr @tom_doerr:
Implementations of 17+ agentic architectures https://t.co/l03dJU2k8p https://t.co/B4k3p1E33Z

Google Cloud Tech @GoogleCloudTech:
Stop repeating context to your AI. @agenticamit shows how to use local markdown files to chain prompts and scaffold apps faster with Gemini CLI. Watch the breakdown on this week's #DEVcember livestream → https://t.co/zPsvIGIcLa https://t.co/TkFwwd9aTG

Sahaj @iamsahaj_xyz:
once again, I'm fully terminal-pilled nvim + claude-code + tmux + lazygit + ghostty couldn't be happier

Peter Yang @petergyang:
There are probably hundreds of $1M ARR businesses that can be built off @nanobanana alone. https://t.co/LaiTKN7vka

Nityesh @nityeshaga:
This is one of the most insane applications of Claude skills. Claude skills are extremely malleable allowing you to teach claude to be an expert at any domain even if it's outside its training data. https://t.co/fvezYTbo1S

Chubby♨️ @kimmonismus:
Rnj-1 is a big deal because it's the first truly open model that punches at frontier-level quality at 8B, hitting GPT-4o-tier scores on SWE-bench while staying fully transparent. Really love to see small models improving that fast. https://t.co/hgfjFNOEsa https://t.co/RUNvH0esyl

Omoalhaja @omoalhajaabiola:
Take these 20 formats, feed them into ChatGPT, and put them into an excel sheet. Use n8n with nanobanana and seedream. This allows you to create 20-30 short faceless YouTube videos per day. You are getting about 100 different videos using this approach in a week Competitive… https://t.co/sSVQ6CCvMY

pash @pashmerepat:
New compaction endpoint where the model has been trained to compact its own conversation intelligently (not just summarization, but potentially even writing scripts for its own custom algorithmic truncation?) This changes our previous assumptions about context management. https://t.co/rhHUb6Ir9T

Unknown:
omg.. I can't believe Gemini 3 can do this it can generate an entire 3D interactive building facility management system(digital twin) using Three.js.. no code, just text prompt it analyse, simulate real time data to identify potential issues tutorial and prompts in comments https://t.co/iYy6r5qMo4 https://t.co/k6Xa0z69tX

Brady Long @thisguyknowsai:
This is how you get 100% accuracy in Nano Banana Pro image generation. Use JSON prompts. Here's how to you can write JSON prompts to get shockingly accurate outputs from Nano Banana Pro easily:

Vikram @msharmavikram:
🚀 Want to become a CUDA ninja? Start with the new CUDA Programming Guide - Section 4 is your gold mine! It's packed with features most developers don't even know exist, and it can unlock serious performance gains, smarter debugging, and cleaner GPU code. https://t.co/HkOzJ9tHqz

Ashutosh Maheshwari @asmah2107:
Writing a CUDA kernel requires a shift in mental model. Instead of one fast processor, you manage thousands of tiny threads. Here is the code and the logic explained for Matrix Multiplication. https://t.co/ZXfaDaNFrw https://t.co/8nSUq7Bj1H

Yu Lin @yulintwt:
This guy literally leaks how to actually learn from ChatGPT https://t.co/9TyhHVzQdK

Charly Wargnier @DataChaz:
AGI might be closer than we think. Google just dropped Titans + MIRAS, a long-term memory system for AI that updates itself in real time. It's a new architecture that combines the speed of RNNs with the performance of Transformers. ... and It's NOT a bigger context window,… https://t.co/9CPW8dsLdB

Stefan @creativestefan:
Surfed the internet for cool UI inspos and resources, Here's what i found: - 60fps (motion) https://t.co/xYon1tVHuE - Divs (Webflow component) https://t.co/XQfh81lsd6 - Efecto (CSS effects) https://t.co/mQSJnYxqBQ - Raydain (design and code) https://t.co/gbRM43pYdz - Visitors…

Nathan Baschez @nbaschez:
Maybe this is an obvious Claude Code thing, but I only just now figured it out: If you have a substantial plan you want Claude to execute, tell it to act as a manager and have subagents tackle the actual work Huge quality of life improvement