AI Learning Digest

Daily curated insights from Twitter/X about AI, machine learning, and developer tools

The Degen AI Trader Revolution: When 'Be Retarded' Beats Data Science

The Unorthodox AI Trading Strategy That's Turning Heads

In what might be the most chaotic approach to AI-powered trading we've seen, @boneGPT shared a strategy that's equal parts absurd and apparently effective:

"i told the LLM to trade like a retarded degenerate and it's up 200%... all the algo traders told me i need to have a background in data analysis to pull this off. they lied."

This claim deserves heavy skepticism (the returns are unverified, the sample period is short, and there's no mention of risk management), but it does highlight an interesting phenomenon in the AI space: sometimes the most effective prompts aren't the sophisticated, carefully engineered ones; they're the ones that capture a certain vibe or risk profile that traditional quantitative approaches struggle to model.

Is this sustainable? Probably not. Is it entertaining? Absolutely. Does it say something about how LLMs can embody trading personas that would be difficult to codify traditionally? That's actually worth thinking about.
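As a rough illustration of the idea, a trading persona can be encoded as a system prompt that sets behavioral style (plus a hard risk cap) rather than a quantitative model. This is a hypothetical sketch; the function name, wording, and parameters are invented for illustration and have nothing to do with the original poster's actual setup:

```python
# Hypothetical sketch: encoding a trading "persona" as a system prompt.
# The persona text and risk cap below are illustrative assumptions.

def build_persona_prompt(persona: str, max_position_pct: float) -> dict:
    """Build a system message combining a behavioral persona with a risk limit."""
    content = (
        f"You are a trading assistant. Adopt this persona: {persona}. "
        f"However, never recommend risking more than {max_position_pct:.0f}% "
        "of the portfolio on a single position."
    )
    return {"role": "system", "content": content}

messages = [
    build_persona_prompt("reckless momentum chaser", max_position_pct=2.0),
    {"role": "user", "content": "BTC just dropped 8% in an hour. What do you do?"},
]
# These messages could then be passed to any chat-completion style API.
```

The point of the sketch: the "strategy" lives in a few sentences of natural language, which is exactly the kind of behavioral specification that's hard to express in a traditional quant pipeline.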

Open Source AI Trading Agents Gain Momentum

@MoonDevOnYT reports impressive early traction for open-source AI trading agents:

"almost 1000 stars and over 500 forks in less than 3 days is crazy. ai agents for trading 100% open sourced are here to stay"

The rapid adoption suggests real demand for accessible AI trading tools. As these frameworks mature and more developers contribute, we may see a democratization of strategies that were previously locked behind proprietary hedge fund systems. The question remains whether open-source approaches can compete with well-resourced institutional players—or whether they'll find their niche in markets and strategies the big players ignore.

The Economics of Fine-Tuned Vision Models

@paulabartabajo_ shared practical advice for AI engineers looking to optimize costs without sacrificing quality:

"A small Visual Language Model fine-tuned on your custom dataset is as accurate as GPT-5... and costs 50 times less."

The specific callout to LiquidAI's LFM2-VL-3B model points to an emerging pattern: for domain-specific applications, smaller fine-tuned models can match or exceed frontier model performance at a fraction of the cost. This is particularly relevant for production deployments where inference costs compound quickly.
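To make the "50 times less" claim concrete, here is back-of-the-envelope arithmetic. The per-token prices and request volumes below are assumptions chosen for the sketch, not published pricing for any specific model:

```python
# Back-of-the-envelope inference cost comparison (illustrative prices only).
frontier_cost_per_1k_tokens = 0.010    # assumed frontier-model price, $/1K tokens
small_vlm_cost_per_1k_tokens = 0.0002  # assumed self-hosted fine-tuned VLM price

tokens_per_request = 1_500             # prompt + completion, assumed
requests_per_month = 1_000_000

def monthly_cost(price_per_1k: float) -> float:
    """Monthly spend given a per-1K-token price and the assumed volume."""
    return price_per_1k * tokens_per_request / 1_000 * requests_per_month

frontier = monthly_cost(frontier_cost_per_1k_tokens)    # $15,000/month
small = monthly_cost(small_vlm_cost_per_1k_tokens)      # $300/month
print(f"frontier: ${frontier:,.0f}, small VLM: ${small:,.0f}, "
      f"ratio: {frontier / small:.0f}x")
```

At these illustrative numbers the gap is $15,000 vs. $300 per month, which is why the savings compound so quickly at production volumes.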

Key Takeaways

1. Prompt engineering isn't always about sophistication—sometimes capturing a behavioral archetype (even an absurd one) produces interesting results

2. Open-source AI agents are finding product-market fit in trading, with rapid community adoption

3. The fine-tuning value proposition is becoming clearer: 50x cost reduction with comparable accuracy is a compelling business case for domain-specific applications

The through-line today is accessibility: whether it's unconventional prompting strategies, open-source trading agents, or cost-effective fine-tuned models, the barriers to building with AI continue to fall.

Source Posts

Pau Labarta Bajo @paulabartabajo_
Advice for AI engineers 💡 A small Visual Language Model fine-tuned on your custom dataset is as accurate as GPT-5... ... and costs 50 times less. For example, LFM2-VL-3B by @LiquidAI_ ↓ https://t.co/VnfyBt8wqZ
bone @boneGPT
i told the LLM to trade like a retarded degenerate and it's up 200% all the algo traders told me i need to have a background in data analysis to pull this off they lied literally just tell the LLM to be retarded https://t.co/37yChUXZy7
Moon Dev @MoonDevOnYT
almost 1000 stars and over 500 forks in less than 3 days is crazy ai agents for trading 100% open sourced are here to stay and we are just getting started https://t.co/tX9cOfuQak