A customer clicked a Stripe Connect button in my Rails app and nothing happened. Solid Errors showed RecordNotUnique exceptions from the customer clicking again and creating duplicate rows. The original click wasn't logged anywhere because nothing went wrong on the server. The redirect to Stripe was leaving my server fine. Turbo Drive was silently eating it, because fetch can't follow cross-origin redirects. Three things had to line up for the bug to exist. Remove any one and it goes away.
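One common fix (a sketch, not necessarily the exact one from the post) is to opt that button out of Turbo Drive, so the browser itself submits the form and follows the redirect:

```erb
<%# Hypothetical button; data: { turbo: false } makes this a normal form
    submission, so the browser, not fetch, follows the redirect to Stripe. %>
<%= button_to "Connect with Stripe", stripe_connect_path,
              data: { turbo: false } %>
```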
Karpathy posted his LLM Wiki gist and I had 600 books already chunked for a RAG system. I pointed Claude Code at them to test the wiki approach instead. Six days later: 679 interlinked pages, 6,000+ cross-references, and concept pages synthesizing 36+ sources each. The first attempt was garbage because Claude skimmed instead of reading. The fix was expensive: read every chunk of every book. But the result compounds in a way RAG never will.
I had a Rails side project and decided to rewrite it in Elixir/Phoenix. Streaming LLM responses, concurrency, personal interest. Built the whole thing, shared the same PostgreSQL database between both apps. Then came back to Rails. Not because Elixir was bad, but because writing code with Claude Code was a noticeably worse experience in Elixir than in Ruby. More errors, more iteration rounds, slower path to working software. When AI writes most of your code, that gap compounds fast.
I needed to extract speakers and topics from 40K+ YouTube videos for a spiritist knowledge base. Started with Groq's free tier, hit every rate limit, discovered my exception handling was silently flooding Solid Queue with 18K duplicate jobs, then moved to local models on Ollama. Along the way I found that Qwen3's default thinking mode turns a sub-second extraction into a 100-second one, and that 4B models need JSON sanitization to be reliable.
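The sanitization step can be sketched as a small helper (hypothetical code, assuming the usual small-model failure modes: thinking traces, markdown fences, and prose wrapped around the JSON):

```ruby
require "json"

# Hypothetical sanitizer for 4B-model output: drop Qwen3 <think> traces and
# markdown fences, then parse only the first-to-last brace span as JSON.
def extract_json(raw)
  text = raw.sub(%r{<think>.*?</think>}m, "")  # thinking traces
  text = text.gsub(/```(?:json)?/, "")         # markdown code fences
  start = text.index("{")
  stop  = text.rindex("}")
  return nil unless start && stop && stop > start
  JSON.parse(text[start..stop])
rescue JSON::ParserError
  nil
end
```

Anything unparseable returns nil instead of raising, so a background job can skip or retry instead of silently flooding the queue with failures.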
I spent three weeks tuning Litestream for Backblaze B2's free tier, wrote a blog post about it, then ripped the whole thing out and replaced it with a cron job. Meanwhile, my earlier SQLite auto_vacuum post led to a Rails PR that changes the default for every new SQLite database.
I needed search across ten models in a multi-tenant Rails app on SQLite. Instead of Elasticsearch or FTS5, I used a denormalized table with LIKE queries and ActiveSupport's transliterate for accent-insensitive Portuguese. It took a fraction of the setup time and handles everything the app needs.
I deployed a background job to fetch transcripts from 6,000 YouTube videos. Within 15 seconds, YouTube IP-banned the server. Every approach I tried — different endpoints, different client contexts — failed from a datacenter IP. The fix was inverting the architecture: make the server an API and let a local machine do the fetching.
Litestream's default configuration generates thousands of API calls per day that silently exceed Backblaze B2's free tier. Once you hit the cap, B2 returns 403 errors, and Litestream retries without backoff, creating a spiral that stops your backups entirely. Here's how to configure it properly.
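A sketch of what a tuned config looks like, with illustrative values (bucket name, endpoint, and intervals are assumptions, not the post's exact numbers). The key lever is Litestream's one-second default sync interval, which is what burns through B2's daily transaction cap:

```yaml
dbs:
  - path: /var/app/production.sqlite3
    replicas:
      - type: s3
        bucket: my-bucket                        # assumed
        path: backups
        endpoint: s3.us-west-004.backblazeb2.com # B2's S3-compatible endpoint
        sync-interval: 5m      # default is 1s; each sync is a billable API call
        snapshot-interval: 24h
        retention: 72h
```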
SQLite's default auto_vacuum is OFF. Delete a million rows and your database file stays the same size. If you're running Solid Cache, Solid Queue, or any other churn-heavy table on SQLite in production, your disk is filling up and you probably don't know it.
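Checking and changing the setting on an existing database looks like this; note that a new auto_vacuum mode is inert until a full VACUUM rebuilds the file:

```sql
PRAGMA auto_vacuum;               -- 0 = NONE (the default), 1 = FULL, 2 = INCREMENTAL
PRAGMA auto_vacuum = INCREMENTAL; -- stored, but takes effect only after VACUUM
VACUUM;                           -- rebuilds the file under the new mode
-- With INCREMENTAL, reclaim freed pages on your own schedule:
PRAGMA incremental_vacuum;
```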
Twelve parallel test workers, one shared Elasticsearch cluster. Per-worker index prefixes, a clean-room fixture company, safe_reindex without deletes, and the parallelize(workers: 1) trap that cost me three days.