Links
Interesting links I've found.
- Comprehension Debt - the hidden cost of AI-generated code
Addy Osmani puts a name to something I’ve been feeling for a while: Comprehension Debt.
When we lean on AI tools (Copilot, Claude Code, Cursor) to generate code, velocity metrics look great. The PRs are clean, the tests pass. But we quietly stop understanding why our systems work the way they do.
> AI generates code far faster than humans can evaluate it. What used to be a quality gate is now a throughput problem… Surface correctness is not systemic correctness.
PR review used to be a bottleneck, but a useful one. It forced you to understand design decisions and architecture. Now a junior engineer can generate 1,000 lines of syntactically perfect code faster than a senior can audit it. An Anthropic study found that engineers who passively delegated to AI scored 17% lower on comprehension quizzes than those without AI. The interesting bit: engineers who used AI to ask questions and explore tradeoffs kept their understanding intact.
Tests and specs don’t save you here either. Tests only cover behaviors you thought to specify. And when AI changes implementation behavior and updates hundreds of test cases to match, your safety net is no longer trustworthy. Only actual understanding catches that.
What makes this worse than regular technical debt is that nothing in your metrics captures it. Velocity is up, DORA metrics look fine, coverage is green, and comprehension is hollowing out underneath.
Osmani’s point is clear: as generating code gets cheaper, the engineer who actually understands the system — the load-bearing behaviors, the architectural history, the context — becomes the scarce resource everything depends on.
I don’t think we should stop using AI to write code. But we do need to stop pretending that passing tests means we understand what shipped.
- Willingness to look stupid is a genuine moat in creative work
Sharif Shameem wrote about how the fear of looking stupid kills creative output, and I felt called out. I have drafts on this blog that will never see the light of day because I keep telling myself they’re not good enough.
The funny thing is, I already learned this lesson. During my "28 posts in 28 days" challenge last month, the breakthrough was lowering my standards. Not every post needs to be a deep dive. Some of my best writing that month came from just saying what I was thinking without overthinking it. But here I am, a few weeks later, already back to filtering everything.
Sharif’s jellyfish bit is what stuck with me. Evolution produced jellyfish, these weird brainless sacs of jelly that have been around for 500 million years. But it only got there by churning out endless bad mutations without any shame. If evolution could feel embarrassed, life wouldn’t exist.
I keep relearning the same thing: publish more, filter less.
- Agent Psychosis
The argument: we let AI run wild, generating massive amounts of unverified "slop" code that overwhelms maintainers. The problem isn't the tech; it's the laziness. Generating a PR takes a minute; reviewing it takes an hour. The author points to Steve Yegge's "Gas Town" as a cautionary tale of this loop gone wrong.
We need to stop blindly trusting the machine and start acting like senior engineers again. AI is a power tool, not a replacement for thinking.
- Python Meets JavaScript, Wasm With the Magic of PythonMonkey
- Postgres is eating the database world
- Table of Contents · Crafting Interpreters
- How to get coworkers to stop giving me ChatGPT-generated suggestions
- How to defend your website with ZIP bombs