Rezha Julio

The AI Paradox: Coding is Easier, Engineering is Harder

3 min read

Writing code has never been this easy. AI assistants will generate boilerplate, wire up API calls, scaffold tests. You describe what you want and it appears. I use these tools every day and they’re genuinely useful. But I keep noticing something: the typing part of my job shrank, and everything around it got bigger.

I wrote about how the compound engineering loop works recently, and about what happened when Max Woolf stress-tested agents on real Rust projects. This post is about a simpler observation: the work didn’t get easier. It moved.

I don’t write code anymore, I review it

Before AI, I spent most of my time turning logic into syntax. Line by line, I understood the code because I was the one who wrote it.

Now the AI writes it. My job is to read what it produced and decide if it’s correct. That sounds easier. It isn’t. When I wrote code myself, I had the full context in my head. When I’m reviewing something the AI generated, I have to rebuild that context from scratch. I’m watching for hallucinated dependencies, wrong assumptions, bugs that look fine on first read. I called this “management” before, and the label still fits. You spend your time auditing someone else’s output instead of producing your own.

I spend more time staring at diffs than I ever spent typing.

The ghost bugs

AI doesn’t hedge or leave TODO comments. It will hand you a function that references a library that doesn’t exist, or calls an API endpoint that was deprecated two years ago. And even when the code compiles and the types check, it can still do the wrong thing.

Human bugs tend to follow patterns I recognize. AI bugs are weirder. I once spent an hour tracking down a failure that turned out to be a hallucinated enum variant. The AI generated it, used it consistently across three files, and it wasn’t real. That kind of thing messes with your trust. If you’ve ever dug into how LLMs actually work, you know why: the model is completing the most statistically likely sequence, not reasoning about whether the symbol exists.
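A minimal sketch of that failure mode, with hypothetical names rather than the actual code from the incident: in Python, a hallucinated enum variant reads as plausibly as a real one in a diff, and nothing complains until the line executes.

```python
from enum import Enum

# All names here are illustrative, not from the incident described above.
class JobStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"

def describe(status: JobStatus) -> str:
    return f"job is {status.value}"

# In review, JobStatus.ARCHIVED looks as legitimate as JobStatus.DONE.
# The variant was never defined, so Python only raises when this runs.
caught = False
try:
    describe(JobStatus.ARCHIVED)
except AttributeError:
    caught = True

print(caught)  # the bug surfaces only on the code path that uses it
```

A static type checker would flag this particular case, which is part of why the consistent, confident usage across several files made it so hard to spot by eye alone.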

Speed without architecture is just fast debt

Generating code is cheap now. That’s the problem. It’s tempting to let the AI build out a feature quickly, ship it, move on. But “move fast” without architecture just means you pile on technical debt faster than before.

Nobody needs me to write a for loop in 2026. What they need is someone who can look at five AI-generated modules and figure out whether they’ll still work together six months from now. Woolf figured this out too. His results came from writing detailed specs and AGENTS.md files before letting agents touch the code. The architecture work happened up front, not after.
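Woolf’s actual files aren’t reproduced here, but a sketch of what that up-front spec work can look like, with invented project details, might be:

```markdown
# AGENTS.md (illustrative sketch, not Woolf's actual file)

## Conventions
- Rust 2021 edition; run `cargo clippy` and `cargo test` before proposing a diff.
- Do not add new dependencies without flagging them explicitly.

## Architecture constraints
- Parsing logic stays in its own module; the CLI layer calls it through
  the public API only.
- Breaking changes to public types require an accompanying spec update.
```

The point is that the constraints exist before any code is generated, so the agent’s output is checked against a design rather than becoming the design by default.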

Sometimes I just want to type

A few developers I follow have been talking about turning off completions for side projects. No Copilot, no agents, just a text editor and your brain.

I tried it last weekend. Wrote a small CLI tool by hand, maybe 200 lines. It took me four times as long as it would have with AI, and I enjoyed every minute of it. The slowness forces you to actually think about each decision instead of rubber-stamping whatever the model suggests.

Not practical for work. But for the stuff I build for myself, I might keep doing it.

The trade

AI made me faster at producing code and slower at everything else. I review more and debug stranger problems. The writing is easy now. The engineering never was.
