A lot of AI Software Development at the moment is coding, and tasks supporting coding. But a gear change is happening. Around the time of Opus 4.5 the coding outputs hit "good enough" for a large number of use cases. So while we might still be moving dials to get the best code output, there are much bigger wins elsewhere.
Now the focus needs to be on Throughput Engineering. If you can spin up infinite code monkeys, the challenge shifts to the surrounding processes.
The bad news is that a core part of Software Engineering is vanishing. The good (great) news is there is plenty of engineering still to be done.
The deeper shift is more foundational: AI is changing the way we build software, and a major driving force in that change is speed of execution.
The anchor for this is my goal of hitting 100 PRs/day/engineer.
1. Not all AI productivity is real productivity
Most AI investment is going into speeding up things that probably shouldn't exist in the first place. Managers use AI to write performance reviews; HR uses AI to summarize them. Faster, but not more productive.
I break "AI productivity" into three buckets: Faster Horses (doing the same thing quicker), Zero Sum Games (adopting because competitors did), and Actual Productivity (doing things that were previously impractical). The first two don't move the needle. This is why thousands of CEOs reported no productivity gains -- they're measuring the wrong thing.
The real question isn't "how much faster are we?" It's "what can we do now that we couldn't before?"
Read more: Faster Horses and AI Productivity
2. You need infinite monkeys, not one brilliant one
For many coding tasks I'm still faster than AI in a straight-line race. When I first started using AI tools I'd sit and watch the agent work, waiting for it to finish. It was like pair programming, except my pair was slower than me and I couldn't talk to it mid-thought.
The breakthrough was when I stopped watching.
Kick off a task, switch to something else, kick off another, cycle back to review. Twenty tasks in flight. The distribution is U-shaped: tasks either land cleanly or go properly sideways. The skill isn't prompting -- it's decomposition and task selection. But this only works if your codebase supports parallelism; tight coupling and shared state kill it, which won't surprise anyone who has worked in a large engineering team.
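The shape of that workflow is fan-out, then batch review. A minimal TypeScript sketch -- `runAgentTask` is a hypothetical stand-in for dispatching a coding agent, simulated here with a timer:

```typescript
// Sketch: fan out independent tasks, review outcomes in a batch.
type Outcome = { task: string; ok: boolean; notes: string };

async function runAgentTask(task: string): Promise<Outcome> {
  // Placeholder: in practice this would kick off a real agent.
  // Here we simulate the U-shaped distribution: coupled work goes sideways.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return { task, ok: !task.includes("coupled"), notes: "done" };
}

async function dispatch(tasks: string[]): Promise<Outcome[]> {
  // Fire everything at once; review happens later, not per-task.
  const settled = await Promise.allSettled(tasks.map(runAgentTask));
  return settled.map((s, i) =>
    s.status === "fulfilled"
      ? s.value
      : { task: tasks[i], ok: false, notes: String(s.reason) }
  );
}

// Review pass: clean landings get merged, sideways ones get redone.
dispatch([
  "add pagination",
  "fix flaky test",
  "refactor coupled module",
]).then((results) => {
  const clean = results.filter((r) => r.ok).length;
  const sideways = results.length - clean;
  console.log(`clean: ${clean}, sideways: ${sideways}`);
});
```

The design point is that nothing blocks on a single task finishing; your attention becomes the scheduler, not the bottleneck.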
Read more: Infinite Monkeys
3. Still choose boring technology
Dan McKinley's 2015 essay argued every team has limited "innovation tokens." AI has given that advice a concrete, measurable upgrade: boring tech is in the training set.
SQL, PostgreSQL, Redis, REST, React -- the boring stuff has millions of examples baked into model weights. Newer libraries with breaking changes are kryptonite. I burned hours fighting PlateJS while the AI churned out React Aria components that worked first time.
Every innovation token you spend now costs you twice: once for your team, once for your AI. The boring stack isn't just the safe choice anymore. It's the fast one.
Read more: Still Choose Boring Technology
4. Know which skills to hand over
I wrote something in Rust using Claude Code last year. I'm "read the book" familiar with the language, not much of a coder in it. But I could follow along without needing to nitpick. It was extremely productive and liberating.
The common thread for skills worth ceding: deep prior art and tight feedback loops. CSS, SQL, Bash, Regex, Docker config -- areas where AI has vast training data and you can verify output without having written it yourself. Hold onto the areas where taste and judgement still matter: database schema design, complex TypeScript types, React state management.
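That "verify without having written it" loop is easy to make concrete. Suppose the AI hands you a date-matching regex (the regex and test cases here are illustrative): you never wrote the pattern, but a small table of cases you do trust checks it for you.

```typescript
// A (hypothetically AI-suggested) regex for YYYY-MM-DD format.
const isoDate = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

// The feedback loop: cases you trust, run against output you didn't write.
const cases: Array<[string, boolean]> = [
  ["2024-02-29", true],  // format-valid; calendar validity is out of scope
  ["2024-13-01", false], // month out of range
  ["2024-00-10", false], // month zero
  ["24-01-01", false],   // two-digit year
];

for (const [input, expected] of cases) {
  const got = isoDate.test(input);
  console.assert(got === expected, `${input}: expected ${expected}, got ${got}`);
}
```

The skill being kept isn't regex authorship -- it's knowing which cases matter enough to pin down.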
The "Death of the Full-Stack Developer" is really just permission to pick your battles more deliberately.
Read more: Skills I'm Ceding to AI
5. Prompts are wishes
The Genie-in-the-lamp grants wishes, but the outcome is cursed. "Clear my inbox" could mean delete everything. "Fix these type errors" could mean cast to any.
In the Genie story there's malicious intent. With AI there doesn't need to be -- helpfulness is the curse. The AI wants to fix your problem, but the most expedient route is often not the best one.
I suspect experienced engineers get more out of AI tools not because they write better prompts in some technical sense, but because they have a clearer mental model of what they actually want. The wish comes out cleaner.
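A concrete version of the cursed wish, in TypeScript. The `parseUser` function is illustrative; the point is the gap between "make the error go away" and "give me the shape I actually want":

```typescript
type User = { id: number; name: string };

function parseUser(raw: string): User {
  const data: unknown = JSON.parse(raw);

  // Cursed grant -- the type error disappears, the bug stays:
  // return data as any;

  // Cleaner wish -- say what you actually want: a validated shape, or failure.
  if (
    typeof data === "object" && data !== null &&
    "id" in data && typeof (data as { id: unknown }).id === "number" &&
    "name" in data && typeof (data as { name: unknown }).name === "string"
  ) {
    return data as User;
  }
  throw new Error("not a User");
}
```

Both versions "fix the type errors". Only one of them is the thing you meant.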
Read more: AI Prompts are Wishes
6. Human-in-the-loop needs skilled humans
"Human-in-the-loop" has become the safety blanket of AI deployment. Right now it works because we have people transitioning their existing skills. But those skills will naturally degrade.
Modern pilots spend most of their time monitoring autopilot. But when something goes wrong, they need to take over immediately. The less they practice, the worse they are in exactly those critical moments. The industry takes this seriously -- pilots maintain manual flying hours specifically to combat this.
If you're going to rely on human-in-the-loop as your quality mechanism, you need to invest in keeping those humans skilled. They still need to write code. They need to debug manually sometimes. The human in the loop needs to be a skilled human in the loop. And that skill needs maintenance.
Read more: Skilled Humans in the Loop
7. The map can go blank
I remember the first time I used SatNav. Rented one in Berlin, drove through Eastern Germany. After navigating a maze of one-way streets in Dresden, I was fully dependent on it. Then we hit Austria and the screen went blank. The maps only covered Germany.
Research shows GPS users have reduced hippocampal activity and worse spatial memory than those who navigate traditionally. I'm watching the same pattern with AI coding assistants. I had a productive run, then noticed a steep degradation in Claude Code's effectiveness for a few days. Suddenly my AI-maximalist day was upended.
The tools will leapfrog again. But we're integrating AI into our infrastructure at a rapid rate without considering that we still need fallbacks. The map can go blank at any time. Best to keep a vague idea of where you are.
Read more: Your Brain on GPS
8. The bottleneck has moved
If AI cuts your coding time from 3 days to 30 minutes but your cycle time is still 2 weeks, you've optimised the wrong thing. The backlog was always crufty, and a significant portion of it is bugs, quality of life fixes, and documentation. There's little reason to hold these back besides development capacity -- and that constraint just evaporated. The time spent juggling tickets might now exceed the time to just fix them. Features can go behind flags. Removing old features and cruft is suddenly affordable.
The metric that matters now isn't velocity, it's cycle time. How long from first prompt to something live in production?
Read more: Spin the Bottle Neck
9. Your AI mirrors your codebase
Your AI coding assistant, helpful almost to a fault, is constantly at risk of repeating your codebase's compromises. Repeating them faster. At scale.
Five countermeasures:
- Gold Templates -- get one pattern implementation to a gold standard and point the AI at it.
- Here be Dragons -- mark anti-patterns with AVOID comments, liberally.
- Restructure around domains -- keep domains tight so the AI doesn't wander and pick up bad habits.
- Devtooling -- push constraints into lint rules wired to AI hooks with prompt-quality error messages.
- Embrace AI refactoring -- tolerate variation in the moment and sweep through later.
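The devtooling idea can be sketched without a lint framework. A hypothetical standalone check -- rule, paths, and message all illustrative -- where the error reads like a prompt an AI hook could act on:

```typescript
// Stand-in for a lint rule: flag direct db-client imports, and make the
// message itself steer the fix toward the gold-template pattern.
function checkSource(path: string, source: string): string[] {
  const errors: string[] = [];
  for (const [i, line] of source.split("\n").entries()) {
    if (/from ["'].*\/db\/client["']/.test(line)) {
      errors.push(
        `${path}:${i + 1}: Do not import the db client directly. ` +
          `Use the domain repository instead (see src/domains/billing/repository.ts ` +
          `for the gold-template pattern).`
      );
    }
  }
  return errors;
}

const sample = `import { db } from "../db/client";\nexport const x = 1;\n`;
console.log(checkSource("src/domains/orders/service.ts", sample).join("\n"));
```

Compare that to a bare "unexpected import": one message tells the agent what to do next, the other just tells it to stop.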
Read more: Do As I Say, Not As I TODO
10. Small teams win (again)
The Mythical Man-Month told us that adding people to a late project makes it later. Communication overhead grows faster than productivity. AI inverts this.
A team of 5 engineers with AI can outperform a team of 50. AI dramatically multiplies individual output without the coordination tax. Fewer people means fewer communication channels, less alignment overhead, and more time building. Brooks' law hasn't been repealed -- it's been made irrelevant for those willing to restructure around it.
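The arithmetic behind that coordination tax is the classic one: pairwise communication channels grow as n(n-1)/2, so headcount grows linearly while overhead grows quadratically.

```typescript
// Pairwise communication channels for a team of n people.
const channels = (n: number): number => (n * (n - 1)) / 2;

console.log(channels(5));  // 10 channels for the small team
console.log(channels(50)); // 1225 for the big one
```

Ten conversations to keep in sync versus over a thousand: the AI multiplier applies to the building, not the aligning.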
Read more: The Mythical Machine-Month
A thread runs through them
Reading these back, a theme emerges that I didn't set out to write. These aren't really about AI at all. They're about the fundamentals of software engineering -- decomposition, simplicity, skill maintenance, tight feedback loops, small teams -- viewed through a new lens.
AI hasn't changed what makes software engineering hard. It's amplified it. The teams that were already doing the basics well are the ones getting the most out of these tools. The ones with sprawling codebases, bloated processes, and over-specialised silos are finding that AI just makes those problems faster.
Maybe the most not-so-radical idea of all: the fundamentals still matter. They just matter more now.