Make it impossible, or Make it safe

My dad is the safest person I know. You'd think that would lead to a cotton-wool childhood, but it didn't, because of one of his key aphorisms:

"Either make it impossible, or make it safe."

He didn't bubble-wrap the world. He engineered one where you can learn by trying and take the risks you choose.

That philosophy has served me well in this era of agentic coding. The instinct many teams have is to lock everything down -- restrict who can use AI, limit what it can touch, require approval for every change. That's the bubble-wrap approach. It's safe, but much of it can be safety theatre[1]. It also kills the throughput gains that make AI valuable in the first place.

Equally, some choose the --dangerously-skip-permissions route, which increases throughput (perhaps dramatically) at the cost of accepting a high level of unquantifiable risk.

The alternative is to build systems where moving fast is inherently safe[2].
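To make the aphorism concrete, here is a minimal sketch of "make it impossible" applied to agentic coding: a hypothetical file-write tool for an agent that simply cannot touch anything outside a sandbox directory. The `safe_write` helper and the sandbox layout are assumptions for illustration, not a real agent framework's API.

```python
import tempfile
from pathlib import Path

# Hypothetical agent sandbox: writes inside it are safe and need no approval;
# writes outside it are impossible by construction, not by policy.
SANDBOX = Path(tempfile.mkdtemp()).resolve()

def safe_write(path: str, content: str) -> Path:
    """Write `content` to `path` relative to the sandbox, refusing escapes."""
    target = (SANDBOX / path).resolve()
    if not target.is_relative_to(SANDBOX):  # blocks ../ traversal out of the sandbox
        raise PermissionError(f"refused: {target} is outside the sandbox")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return target
```

The agent stays fast inside the boundary -- no human in the loop per write -- while the one dangerous move is refused before it can happen. Seatbelts, not speed limits.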

This often requires an investment in your development pipeline. The good news is that many of these investments will be familiar: they are good practices that benefit AI and non-AI coding alike. But now they might give you a better ROI.

You will already know some of the usual suspects: Easy Rollbacks, Progressive Deployments, Sandboxed Environments. Here are a few more:

The common thread is that none of these slow you down. Most of them speed you up, because you spend less time worrying and more time shipping. Of the teams I've been talking to, the ones seeing real gains aren't the ones with the most restrictions.

Speed is a feature of safety. Either make it impossible, or make it safe.

Also read: Skilled Humans in the Loop, What does it take to build towards 100 PRs/day per engineer?, and Sharpening the Axe (Branch).

I write about AI, organizations, and engineering leverage: find out about me and subscribe here.

Discuss and share via the meta page. Filed under AI, Code, People, 100PR, and Highlights.

Footnotes

  1. Many years ago I worked at an organization that had quite brutal restrictions on accessing production systems. Access was highly fragmented and limited, and control sat with security rather than the operators. When a production incident did happen: (1) it took an inordinate amount of time for key people to get access, and (2) they didn't know how to navigate and fix a real production instance.

  2. I don't mean foolproof -- I mean seatbelts, not speed limits.