Know Your Models

I'm writing a lot of software, or at least guiding AI coding tools in writing a lot of software.

Part of this is interacting with different models. Since I'm using Anthropic's Claude, I usually encounter Opus, Sonnet, and Haiku. I use all three daily, alongside OpenAI's ChatGPT (for ongoing conversations) and GitHub Copilot for pull request reviews[1].

It's tempting to just reach for "the best AI", which you could argue is Opus for coding. But software engineers have known for a while that this can be a bad call. Opus is great for planning and architectural reasoning. Sonnet is fast and "good enough" for iteration. Haiku handles lightweight tasks like commit messages and lookups.
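That division of labor can be sketched as a simple routing table. This is a minimal illustration, not an actual API: the task categories and tier names here are placeholders I've chosen, not anything Anthropic ships.

```python
# Illustrative sketch of per-task model routing. Task names and model
# tiers are hypothetical placeholders, not real API identifiers.
TASK_TO_MODEL = {
    "architecture": "opus",      # deep planning and architectural reasoning
    "iteration": "sonnet",       # fast, good-enough coding loops
    "commit_message": "haiku",   # lightweight text generation
    "lookup": "haiku",           # quick reference questions
}

def pick_model(task: str) -> str:
    """Return the model tier for a task, defaulting to the middle tier."""
    return TASK_TO_MODEL.get(task, "sonnet")
```

The useful property is the default: anything unclassified falls back to the mid tier rather than silently burning the most expensive model on routine work.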

In other domains, this kind of model selection is less established. If your AI policy says "we use X" or your procurement team has only approved one vendor, there may be good reasons for that — but you may also be locking into a suboptimal outcome. Worse, if your training and onboarding only covers one tool, you're leaving a lot on the table.

The risk is that policies written without understanding this reality will either be ignored or will actively slow people down.


Discuss and share via the meta page. Filed under AI, Code, People, and 100PR.

Footnotes

  1. I currently prefer using Claude for the coding part, but I find Copilot excellent for reviews. The pairing works well.