Stop Chasing Every New AI Coding Model
You may be falling for shiny tool syndrome
Every few weeks, a new AI coding model drops: another large language model promising to help you write code.
I know plenty of people who try every new model the moment it's released.
When OpenAI goes from 5.1 to 5.2 to 5.3, they’re on it immediately.
Gemini releases something new, they try it out. There’s Flash, Pro, and all these different variants.
There’s Claude. And on top of that, there are coding agents like Claude Code and OpenCode that use these models behind the scenes to handle tasks without your involvement.
Me? I don’t try every AI model myself.
I use Claude. Right now it’s Opus 4.6, and I’m a big fan. If a new local model is released, I won’t try it.
That's because chasing every release is shiny tool syndrome.
My Setup Works - Why Would I Abandon It?
I’m a Claude Code fan.
I use Opus 4.6 as my main coding agent, with agents and hooks inside my repository. One Claude Code hook uses OpenAI's GPT 5.3 to run code reviews locally, and Opus 4.6 runs another code review in my CI/CD pipeline.
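My actual hook is more involved, but if you haven't used Claude Code hooks before, the general shape is a small entry in your project's `.claude/settings.json`. Here's a rough sketch - the matcher and the review script path are just placeholders, not my real setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/local-review.sh"
          }
        ]
      }
    ]
  }
}
```

The idea: after Claude edits or writes a file, the hook fires a shell command, and that command can hand the changes to whatever reviewer you want. This is exactly the kind of configuration you lose when you jump ship to a new tool.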
So: two models, total.
Now, I've definitely experimented with other models before. I've run them in tandem: Claude writes the code, OpenAI verifies the feature, and Gemini handles the tests. That approach can help reduce hallucinations.
But here’s the thing: with all these models continually getting released, switching doesn’t always save you time. It often costs you time.
The Hidden Cost of Switching
Think about it. If you’re using Opus 4.6 and a new Gemini model comes out, there’s going to be a wave of people who jump right on it. But what are they abandoning?
The resources and thought processes they’ve already created through their projects using the other model. The agents and hooks. Their CI/CD pipeline integrations. All that context and configuration - gone.
For coding, these models are no longer dramatically better than one another. The real improvements are happening in the agents that orchestrate them, not in the base models themselves.
Switching doesn’t necessarily increase your productivity. You’re trading a setup you’re comfortable with - one you’ve already configured for your environment - to test something marginally different.
This Isn’t Just a Coder Problem
We've fallen hard into shiny tool syndrome with AI and large language models. And it isn't just coders - it's writers, marketers, everyone.
Instead of setting up an environment that works really well with one model, people keep swapping around. They’re chasing slightly different results, trying to get a feel for it, trying to get the model to match their style. Meanwhile, they could use one model they’re comfortable with and build an environment optimized for it.
One Caveat
Don't fall in love with a specific model to the point where you're blind to better options. Be willing to switch when enough research and validation show the move is worthwhile.
But trying every new model the week it releases? That’s a waste of time. You’re better off doubling down on the one you’re comfortable with and actually shipping work.
Cheers friends,
Eric Roby
Find me online: