The Difference Between Using AI and Actually Being Good at It
How I set up Claude Code and Codex to review each other's work
I’ve been building something new for backend engineers.
It’s not just another course.
It’s designed to solve the “I don’t know what I don’t know” problem - and it also teaches how backend engineers actually work with AI. It includes videos and interactive lessons.
It’s available now. Check it out here: https://cwroby.com/Z7qLm2Xv9Ra
Some people know how to use AI well. And then some people don’t really understand AI at all.
The gap between those two groups is huge. But many people overlook this: an equally big gap separates those who use it well from those in the top 1%.
Just communicating with AI doesn’t mean you’re using it well. Typing prompts and hoping for the best isn’t a workflow. It’s a coin flip.
You can set up your environment so you’re the engineer who wins. Don’t just chat with a chatbot and hope for the best.
How I Actually Set Things Up
To do this right, you need hooks, skills, and agents set up in your project directory.
I have a CLAUDE.md file that provides overall context for the project. I have agents set up with specific roles. One agent acts as an architect. Another one is a developer. And one is a tester.
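To make that concrete, here’s a rough sketch of what one of those role files could look like. Claude Code reads subagent definitions from `.claude/agents/`; the file name, description, and instructions below are illustrative assumptions, not my exact files.

```markdown
<!-- .claude/agents/tester.md (illustrative sketch) -->
---
name: tester
description: Writes and runs tests for any feature the developer agent completes.
---
You are the project's tester. After the developer finishes a change,
write or update tests covering it, run the full suite, and report any
failures with enough context to reproduce them.
```

The architect and developer agents follow the same pattern: one markdown file per role, each with a narrow job description so the agents stay in their lanes.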
On top of that, I have a hook that runs my entire test suite after any feature is complete. I also have Claude Code review its own code. Once it reviews everything and says all the tests pass and things look good, I then have a hook that calls Codex.
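As a rough sketch, hooks like these live in `.claude/settings.json`. The matchers, the `npm test` command, and the idea of shelling out to the Codex CLI with `codex exec` are assumptions about one possible setup, not my exact config:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "codex exec 'Review the latest changes for bugs and risky design decisions' > codex-review.md"
          }
        ]
      }
    ]
  }
}
```

The first hook re-runs the suite whenever files change; the second fires when Claude finishes, handing the diff to Codex for an outside opinion.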
Where It Gets Interesting
Codex conducts a deep code review and returns its findings to Claude.
Now here’s where the real value comes in. I have Claude and Codex discuss what is needed and what isn’t, based on the Codex review.
This works because Claude understands the implementation better than Codex does. Codex is just looking at the code changes and trying to identify what’s going on based on those changes. Even if Claude shares some details, it still doesn’t get the bigger picture of the project.
So I let them go back and forth. At the end, Claude gives me a list of what both models agree on. It’s organized by priorities: critical, high, medium, and low, based on the changes being made.
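The merge step at the end is simple to picture in code. This is a hypothetical sketch, assuming each model’s review comes back as a list of findings with a title and priority; the function and field names are mine, not part of either tool:

```python
# Illustrative sketch: keep only the findings both models raised,
# grouped by priority. The data shape here is an assumption.
PRIORITY_ORDER = ["critical", "high", "medium", "low"]

def agreed_findings(claude, codex):
    """Return findings present in both reviews, bucketed by Claude's priority."""
    codex_titles = {f["title"] for f in codex}
    agreed = [f for f in claude if f["title"] in codex_titles]
    return {
        level: [f["title"] for f in agreed if f["priority"] == level]
        for level in PRIORITY_ORDER
    }

claude_review = [
    {"title": "SQL built by string concatenation", "priority": "critical"},
    {"title": "Missing index on orders.user_id", "priority": "medium"},
    {"title": "Rename helper for clarity", "priority": "low"},
]
codex_review = [
    {"title": "SQL built by string concatenation", "priority": "critical"},
    {"title": "Missing index on orders.user_id", "priority": "high"},
]

print(agreed_findings(claude_review, codex_review))
```

Anything only one model flagged gets debated rather than accepted, which is exactly where the back-and-forth earns its keep.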
From there, I choose which ones to move forward with. Then I have Claude and Codex go through that entire review process again.
Why This Matters
This approach helps with three things. Writing cleaner code. Reducing hallucinations. And delivering better products overall.
Now I know this workflow will continue to evolve and change over time. These tools change quickly. What works today may look very different in six months.
Backend engineers who focus on multi-model development now will outpace those who don’t.
Using one model and hoping for the best is fine for small tasks. If you’re building real products, models checking each other’s work isn’t optional anymore. It’s how you stay ahead.
Cheers friends,
Eric Roby
Find me online: