Why Learn to Code When AI Can Do It For You?
Why should I learn to code if I can just use Cursor or Claude Code?
I get this question constantly - from aspiring developers, career changers, and entrepreneurs with app ideas. The answer isn’t straightforward.
Sometimes you genuinely don’t need to learn traditional coding.
Sometimes you absolutely do.
After spending over a decade as a backend engineer and teaching thousands of developers, I’ve watched AI coding tools evolve from novelty to genuine productivity multipliers. I use them every day. But I’ve also seen the disasters that happen when people misunderstand what these tools can and can’t do.
When You Probably Don’t Need to Learn
If you want to build a static website - something for your business, portfolio, or resume - you can skip the coding bootcamp. AI tools like Claude Code and Cursor handle this better than Wix or Squarespace ever could. You get more flexibility because you can describe exactly what you want in plain English and iterate conversationally.
Even slightly more complex projects might not require traditional coding skills. A simple database for contact form submissions, basic user authentication, minor business logic like calculating quotes - AI handles these reasonably well.
There are caveats. AI might not implement token refreshing properly for authentication. It might not handle rate limiting correctly. It probably won’t think about edge cases that seem obvious to experienced developers.
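To make the rate-limiting caveat concrete, here's a minimal token-bucket sketch in Python - the kind of guard an AI-generated endpoint often omits entirely. Everything here (the `TokenBucket` class, its parameters) is illustrative, not from any particular library:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (a sketch, not production-ready)."""

    def __init__(self, capacity, refill_rate, clock=time.monotonic):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.clock = clock              # injectable clock, handy for testing
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With a capacity of 5, the bucket admits five back-to-back requests, rejects the sixth, and starts admitting again once enough time has passed for tokens to refill. The point isn't this particular algorithm - it's that someone has to know a limiter belongs there at all.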
But if your site is small, deployed on a cheap service with spending caps, and serves a handful of users? You can probably make it work. Not everyone needs to become a software engineer, and sometimes “good enough” is genuinely good enough.
When You Absolutely Need Technical Skills
The calculus shifts completely when you cross into these territories: building applications you’re trying to monetize, scaling to hundreds or thousands of users, handling sensitive data, creating something that needs to evolve with new features, or working on anything where bugs have real consequences.
Can Claude help with security implementations, database backups, and architectural decisions? Absolutely. But it needs a technical architect behind it who knows these concerns exist in the first place.
When you ask Claude “what am I missing?”, it gives helpful answers - but won’t cover everything for your specific scenario. It doesn’t know your business requirements intimately or the regulatory environment you’re operating in.
AI can also hallucinate in ways that aren’t obvious. I’m not talking about code that doesn’t run - that’s easy to catch. I’m talking about code that works perfectly but doesn’t actually meet your requirements. Features that function but have security implications you don’t recognize. Structures that make future changes exponentially harder.
These silent failures are far more dangerous than obvious errors.
What AI-Assisted Development Actually Looks Like
I use LLMs constantly, but here’s what that actually looks like:
Claude Code generates a plan. I read through it and correct three things that need to be different for my architecture. It writes a SQL query - I catch that it’s missing a join I need. It proposes an API endpoint structure that doesn’t match our established patterns. I redirect it.
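To illustrate the "missing join" class of bug, here's a hypothetical sqlite3 sketch (the schema and data are invented for the example). Both queries run without error - only one answers the actual question, "which users have placed an order?":

```python
import sqlite3

# Toy schema: users and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id));
    INSERT INTO users  VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (10, 1);
""")

# Looks plausible and runs fine - but the join is missing,
# so it returns every user, not just the ones who ordered.
missing_join = conn.execute(
    "SELECT name FROM users ORDER BY name"
).fetchall()

# Corrected: the join restricts results to users with at least one order.
with_join = conn.execute("""
    SELECT DISTINCT u.name
    FROM users u
    JOIN orders o ON o.user_id = u.id
    ORDER BY u.name
""").fetchall()
```

The first query is exactly the kind of output that passes a quick glance: syntactically valid, superficially reasonable, wrong. Catching it requires knowing what the result set should contain.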
This happens on virtually every feature of any complexity.
What makes this work: I can read every line and understand what it’s doing. I know what questions to ask. I recognize when something doesn’t fit. I understand the security implications of different approaches.
Without that foundation, I’d be accepting code I don’t understand into a codebase I can’t reason about.
The Compounding Problem
The real issue isn’t just today’s code - it’s what happens over time.
AI does a pretty good job building applications from scratch. But six months in, you need a significant new feature that touches multiple parts of your codebase. You ask Claude to implement it. It generates code that seems to work. You merge it.
Except now something else is broken. You ask Claude to fix it. The fix breaks something else. You’re playing whack-a-mole with a codebase you don’t understand.
Or worse: nothing seems broken, but you’ve introduced a subtle bug that won’t appear until you have real users. A race condition under load. A security vulnerability invisible until exploited.
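To show what "a race condition under load" can look like, here's a deliberately unsafe Python sketch (names and numbers are illustrative). The shared counter is updated with a non-atomic read-modify-write, so concurrent threads can lose updates; passing a lock restores correctness:

```python
import threading

def run(workers=8, iterations=200, lock=None):
    """Increment a shared counter from several threads; return the final count."""
    state = {"count": 0}

    def worker():
        for _ in range(iterations):
            if lock:
                with lock:
                    state["count"] += 1
            else:
                # Non-atomic read-modify-write: another thread can update
                # the counter between our read and our write, losing an update.
                current = state["count"]
                state["count"] = current + 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]
```

The locked version always returns exactly `workers * iterations`. The unlocked version usually looks fine in light testing - which is precisely the problem: the bug only surfaces when scheduling happens to interleave the read and the write, i.e., under real load.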
Each piece of code you don’t understand makes future changes harder. Technical debt accumulates faster when you can’t recognize it.
The Two Types of People Saying “AI Can Do Everything”
When you hear someone claim Claude or Cursor can build anything, they're usually one of two types of people:
Non-technical people who don’t know what they don’t know. They’ve built something that works today, so they assume it’s built well. They can’t see the security vulnerabilities or the architectural decisions that will make their next feature ten times harder.
Highly technical people who know exactly where the gaps are. They use AI extensively but constantly correct and redirect it. When they say “AI can do everything,” they mean “with my guidance” - but that qualifier gets lost.
The gap between these groups is widening as AI tools become more powerful and the things you can attempt become more ambitious.
The Bottom Line
We're not yet at the point where non-technical people can use AI alone to build meaningful, scalable applications free of security holes. Could this change in five or ten years? Maybe - LLMs have improved at an insane pace. But nobody can predict the future, and betting your business on capabilities AI might have someday is risky.
Here’s what people miss: technical skills and AI tools are multiplicative, not substitutes. The more you understand about architecture, security, and systems design, the more effective AI becomes in your hands. You move faster without accumulating hidden risks.
AI isn’t replacing the need to understand code. It’s raising the floor on what technical people can accomplish - while making the gap between those who understand and those who don’t even more consequential.
If your ambitions are modest - a simple site that doesn’t need to scale - you might be fine directing AI tools without deep coding knowledge.
But if you want to build something meaningful that can grow and handle real users? You need to understand what’s happening under the hood. That knowledge isn’t becoming less valuable. It’s becoming more valuable than ever.
Cheers friends,
Eric Roby