
AI Can Write Your Code Now. The Hard Part Was Never the Code.

AI · Engineering

Anthropic just released Claude Opus 4.5. It's the best AI coding assistant available today. It writes cleaner functions than most junior engineers. It refactors legacy code with genuine understanding of the patterns involved. It generates test suites that actually cover edge cases.

The tooling has crossed a threshold. AI can write production-quality code. That part of the job is largely solved.

Here's the thing nobody in the AI hype cycle wants to admit: writing code was never the hard part of building software.

What AI coding tools are genuinely great at

Let's give credit where it's due. The current generation of AI coding assistants handles several categories of work exceptionally well.

Boilerplate generation. CRUD endpoints, form validation, data transformation functions. The patterns are well-established and repetitive. AI writes them faster and more consistently than humans. This alone saves hours per week on most projects.
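To make the category concrete, here is the flavor of boilerplate meant here: a small, hypothetical data-normalization function (the `RawUser`/`User` names are invented for illustration). The pattern is so well-worn that an assistant produces it reliably on the first try.

```typescript
// Hypothetical boilerplate: normalizing raw API records into a typed shape.
// The shapes and names here are illustrative, not from a real codebase.
interface RawUser {
  id: number;
  first_name: string;
  last_name: string;
  email?: string;
}

interface User {
  id: number;
  fullName: string;
  email: string | null;
}

function normalizeUsers(raw: RawUser[]): User[] {
  return raw.map((r) => ({
    id: r.id,
    fullName: `${r.first_name} ${r.last_name}`.trim(),
    // Collapse "missing" into an explicit null so downstream code has one case.
    email: r.email ?? null,
  }));
}
```

Nothing here is hard; it is just tedious to type and easy to get subtly inconsistent across a codebase, which is exactly where the consistency of generated code pays off.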

Refactoring. "Convert this class component to a hook." "Extract this logic into a utility function." "Make this TypeScript strict-mode compliant." AI handles mechanical transformations with near-perfect accuracy. The kind of work that's important but tedious for humans.
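A minimal sketch of the "extract into a utility" case, assuming invented names: the same formatting logic appeared inline in two call sites, and the mechanical refactor pulls it into one shared helper.

```typescript
// After the refactor: one shared helper instead of duplicated inline logic.
// formatCents and the two call sites are invented for illustration.
function formatCents(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

function cartTotalLabel(items: { priceCents: number }[]): string {
  const total = items.reduce((sum, i) => sum + i.priceCents, 0);
  return `Total: ${formatCents(total)}`;
}

function invoiceLine(description: string, priceCents: number): string {
  return `${description}: ${formatCents(priceCents)}`;
}
```

The transformation preserves behavior exactly, which is why AI handles it so well: there is no judgment call, only bookkeeping.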

Test generation. Given a function, generate unit tests covering the happy path, edge cases, and error conditions. AI does this well because tests follow predictable patterns. The test quality has improved dramatically in the last year.

Code explanation and documentation. "What does this function do?" "Write a docstring for this module." AI reads code faster than any human and explains it clearly. This is genuinely transformative for onboarding engineers onto unfamiliar codebases.

What AI still can't do

The list above sounds impressive. And it is. But notice what's missing from it.

Architecture decisions

"Should we use a microservice or a monolith for this new feature?" "Do we need a message queue here or is a direct API call fine?" "Should this data live in Postgres or Redis?"

These decisions depend on context that doesn't fit in a prompt. Traffic patterns. Team expertise. Budget constraints. The company's growth trajectory over the next two years. Existing technical debt that makes some paths viable and others catastrophic.

AI can outline the trade-offs between two architectures. It can't tell you which one is right for your specific situation. That requires judgment built from years of watching systems succeed and fail in production.

Production trade-offs

A correct function and a production-ready function are different things. Production-ready means: How does this behave under 10x expected load? What happens when the downstream API is slow? How does this interact with the database connection pool during peak hours? What does the failure mode look like and can the on-call engineer diagnose it at 3 AM?

AI writes code that works. An experienced engineer writes code that works, fails gracefully, and is debuggable by someone who's never seen it before. That gap matters every time something goes wrong.
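A sketch of that gap, under invented names: the naive version of a flaky call just awaits and throws, while the hardened version bounds retries, backs off, and fails with a diagnosable message. `withRetry` is a hypothetical helper, not a real library API.

```typescript
// Production-hardening sketch: wrap an unreliable async call with bounded
// retries and a typed failure instead of an unhandled rejection.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<Result<T>> {
  let lastError = "";
  for (let i = 0; i < attempts; i++) {
    try {
      return { ok: true, value: await fn() };
    } catch (e) {
      lastError = String(e);
      // Exponential backoff so a struggling downstream isn't hammered harder.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  // Fail loudly and legibly: the on-call engineer sees what gave up and why.
  return { ok: false, error: `gave up after ${attempts} attempts: ${lastError}` };
}
```

The retry loop is trivial; knowing that it needs to exist, what the backoff should be, and what the error message must say at 3 AM is the experience the section above is describing.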

Cross-system debugging

The nastiest production bugs don't live in a single function. They live in the interaction between systems. A race condition between a cache invalidation and a database write. A memory leak triggered by a specific sequence of API calls under load. A timeout cascade caused by a downstream service that changed its behavior after a deploy.

Debugging these requires holding the entire system in your head. Reproducing conditions. Reading logs across multiple services. Forming hypotheses and testing them. AI can help with individual steps. It can't drive the investigation.

Knowing what not to build

The most valuable engineering skill is saying no. "We don't need this feature." "This abstraction will cost more to maintain than the duplication it eliminates." "This is technically elegant and will confuse every new hire for the next two years."

AI optimizes for completion. You give it a task and it produces a solution. It doesn't push back on the task itself. It doesn't ask whether the task should exist. That judgment call is still entirely human.

The new job description

AI coding tools don't eliminate the need for engineers. They shift what engineers spend their time on. Less time writing boilerplate. More time on the problems that boilerplate exists to support.

The engineers who thrive with AI tools are the ones who were already strong at the hard parts. Architecture. System design. Debugging. Communication. They use AI to accelerate the mechanical work and spend the recovered time on the decisions that actually determine whether a project succeeds or fails.

The engineers who struggle are the ones whose primary value was writing code quickly. That skill just got commoditized.

What this means for your team

If you're evaluating AI coding tools for your engineering org, focus on how they amplify your senior engineers rather than how they replace your junior ones. The highest-leverage use case is giving experienced engineers a tool that handles the routine work so they can focus on the work that requires their experience.

Invest in the skills AI can't replicate. System design. Production thinking. Debugging methodology. Cross-team communication. Those were always the skills that separated good engineers from great ones. AI just made the distinction more visible.
