AI Tools · 9 min read
Sean Dees


The AI Coding Tools of 2022 Are Not the AI Coding Tools of 2026

In March of 2022, I came across a post about a tool Microsoft was quietly working on. The idea sounded almost unbelievable at the time.

It was described as an AI assistant trained on millions of lines of code from GitHub repositories and discussions on Stack Overflow. The promise was simple: a tool that could help developers write code faster and more efficiently.

That tool eventually became GitHub Copilot.

The moment I read about it, I knew something important was happening. I couldn't fully explain why yet, but it felt like the early days of a major shift in how software would be built.

Since then, I've been completely immersed in AI tools, experimenting with everything from productivity assistants to AI coding environments, because I genuinely believed these systems would have a major impact on my future as an engineer.

At first, though, the reality didn't quite match the promise.


The Era of Smart Autocomplete

The earliest AI coding tools felt like supercharged autocomplete.

They could suggest the next line of code, generate small functions, or fill in boilerplate. Occasionally they were impressive. More often, they were questionable.

The generated code was frequently buggy, inefficient, or simply wrong. You couldn't trust it enough to integrate into a real workflow without carefully reviewing every line.

Many developers who tried these tools early came away with the same impression:

Interesting idea… but not ready yet.

At this stage, AI felt less like a collaborator and more like a novelty.


The Rise of Chatbot Coding Assistants

Then things started to change.

With the introduction of conversational AI tools like ChatGPT, Claude, and Gemini, developers could finally interact with AI in a different way.

Instead of accepting single-line suggestions, you could now have full conversations about the code.

You could ask questions about unfamiliar code, debug issues, generate components or functions, and get explanations of complex concepts.

Coding tools had evolved from autocomplete engines into interactive coding partners.

But they still had limitations.

These models often lacked awareness of the larger codebase. They could generate impressive snippets, but they didn't fully understand the architecture of the system they were working within.

And while they could write code in nearly any language, they still made mistakes that required significant refactoring.

It was powerful, but still incomplete.


The Emergence of AI-Native IDEs

The real transformation began with AI-first development environments.

Tools like Cursor changed the experience entirely.

For the first time, the model wasn't operating in isolation. It had direct access to the entire codebase.

That single change unlocked something important.

Developers could now ask questions about architecture, understand unfamiliar repositories quickly, generate features that followed existing patterns, and enforce rules and conventions automatically.

Instead of working around the codebase, the AI could finally work inside it.

And that changed the workflow entirely.


The Moment It Clicked

Last year I attended a talk with a Director of Engineering from Netflix.

In addition to his full-time role and speaking engagements, he was also building a video game on the side through his own game studio. That alone caught my attention. I kept wondering how someone with that many responsibilities could still find the time to build something as complex as a game.

At one point he began walking through his development workflow.

He talked about how he used AI tools every day, not just occasionally, but as a core part of how he worked. Planning features, generating code, debugging problems, exploring ideas. AI wasn't something he used when he got stuck. It was integrated into the entire process.

He mentioned something that made me pause.

He said he subscribed to Claude's $200 per month plan.

At the time, I couldn't wrap my head around it. Two hundred dollars a month for an AI tool felt excessive. Why would anyone pay that much?

But as he continued explaining how he worked, something started to click.

This wasn't someone casually experimenting with AI. He was using it at scale. The tools were helping him move faster, explore ideas quicker, and ship things that would have taken dramatically longer on his own.

That was the first time I realized something important.

The people getting the most out of these tools weren't treating them like toys.

They were treating them like infrastructure.

And more importantly, the tools themselves were evolving quickly enough that the experience many developers had in 2022 no longer reflected what was possible in 2026.


The Opus Moment

In my opinion, the release of Claude Opus was a turning point.

Earlier models were good at writing functions, fixing small bugs, or explaining code. But they struggled when tasks became large or required coordination across multiple files.

Opus felt different.

For the first time, a model could handle long, complex, multi-step engineering tasks with a level of competence that felt almost unsettling.

It could break down complex problems, plan solutions, generate code across multiple files, iterate on its own output, and call tools and run commands.

The quality of the output crossed a threshold.

Before this generation of models, AI coding felt like a helpful assistant.

After it, it started to feel like a real engineering partner.


The Age of Agentic Development

Today, AI systems are evolving beyond assistants into coding agents.

Modern AI tools can plan development tasks, navigate entire repositories, generate code across multiple files, run terminal commands, and refactor existing systems.

In some environments, they can even build features from a single high-level prompt.

We're moving into a world where developers don't just write code line by line anymore.

Instead, they direct systems that build software alongside them.




Stop Watching. Start Building.

The best thing you can do right now is get your hands dirty.

Not read more articles about AI. Not scroll X debating whether AI will replace us. Not watch another YouTube breakdown of what's coming. Actually download the tools and start using them on real problems.

Start with Claude Code and OpenAI Codex. Both represent where agentic development is heading. Claude Code runs directly in your terminal and can navigate your codebase, write and edit files, run commands, and work through multi-step tasks with surprisingly little hand-holding. Codex operates similarly inside the OpenAI ecosystem. They're not the same tool, and the differences matter. The only way to understand those differences is to use them.

Take a course on each one. Not because the tools are hard to pick up, but because most developers who struggle with AI are actually struggling with how they communicate with it. There's a real skill in learning how to break a complex problem into pieces and hand it off effectively. That skill doesn't come from reading docs. It comes from reps.

One of the most underrated things you can do is get comfortable with how these tools reason through problems. When you're working on something complex, don't just dump the whole task into a prompt and hope for the best. Watch how the model thinks. Push back when it goes in the wrong direction. Redirect it. The goal is to develop a feel for when to give the agent more autonomy and when to stay close and steer.

Learn how to set up your project's markdown files. Tools like Claude Code look for files like CLAUDE.md in your repository to understand the context of your project before it does anything. That file is where you put the things you'd tell a new engineer on day one: how the project is structured, what conventions the team follows, what to avoid, how to run things locally. The developers getting the most out of agentic tools aren't just prompting well in the moment. They're building environments that set the AI up to succeed before a single task is started.
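To make that concrete, here's a rough sketch of what a starter CLAUDE.md might contain. The project name, commands, and conventions below are hypothetical placeholders, not a canonical template; adapt every line to your own repository:

```markdown
# CLAUDE.md

## Project overview
Acme Store is a TypeScript monorepo: a Next.js storefront in `apps/web`
and a REST API in `apps/api`. Shared code lives in `packages/`.

## How to run things locally
- `pnpm install` — install dependencies
- `pnpm dev` — start web and API together
- `pnpm test` — run the full test suite (run this before finishing any task)

## Conventions
- TypeScript strict mode; no `any` without a comment explaining why.
- New UI components go in `packages/ui` and follow existing file naming.
- All API changes need a matching test in `apps/api/tests`.

## Things to avoid
- Don't edit files in `packages/generated` — they are build artifacts.
- Don't add new dependencies without flagging it in your summary.
```

The point isn't the specific contents; it's that the file answers the questions a new engineer would ask on day one, so the agent doesn't have to guess.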

Treat these early experiments as an investment. You're not just solving the problem in front of you. You're building intuition for a way of working that is going to define the next decade of software development.

The developers who figure this out early are going to have a significant head start.


If You Tried AI in 2022, You Haven't Tried AI

One of the biggest mistakes developers are making right now is assuming they already understand these tools.

Many of them tried AI assistants a few years ago, saw buggy code suggestions, and wrote the entire category off as hype.

But the tools that existed in 2022 are not the tools we have today. Not even close.

In just a few short years we've gone from smart autocomplete, to conversational coding assistants, to AI-native development environments, to fully agentic systems capable of planning and executing engineering tasks.

Each step fundamentally changed what these systems were capable of. And the pace of improvement hasn't slowed down. If anything, it's accelerating.

The developers who embrace these tools are operating with a completely different level of leverage. And the gap between those who do and those who don't is getting wider.

I now have subscriptions to Claude, OpenAI, and Cursor.

Every craft has its tools. A mechanic has their wrenches. A carpenter has their saws. A photographer has their lenses.

Modern developers now have AI.

At this point, having access to tools like Claude or OpenAI is starting to feel less like a luxury and more like a basic part of the job.

The future of software development won't be built by developers competing with AI.

It will be built by developers who know how to work with it.