
Why AI Made Me 19% Slower (And Why That's Actually Fine)

A rigorous study found experienced developers were 19% slower with AI assistance—while believing they were 20% faster. Here's what that means for your workflow.

Everyone's claiming AI coding tools make them 10x faster. A rigorous study of 16 experienced developers found the opposite: they were 19% slower with AI assistance—while believing they were 20% faster.

Let that sink in.

The developers weren't just a little off in their self-assessment. They were directionally wrong. They thought AI was helping. The data said otherwise.

When I first read the METR study results in July 2025, I felt attacked. I'd been telling anyone who'd listen that Claude and Cursor had transformed my workflow. Was I lying to myself too?

After weeks of tracking my own work, here's what I discovered: the study is right, I was wrong about speed, and none of that matters as much as you'd think.

The Study That Broke My Brain

The METR research team did something most AI productivity claims don't: they actually measured.

They recruited 16 experienced open-source developers—people with years of contributions to real codebases. Not bootcamp grads. Not people learning to code. Veterans.

Each developer worked on tasks in repositories they already knew well. Half the time they used Cursor Pro with Claude 3.5 Sonnet. Half the time they went unassisted. The researchers tracked everything: screen recordings, timestamps, and the developers' own forecasts of how long each task would take.

The results:

With AI assistance: 19% slower on average

Self-reported perception: "I feel about 20% faster"

The gap between reality and perception: 39 percentage points, from 19% slower in fact to 20% faster in their heads.

This wasn't a fluke. The confidence interval was tight enough that researchers could say with statistical significance: AI tools made these experienced developers slower.

Fortune ran the headline. MIT Technology Review picked it up. The developer community had a collective identity crisis.

But here's what most people missed in the discourse that followed.

The Part Everyone Skipped

Buried in the methodology was a crucial detail: these were experienced developers working on familiar codebases.

Other research tells a different story for different populations:

Junior developers using AI tools showed a 26% productivity gain in comparative studies

Google's DORA 2025 report found that AI adoption is now correlated with higher team throughput—reversing the negative correlation they found in 2024

GitHub's internal data shows Copilot users complete tasks faster on average across their entire user base

So what's going on? Is AI helpful or not?

The answer is annoyingly nuanced: it depends on who you are and what you're doing.

Why Experts Slow Down

I spent two weeks after reading the METR study tracking my own AI usage patterns. Here's what I noticed about when AI actually slowed me down:

1. The Review Tax

Every AI suggestion requires evaluation. When I'm working in code I know well, I can type the solution faster than I can read, assess, and modify Claude's suggestion.

It's like having a well-meaning colleague constantly interrupting to offer help. Sometimes the interruption costs more than doing it yourself.

2. The Context-Switching Cost

AI tools break flow. I'm thinking about architecture, and suddenly I'm reading a suggestion about implementation details. The mental gear-shift is expensive.

For experienced developers, flow state is where the real productivity lives. Anything that fragments attention has a cost, even the "helpful" interruptions.

3. The Trust Calibration Problem

With AI, I'm constantly asking: "Is this suggestion good enough to use, or will it create technical debt?"

That evaluation requires expertise. Ironically, the more you know, the more you realize AI gets wrong—and the more time you spend checking its work.

Junior developers don't have this problem. They can't always spot the issues, so they accept suggestions faster. This makes them measurably quicker (and sometimes creates problems later, but that's a different article).

4. The Boilerplate Paradox

AI excels at boilerplate. But experienced developers have already automated their boilerplate. We have snippets, templates, and muscle memory for the repetitive stuff.

AI is solving a problem we've already solved—and solving it slower than our existing solutions.
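
To make "a problem we've already solved" concrete, here's the sort of thing I mean: a stripped-down, hypothetical test-scaffolding script. The file layout, template, and CLI are illustrative assumptions, not anything from the study.

```python
# new_test.py: stamp out a test-file skeleton from a string template.
# A hypothetical, stripped-down example of the snippets and templates
# experienced developers already keep around for boilerplate.
import sys
from pathlib import Path
from string import Template

TEST_TEMPLATE = Template('''\
"""Tests for ${module}."""
import pytest

from ${module} import ${name}


class Test${class_name}:
    def test_happy_path(self):
        raise NotImplementedError

    def test_edge_cases(self):
        raise NotImplementedError
''')


def scaffold_test(module: str, name: str) -> Path:
    """Write tests/test_<module>.py from the template and return its path."""
    out = Path("tests") / f"test_{module.split('.')[-1]}.py"
    out.parent.mkdir(exist_ok=True)
    out.write_text(TEST_TEMPLATE.substitute(
        module=module,
        name=name,
        class_name=name.title().replace("_", ""),
    ))
    return out


if __name__ == "__main__":
    # Usage: python new_test.py mypkg.parser parse_line
    print(scaffold_test(sys.argv[1], sys.argv[2]))
```

One shell alias and the skeleton exists, with nothing to read, assess, or modify.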

Why I Still Use AI Every Day

Here's where the METR study analysis falls short: speed isn't the only metric that matters.

After my two-week tracking experiment, I noticed something the study couldn't measure. On days I used AI heavily, I ended work feeling less depleted.

Same output. Same hours. But different energy expenditure.
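
If you want to run the same experiment on yourself, the tooling can be embarrassingly simple: one row per task, appended to a CSV. Here's a minimal sketch; the field names and the one-to-five drain scale are illustrative choices, not anything the METR researchers used.

```python
# log_task.py: append one row per completed task to a CSV log.
# Field names and the 1-5 "drain" scale are illustrative, not from the study.
import csv
import sys
from datetime import datetime
from pathlib import Path

LOG = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "task", "used_ai", "minutes", "drain_1_to_5"]


def log_task(task: str, used_ai: bool, minutes: int, drain: int) -> None:
    """Record what the task was, whether AI was involved, how long it
    took, and how depleted I felt afterwards (1 = fresh, 5 = drained)."""
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "task": task,
            "used_ai": used_ai,
            "minutes": minutes,
            "drain_1_to_5": drain,
        })


if __name__ == "__main__":
    # e.g. python log_task.py "migrate auth tests" yes 45 2
    task, used_ai, minutes, drain = sys.argv[1:5]
    log_task(task, used_ai.lower() in ("y", "yes", "true"), int(minutes), int(drain))
```

Two weeks of rows was enough for the pattern to show up: minutes roughly flat, drain noticeably lower on AI-heavy days.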

This is the real value proposition for experienced developers, and it has nothing to do with velocity.

Mental Energy Preservation

Some tasks are cognitively expensive not because they're hard, but because they're tedious. Writing test boilerplate. Remembering API syntax for a library I use twice a year. Formatting data transformations.

I can do these things faster than AI. But they cost willpower. They drain the tank.

When AI handles the tedium, I arrive at the hard problems with more capacity. The architecture decisions. The debugging rabbit holes. The places where my experience actually matters.

Speed stayed flat. Sustainability improved.

Exploration Without Commitment

Working in an unfamiliar codebase? AI is genuinely helpful.

Not because it knows the code—it doesn't. But because it can generate hypotheses for me to evaluate. "Try looking at this file." "This function might be related." "Here's a possible pattern this codebase uses."

I'm not accepting the suggestions. I'm using them as a starting point for my own exploration. Like having a slightly confused intern who at least reads faster than I do.

The Rubber Duck That Talks Back

Sometimes I use AI as a sounding board. I describe what I'm trying to do, it reflects back an interpretation, and I notice where my thinking was muddy.

The output is usually wrong. But the process of reading wrong output clarifies my own thinking faster than staring at a blank screen.

I'm not using AI to code. I'm using it to think.

The Framework I Actually Use Now

After all this analysis, I've settled on a simple heuristic for when to reach for AI assistance:

Ask yourself: "Would I have spent mental energy on this anyway?"

If yes → Let AI take the first pass. Even if it's slower, you're preserving cognitive resources for harder problems.

If no → Skip the AI. Your muscle memory and existing workflows are faster.
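
For those who like their heuristics executable, here's the same decision as a function. The boolean framing and the tiebreaker are my own distillation of the table that follows, not anything from the study.

```python
def first_pass_with_ai(costs_energy_anyway: bool, have_faster_workflow: bool) -> bool:
    """The heuristic above, as code: would I have spent mental energy on
    this anyway? One tiebreaker from the table below: an existing template
    or muscle-memory workflow beats AI even on energy-draining tasks."""
    return costs_energy_anyway and not have_faster_workflow


# A few rows from the table, expressed as calls:
assert first_pass_with_ai(True, False) is True    # boilerplate I lack templates for
assert first_pass_with_ai(True, True) is False    # boilerplate I have templates for
assert first_pass_with_ai(False, True) is False   # a function I've written 100 times
```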

Here's how that plays out in practice:

| Task | AI Useful? | Why |
| --- | --- | --- |
| Writing a function I've written 100 times | No | Muscle memory is faster |
| Exploring a new API I've never used | Yes | Hypothesis generation helps |
| Debugging in a familiar codebase | No | I know where to look |
| Debugging in an unfamiliar codebase | Yes | AI can suggest starting points |
| Writing tests for straightforward code | Maybe | Depends on cognitive budget |
| Architecting a new system | No | AI lacks the context that matters |
| Writing boilerplate I have templates for | No | My templates are faster |
| Writing boilerplate I don't have templates for | Yes | Worth the AI review tax |

The goal isn't maximum speed on any single task. It's maximum sustainable output across a day, week, and career.

What This Means for Your Career

The METR study terrified some developers and vindicated others. Both reactions miss the point.

AI tools are not going to replace experienced developers. If anything, the study suggests the opposite: we're slower with them, not faster. Our expertise matters more than autocomplete.

But AI tools are also not useless. They change the economics of mental energy in ways the productivity metrics don't capture.

Here's what actually worries me: junior developers who never learn to work without AI.

If you gain 26% productivity from AI assistance, but you never develop the expertise to work without it, what happens when:

The AI suggests something subtly wrong?

You need to debug AI-generated code?

You're working somewhere AI tools aren't allowed?

The paradigm shifts and AI doesn't know the new patterns yet?

The developers in the METR study were slower with AI because they had the expertise to know when AI was wrong. That expertise came from years of working without AI.

The 19% slowdown is a feature, not a bug. It's the cost of having standards.

The Uncomfortable Truth

I'm slower with AI tools. The study says so, and my own tracking confirms it.

I'm also happier, less burned out, and producing work I'm prouder of.

The discourse around AI productivity has been poisoned by people trying to sell tools and people trying to dismiss them. The reality is messier: AI is a cognitive tool, not a speed multiplier. It changes how you work more than how fast you work.

For experienced developers, that means accepting a truth that's hard to monetize: sometimes the best tools make you slower at the things that don't matter, so you can be better at the things that do.

That's the trade I'm making. Nineteen percent slower, and totally fine with it.

The METR study I referenced is "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." Google's DORA 2025 report and GitHub's Copilot data tell the other side of the story. All three are worth reading if you want the full picture.

What's your experience? Have AI tools made you faster, slower, or something harder to measure? Reply and let me know—I'm genuinely curious whether others see the same paradox.

If this resonated, I write weekly about the intersection of AI tools and real-world development practice. No hype, no doom—just what actually works. Subscribe to get the next one in your inbox.
