Guest Contributor

Investigating the Great AI Productivity Divide: Why Are Some Developers 5x Faster?

Posted November 14, 2025

AI-powered developer tools claim to boost your productivity, doing everything from intelligent auto-complete to [fully autonomous feature work](https://openai.com/index/introducing-codex/). 

But the productivity gains users report have been something of a mixed bag. Some groups claim 3-5x (or more) productivity boosts, while other devs claim to get no benefit at all, or even losses of up to 19%.

I had to get to the bottom of these contradictory reports.

As a software engineer, producing code is a significant part of my role. If there are tools that can multiply my output that easily, I have a professional responsibility to look into the matter and learn to use them.

I wanted to know where these gains come from and, more importantly, what separates the high-performing groups from the rest. This article reports on what I found.

The State of AI Developer Tools in 2025

AI dev tooling has achieved significant adoption: 84% of StackOverflow survey respondents in 2025 said they’re using or planning to use AI tools, up from 76% in 2024, and 51% of professional developers use these tools daily.

However, AI dev tooling is a fairly vague category. The space has experienced massive fragmentation. When AI tools first started taking off in the mainstream with the launch of GitHub Copilot in 2021, they were basically confined to enhanced IDE intellisense/autocomplete, and sometimes in-editor chat features. Now, in 2025, the industry is seeing a shift away from IDEs toward CLI-based tools like Claude Code.

Some AI enthusiasts are even suggesting that IDEs are obsolete altogether, or soon will be.

That seems like a bold claim in the face of the data, though.

While adoption may be up, positive sentiment about AI tools is down to 60% from 70% in 2024. A higher portion of developers also actively distrust the accuracy of AI tools (46%) compared to those who trust them (33%).

These stats paint an interesting picture. Developers seem to be adopting these tools, sometimes reluctantly, sometimes enthusiastically at first, and likely in no small part due to aggressive messaging from AI-invested companies, only to find that the tools are perhaps not all they have been hyped up to be.

The tools I’ve mentioned so far are primarily those designed for the production and modification of code. Other AI tool categories cover areas like testing, documentation, debugging, and DevOps/deployment practices. In this article, I’m focusing on code production tools as they relate to developer productivity, whether they be in-IDE copilots or CLI-based agents.

What the Data Says about AI Tools’ Impact on Developer Productivity

Individual developer sentiment is one thing, but surely it can be definitively shown whether or not these tools can live up to their claims?

Unfortunately, developer productivity is difficult to measure at the best of times, and things don’t get any easier when you introduce the wildcard of generative AI. 

Research into how AI tools influence developer productivity has been quite lacking so far, likely in large part because productivity is so difficult to quantify. There have been only a few studies with decent sample sizes, and their methodologies have varied significantly, making it difficult to compare the data on a 1:1 basis.

Nevertheless, there are a few datapoints worth examining.

In determining which studies to include, I tried to find two to four studies for each side of the divide that represented a decent spread of developers with varying levels of experience, working in different kinds of codebases, and using different AI tools. This diversity makes it harder to compare the findings, but homogenous studies would not produce meaningful results, as real-world developers and their codebases vary wildly.

Data that Shows AI Increases Developer Productivity

In the “AI makes us faster” corner, studies like this one indicate that “across three experiments and 4,867 developers, [their] analysis reveals a 26.08% increase (SE: 10.3%) in completed tasks among developers using the AI tool. Notably, less experienced developers had higher adoption rates and greater productivity gains.”

This last point—that less experienced devs have greater productivity gains—is worth remembering; we’ll come back to it.

In a controlled study by GitHub, developers who used GitHub Copilot completed tasks 55% faster than those who did not. This study also found that 90% of developers found their job more fulfilling with Copilot, and 95% said they enjoyed coding more when using it. While it may not seem like fulfillment and enjoyment are directly tied to productivity, there is evidence that suggests they’re contributing factors.

I couldn’t help but notice that the most robust studies finding AI improves developer productivity are tied to companies that produce AI developer tools. The first study mentioned above has authors from Microsoft, an investor in OpenAI, and funding from the MIT Generative AI Impact Consortium, whose founding members include OpenAI. The other study was conducted by GitHub, a subsidiary of Microsoft and creator of Copilot, a leading AI developer tool. While this doesn’t invalidate the research or the findings, it is worth noting.

Data that Shows AI Tools Do Not Increase Productivity

On the other side of the house, studies have also found little to no gains from AI tooling. 

Perhaps most infamous among these is the METR study from July 2025. Even though developers who participated in the study predicted that AI tools would make them 24% faster, the tools actually made them 19% slower when completing assigned tasks.

A noteworthy aspect of this study was that the developers were all working in fairly complex codebases that they were highly familiar with.

Another study by Uplevel points in a similar direction. Surveying 800 developers, they found no significant productivity gains in objective measurements, such as cycle time or PR throughput. In fact, they found that developers who used Copilot introduced 41% more bugs, suggesting a negative impact on code quality, even if there wasn’t an impact on throughput.

What’s Going On?

How can it be that the studies found such wildly different results?

I must acknowledge again: productivity is hard to measure, and generative AI is notoriously non-deterministic. What works well for one developer might not work for another developer in a different codebase.

However, I do believe some patterns emerge from these seemingly contradictory findings.


Firstly, AI does deliver short-term productivity and satisfaction gains, particularly for less experienced developers and in well-scoped tasks. However, AI can introduce quality risks and slow teams down when the work is complex, the systems are unfamiliar, or developers become over-reliant on the tool.

Remember the finding that less experienced developers had higher adoption rates and greater productivity gains? While it might seem like a good thing at first, it also hides a potential problem: by relying on AI tools, you run the risk of stunting your own growth. You also learn your codebase more slowly, which keeps you reliant on AI. We can even take it a step further: do less experienced developers merely think they are being more productive, while actually lacking the familiarity with the code needed to understand the impact of the changes being made?

Will these risks materialize? Who knows. But if I were a less experienced developer, I would at least want to know about them.

My Conclusions

My biggest conclusion from this research is that developers shouldn’t expect anything on the order of 3-5x productivity gains. Even if you manage to produce 3-5x as much code with AI as you would manually, the code might not be up to a reasonable standard, and the only way to know for sure is to review it thoroughly, which takes time.

Research findings suggest a more reasonable expectation is that you can increase your productivity by around 20%.


If you’re a less experienced developer, you’ll likely gain more raw output from AI tools, but this might come at the cost of your growth and independence.

My advice to junior developers in this age of AI tools is probably nothing you haven’t heard before: learn how to make effective use of AI tools, but don’t assume that it makes traditional learning and understanding obsolete. Your ability to get value from these tools depends on knowing the language, the systems, and the context first. AI makes plenty of mistakes, and if you hand it the wheel, it can generate broken code and technical debt faster than you ever could on your own. Use it as a tutor, a guide, and a way to accelerate learning. Let it bridge gaps, but aim to surpass it.

If you’re already an experienced developer, you almost certainly know more about your codebase than the AI does. So while it might type faster than you, you won’t get as much raw output from it, purely because you can probably make changes with more focused intent and specificity than it can. Of course, your mileage may vary, but AI tools will often try to do the first thing they think of, rather than the best or most efficient thing.

That is not to say you shouldn’t use AI. But you shouldn’t see it as a magic wand that will instantly 5x your productivity.

Like any tool, AI assistants take practice to use effectively. This involves prompt crafting, reviewing outputs, and refining subsequent inputs, something I’ve written about in another post. Once you get this workflow down, AI tools can save you significant time on code implementation while you focus on understanding exactly what needs to be done.

If AI tooling is truly a paradigm shift, it stands to reason that you would need to change your ways of working to get the most from it. You cannot expect to inject AI into your current workflow and reap the benefits without significant changes to how you operate.

For me, the lesson is clear: productivity gains don’t come from the tools alone; they come from the people who use them and the processes they follow. I’ve seen enough variation across developers and codebases to know this isn’t just theory, and the findings from these studies say the same thing: same tools, different outcomes.

The difference is always the developer.
