AI Makes You Feel Productive But You're Not
AI Productivity · Knowledge Work · Psychology · Performance · Future of Work


2026-04-11 · 9 min read · Prince Kumar

There is a specific feeling that AI tools produce in the people using them. Code flows faster. Documents get drafted in seconds. Research questions get answered without switching contexts. The friction of work decreases noticeably. People using AI tools consistently report higher satisfaction, a stronger sense of flow, and the subjective experience of being more productive. McKinsey found that developers using AI tools are twice as likely to report feeling happy and experiencing flow states. This feeling is real. The question worth asking, and the one most AI productivity research avoids asking directly, is whether the feeling corresponds to a real increase in the output that matters. In many cases, the honest answer is: not proportionally. The experience of productivity and the fact of productivity are two different things, and AI tools have proven significantly better at improving the first than the second.

McKinsey found developers using AI tools are twice as likely to report feeling in 'flow.' The same study found no proportional increase in organisational delivery velocity. The feeling of productivity and the fact of productivity are diverging, and the gap between them is where most AI investment in knowledge work is currently disappearing.

Why AI Produces the Feeling of Productivity

The psychological experience of productivity is closely tied to the feeling of forward movement: doing things, completing units of work, generating outputs. AI tools are exceptionally good at creating this feeling because they accelerate every stage that produces a visible output. Code gets written faster. Documents get drafted faster. Emails get composed faster. The stream of outputs is more continuous, the waiting time between tasks is shorter, and the person using the tools experiences a sustained sense of doing things rather than the cognitive-load pauses that characterise the same work without AI assistance.

What AI does not accelerate is the hidden work: the thinking that happens before the output, the judgment that evaluates the output's quality, the context-building that makes the next decision better than the last. These activities produce no visible output in the moment they happen. They feel like not being productive. AI tools reduce the time spent on visible output generation and increase the proportion of time spent on invisible judgment work, which makes the experience feel more productive even when the total business value generated per hour has not changed.

The Productivity Paradox in the Data

The Faros AI productivity research found that developers on high AI-adoption teams complete 21% more tasks and merge 98% more pull requests per day, while PR review time has increased 91%. More tasks completed is the productivity metric that feels good. PR review time increasing 91% is the cost that the organisation absorbs while the individual feels more productive. The individual's experience is positive. The organisational outcome is not proportionally positive.

The same paradox appears in the MIT GenAI Divide survey, which found that while individual developers reported significant productivity improvements, organisations implementing AI coding tools showed no proportional improvement in delivery velocity at the team or product level. Individual productivity improved. Organisational throughput did not scale proportionally. The gap between the two is Amdahl's Law applied to knowledge work: accelerating one stage of a multi-stage value delivery process does not increase the speed of the full process if other stages remain at the same capacity.
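The Amdahl's Law framing can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: the stage fraction and speedup figures are assumptions, not numbers drawn from the studies cited above.

```python
def amdahl_speedup(accelerated_fraction: float, stage_speedup: float) -> float:
    """Overall process speedup when only one stage of a process is accelerated.

    accelerated_fraction: share of total cycle time spent in the accelerated stage.
    stage_speedup: how much faster that single stage becomes.
    """
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / stage_speedup)

# Hypothetical team: writing code is 40% of the delivery cycle, and AI doubles its speed.
print(round(amdahl_speedup(0.4, 2.0), 2))   # 1.25 — a 25% overall gain, not 100%

# Even an infinitely fast coding stage cannot beat the ceiling set by the other 60%.
print(round(amdahl_speedup(0.4, 1e9), 2))   # ~1.67
```

Doubling the speed of one stage buys far less than a doubling of throughput, which is why large individual-level gains can fail to show up proportionally in delivery velocity.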

The Busyness That Replaces the Work

AI tools introduce a specific category of work that feels productive but does not directly create value: AI management work. Prompting, re-prompting, reviewing outputs, correcting errors, verifying facts, formatting AI-generated content for actual use: all of this is real work that takes real time, and all of it produces the subjective experience of being busy and engaged with a task. The person doing it is not idle. They are actively working. But they are working on managing the AI's output rather than on the underlying business problem the AI was supposed to help solve.

This substitution, genuine work replaced by AI management work, is most visible in research and analysis tasks. A knowledge worker asked to produce a competitive analysis using AI tools will spend time generating an AI-produced first draft, reviewing it for accuracy, identifying the claims that need verification, going to primary sources to verify them, correcting the AI's errors, and reformatting the output. They will feel productive throughout this process. The question is whether the final analysis is better than the analysis they would have produced by spending the same total time reading primary sources directly. In many cases, it is not.

The Metrics That Reveal the Gap

The most reliable way to detect the gap between felt productivity and actual productivity is to measure outcomes at the level that matters for the business, not at the level where AI tools are most visible. For a software engineering team, the business-level metric is not pull requests merged; it is features shipped to customers that generate user value. For a marketing team, the business-level metric is not content pieces published; it is revenue attributed to that content. For a customer service team, the business-level metric is not tickets closed; it is customer satisfaction and resolution quality on the tickets that required genuine problem-solving.

Organisations that measure at the business-outcome level consistently find that the relationship between individual AI-productivity improvements and business-outcome improvements is weaker than the individual-level metrics suggest. The feature velocity of teams with high AI adoption is not proportionally higher than that of teams without. The revenue per content piece is not proportionally higher for AI-assisted content teams. The customer satisfaction scores for AI-handled service interactions are not proportionally higher. The disconnect is not evidence that AI is not working at all; it is evidence that AI is improving the individual experience of work faster than it is improving the organisational systems within which that work generates value.

How to Measure Actual vs. Felt Productivity

  • Define the business-level outcome metric before deploying AI tools: not the activity metric that AI tools directly affect (lines of code, emails sent, documents generated), but the outcome metric that the activity is supposed to produce (features shipped, revenue influenced, problems resolved)
  • Capture the baseline before deployment: the current level of the outcome metric, measured consistently over at least four to six weeks, before any AI tools are introduced into the workflow
  • Measure the outcome metric with the same rigour after deployment as before: if the baseline was measured weekly, the post-deployment measurement must also be weekly, not a one-time snapshot
  • Track the ratio of AI management time to direct value-creation time: how much of the productivity gain from AI generation is being consumed by the verification, correction, and re-prompting work that AI generation requires, and whether that consumption is decreasing as the team becomes more proficient
  • Distinguish between felt productivity and business productivity in performance conversations: team members who report being more productive with AI tools deserve to have that experience validated, while also being supported to understand whether their increased activity is translating into the business outcomes that determine organisational performance
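The comparison at the heart of this checklist can be reduced to activity lift versus outcome lift. This is a minimal sketch with invented numbers; the metric names and figures are hypothetical, not drawn from any study.

```python
# Hypothetical weekly averages for one team, before and after AI tool deployment.
baseline = {"prs_merged": 40.0, "features_shipped": 3.0}
post_ai  = {"prs_merged": 80.0, "features_shipped": 3.3}

def lift(metric: str) -> float:
    """Relative change in a metric from baseline to post-deployment."""
    return (post_ai[metric] - baseline[metric]) / baseline[metric]

activity_lift = lift("prs_merged")        # the number the tools directly inflate
outcome_lift  = lift("features_shipped")  # the number the business actually gets

print(f"activity +{activity_lift:.0%}, outcome +{outcome_lift:.0%}")
# activity +100%, outcome +10%
```

If the activity lift is large while the outcome lift is nearly flat, the spread between the two numbers is the felt-versus-actual divergence this article describes.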

When the Feeling and the Fact Align

The feeling of AI-enhanced productivity and the fact of AI-enhanced productivity converge under specific conditions. When the task is well-defined and the output has a clear quality criterion that the person using the AI can evaluate quickly (generating SQL for a known query pattern, translating a document for internal use, producing boilerplate code for a familiar pattern), AI tools deliver genuine, measurable productivity improvement that corresponds to the felt experience. The verification step is fast because the success criterion is clear. The AI management work is minimal. The output is usable with light editing.

The conditions that produce this alignment (a well-defined task, a clear quality criterion, fast verification, a familiar domain) are also the conditions that make work feel effortless for experienced practitioners without AI tools. AI tools are most valuable not when they remove difficulty from difficult work, but when they remove tedium from work that was tedious only because it involved a lot of pattern-following on well-understood patterns. Recognising this, and investing AI productivity efforts in the tedium-removal category rather than the difficult-work-acceleration category, is the adjustment that closes the gap between the feeling of AI productivity and its measurable reality.