AI Was Supposed to Save Time. It's Doing the Opposite
AI Productivity · Knowledge Work · Time Management · Future of Work

2026-04-11 · 9 min read · Prince Kumar

The productivity promise of AI is simple and compelling: AI handles the repetitive, time-consuming parts of knowledge work so that humans can focus on the parts that require judgment and creativity. In practice, something different is happening. The repetitive work is being accelerated, but it is generating a new category of work that did not exist before: AI oversight, verification, correction, and consequence management. Microsoft's Copilot users report saving 30 minutes per week on email. They do not report how long they spend editing Copilot's email drafts, correcting the facts it hallucinated, and undoing the tone choices it made that were subtly wrong for the specific relationship context of the message. That verification work is invisible to the productivity measurement. It is not invisible to the person doing it.

Knowledge workers using AI tools report saving 30 minutes per week on email. They also report spending 2–3 hours per week verifying AI outputs, correcting AI errors, re-prompting for better results, and managing the downstream consequences of AI-generated content that was wrong. The net is negative.
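That arithmetic is worth making explicit. A minimal sketch in Python, using only the figures cited above (the 2–3 hour verification range is taken at both ends; these are the survey's illustrative numbers, not a measurement):

```python
# Net weekly time impact, in minutes, using the figures cited above.
time_saved = 30                          # reported saving on email drafting
overhead_low, overhead_high = 120, 180   # 2-3 hours verifying, correcting, re-prompting

net_best = time_saved - overhead_low     # best case for the AI tool
net_worst = time_saved - overhead_high   # worst case

print(net_best, net_worst)  # -90 -150: negative at both ends of the range
```

Even under the most generous reading, the reported verification overhead is four times the reported saving.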

The Verification Tax

Every AI-generated output requires a verification step that human-generated output does not. When a person writes an analysis, they know what they checked and what they assumed. When AI writes an analysis, the reader does not know which claims were retrieved from accurate sources, which were plausible-sounding fabrications, and which were accurate but misapplied to the wrong context. Trusting AI output without verification is the professional equivalent of signing a document without reading it. Most professionals have learned this through experience, which means they now read everything AI generates before using it, adding a review step to a process that was supposed to eliminate review steps.

The DORA 2025 report found that the majority of developers use AI primarily for autocomplete features and that advanced capabilities like context-aware review or agentic task execution remain largely untapped. The reason is not lack of awareness. It is calibrated distrust. Developers who have experienced AI-generated code that looked correct and behaved incorrectly in edge cases have learned to review carefully. That careful review takes time. The net time cost of AI assistance in many workflows is time saved in generation minus time spent in verification: a smaller saving than the headline figure suggests, and sometimes a net cost.

The Re-Prompting Cycle

Generating a useful output from an AI system is rarely a single-prompt operation in practice. It is an iterative cycle: generate, evaluate, identify what is wrong, refine the prompt, generate again, evaluate again, refine again. For tasks where the quality bar is high (a client-facing document, a technical specification, a strategic analysis), this cycle can run five to ten iterations before the output is usable. Each iteration requires the human to evaluate the AI's output against their own mental model of what good looks like, identify the gap, translate that identification into a revised prompt, and wait for the next generation.

This re-prompting work is intellectually demanding in a way that is different from the work it was intended to replace. Writing a document from scratch requires sustained creative effort. Iteratively debugging an AI's document generation requires sustained critical evaluation: identifying subtle errors, misalignments, and missed context in content that looks superficially plausible. Many knowledge workers report that the re-prompting cycle is more cognitively demanding than writing the original document would have been, while also feeling less satisfying, because it positions them as editors of someone else's mediocre draft rather than authors of their own work.
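The cost structure of the generate–evaluate–refine cycle can be sketched as a toy model. The per-step timings below are hypothetical assumptions, not measurements; the point is that total cost scales linearly with iteration count, and critical evaluation, not generation, dominates each pass:

```python
def reprompt_cost(iterations, generate=1, evaluate=8, refine=3):
    """Total minutes across the generate -> evaluate -> refine cycle.

    Per-step timings are assumed for illustration: generation is near-instant
    for the human, while evaluating plausible-looking output is the slow step.
    """
    return iterations * (generate + evaluate + refine)

# Five to ten iterations, the range cited above for high-quality-bar tasks:
low, high = reprompt_cost(5), reprompt_cost(10)
print(low, high)  # 60 120 (minutes) under these assumed timings
```

Under these assumptions, two-thirds of the time in every iteration goes to evaluation, which is exactly the work AI was supposed to free people from.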

The Meeting Replacement That Added Meetings

AI meeting summary tools were positioned as a direct time saving: record the meeting, get the summary, skip the notes. In practice, every organisation that has deployed AI meeting summaries at scale has discovered that the summaries require review and correction before they can be relied upon for decisions. Action items are missed or misattributed. Nuanced conclusions are flattened. Context that was implicit in the room but never stated explicitly is absent from the transcript-based summary. The review and correction process requires someone to have paid attention to the meeting well enough to know what was wrong with the summary, which means that person attended the meeting, reviewed the AI summary, corrected it, and distributed it. That is more work than taking notes during the meeting would have been.

A separate dynamic has emerged in organisations with mature AI meeting summary deployments: meeting count is increasing. When meetings generate automatic summaries that capture decisions and action items, the friction of calling a meeting decreases, because you can always catch people up via the summary. The result is more meetings, not fewer, each generating AI summaries that require review. The tool intended to reduce meeting overhead has instead reduced the cost of scheduling meetings, which has increased meeting volume, which has increased the total administrative overhead of meeting management.

The Hidden Time Costs Nobody Measures

| AI Activity | Reported Time Saving | Unreported Time Cost | Net Reality |
| --- | --- | --- | --- |
| Email drafting (Copilot) | 30 min/week saved | 15–20 min/week reviewing, editing, correcting tone | 10–15 min/week net saving, if that |
| Meeting summaries | Meeting note-taking eliminated | Summary review and correction, plus increased meeting volume | Often net negative in organisations with high meeting frequency |
| Code generation | 30–60% of writing time saved | Reviewing rejected suggestions, debugging AI errors, managing edge cases | Net saving real for seniors, marginal or negative for juniors |
| Document drafting | First draft generated instantly | 3–7 iteration cycles to reach usable quality | Time saving if task is well-defined; time cost if nuance is required |
| AI research / summarisation | Hours of reading condensed | Fact-checking hallucinated citations, re-reading sources AI misrepresented | High variance: accurate when sources are reliable, costly when they are not |

When AI Actually Saves Time

The cases where AI generates genuine, unambiguous time savings share a consistent profile: the task is well-defined, the required output format is standardised, the quality bar is verifiable against a clear criterion, and the cost of error is low enough that imperfect output can be used without extensive verification. Automated status report generation from structured project data is genuinely faster with AI. Generating code boilerplate for a known pattern is genuinely faster. Producing the first draft of a meeting agenda for a recurring meeting type is genuinely faster. Translating a document between languages for internal use, where subtle mistranslations will be caught by the native-speaker recipient, is genuinely faster.

The tasks where AI makes things slower are the ones that require contextual judgment, relationship awareness, precision under uncertainty, or the kind of tacit knowledge that the person doing the task has accumulated over years and cannot fully articulate in a prompt. These tasks happen to be the ones that consume the most time in most knowledge worker roles, and the ones that AI optimists point to as the biggest opportunity. The honest answer is that AI is saving time on the tasks that were already fastest, and not yet reliably saving time on the tasks that are slowest. That is why the aggregate time savings figures are smaller than expected.