Why AI Adoption Is Not Delivering Expected ROI
AI ROI · Enterprise AI · Digital Transformation · AI Strategy · Business

13-04-2026 · 10 min read · Nirmal Nambiar

The organisations investing most heavily in AI and the organisations generating the strongest AI ROI are not the same organisations. Microsoft's own market research found AI investments returning an average of 3.5x, with 5% of companies reporting returns as high as 8x. That same research, read carefully, reveals that the 5% figure is not a claim about the typical enterprise AI experience. It is a claim about the tail of the distribution. The median enterprise AI deployment is not returning 3.5x on its investment. It is sitting in the pilot stage, generating reports that nobody acts on, or running at a fraction of its designed capacity because the data quality, governance, and change management investments that would make it work were never made.

The question worth asking is not whether AI can deliver ROI: it can, and in specific contexts it already is. The question is why the gap between the demonstrated capability and the typical enterprise deployment outcome is so large, and what specifically needs to be true for an organisation to be in the 5% rather than the 95%.

88% of senior executives have approved larger AI budgets for 2026. Only 17% of organisations report seeing 5% or more of their EBIT attributable to AI. The gap between investment and return is not a capability problem. It is a deployment problem, and it follows a consistent pattern across industries.

The Five Reasons ROI Is Not Materialising

The problem was defined after the tool was purchased

Every ROI case requires a specific problem with a measurable baseline cost, a solution that addresses that specific problem, and a measurement framework that captures the improvement. Most enterprise AI investments are made in the opposite order: the tool is purchased based on general capability demonstrations, then the organisation attempts to identify problems it can solve with the tool, then it deploys without a baseline measurement, and then it attempts to evaluate ROI against no prior reference point. The result is an AI deployment that does useful things but cannot prove their value, because nobody measured what those things were worth before the AI started doing them.
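To make the ordering concrete, here is a minimal sketch of the calculation an ROI case depends on. All figures, names, and the rework-cost assumption are hypothetical, not drawn from any specific deployment; the point is the dependency, not the numbers.

```python
from dataclasses import dataclass

@dataclass
class ProcessSnapshot:
    """One measurement of what a process costs. Capture one BEFORE deployment."""
    cost_per_unit: float   # e.g. fully loaded cost of reviewing one document
    units_per_month: int
    error_rate: float      # fraction of units that need rework

def monthly_cost(s: ProcessSnapshot, rework_multiplier: float = 2.0) -> float:
    # Assumption: a reworked unit costs a multiple of a clean pass.
    clean = s.cost_per_unit * s.units_per_month
    rework = clean * s.error_rate * (rework_multiplier - 1)
    return clean + rework

def monthly_roi(before: ProcessSnapshot, after: ProcessSnapshot,
                ai_cost: float) -> float:
    """(net savings - AI spend) / AI spend. Cannot be evaluated without `before`."""
    savings = monthly_cost(before) - monthly_cost(after)
    return (savings - ai_cost) / ai_cost

# Hypothetical figures: 12,000 documents/month at $4.50 each, 6% rework rate.
before = ProcessSnapshot(cost_per_unit=4.50, units_per_month=12_000, error_rate=0.06)
after = ProcessSnapshot(cost_per_unit=2.10, units_per_month=12_000, error_rate=0.04)
print(f"ROI: {monthly_roi(before, after, ai_cost=9_000):.2f}x")  # ROI: 2.45x
```

`monthly_roi` simply has no value if `before` was never measured, which is exactly the position a purchase-first deployment finds itself in at evaluation time.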

The ROI was measured at the wrong level

AI tools improve individual activity metrics reliably. Copilot users send emails faster. GitHub Copilot users write code faster. AI meeting summaries get distributed faster. These improvements are real and measurable at the individual activity level. The business-level metrics that determine whether the organisation is actually better off (customer satisfaction, revenue per employee, delivery velocity, error rate, margin) do not necessarily improve in proportion to the individual activity improvements. The Faros AI research documents this gap precisely: developers completing 21% more tasks and merging 98% more PRs per day, with no proportional improvement in delivery velocity at the product level. Measuring at the individual activity level produces ROI numbers that look strong. Measuring at the business outcome level produces numbers that are harder to defend.
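A worked illustration of the divergence. The 21% and 98% uplifts are the Faros AI figures quoted above; the delivery figures are hypothetical placeholders.

```python
# Activity-level view: per-developer throughput, from the Faros AI figures above.
tasks_uplift = 0.21   # 21% more tasks completed per developer
prs_uplift = 0.98     # 98% more PRs merged per developer per day

# Outcome-level view: what the business actually ships (hypothetical numbers).
features_per_quarter_before = 14
features_per_quarter_after = 15
outcome_uplift = features_per_quarter_after / features_per_quarter_before - 1

print(f"Activity-level story: +{prs_uplift:.0%} PR throughput")         # +98%
print(f"Outcome-level story:  +{outcome_uplift:.0%} delivery velocity")  # +7%
```

Both statements describe the same deployment; only one of them belongs in a business case.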

The AI is deployed on broken processes

Deploying AI on top of a poorly designed process makes the process run faster without making it better. A reconciliation process where the underlying data is inconsistent produces incorrect reconciliation outputs faster with AI than it did with human review. A customer service process where the resolution scripts are outdated provides incorrect information to more customers per hour with AI automation than with human agents. Henry Ford's observation that there is no progress in finding a better way to do something that should not have to be done at all applies directly here. AI deployment should begin with process design, not process automation. The process needs to be right before the process needs to be fast.

Adoption never reached the level required to generate ROI

Enterprise AI licences are purchased at the organisational level. Adoption happens at the individual level. The gap between 'the organisation has licences for 10,000 Copilot seats' and 'those 10,000 people are using Copilot for high-value work that generates measurable business outcomes' is the adoption gap, and it is where most enterprise AI ROI disappears. Usage statistics may show that people are logging in. ROI requires that they are using the tool for work that matters, at the level of depth that produces the improvement the business case assumed, consistently enough to generate a measurable business outcome. Achieving this requires exactly the change management investment that is most often cut from the budget.
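The arithmetic of the adoption gap is a funnel. The conversion rates below are illustrative placeholders, not benchmarks, but the shape of the decay is the pattern the paragraph describes.

```python
seats_licensed = 10_000

# Hypothetical conversion rates at each stage of the adoption funnel.
stages = {
    "logged in at least once":               0.80,
    "weekly active":                         0.55,
    "using it on high-value work":           0.35,
    "deep enough to hit the assumed uplift": 0.45,
}

population = seats_licensed
for stage, conversion in stages.items():
    population = int(population * conversion)
    print(f"{stage}: {population:,}")

# With these rates, roughly 690 of 10,000 licensed seats produce the
# outcome the business case priced across all 10,000.
```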

The investment was made to avoid falling behind, not to solve a problem

A significant fraction of enterprise AI investment in 2024 and 2025 was defensive: organisations investing because they feared being left behind if they did not, not because they had identified a specific business problem that AI would solve better than existing approaches. Defensive investment produces the deployment pattern MIT documented: pilots that never reach production, tools that are used for low-stakes tasks while high-value work continues on existing systems, and ROI analyses that cannot produce a specific number because the investment was never tied to a specific outcome. Defensive investment is not irrational: the risk of falling behind in AI capability is real. But it produces a different ROI profile than problem-anchored investment, and conflating the two produces investment decisions that are evaluated on the wrong criteria.

What the Organisations Generating Real ROI Do Differently

The pattern across the organisations generating documented, specific, large-scale AI ROI is consistent.

They define the problem before selecting the tool. Walmart's $75 million supply chain AI saving started with a specific problem (forecast accuracy and overstock cost), not with a decision to invest in AI. BMW's 60% defect reduction started with specific quality failure patterns in specific production stages. JPMorgan's 360,000 automated staff hours started with a specific compliance document review process that was consuming a known number of hours at a known cost.

They capture baseline metrics before deployment. The ROI calculation requires knowing what the before looked like: cost per unit, time per task, error rate, volume, measured consistently over enough time to establish a reliable baseline.

They assign a named owner for outcomes, not for the technology. The person accountable for whether the AI investment delivers is accountable for the business outcome, not for the platform deployment.

And they time-box evaluations aggressively: go/no-go on production deployment within six weeks of pilot start, forcing the organisation to evaluate real performance against a deadline rather than extending the pilot indefinitely while licence costs accumulate and the business case erodes.
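A minimal sketch of that six-week gate, assuming the outcome metric is a cost reduction against a captured baseline. The thresholds, dates, and figures are hypothetical.

```python
from datetime import date, timedelta

def evaluation_gate(pilot_start: date, baseline_cost: float, current_cost: float,
                    promised_reduction: float, min_fraction: float = 0.5) -> str:
    """Hard go/no-go six weeks after pilot start.

    Assumption: the pilot must show at least `min_fraction` of the promised
    improvement by the gate date, or it is restructured rather than extended.
    """
    gate_date = pilot_start + timedelta(weeks=6)
    if date.today() < gate_date:
        return f"Pilot running; gate on {gate_date.isoformat()}"
    observed = (baseline_cost - current_cost) / baseline_cost
    if observed >= min_fraction * promised_reduction:
        return f"GO: {observed:.0%} reduction vs {promised_reduction:.0%} promised"
    return f"NO-GO: {observed:.0%} reduction; restructure the deployment"

# Hypothetical pilot: the business case promised a 30% cost reduction.
print(evaluation_gate(date(2026, 3, 2), baseline_cost=57_240.0,
                      current_cost=49_000.0, promised_reduction=0.30))
```

The value of the gate is that the decision is mechanical: either the baseline-relative improvement is visible by the deadline, or the default action is restructuring, which is the opposite of the indefinite-pilot pattern.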

The ROI Conditions Checklist

  • A specific business problem defined before vendor selection with a measurable baseline cost and a clear criterion for what improvement looks like
  • Baseline metrics captured before deployment across the business-level outcome measure, not only the activity-level measure that AI will directly affect
  • A named owner for outcomes: a specific person whose performance evaluation includes whether the investment delivers its stated objective
  • Data quality assessed and the specific issues that will cause incorrect outputs identified and resolved for the first agent's data sources before deployment begins
  • An adoption plan with specific behaviour change targets by role, not just licence provisioning and training attendance metrics
  • A production deployment timeline with a hard evaluation gate at six weeks, requiring the deployment to demonstrate measurable progress toward the stated business outcome or be restructured