
Companies Bought AI. Now They Don't Know What To Do With It
The AI buying cycle in most large enterprises has followed a consistent and dysfunctional pattern. The board asks about AI. The CEO asks the CTO. The CTO asks the vendors. The vendors demonstrate impressive things in controlled conditions. The board approves a budget. The licences are purchased. And then the hard question, what specifically are we going to do with this, gets handed to people who were not involved in the purchase decision, with a timeline set by the licence renewal date rather than by any thoughtful assessment of what is actually achievable. Deloitte found that 42% of organisations are still developing their AI strategy roadmap, and 35% have no formal strategy at all. These are organisations that have already committed budget. The buying happened before the thinking, and the thinking is still catching up.
88% of senior executives have approved larger AI budgets for 2026, even as 42% of organisations are still developing their AI strategy roadmap and 35% have no formal strategy at all. The purchase decision was made before the use case was defined, and now the clock is running on expensive licences that nobody knows how to use.
The Reverse Order Problem
Every technology investment framework recommends the same sequencing: define the problem, identify the solution, evaluate options, purchase the best fit. AI investment in most large enterprises in 2024 and 2025 followed the reverse: purchase the most credible AI platform, then discover what problems it can solve, then attempt to fit those solutions to the organisation's actual operational context. This reverse sequencing is not irrational: AI is a general-purpose technology whose applications are not fully knowable in advance, and early investment can build the organisational capability needed to realise those applications. The problem is that reverse-sequenced technology investment produces reverse-sequenced outcomes: organisations that own powerful tools they do not know how to use, paying licence costs for value they have not yet extracted.
The Harvard Business Review's analysis of enterprise AI investment in 2026 found that companies cutting headcount based on AI's potential rather than its performance (the pattern the survey documented as dominant) are making a specific strategic error: they are removing the human capacity that would have been needed to deploy and govern AI effectively, before the AI systems that would replace that capacity are functional enough to do so. The result is a capability gap that is wider than it was before the AI investment.
The Three Organisational Failure Patterns
The pilot that never becomes production
Organisations run AI pilots to satisfy board curiosity about AI, to generate internal learnings, and to check the box of AI experimentation. The pilot produces interesting results. The pilot team moves to the next pilot. Nothing ships to production. The pattern repeats across quarterly cycles, generating reports and presentations about AI exploration while the licence costs accumulate and the gap between what the organisation is paying for and what it is using widens. The MIT survey found that most AI projects stall in the proof-of-concept stage with no clear owner, no economic model for scaling, and no accountability for the transition from pilot to production.
The tool with no workflow home
AI tools purchased at the platform level (enterprise licences for Copilot, Claude, ChatGPT Teams, or similar) often reach individual users who do not know how to incorporate them into their actual work. The tool is available. The use case is not defined. The user experiments, finds some marginal utility, and settles into using the tool for low-stakes tasks while continuing to use existing processes for the work that actually matters. Adoption statistics look healthy (users are logging in) while value-delivery statistics are weak (users are not applying the tool to high-value work). The organisation is paying enterprise-tier prices for consumer-tier usage.
The strategy built around the vendor's roadmap
Organisations that anchor their AI strategy to a specific vendor's product roadmap inherit all of that vendor's bets. When the vendor pivots (when Microsoft's Copilot strategy shifts, when Salesforce's Agentforce roadmap changes, when a key AI model is deprecated) the organisation's AI strategy pivots with it, without any internal continuity of direction. The organisations generating the strongest AI ROI are building their AI capability around specific business problems they own, using platforms as infrastructure rather than as the strategy itself. The AI strategy should be defined by the business outcome, not by what the vendor's product currently does.
What a Strategy Actually Requires
An AI strategy is not a list of AI tools the organisation has purchased or plans to purchase. It is a specific answer to three questions: what business problems are we solving with AI, how will we measure whether we are solving them, and who is accountable for the outcomes. Each of these questions is harder to answer than it appears. The business problems worth solving with AI are not the obvious ones: 'make our customer service faster' is not a problem definition, it is a direction. The specific problem is: our tier-1 support volume is 4,200 tickets per week, 68% of which are resolved using three standard scripts, and the cost per ticket is ₹320. AI that automates the 68% of standard resolutions would reduce cost per ticket to under ₹100 and free tier-2 agents to handle the complex cases currently being escalated at ₹850 per ticket. That is a problem definition.
Measurement frameworks for AI value need to be defined before deployment, not after. An organisation that deploys AI and then asks 'is it working?' three months later has no baseline to compare against. The before/after measurement requires knowing what the before looked like, which means capturing baseline metrics (cost per ticket, resolution time, error rate, time spent on the task) before the AI system is introduced. Most AI deployments skip this step and are then unable to demonstrate ROI with any precision, which makes the renewal decision political rather than analytical.
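To make the arithmetic behind that problem definition concrete, here is a minimal sketch of the cost model. The ticket volume, automatable share, and human per-ticket costs come from the worked example above; the per-ticket cost of an AI-handled resolution is an assumption introduced here for illustration, not a sourced figure.

```python
# Sketch: cost model for the tier-1 support example above.
# Volumes and human per-ticket costs are from the worked example;
# AI_COST_PER_TICKET is an illustrative assumption, not a sourced figure.

WEEKLY_TICKETS = 4_200
AUTOMATABLE_SHARE = 0.68        # share resolved today with three standard scripts
HUMAN_COST_PER_TICKET = 320     # ₹, tier-1 baseline
AI_COST_PER_TICKET = 25         # ₹, ASSUMED cost of an AI-handled resolution

baseline_weekly_cost = WEEKLY_TICKETS * HUMAN_COST_PER_TICKET

automated = WEEKLY_TICKETS * AUTOMATABLE_SHARE
residual = WEEKLY_TICKETS - automated
projected_weekly_cost = (automated * AI_COST_PER_TICKET
                         + residual * HUMAN_COST_PER_TICKET)

print(f"Baseline weekly spend:   ₹{baseline_weekly_cost:,.0f}")
print(f"Projected weekly spend:  ₹{projected_weekly_cost:,.0f}")
print(f"Weekly saving:           ₹{baseline_weekly_cost - projected_weekly_cost:,.0f}")
print(f"Blended cost per ticket: ₹{projected_weekly_cost / WEEKLY_TICKETS:,.0f}")
```

The blended per-ticket figure this prints (about ₹119 under these assumptions) is sensitive to the assumed AI cost; whether it lands under the ₹100 target in the example depends on how cheaply the automated share is actually handled and on whether residual tickets also get cheaper. The point of the sketch is that once the baseline is captured, the calculation is mechanical rather than impressionistic.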
What the Organisations Getting ROI Are Doing Differently
- They start with the problem, not the tool: identifying a specific operational process with a measurable cost, a clear inefficiency, and data that is already captured, before selecting the AI approach
- They assign a named owner for AI outcomes: not a committee, not the IT department, but a specific person whose performance evaluation includes whether the AI investment delivers its stated objective
- They capture baseline metrics before deployment (cost per unit, time per task, error rate, volume) so that post-deployment measurement produces a specific ROI number rather than an impression; a minimal sketch of this before/after comparison follows this list
- They time-box pilots aggressively: six weeks maximum from pilot start to the go/no-go decision on production deployment, forcing the organisation to evaluate real performance against a deadline rather than extending the pilot indefinitely to avoid making a decision
- They separate infrastructure investment from use case investment: the platform licence is infrastructure; the use case is a specific application of that infrastructure to a specific problem, and use case investment decisions are made independently of platform investment decisions
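As referenced in the baseline-metrics point above, here is a minimal sketch of what capture-the-baseline-then-compare can look like in practice. The metric names, the `ProcessMetrics` structure, and all the sample numbers are illustrative assumptions, not from the source; what matters is the shape: one snapshot recorded before deployment, one at the go/no-go decision, and a mechanical comparison between them.

```python
# Sketch: baseline capture and post-deployment comparison for one AI use case.
# Field names and sample numbers are illustrative, not sourced.
from dataclasses import dataclass

@dataclass
class ProcessMetrics:
    cost_per_unit: float     # e.g. ₹ per ticket
    minutes_per_task: float  # average handling time
    error_rate: float        # fraction of outputs needing rework
    weekly_volume: int       # units processed per week

def compare(baseline: ProcessMetrics, current: ProcessMetrics) -> dict:
    """Percentage change per metric; negative is an improvement for cost, time, errors."""
    def pct_change(before: float, after: float) -> float:
        return (after - before) / before * 100
    return {
        "cost_per_unit": pct_change(baseline.cost_per_unit, current.cost_per_unit),
        "minutes_per_task": pct_change(baseline.minutes_per_task, current.minutes_per_task),
        "error_rate": pct_change(baseline.error_rate, current.error_rate),
        "weekly_volume": pct_change(baseline.weekly_volume, current.weekly_volume),
    }

# Captured BEFORE the AI system is introduced (hypothetical numbers).
baseline = ProcessMetrics(cost_per_unit=320, minutes_per_task=14,
                          error_rate=0.06, weekly_volume=4200)
# Measured at the six-week go/no-go decision (hypothetical numbers).
at_go_no_go = ProcessMetrics(cost_per_unit=190, minutes_per_task=9,
                             error_rate=0.05, weekly_volume=4300)

for metric, delta in compare(baseline, at_go_no_go).items():
    print(f"{metric}: {delta:+.1f}%")
```

Without the first snapshot, the second has nothing to be compared against, which is exactly the failure mode described above: the deployment happens, the baseline was never recorded, and the renewal decision becomes political rather than analytical.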