The worst AI question is also the most common one: “How do we use AI?”
It sounds responsible. It sounds urgent. It sounds like the kind of question a leadership team should ask when the board, investors, or an executive offsite demands proof that the company is moving fast. But it points the organization in the wrong direction. Once the question starts with AI, the answer usually becomes activity: pilots, tools, demos, task forces, infrastructure programs, and slide decks showing where AI might fit. Some of that work teaches. Most of it never turns into value.
A real AI strategy starts somewhere less glamorous: with painful customer or business problems. It asks which workflows are expensive, slow, repetitive, judgment-heavy, document-heavy, or hard to coordinate. Then it asks whether AI changes what is now possible, whether the value is measurable, and whether the solution can survive production.
That distinction matters now because the first wave of enterprise AI experimentation is running into a wall of unrealized business impact. Gartner’s 2025 IT spending research found that AI proofs of concept commonly stall because they fail to demonstrate value, surface additional needs too late, or cannot show enough business impact. Other industry research points in the same direction: many AI leaders are under pressure to justify investments that have not yet produced the returns executives expected. AI can create value. The issue is that many organizations confuse AI activity with AI strategy.
AI activity looks productive until it has to produce
Picture the pattern. A board asks the CEO, “Where is our AI strategy?” The CEO turns to the executive team. The executive team turns to business units, IT, product, and outside consultants. Soon the company has 20 proofs of concept.
One pilot summarizes customer calls. One adds a chatbot to an internal portal. One drafts sales emails. One searches policy documents. One promises to automate reporting. The demos look polished. The steering committee sees motion. Everyone can say the company is doing AI.
Then the hard questions arrive. Which customer pain does this solve? Which business metric moves? Who owns the workflow after the pilot? What data does it need? What happens when the system is wrong? Who approves production access? How does this affect cost, throughput, revenue, risk, or retention? Too often, no one has a good answer.
This is why “how do we use AI?” is such a dangerous starting point. It sends teams looking for places to attach the technology instead of problems worth solving. It rewards visible experimentation, not disciplined selection. The failure data points in the same direction: AI initiatives often stall because they fail to demonstrate value, lack sufficient business impact, or uncover additional needs too late. That does not mean the starting question is the only cause. Data readiness, governance, integration, risk, ownership, and change management all matter. But starting with AI makes those problems easier to avoid until the demo is over.
A proof of concept asks “can we make this work?” A proof of value asks “does this change anything that matters?” Most AI strategies need fewer proofs of concept and more proofs of value.
The better starting point is painful work
The strongest AI opportunities often begin with boring work. Not visionary work. Not brand-new work. Not work that makes for the most exciting demo. The best candidates are the tasks people already hate: reading documents, comparing reports, cleaning spreadsheets, routing exceptions, checking status across systems, preparing summaries, reconciling records, drafting first-pass responses, or finding the few issues that deserve human attention.
These workflows share a pattern. They are repetitive enough to matter, but too messy for older automation. They involve text, judgment, unstructured inputs, or context spread across systems. They waste time, yet they require enough interpretation that a basic rule or script cannot handle them. That is where current AI capabilities change the economics of work.
A customer does not wake up wanting an AI feature. They want to get rid of the spreadsheet, stop digging through reports for three hours, have fewer manual follow-ups, or see a claim, invoice, ticket, or underwriting file move faster without losing control. The same logic applies inside the company. A workflow owner can usually name the pain before anyone mentions AI: “Our onboarding takes too long.” “The system says we have inventory, but the warehouse cannot find it.” “Managers spend hours reading reports before they know what to fix.” “Our team handles the same customer exceptions every day.” “People are copying data between tools because the systems do not talk.”
Those symptoms are strategy inputs. They tell you where work is breaking, where cost hides, and where current tools have failed. AI strategy begins when the team stops asking where AI can fit and starts asking what work should change.
AI is not the differentiator. Redesigned work is.
For a while, saying “we have AI” sounded like a product strategy. That window is closing. As generic AI tools become widely available, access to AI stops being the source of advantage. The durable advantage comes from workflow design, domain expertise, proprietary data, operational discipline, and a clear understanding of customer constraints.
That explains why so many AI features feel thin. A vendor adds an AI widget to the corner of a product. A team embeds a chatbot into a workflow no one wanted to use in the first place. A product markets itself as agentic without changing the actual customer outcome. The AI model is visible. The value is not.
The stronger pattern hides the AI model behind a solved workflow. In insurance, for example, the valuable work is not “ChatGPT for underwriting.” The valuable work is reading documents, extracting context, identifying missing information, comparing details, preparing the file, and reducing the busy work before a regulated human decision. In that setting, the system does not need to replace the decision maker to create value. It needs to make the decision path faster, clearer, and more reliable.
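To make that concrete, here is a minimal sketch of the preparation-versus-judgment split. The function names, required fields, and flagging rule are all hypothetical, and `extract_fields` stands in for whatever model call maps a raw document to structured fields; this illustrates the pattern, not a real underwriting system.

```python
# Sketch: AI prepares the file; a human makes the regulated decision.
# All names, fields, and rules here are hypothetical illustrations.
from dataclasses import dataclass, field

REQUIRED_FIELDS = {"applicant", "coverage_amount", "loss_history", "property_value"}

@dataclass
class DecisionPacket:
    extracted: dict                                 # structured fields pulled from documents
    missing: list = field(default_factory=list)     # gaps to chase before review
    flags: list = field(default_factory=list)       # issues worth human attention

def prepare_packet(documents, extract_fields) -> DecisionPacket:
    """AI-side work: read, extract, compare, flag. `extract_fields` stands in
    for any model call that maps a raw document to structured fields."""
    extracted = {}
    for doc in documents:
        extracted.update(extract_fields(doc))
    missing = sorted(REQUIRED_FIELDS - extracted.keys())
    flags = []
    if extracted.get("loss_history"):
        flags.append("prior losses on file: route to senior reviewer")
    return DecisionPacket(extracted, missing, flags)

# Human-side work: the underwriter decides. The system only made the path
# faster and clearer by surfacing gaps and flags up front.
packet = prepare_packet(
    ["<application text>", "<loss run report text>"],
    extract_fields=lambda doc: {"applicant": "ACME Co"} if "application" in doc
                               else {"loss_history": ["2023 water damage"]},
)
print(packet.missing)   # ['coverage_amount', 'property_value']
print(packet.flags)     # ['prior losses on file: route to senior reviewer']
```

The design choice worth copying is the boundary: everything before the decision is automated, and the decision itself is not.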
That is the strategic shift. The question is not “where can we add AI?” The question is “what work can now be redesigned because machines can interpret language, documents, patterns, and context at scale?”
Pain is necessary, but not enough
Problem-first does not mean “pick a painful workflow and start building.” That is still too loose. A painful problem can fail as an AI initiative for at least five reasons.
First, the use case can be too small. Saving 26 minutes a day for one person may be useful, but it does not automatically create a financial case. CFOs will ask whether the time savings become lower cost, higher throughput, faster revenue, better retention, reduced risk, or avoided hiring; the back-of-the-envelope sketch after this list shows how fast small per-person savings fall short of that bar.
Second, the use case can be too complex. Some workflows need too many integrations, too many permissions, too much judgment, or too much exception handling to make sense as an early project.
Third, the data can be unready. Generic AI tools do not fix broken enterprise data. If the required data is fragmented, low quality, trapped in legacy systems, or governed inconsistently, the AI system inherits the mess.
Fourth, the risk can be wrong. The more useful an AI system becomes, the more important access controls, monitoring, escalation paths, and human review become. A summarizer with no system access has one risk profile. An agent that can update records, send messages, or trigger transactions has another.
Fifth, the pilot can fail at production. A demo can work with clean examples and still fail when it meets real users, edge cases, security rules, latency requirements, audit needs, and workflow ownership.
This is where many AI strategies collapse. They mistake relevance for readiness. A problem-first approach removes one kind of failure: building something no one needs. It does not remove the harder failures of data readiness, integration, governance, change management, and measurement. That is why the strategy cannot stop at problem selection. It has to test whether the problem can become a production system.
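To see why “too small” kills so many cases, run the arithmetic from the first reason above. A back-of-the-envelope sketch in Python, with purely illustrative numbers: the loaded rate, run cost, and workday count are assumptions, not benchmarks.

```python
# Back-of-the-envelope: does a time-saving pilot clear a financial bar?
# Every input here is an illustrative assumption, not a benchmark.
MINUTES_SAVED_PER_DAY = 26      # the pilot's own claim, per person
PEOPLE_AFFECTED = 1
WORKDAYS_PER_YEAR = 220
LOADED_HOURLY_COST = 60.0       # assumed fully loaded cost of an hour, USD
RUN_COST_PER_YEAR = 20_000.0    # assumed licenses, integration, support

hours_saved = MINUTES_SAVED_PER_DAY / 60 * WORKDAYS_PER_YEAR * PEOPLE_AFFECTED
gross_value = hours_saved * LOADED_HOURLY_COST
net_value = gross_value - RUN_COST_PER_YEAR

print(f"Hours saved per year: {hours_saved:,.0f}")   # ~95
print(f"Gross value: ${gross_value:,.0f}")           # ~$5,720
print(f"Net of run cost: ${net_value:,.0f}")         # negative
```

The conversion, not the numbers, is the point: at one person the case is negative, and at fifty people it turns positive on paper only if someone owns turning the saved hours into headcount, throughput, or avoided spending.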
The value-readiness gate
Before the next AI initiative gets budget, it should pass a simple test. Not a 60-page business case. Not a theoretical AI roadmap. A practical set of questions that force the team to move from AI activity to strategy.
Problem and value fit
1. What painful customer or business problem are we solving? If the answer starts with the technology, stop. A strong answer names the pain in operational terms: manual report review, delayed claims processing, duplicate data entry, slow customer follow-up, inventory mismatch, document comparison, exception routing, or hours spent searching across systems. The problem should be painful enough that someone already feels it.
2. Why can AI solve this better now than older tools or processes could? AI is not better for every problem. It is more likely to matter when the work involves unstructured data, language, pattern recognition across large datasets, judgment support, or context spread across many sources. If a rules engine, workflow tool, dashboard, or process fix solves the problem more simply, use that. AI is not the goal. Solving the problem with the right tool is.
3. Who experiences the value, and what KPI proves it? User delight is not enough. Time savings are not enough unless they convert into something the business can measure. The value might show up as lower cost per case, higher cases per employee, faster cycle time, fewer stockouts, reduced external agency spend, better first-contact resolution, lower risk, or avoided backfill. Different problems need different metrics.
This is where the CFO becomes useful, not obstructive. Finance forces the team to define whether the gain is real, measurable, and worth scaling. A useful pressure test is simple: if the value case converts time saved into financial return, who has already agreed that time saved will change headcount, throughput, spending, or capacity?
Workflow and operating fit
4. What workflow, task, or decision will actually change? A strategy-grade initiative changes work. It might reduce the number of steps in a claim review. It might prepare an underwriting packet before a human decision. It might route customer issues automatically when confidence is high and escalate uncertain cases when confidence is low; a minimal sketch of that boundary follows question 5. It might turn four hours of seller research into 15 minutes of account preparation. If the team cannot say what changes in the workflow, the initiative is still a concept.
5. What data, systems, permissions, and integrations are required? This question separates demos from deployable systems. A customer-service agent that only suggests responses has one set of requirements. A self-directed system that resolves issues across billing, shipping, and CRM systems needs governed data access, identity controls, audit trails, permissions, and integration with core systems. Many failed AI projects do not fail because the model is weak. They fail because the surrounding system is not ready.
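To make the routing in question 4 concrete, here is a minimal sketch of the confidence boundary. The 0.90 threshold is an assumption; a real deployment would calibrate the cutoff against measured error rates rather than pick it by hand.

```python
# Sketch: confidence-gated autonomy. Act automatically when the model is
# confident; hand uncertain cases to a person with the guess as context.
AUTO_THRESHOLD = 0.90   # assumed cutoff; calibrate against observed errors

def route_issue(issue_id: str, predicted_queue: str, confidence: float) -> str:
    if confidence >= AUTO_THRESHOLD:
        return f"auto-routed {issue_id} to {predicted_queue}"
    # Below the bar, the model's output is advice, not a decision.
    return (f"escalated {issue_id} to human triage "
            f"(model suggests {predicted_queue}, p={confidence:.2f})")

print(route_issue("T-1041", "billing", 0.97))  # auto-routed T-1041 to billing
print(route_issue("T-1042", "billing", 0.62))  # escalated ... p=0.62
```

The same shape answers question 6 below: the human review point is a parameter of the design, not an afterthought.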
Risk and production fit
6. What risks, guardrails, and human review points are needed? The right level of autonomy differs by workflow. Some tasks can be automated end to end. Some should be drafted by AI and approved by a person. Some should use AI only to focus human attention on the right issue. Some should not be automated at all. In regulated workflows, the best design often separates preparation from judgment: AI can read, summarize, compare, flag, and route, while a human still makes the final decision. Good strategy names that boundary before deployment.
7. What has to be true for this to move from demo to production? This is the question most pilots avoid. Production means ownership, support, controls, measurement, training, feedback loops, and exception handling. It means the system works when data is messy, when users push back, when the answer is uncertain, and when the workflow crosses departments. A pilot that cannot answer this question is not ready to scale. It may still be useful for learning, but it should not be mistaken for strategy.
The seven questions do not prove the project will succeed. They do something more immediate: they reveal whether the team is ready to spend real money, or only ready to produce another impressive demo.
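Treated as a literal checklist, the gate can be lightweight. A minimal sketch: the seven questions as data, with one blunt rule that an unanswered question blocks funding. Real answers would be prose, owners, and metrics; the placeholder here only shows the mechanics.

```python
# The value-readiness gate as a literal checklist: every question needs a
# concrete answer before budget. The answer below is a placeholder.
GATE_QUESTIONS = [
    "1. What painful customer or business problem are we solving?",
    "2. Why can AI solve this better now than older tools or processes could?",
    "3. Who experiences the value, and what KPI proves it?",
    "4. What workflow, task, or decision will actually change?",
    "5. What data, systems, permissions, and integrations are required?",
    "6. What risks, guardrails, and human review points are needed?",
    "7. What has to be true for this to move from demo to production?",
]

def gate_decision(answers: dict) -> str:
    unanswered = [q for q in GATE_QUESTIONS if not answers.get(q, "").strip()]
    if unanswered:
        return "not ready -- unanswered: " + "; ".join(unanswered)
    return "ready for a funding conversation (not a guarantee of success)"

answers = {GATE_QUESTIONS[0]: "Manual claim review takes four days per file."}
print(gate_decision(answers))   # lists the six questions still unanswered
```

What the sketch buys in a review meeting is specificity: “we have not answered question 5” becomes a recorded reason to pause rather than a vague sense of risk.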
The counterargument is real: infrastructure matters
Some technology leaders will object to the problem-first frame. They will say an AI strategy has to start with data infrastructure, governance, security, and platform investment. They are partly right. If the company’s data is siloed, untrusted, poorly governed, or trapped in decades-old systems, no problem-first workshop will magically create production value. Some organizations need serious work on data ownership, lineage, access controls, integration, and platform architecture before AI can scale.
But that does not make the argument infrastructure-first instead of problem-first. It means the two have to meet. Platform investment needs a portfolio of real problems to justify its shape. Problem selection needs infrastructure reality to avoid fantasy. The mistake is treating them as alternatives.
The same distinction applies to experimentation. Some organizations need early AI pilots to build literacy, test governance, and learn what the technology can do. That is legitimate capability building. But capability building has to be named as such. It is not the same as claiming the company has a value-producing AI strategy. Experimentation teaches. Strategy selects.
Before you fund the next AI initiative
A good AI strategy does not look like a list of tools to test. It looks like a set of operating choices. It names the painful workflows worth changing. It explains why AI changes the solution space. It defines the business value before the build. It decides where the human stays in the loop. It identifies the data and systems required. It names the risks and controls. It sets the threshold for production.
That kind of strategy can still move fast. Problem-first does not mean slow. It means directed. A team can pressure-test the value-readiness gate in a short sprint, then decide where deeper discovery is needed. The point is to stop funding initiatives that cannot explain why they should exist, not to bury AI work in analysis.
This is also how companies find stronger opportunities. Much of the highest-return work in enterprise AI will not be where budgets first went. Back-office automation, operational workflows, and domain-specific work often create more value than visible front-office experiments because the pain is clearer and the work is closer to cost, capacity, and throughput. That should not surprise anyone who has watched enterprise software closely. In large organizations, the most valuable technology frequently starts in the least glamorous workflow.
Before asking “how do we use AI?” ask “what painful work deserves to change?” Before calling something strategy, ask whether it can pass the proof-of-value test. An AI idea says “here is where AI fits.” An AI strategy says “here is the problem we are solving, why AI changes the answer, what work will change, how value will be measured, and what must be true for this to run in production.” That is the difference between looking serious about AI and doing serious work with it.