I was in a strategy session where a VP of Data told the room that generative AI would “eliminate the need for data analysts within two years.” The room nodded. Budget was reallocated. Three analyst positions were left unfilled. Eighteen months later, the company had fewer people answering data questions, no AI system replacing them, and a growing backlog of ad-hoc requests that the remaining team could not service.
Skepticism about AI is not the same as being anti-technology. It is the recognition that the gap between what AI can do in a demo and what AI can do in your specific operational context is wide, and that crossing that gap requires honest assessment rather than wishful thinking.
Why skepticism is strategic
Every data strategy document I have read in the last two years includes AI prominently. Most of them describe AI capabilities that the organization does not currently have the data quality, infrastructure, or organizational maturity to support. The strategy is built on a projection of what AI might be able to do, not on what AI demonstrably does today in that organization’s environment.
This creates a planning problem. When the strategy assumes AI will solve a category of problems, the organization stops investing in the human capacity to solve those problems. If the AI solution works as projected, this is efficient. If it does not — and the historical base rate for technology projections is not encouraging — the organization is left with neither the human capacity nor the working AI system.
Healthy skepticism acts as a hedge. It means continuing to invest in human analytical capability while exploring AI applications, rather than betting the analytical function on AI maturity that has not been demonstrated yet.
The demo-to-production gap
AI systems look impressive in demos because demos are controlled environments. The data is clean. The use case is selected for its compatibility with the model’s strengths. The evaluation criteria favor the model. The audience wants it to work.
Production is the opposite. The data has quality issues that nobody anticipated. The use cases include edge cases that the model was not trained on. The evaluation criteria include latency, cost, explainability, and integration with existing systems — none of which appear in the demo. And the users do not want to change their workflow to accommodate the AI system.
I have seen this gap close in some domains. Code generation, document summarization, and certain classification tasks are genuinely production-ready for many organizations. But the list of production-ready AI capabilities is much shorter than the list of capabilities that AI can demonstrate. A good data strategy distinguishes between the two lists and plans accordingly.
Three questions that cut through hype
When a team proposes an AI initiative, I ask three questions that separate viable projects from aspirational ones.
What is the current process, and how is it measured? If the team cannot describe the current decision-making process and its measurable quality, they are not ready to evaluate whether AI improves it. AI does not replace nothing. It replaces something. Understanding the something is a prerequisite.
What happens when the model is wrong? Every model is wrong sometimes. The question is whether the organization has a process for handling model errors. If a model misclassifies a customer complaint, what is the downstream impact? If a model generates an incorrect financial summary, who catches it? If there is no error-handling process, the organization is not ready for that model in production.
What does the organization stop investing in if this project succeeds? This is the question nobody asks, and it is the most important one. If the AI project is supposed to replace a manual process, does the organization plan to reduce headcount in that area? If not, the AI project is additive, not transformative — which is fine, but it should be scoped and budgeted as additive work, not sold as transformation.
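The second question, what happens when the model is wrong, can be made concrete. One common pattern is to route low-confidence outputs to a human queue instead of letting them flow straight into downstream systems. The sketch below is illustrative, not any specific product's API; the threshold value and names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch of an error-handling path for model outputs:
# confident predictions are used directly, uncertain ones go to a human.
# The names and the threshold are illustrative placeholders.

CONFIDENCE_THRESHOLD = 0.90  # tuned per use case; 0.90 is arbitrary here

@dataclass
class Prediction:
    label: str
    confidence: float

def route(prediction: Prediction) -> str:
    """Decide whether a model output is used as-is or reviewed by a person."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"          # feed the model's answer to the downstream system
    return "human_review"      # queue for an analyst: this is the error-handling process

# A confident classification passes through; an uncertain one does not.
assert route(Prediction("billing_complaint", 0.97)) == "auto"
assert route(Prediction("billing_complaint", 0.55)) == "human_review"
```

The point of the sketch is that "what happens when the model is wrong" has to resolve to a concrete branch somewhere, and someone has to staff the branch that catches the errors.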
The opportunity cost of premature adoption
There is a cost to adopting AI too early, and it is rarely discussed. It is the cost of the data quality work, infrastructure work, and organizational change work that does not happen because the organization is focused on AI.
I worked with a retailer that had significant data quality problems in their product catalog. Duplicate entries, inconsistent categorization, missing attributes. These problems were well-known and had a clear remediation path: invest in data governance, assign ownership, build validation pipelines. Instead, the company launched an AI project to automatically categorize products, reasoning that AI could handle the messy data.
The AI system could not handle the messy data. It produced categorizations that were directionally correct but not precise enough for the downstream systems that needed them. Meanwhile, the data quality work that would have made both the AI system and the existing processes more effective was deferred indefinitely.
The opportunity cost was not just the money spent on the AI project. It was the year of compounding data quality problems that went unaddressed while the organization chased an AI solution that required the very data quality improvements it was meant to bypass.
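The deferred remediation work was not exotic. A first validation pass over a product catalog, the kind of check a governance pipeline would run continuously, can be sketched in a few lines. The column names here ("sku", "category", "color") are hypothetical, not the retailer's actual schema.

```python
# Minimal sketch of a catalog validation pass: flag duplicate SKUs and
# rows with missing required attributes. Field names are hypothetical.

REQUIRED_ATTRIBUTES = ("sku", "category", "color")

def validate_catalog(rows):
    """Return (duplicate SKUs, rows with missing required attributes)."""
    seen, duplicates, incomplete = set(), [], []
    for row in rows:
        sku = row.get("sku")
        if sku in seen:
            duplicates.append(sku)
        seen.add(sku)
        missing = [attr for attr in REQUIRED_ATTRIBUTES if not row.get(attr)]
        if missing:
            incomplete.append((sku, missing))
    return duplicates, incomplete

catalog = [
    {"sku": "A1", "category": "shoes", "color": "red"},
    {"sku": "A1", "category": "shoes", "color": "blue"},  # duplicate SKU
    {"sku": "B2", "category": "", "color": "green"},      # missing category
]
dupes, bad = validate_catalog(catalog)
assert dupes == ["A1"]
assert bad == [("B2", ["category"])]
```

Checks like these are cheap to build and make both human processes and any future AI system more reliable, which is exactly why deferring them in favor of an AI workaround compounds the cost.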
Constructive skepticism versus reflexive rejection
Skepticism is not the same as saying no to everything. It is the discipline of asking whether the proposed AI application has a clear decision to improve, a measurable baseline, a viable error-handling process, and a realistic assessment of the data quality required. Projects that pass these tests should be funded and pursued aggressively. Projects that do not should be paused until they can.
The organizations that will get the most value from AI are not the ones that adopt earliest. They are the ones that adopt most honestly — with clear-eyed assessments of what works, what does not, and what the actual conditions for success are. That honesty requires skepticism, and skepticism requires courage, because the internal pressure to be an “AI-first” organization is intense.
The heuristic I use: if the proposal would survive without the word “AI” in it — if the business case holds up as a process improvement or a decision quality improvement, and AI is simply the mechanism — then it is worth pursuing. If the entire business case depends on AI performing as promised, without a fallback, then it is a bet, not a strategy. Know the difference before you spend the money.