THIS WEEK: The 95% Problem Nobody Wants to Talk About
Three stories worth your time this week, plus a deep dive on why almost every AI project your competitors announced last year is probably dead.
ARTIFICIAL INTELLIGENCE NEWS
🌎 The Big Guys Are Spending Like It's 1999
Meta just signed a multi-billion dollar deal with Nvidia to fill its 30 planned data centers with GPUs, including a 5-gigawatt facility in Louisiana. Amazon, Microsoft, and Google collectively spent $305 billion on CapEx last year, with roughly half going to GPU procurement. Meanwhile, mid-market companies are being told they need to "move fast on AI" by the same vendors selling them the shovels.
The takeaway: The infrastructure arms race is real, but if you're a $50M-$500M company, this isn't your fight. Your fight is making sure the data you already have is actually usable.
ARTIFICIAL INTELLIGENCE NEWS
Small Models Are Beating Big Models (When It Matters)
AT&T's Chief Data Officer said it publicly this month: fine-tuned small language models are matching large general-purpose models in accuracy for enterprise applications, at a fraction of the cost. Alibaba's Qwen 3.5 is running on commodity hardware and claiming parity with frontier models.
The takeaway: You probably don't need GPT-5. You probably need a smaller model trained on your own data. But that only works if your data is clean enough to train on. (See: the pattern.)
ARTIFICIAL INTELLIGENCE NEWS
Everyone's Embedding AI Agents
Gartner says 40% of enterprise apps will have AI agents embedded by year-end, up from less than 5% in 2025. That's an eightfold jump in one year. Before you panic about falling behind, read that stat carefully. "Embedding AI agents" doesn't mean "delivering value with AI agents." Embedding is easy. Making it work with your messy data, inconsistent processes, and undertrained staff is the part nobody's advertising.
The takeaway: Embedding is easy. Delivering value is the hard part, and it depends on the same unglamorous work everywhere: clean data, consistent processes, trained staff.
THE DEEP DIVE
95% of AI Projects Are Failing. The Reason Is Embarrassingly Simple.
Every major research institution is converging on the same number, and it's ugly.
MIT's Project NANDA found that roughly 95% of generative AI pilots fail to deliver measurable business impact. RAND Corporation's study, based on interviews with 65 data scientists and engineers, found that over 80% of AI projects never reach meaningful production. McKinsey's latest survey shows over 80% of organizations report no enterprise-wide financial impact from AI despite widespread adoption. S&P Global reported that the average organization scrapped 46% of AI proof-of-concepts before they ever reached production.
These aren't fringe researchers. These are the most authoritative voices in enterprise technology, all saying the same thing independently.
So what's killing these projects?
It's not the models. The models work. It's not the talent (though that's a problem too). And it's not budget. Companies are spending billions.
It's the data.
Informatica's 2025 CDO Insights survey found data quality and readiness is the number one obstacle, cited by 43% of respondents. Only 12% of organizations report having data of sufficient quality for AI. And 63% of organizations either don't have or aren't sure they have the right data management practices for AI, according to Gartner.
Read that again: 63% of companies aren't sure if their data is ready for AI. Meanwhile, they're buying AI tools.
That's like hiring a head chef before confirming you have a kitchen.
Gartner predicts that through 2026, organizations will abandon 60% of AI projects that aren't supported by AI-ready data. We're in that year right now. Look around at the AI initiatives in your organization. How many of them started with a data quality assessment? How many of them have a data governance framework supporting them? If the answer is "none," you're in the 95%.
The companies in the 5% that succeed share four traits:
They defined a specific business problem before selecting a technology.
They invested in data infrastructure before deploying models.
They redesigned workflows around AI, not just bolted AI onto existing processes.
They measured business outcomes, not model accuracy.
None of those are exciting. None of them make good conference keynotes. All of them are the difference between a press release and actual ROI.
The bottom line: The AI arms race isn't about who has the biggest model or the most GPUs. For mid-market companies, it's about who did the boring foundational work first. Data quality. Data governance. Process documentation. Clean integrations. The organizations doing that work right now, while their competitors chase demos, are the ones that will actually capture value from AI in 2026 and beyond.
The foundation comes first. Everything else is expensive theater.
🛠️ ONE THING TO DO THIS WEEK:
Pull up one AI initiative in your organization and ask three questions:
What data does it depend on?
Who owns that data?
When was the last time someone validated its accuracy?
If you can't answer all three in under five minutes, you have a data problem, not an AI opportunity.
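If you want to make that third question concrete, a few lines of code can put numbers on it. A minimal sketch, assuming you can export the data in question as a CSV; the file contents, column names, and key field below are hypothetical stand-ins for your own:

```python
import csv
import io

def quality_report(rows, key_field):
    """Summarize basic data quality: missing values and duplicate keys."""
    total = len(rows)
    missing = {field: 0 for field in rows[0].keys()} if rows else {}
    seen, dupes = set(), 0
    for row in rows:
        for field, value in row.items():
            # Count empty or whitespace-only cells per column.
            if value is None or str(value).strip() == "":
                missing[field] += 1
        # Count repeated primary keys.
        key = row.get(key_field)
        if key in seen:
            dupes += 1
        seen.add(key)
    return {
        "rows": total,
        "duplicate_keys": dupes,
        "pct_missing": {f: round(100 * n / total, 1) for f, n in missing.items()},
    }

# Hypothetical customer export -- swap in open("your_export.csv") for real data.
sample = io.StringIO(
    "customer_id,email,last_updated\n"
    "1001,a@example.com,2025-06-01\n"
    "1001,,2024-01-15\n"
    "1002,b@example.com,\n"
)
report = quality_report(list(csv.DictReader(sample)), key_field="customer_id")
print(report)
# Three rows, one duplicate customer_id, a third of emails missing.
```

Ten minutes of this on one critical table tells you more about your AI readiness than any vendor demo will.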
