AI Fever, CFO Maths: What Breaks First?


January has a habit of doing this to people. The diaries are fresh, the budgets are under review, and someone on the exec team asks the question that lands like a cold spoon in hot tea:

Are we in an AI bubble?

You can almost hear the room split in two. One side thinks the hype is out of control. The other thinks it’s the next big platform shift and we’ll look foolish for hesitating. Both sides have a point. That’s the uncomfortable bit.

Here’s the thing: “bubble” talk is rarely about whether a technology works. It’s about how money, expectations, and delivery discipline get tangled up. And right now, those three are doing the tango.

Is it a bubble… or just froth?

When people say “AI bubble,” they often mean three different things, all mashed together:

1. Market bubble: valuations and stock prices run ahead of earnings.

2. Delivery bubble: organisations announce AI wins, but internally it’s still slide decks and pilots.

3. Product bubble: thousands of lookalike tools, most of which won’t survive once buyers get picky.

So yes, AI can be genuinely useful and still be surrounded by froth. That’s not a contradiction. It’s just how big tech waves arrive.

Even major investors are saying the quiet part out loud. Ray Dalio, for example, described the AI boom as being in an early “bubble” phase.  That doesn’t mean AI is fake. It means price and promise can sprint faster than outcomes.

And outcomes, as you know, are what the CFO asks about.

The money is loud, and it’s not subtle

Let’s talk numbers, since vibes don’t belong in board packs.

Global AI funding in 2025: Crunchbase data suggests $202.3bn went into the AI sector in 2025 (across infrastructure, foundation models, and apps), and that AI was close to half of global venture funding. 

GenAI VC alone: EY reports $87bn of GenAI VC investment for the first 11 months of 2025, up strongly year-on-year. 

Private AI investment hit $252.3bn in 2024, per Stanford’s AI Index (which called out $33.9bn of that as GenAI private investment).

That’s… a lot of heat. And heat tends to rise.

But money is only half the story. The other half is the physical build-out: chips, racks, cooling, land, permits, grid connections. This is where AI starts to feel less like software and more like heavy industry.

Goldman Sachs Research has forecast data centre electricity demand rising sharply by 2030 versus 2023 levels (one widely cited figure is ~165%).  If you’re a CIO, that shows up as lead times and pricing pressure. If you’re a CEO, it shows up as strategic risk: concentration, dependency, and cost creep.

You know what? This is the part people forget. AI isn’t only a clever model. It’s a supply chain.

OK, but are enterprises actually buying it?

Yes. And… not always in the way vendors want to talk about.

McKinsey’s 2025 survey reported that 88% of respondents say their organisations use AI in at least one business function. That’s mainstream adoption, not a niche experiment.

Menlo Ventures’ enterprise research claims enterprise GenAI spend hit $37bn in 2025, up from $11.5bn in 2024.

So, not imaginary money. Not just “innovation theatre”.

Yet here comes the mild contradiction: a lot of that spend is still searching for shape. Licences get bought, sandboxes appear, a few teams ship something helpful… and then the scaling stalls.

Which leads to the more useful question for senior leaders:

If AI is real, why does it still feel like a bubble?

Where the bubble risk really sits: unit economics and “pilot purgatory”

Most AI bubble fears aren’t about whether a model can write code or summarise a document. We’ve all seen that demo. The fear is quieter:

What does it cost at scale?

Who carries the operational risk?

Does the value land in this quarter, or in “some future operating model”?

Training costs have been climbing fast. One analysis from Epoch AI argues frontier training costs have historically grown 2–3x per year, and points towards billion-dollar training runs within a few years if trends hold.

Even if you never train a model yourself, that dynamic matters. It shapes vendor pricing, capacity constraints, and negotiation leverage.
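The compounding maths behind that claim is easy to check. A minimal sketch, assuming a purely illustrative $100m frontier training run today and 2.5x annual growth (a hypothetical midpoint of Epoch AI’s 2–3x range, not a figure from the source):

```python
def projected_cost(base_cost_usd: float, annual_multiplier: float, years: int) -> float:
    """Compound growth: cost after N years at a fixed annual multiplier."""
    return base_cost_usd * annual_multiplier ** years

# Illustrative only: a $100m run today, growing 2.5x per year.
base = 100e6
for year in range(1, 5):
    cost = projected_cost(base, 2.5, year)
    print(f"year {year}: ${cost / 1e9:.2f}bn")
```

At these (assumed) numbers the projected cost crosses $1bn within three years, which is the shape of the trend the analysis points to, not a forecast.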

And then there’s the classic enterprise trap: pilot purgatory. You get ten proofs of concept, each with a small win, none with a measurable run-rate, and suddenly you’re running a mini zoo of tools. Everyone is “busy”. Nobody is accountable for outcomes.

A quick smell test I like (informal, but it works): if your AI programme has more demos than decommissioned legacy steps, you’re not changing work, you’re collecting toys.

A detour that matters: the grid, the permits, and the calendar

This is where I’ll take a small tangent, since some execs miss it until it bites.

AI timelines don’t only depend on engineers. They depend on property, procurement cycles, and local infrastructure. Data centres need land, cooling, and serious electricity supply. Grid upgrades don’t happen on sprint cadence. Permitting can drag. Supply constraints show up as “delivery risk” on project plans, then turn into “business risk” when a product launch slips.

Goldman Sachs has written about the data centre capacity build-out and scenarios where capacity catches up or stays tight, with occupancy staying high in base cases.  That’s not a niche concern. It’s a strategic planning input.

So when people whisper “bubble,” sometimes they mean: we’re spending like this will be easy, but the real-world constraints are not easy.

Regulation is not a footnote anymore

If you operate in, sell into, or process data tied to the EU, the EU AI Act timeline is now a board-level calendar item.

The European Commission’s published timeline sets out staged application, with general provisions and certain prohibitions applying from 2 February 2025, and general-purpose AI rules applying from 2 August 2025, with full roll-out foreseen by 2 August 2027.

This shifts AI from “innovation topic” to “governed capability”. It nudges you towards:

model and data documentation that doesn’t make auditors laugh,

clearer supplier contracts,

and AI literacy inside the business (not just inside IT).

The bubble risk here is subtle: teams rush features out, then realise compliance and trust work arrived late. That’s when programmes get paused, not because AI “failed,” but because governance was bolted on like an afterthought.

Even Gartner is basically saying: calm down

Gartner’s 2025 commentary puts generative AI into the Trough of Disillusionment on its Hype Cycle, meaning organisations are getting more realistic about limits and effort.

That’s healthy. Disillusionment is where serious delivery starts. It’s where the shiny “wow” gives way to the boring stuff: integration, access controls, monitoring, change management, and actual adoption.

Honestly, “boring” is what you want. Boring scales.

The IT myth graveyard: we’ve done this dance before

If you’ve led tech for long enough, you’ve lived through at least one big myth that aged badly. A few greatest hits:

“The paperless office is here.” Printers stayed. People printed emails. We moved the mess around.

“ERP will fix the business.” ERP fixed some processes and exposed others. Then everyone learned the joy of customisations and upgrades.

“Cloud is always cheaper.” It’s cheaper when you run it well. When you don’t, the bill turns into a monthly horror story.

“Big data will make decisions for us.” Data helped, once we cleaned it, governed it, and built trust in it.

“Blockchain will replace databases.” Some neat use cases, lots of noise, and plenty of projects that quietly stopped meeting.

“The metaverse will replace the office.” Nice idea for some scenarios. Most people still wanted a normal video call and a decent agenda.

AI fits this pattern, but with a twist: the tech is genuinely useful early. That’s why adoption is moving fast.  The risk is that usefulness gets mistaken for effortless value.

And that’s how bubbles form: easy wins get extrapolated into wild forecasts.

So what should senior leaders do in 2026?

Not panic. Not freeze. And definitely not spray money everywhere and hope.

A practical posture looks like this: treat AI as a portfolio with clear risk controls, not as a single “AI programme” with vague success criteria.

Here’s a compact list, no fluff, just what tends to work:

Pick 5-7 use cases that touch real workflows, not abstract “capabilities”. Tie each to a metric the CFO respects (time saved backed by adoption rates, reduced rework, fewer escalations, faster cycle time).

Run “cost per outcome” tracking, not vanity metrics. If a tool writes 10,000 emails but nobody sends them, that’s theatre.

Keep vendor strategy intentional. Foundation model choice, hosting choice, and data policy are now intertwined. If you can’t explain your dependency chain in two minutes, it’s too messy.

Build guardrails early: identity, data access, logging, and human review paths for high-impact decisions. This is where regulation and risk teams become allies, not blockers. 

Plan for capacity and lead times. If your roadmap assumes unlimited compute at stable prices, it’s a fantasy in a nice suit. 

Make change management non-negotiable. People don’t adopt AI because you bought licences. They adopt because it makes their week easier and doesn’t get them in trouble.

And one more that sounds trivial, but isn’t: kill projects. Kill the ones that don’t land. The bubble atmosphere grows when nothing ever dies and every initiative lives on as a “learning”. Learning is good. Zombie portfolios are not.
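The “cost per outcome” tracking mentioned above can be sketched in a few lines. Everything here is invented for illustration: the UseCase class, the use-case names, and the numbers are all hypothetical, not from the source.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    monthly_cost: float      # licences plus run costs
    outcomes_delivered: int  # a metric the CFO respects, e.g. hours actually saved

    @property
    def cost_per_outcome(self) -> float:
        # Infinite cost-per-outcome flags pure theatre: spend with no landed value.
        if self.outcomes_delivered == 0:
            return float("inf")
        return self.monthly_cost / self.outcomes_delivered

# Hypothetical portfolio, ranked cheapest outcome first.
portfolio = [
    UseCase("contract summarisation", 12_000, 400),
    UseCase("email drafting pilot", 8_000, 0),  # drafts written, none sent
]
for uc in sorted(portfolio, key=lambda u: u.cost_per_outcome):
    print(f"{uc.name}: {uc.cost_per_outcome:.2f} per outcome")
```

Ranking the portfolio this way makes the kill-list conversation concrete: entries with high or infinite cost per outcome are the zombie candidates.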

Bust concerns are real, yet the likely outcome is a reset

If you’re hoping for a neat headline (“AI bubble pops!”), reality tends to be messier. Markets can correct. Budgets can tighten. Some vendors will vanish. Some internal programmes will stall.

Reuters has captured that split mood: some investors worry spending won’t pay off as hoped, others think the cycle is still early and justified.

My view (and yes, take it as a view): the most probable path isn’t a clean bust. It’s a reset.

Expectations come down a notch.

Delivery discipline goes up a notch.

The winners look boring: strong data practices, clear controls, real workflow adoption.

The losers keep selling magic.

And in a funny way, that’s good news for serious organisations. If you can steer through the froth, keeping your feet on outcomes, governance, and unit costs, you don’t need the bubble to keep inflating. You just need AI to keep doing useful work, quietly, week after week.

That’s the kind of “hype” that survives.

Free Newsletter

Stay in touch. Subscribe to my free LinkedIn newsletter on strategy, technology and delivery. Read less. Know more. https://tinyurl.com/3bcbee2z