AI boom or a bubble waiting to burst?
Sky-high valuations, market jitters and trust issues raise fears of an AI bubble—but experts say discipline and fundamentals could prevent a dotcom-style collapse


Is Artificial Intelligence (AI) having its dotcom moment? Industry experts and stakeholders believe it may well be. And if that happens, every company will be affected, Sundar Pichai, CEO of Google, said in a recent interview. “I think no company is going to be immune, including us,” he said.
In late October, Nvidia briefly achieved the remarkable feat of becoming the first company to surpass a market capitalisation of $5 trillion. This serves as a stark reminder of the unprecedented concentration of this boom within the AI infrastructure sector. Alphabet, meanwhile, joined the $3 trillion club in September, buoyed by a favourable antitrust ruling that ruled out a breakup of the company, and surging investor confidence in its Gemini AI roadmap.
However, November’s stock sell‑offs wiped tens of billions off AI‑linked names: AI software company Palantir Technologies suffered its worst month in two years (its stock dropped approximately 16 percent in November), and Nvidia retreated after a record run (falling approximately 12.6 percent).
This has revived an old question: Is this AI exuberance rational, or are we replaying the dotcom bubble of 1999–2000?
In the late 1990s, internet companies saw valuations skyrocket before the bubble burst in 2000, wiping out billions and leaving scars on the economy. Today, the AI sector exhibits similar signs: Inflated valuations, aggressive capital deployment, and a race to dominate a technology whose long-term economics remain uncertain.
“The dotcom boom taught us that technological revolutions are real, but timelines are often overestimated. Investor psychology today shows similar traits: Fear of missing out, inflated expectations, and a belief in uninterrupted growth,” says Jaspreet Bindra, co-founder of AI&Beyond.
“During the dotcom boom, we built a lot during that period. Then we thought, ‘Oh, we’ve built too much.’ And three years later, we realised we hadn’t built enough,” Pat Casey, chief technology officer of ServiceNow, told Forbes India during a recent conversation. He believes this is how the investment cycle works. “You’re always investing ahead of demand,” he says. “Sometimes, you’re too far ahead and there’s an overhang; sometimes you’re behind and capacity shrinks.” With regard to the AI bubble, he says, “No one knows whether we’re under or over the bubble. Personally, I think even if we are over-invested, we’ll catch up quickly because the demand is real.”
If the AI hype cools, Bindra believes that big tech’s trillion-dollar investments in infrastructure and chips will not disappear, “but the returns could take much longer to materialise”. He argues, “That may trigger a broader tech correction as markets shift from exuberance to fundamentals. AI is genuinely transformative, but even the most transformative technologies can be overvalued in the short run.”
Arun Chandrasekaran, distinguished VP analyst at Gartner, warns this dynamic can amplify bubble psychology: “Any pop in an AI bubble or market correction does not invalidate the value of AI progress. However, a downturn can breed scepticism and distrust among boards, CEOs and staff who might associate a market correction with the failure of the technology.”
That scepticism is already visible. CIOs are slowing deployments, venture investors are scrutinising fundamentals, and headlines about AI misfires feed a narrative of overreach. This erosion of trust matters because bubbles burst when confidence collapses—not just when valuations peak.
But beyond financial exuberance lies a deeper critique of the technology itself. Meta’s AI pioneer Yann LeCun argues the industry’s fixation on large language models (LLMs)—the engines behind ChatGPT, Google’s Gemini, and Meta’s Llama—is fundamentally flawed. “They are not a path to human-level intelligence,” LeCun has repeatedly stated. While LLMs are useful for tasks like summarisation and coding assistance, he warns they are “sucking the air out of the room”, diverting resources from alternative approaches that could lead to true breakthroughs.
LeCun advocates for “world models”—AI systems that integrate perception, reasoning, and real-world interaction. These models would learn like humans, through experience and understanding, rather than by predicting the next word in a sentence. His critique underscores a critical point: Scaling LLMs may deliver incremental improvements, but it won’t solve the harder problem of general intelligence. Yet, Big Tech’s billions are overwhelmingly flowing toward LLM-centric research, creating a monoculture that risks stalling innovation.
Nigel Green, CEO of financial advisory organisation deVere Group, underscores the need for discipline: “Exceptional results don’t remove the need for discipline. The AI ecosystem is growing fast, but fast growth doesn’t protect anyone from the consequences of over-extension… Belief is not always a strategy.”
He warns that corporations are committing vast sums to AI infrastructure while the path to real commercial returns “remains untested” in many industries. For investors and boards, Green says, the priority now is stress-testing assumptions about pricing power, supply security, and operational resilience: “The potential is profound. But the risk environment surrounding it is equally profound. Conviction must be matched with discipline. Otherwise, the gains could be uneven and shortlived.”
As Pichai says, “We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound. I expect AI to be the same.” The challenge is ensuring that the journey to that profound future doesn’t leave behind a trail of irrational exuberance and wasted opportunity.
First Published: Dec 17, 2025, 10:57