Data centres to hold more intellectual capacity than humans by 2028: Sam Altman
OpenAI CEO says true superintelligence could arrive soon, calls for IAEA-style global AI body


OpenAI CEO Sam Altman warned on Thursday that humanity could be just a couple of years away from early versions of true superintelligence, suggesting that by the end of 2028, the volume of intellectual capacity housed inside data centres could surpass that residing in human minds—a fundamental turning point for civilisation. Altman delivered these remarks during his keynote address on day four of the ongoing India AI Impact Summit.
If the trajectory holds, he said, a superintelligent system would be capable of outperforming human executives in running a major corporation and surpassing leading scientists in original research—a prospect he described as a near-term possibility. He also proposed a three-pillar framework for navigating this accelerating timeline, focusing on democratisation, societal resilience, and a combination of inclusive participation and intellectual humility. “Democratisation of AI is the only fair and safe path forward—centralisation of this technology in one company or one country could lead to ruin,” he said.
The desirable outcome a couple of decades from now, he argued, must look like a world of liberty, democracy and widespread human agency—not a bargain in which totalitarian efficiency is traded for technological miracles. “Some people want effective totalitarianism in exchange for a cure for cancer. I don't think we should accept that trade-off, nor do I think we need to.”
The second pillar of his argument was what he called “AI resilience as a core safety strategy”. Going beyond the technical alignment challenges that typically dominate AI safety discourse, Altman argued that “societal resilience” must now be considered equally important. No single lab or system, he said, can deliver a good future unilaterally. “We need a society-wide approach about how we are going to defend against this,” he said.
Since the development of AI has already produced results that were not previously anticipated, he added that critical questions remain unresolved: how to think about superintelligence aligned with authoritarian governments, how to manage new AI-enabled forms of warfare between states, and whether countries will need entirely new forms of social contracts. “We think it’s important to have more understanding and society-wide debate before we’re all surprised,” he said.
Altman also suggested that the world may urgently need “something like the IAEA” (the International Atomic Energy Agency) for international coordination of AI, with the specific capacity to respond rapidly as circumstances change.
On AI's economic consequences, he said the technology will not just make many goods and services cheaper, accelerate economic growth, and expand access to health care and education, but will also disrupt current jobs. “It will be very hard to outwork a GPU in many ways,” he said.
He concluded his address by framing the technological growth as a critical choice between empowering the global population or allowing for the unprecedented concentration of power.
First Published: Feb 19, 2026, 15:31