We'll drive AI costs down far more than anyone thinks possible: Sam Altman
The OpenAI CEO on how AI will reshape today’s jobs, India’s rapid adoption of the technology, and why empowering people—not concentrating power—is key to the future of AI.


At a closed-door Q&A with select editors—attended by Forbes India—OpenAI CEO Sam Altman offered a wide-ranging view of where advanced AI is headed next. From models that can now discover new knowledge to the rising costs of frontier systems, the future of software, India's place in the global AI stack, and the need to democratise powerful tools, Altman spoke candidly about both the opportunities and risks ahead. Edited excerpts:
There was a recent physics result that genuinely seemed to astonish many physicists. Even those who had been sceptical said, "All right, maybe AGI is pretty close." Recently, we took part in an initiative called 'First Proof', which involved 10 research-level mathematics problems that were publicly known to have no existing solutions. I believe our model was able to solve seven of them.
I’ve heard this has converted some of the biggest sceptics. I think this may be the most important evaluation remaining for assessing model intelligence and capability.
It is true that software is now far easier to create than ever before. I’m sure that will be quite bad for some software companies. But many software companies have a value proposition that is quite different. I think this is going to be the greatest era for new companies… we will see an explosion in new value creation.
But, of course, there are incredible small language models being built, and I suspect we'll continue to see models for narrow applications being created for smaller and smaller amounts of money—and more and more companies doing that very well. The building energy in India is quite remarkable; I've never seen such relentless energy attacking the entire stack anywhere else.
But everything I’ve studied in history suggests that concentrating all AI power in the hands of one company or one country—even in the name of safety—would be a disastrously bad thing to do.
We introduced a strategy, at least within our field, called iterative deployment. The idea is that we release AI early into the world, allowing people to become familiar with it and to use it even when it’s imperfect, even when it has flaws. That doesn’t mean we aren’t responsible in how we do it. It doesn’t mean we don’t begin conservatively. But it does mean we empower people to do things with the technology that we ourselves might not like.
It means we try to encourage a robust ecosystem to be built around the world. It means we accept the trade-off of empowering people and accepting that society will have to wrestle with something new, rather than trying to hold on to all the power ourselves and claiming we could guarantee this or that outcome.
Different countries will say, you know, “total ban on social media for young people”, “partial ban”, “no ban at all”, and we’ll observe how it goes over time. For AI, similar things will happen. Some people will say, “If content is used with an AI tool for assistance at all, it counts as AI content.” Other countries will say there’s no difference. Some will fall somewhere in the middle.
First Published: Mar 25, 2026, 14:26