We'll drive AI costs down far more than anyone thinks possible: Sam Altman
The OpenAI CEO on how AI will reshape today’s jobs, India’s rapid adoption of the technology, and why empowering people—not concentrating power—is key to the future of AI.


At a closed-door Q&A with select editors—attended by Forbes India—OpenAI CEO Sam Altman offered a wide-ranging view of where advanced AI is headed next. From models that can now discover new knowledge to the rising costs of frontier systems, the future of software, India’s place in the global AI stack, and the need to democratise powerful tools, Altman spoke candidly about both the opportunities and risks ahead. Edited excerpts:
The most important development, in my view, is the models’ ability to discover new knowledge. We can debate how the models perform on different evaluations, where they are strong, and where they are heading. But something that has been happening over the last few months is that the models are now discovering new knowledge.
There was a recent physics result that genuinely seemed to astonish many physicists. Even those who had been sceptical said, “All right, maybe AGI is pretty close.” Recently, we took part in an initiative called ‘First Proof’, which involved 10 research-level mathematics problems that were publicly known to have no existing solutions. I believe our model was able to solve seven of them.
I’ve heard this has converted some of the biggest sceptics. I think this may be the most important evaluation remaining for assessing model intelligence and capability.
I think a lot of it will change. But people overreact to both the positive and the negative. I think people have forgotten that much of what makes a good company goes well beyond software. So if you have a database, available user data, whatever it may be—even if somebody else can write the software just as easily—that doesn’t necessarily mean they will become an effective competitor.
It is true that software is now far easier to create than ever before. I’m sure that will be quite bad for some software companies. But many software companies have a value proposition that is quite different. I think this is going to be the greatest era for new companies… we will see an explosion in new value creation.
The question was not whether I thought India or anyone else would compete with frontier models, but whether you could do it for $10 million. I didn’t think then—and I don’t think now—that you can make a frontier model for $10 million. In fact, if anything, I’d say that has become harder. The compute costs, the total complexity, the overall costs have all gone up.
But, of course, there are incredible small language models being built, and I suspect we’ll continue to see models for narrow applications being created for smaller and smaller amounts of money—and more and more companies doing that very well. The building energy in India is quite remarkable; I’ve never seen such relentless energy attacking the entire stack anywhere else.
I do share the concern about the concentration of AI. Our stance is that the only path forward is to heavily democratise AI and to put these tools in the hands of people—even if that comes with some downsides, even if it means society has to wrestle with some big challenges.
But everything I’ve studied in history suggests that concentrating all AI power in the hands of one company or one country—even in the name of safety—would be a disastrously bad thing to do.
We introduced a strategy, at least within our field, called iterative deployment. The idea is that we release AI early into the world, allowing people to become familiar with it and to use it even when it’s imperfect, even when it has flaws. That doesn’t mean we aren’t responsible in how we do it. It doesn’t mean we don’t begin conservatively. But it does mean we empower people to do things with the technology that we ourselves might not like.
It means we try to encourage a robust ecosystem to be built around the world. It means we accept the trade-off of empowering people and accepting that society will have to wrestle with something new, rather than trying to hold on to all the power ourselves and claiming we could guarantee this or that outcome.
I think different countries are going to try different approaches, and then we’ll learn from what works and what doesn’t. I suspect we’ll move more towards global standards. But even then, it will never be exactly the same everywhere.
Different countries will say, you know, “total ban on social media for young people”, “partial ban”, “no ban at all”, and we’ll observe how it goes over time. For AI, similar things will happen. Some people will say, “If content is used with an AI tool for assistance at all, it counts as AI content.” Other countries will say there’s no difference. Some will fall somewhere in the middle.
I’ve been a bit confused about how much capital we’ll need, because revenue is growing so quickly that we may end up needing less equity capital than we originally expected. We’ve also been able to make more progress than I thought in funding partners who will help finance compute for us through non-traditional arrangements. I would say our thinking on that is continuing to evolve as the market develops. And three to five years out is extremely difficult to forecast.
I just saw a statistic the other day: The cost of getting a difficult, high-quality answer out of our models has fallen more than a thousand-fold since two Decembers ago—so roughly 14 months. It is incredible. I don’t know if we can repeat that level of reduction in the next 14 months—I suspect we can’t. But I’m optimistic that we’re going to drive prices down far more than anyone thinks is possible, reasonable or likely. I think that will help the Global South too.
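As a back-of-envelope check on the figures Altman cites (a more-than-1,000-fold cost drop over roughly 14 months), the implied monthly and annualised rates of decline can be computed directly. The numbers below are illustrative only, derived from the interview’s round figures:

```python
# Illustrative arithmetic: what a 1,000x cost reduction over 14 months implies.
# Figures taken from the interview; this is a rough sketch, not OpenAI data.
total_factor = 1000   # overall reduction in cost per high-quality answer
months = 14           # approximate span ("two Decembers ago")

# Constant-rate assumption: cost shrinks by the same factor each month.
monthly_factor = total_factor ** (1 / months)          # ~1.64x cheaper per month
monthly_drop_pct = (1 - 1 / monthly_factor) * 100      # ~39% price drop per month
annualized_factor = total_factor ** (12 / months)      # ~373x cheaper per year

print(f"~{monthly_factor:.2f}x cheaper each month "
      f"(about a {monthly_drop_pct:.0f}% monthly price drop)")
print(f"~{annualized_factor:.0f}x cheaper per year at that pace")
```

Even sustaining a fraction of that pace, as Altman suggests, would keep compounding price declines at rates far beyond typical hardware cost curves.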
First Published: Mar 25, 2026, 14:26
(This story appears in the Mar 20, 2026 issue of Forbes India.)