AI is cheaper than human labour: Sam Altman

The OpenAI chief on why AI will become cheaper than human labour, how jobs will evolve, India’s intense AI momentum, and the resource and learning trade offs societies must prepare for.

Last Updated: Mar 26, 2026, 11:04 IST5 min
Prefer us on Google
New
OpenAI CEO Sam Altman. Photo by Anna Moneymaker/Getty Images via AFP
OpenAI CEO Sam Altman. Photo by Anna Moneymaker/Getty Images via AFP
Advertisement

At a closed door conversation with select editors, OpenAI CEO Sam Altman spoke candidly about the accelerating economics of AI, the shifting nature of work, and the strategic choices countries will face as the technology becomes deeply embedded in daily life. From India’s rapid adoption of AI tools to questions of energy use, learning, and the future of personal agents, Altman offered a wide angle view of how the next phase of AI may unfold. Edited excerpts:

On AI being cheaper than human labour

It will absolutely be cheaper. The energy costs of inference today—per line of code—are already far, far lower than the energy costs of a person doing equivalent work. These energy cost analogies often get weird.

People usually compare the energy cost of a human at inference time—the moment they solve something—with the total training energy of an AI model. But a person also requires a huge amount of energy over their lifetime to “train”—to grow, to run their body and brain for decades, not to mention the evolutionary process that operated at vast scale to produce human intelligence in the first place.

So, these models are already surprisingly efficient per token at inference time, relative to the energy required for a human to generate a token of thought. I expect that efficiency to continue improving significantly. My view is that, per unit of intellectual capability, energy cost will not be the dominant factor—the models will be extremely efficient.

But because we will use such large volumes of AI, the global energy footprint will still matter.

On reskilling and jobs

In terms of jobs, I absolutely expect AI to have a big impact on the jobs people do today. For many jobs it will be a partial impact; some jobs will change entirely; and totally new jobs will be created. It wasn’t very long ago that the job that is now the most popular among American students entering university—being a YouTuber—simply didn’t exist. It’s a reminder that when new technologies emerge, new kinds of jobs emerge too.

Which is also why the reskilling question is so hard. I wouldn’t have known to tell anyone to train to be a YouTuber—and maybe I still wouldn’t—but right now we’re in a moment where it’s difficult to say what the best jobs will be 10 years from now. There are skills that will certainly matter: Resilience, adaptability and fluency with AI tools. When I was at university, everyone was told they needed to learn to code. That was good advice at the time; it’s probably not the best advice now. But I do think everyone needs to learn to become skilled at using AI tools—and that will be important.

On resource trade-offs

More of the resources required today go into inference rather than training, and that will continue in the future. In fact, I think that ratio may increase over time. So the bigger question won’t be whether a country should invest its water power or other resources into training frontier models—although that is a question—but rather how a country wants to balance the need or desire for local inference with the resource trade-offs, or whether they would prefer, for lack of a better word, to outsource that to another country.

On India as a large revenue market for OpenAI

What is happening in India with AI is truly remarkable. The country has a strong conviction to invest across the entire stack—from the infrastructure layer to the model layer and to the application layer on top. The rapid adoption of AI tools by people here is really striking.

It’s our fastest growing market for Codex; someone just told me it may become the largest Codex market soon. I don’t yet know what this will mean for the country in the long term, but I don’t know of any other country adopting AI with more vigour.

My sense is that, at the very least, we’ll see an incredible new generation of startups doing great work here. I think India has to be a revenue market. One thing that’s different about AI compared to previous internet services is that the cost of delivering these services is simply higher. So, to meet the volume of AI usage India will demand, we’ll have to find ways for it to be an attractive market as well.

On concerns around cognitive offloading

If we don’t adapt and if we don’t change how we teach children, that would be a real problem. When I was in junior high school, Google came out and my teachers panicked. They said, “There’s no point teaching history or anything else if you can just look up any fact instantly. Your brains are going to rot.”

They wanted us to promise not to use it because, in their view, it removed the reason to learn. All of us said, “This is ridiculous. As adults we’ll be able to use Google at work—so let’s use our brains for something else.” It took a little time, but the education system eventually adapted. It is important to learn how to think.

There are things I learnt—like how to write an essay—that I’m still glad I learnt the old-fashioned way, because they taught me something about how to think, and that remains useful. I suspect that if we make no changes to how we teach and assess students, then yes, they might end up doing too much cognitive offloading to these tools.

But the right answer seems to be to assume we are moving to this next level of technological capability—that people will have these tools—and then develop new ways to teach, challenge and evaluate them… assume the tools exist, but still require people to think, be creative and stretch their minds.

On AI investment returns

I think that depends on the group of people running each company. They will look at projected forward growth rates and decide how profitably, or how unprofitably, to operate in the short term.

On the battle over personal agent models

The models are not good yet. I think what will hold back personal agents is making them safe and secure enough that you can trust them with your personal data, without the risk of something like a prompt injection stealing all your information. So, I think as soon as we, or someone else—really, the field as a whole—can develop a solid framework that users can trust from a safety and privacy standpoint, I would expect rapid adoption of personal agents.

First Published: Mar 26, 2026, 11:39

Subscribe Now

(This story appears in the Mar 20, 2026 issue of Forbes India. To visit our Archives, Click here.)

Latest News

Advertisement