SaaS isn’t dead; the way we think about it will change: OpenAI’s Brad Lightcap

During his recent visit to India, OpenAI COO Lightcap spoke to Forbes India about the company’s profit strategy, and how AI could reshape SaaS

Last Updated: Mar 27, 2026, 14:31 IST
Brad Lightcap, chief operating officer of OpenAI. Photo by David Paul Morris/Bloomberg via Getty Images
In a Nutshell
  • OpenAI COO says SaaS will evolve, not disappear, with AI advances
  • AI platforms like Frontier aim to enhance enterprise workflows
  • Safety and privacy remain core priorities for OpenAI products

Q. So you’re the person driving OpenAI towards profits?
Well, one day.

Q. For-profit efforts started in October, when the restructuring took place. How are they going?

How we operate has not changed all that much. Our mission is still core to what we do. Our non-profit is still critical to the work. We’ve now got one of the most, if not the most, well-capitalised non-profit organisations in the world. As the technology becomes more capable—and you’re starting to see signs of that now—it’s going to be very interesting to see how we build some of the consumer and enterprise products we’re working on.

Q. You already had some notable successes in enterprise contracts. A lot of big names have joined.

The thing about enterprise for OpenAI is that we’ve actually been doing it for quite a long time. Before ChatGPT, we launched our API in June 2020, and then it went to general availability in the autumn of 2020. So really, since that point, we’ve always been trying to understand how AI is going to penetrate and impact businesses and work.

Back then, it was GPT-3, and the models were only really capable of very narrow tasks—maybe a 5-second or a 5-minute task—and so they were used for point solutions inside an organisation, and not actually all that useful for real knowledge work.

Now you have this shift towards reasoning models, which has produced models capable of 5-hour tasks. That has entirely changed the landscape for what we can do in enterprise. So that’s been a big push for us in some of our recent product launches, things like OpenAI Frontier.

Obviously, a lot of the work we’ve done lately with ChatGPT Enterprise has really been about enabling the capabilities the models offer today—more agentic, long-horizon capabilities—with Codex obviously being a critical factor for that in software engineering. We think this is just the beginning of what will be a wave of transformation.

Q. You came on board in 2018, and you were with Sam Altman at Y Combinator. In 2018, could you see that OpenAI would become what it has become today?

We always debated how this was all going to go, and we never really had a good handle on what the timelines would be for the rate of progress.

It was always unclear whether we were going to make a lot of progress quickly and then there would be a very slow diffusion period, or whether there would be a very long cycle of progress and then a very fast diffusion period. One of the interesting things we’ve seen is both: The rate of progress has been relatively fast, but also the rate of diffusion has been relatively fast.

It’s been surprising. Some examples would be: Almost a billion people now use ChatGPT every week—it’s over 800 million users every week and growing very quickly; 100 million people just here in India, which is our second largest market. So it’s not just penetration in places like the US and Europe; it’s penetration globally.

We work with over a million business customers, and over 7 million people use ChatGPT at work on an active basis. The Codex growth has been phenomenal. Codex usage has grown 4x in the last month here in India alone. So the adoption of these tools is not happening over the course of years; it’s happening seemingly over the course of months, and now weeks. And we think that is only going to continue to compress and accelerate.

It’s also going to be a period of change. A lot of the work we do is with people—with enterprises large and small, with governments, with researchers and academia—to try and understand the impact these tools are going to have.

Q. With Frontier being a direct competitor to tools like Claude Cowork, how do you see it scaling in India, and could it disrupt the SaaS and IT services sectors?

No, quite the opposite. We are going to need a lot of help in the way we deploy and roll out Frontier.

Every business is unique, and every business has its own way of operating and its own systems, and obviously only they have the institutional knowledge of how they actually run their business and serve customers. AI is really powerful, but one of the things it lacks is the context that is specific to every business on earth.

Only the people who work at that business really have that context—they know the customers, the products, and the services. So Frontier for us is a platform that lets enterprises build agentic experiences, almost like AI teammates that are meant to work hand in hand with people in the organisation to do real work. Because you can’t do it without all the elements: You need the underlying knowledge of the enterprise, you need the embedded knowledge of the people doing the work, and then you need AI systems capable of doing tasks over very long horizons that can benefit from all that other knowledge.

So we see an important role for the entire integrated ecosystem to work with us to help deploy this globally. But I don’t think we’ll be able to do it by ourselves.

Q. And what about SaaS? Is it really “dead”, as some argue? Do you see it being displaced by AI platforms, or simply evolving?

I don’t think SaaS is dead. I think the way we think about software will start to change. If you think about what AI systems represent, they’re an entirely new interface for software.

But, at the same time, the systems we’ve built—today’s “software systems”—are incredibly important parts of the workflow. They house critical data. They capture critical elements of mission-critical business processes. So, I think that, in many ways, agentic AI will enhance our ability to use those systems and really derive value from them. Some of them will probably change over time, but the ecosystem adapts pretty quickly.

Q. Ads have become a big talking point. OpenAI was long seen as anti-ads, but that seems to be shifting, as Altman’s tweet suggested. How important do you see ads becoming as a revenue model for you?

We don’t come at it from a business model perspective, and I don’t know that we’ve ever been anti-ads. What we have realised is that we are pro people. This transformation has to be one that brings humanity along with it, not one that leaves humanity behind.

Increasingly, you’re getting models that are much more capable. These are models that will really turn into true assistants for people everywhere in the world. We want people anywhere in the world to be able to harness the power of those models to do things that elevate their lives, their businesses, and their goals.

We see it as a way to expand the pie; for people to be able to partake more in what’s happening and to offer a better quality of service. Now we are very clear about the principles we have around this.

There absolutely has to be a high level of user trust maintained throughout. We take privacy extremely seriously. So it will be an iterative process to get it right. But, on the whole, our hope is that expanding access is the right thing to do, and we see that as part of our mission.

Q. Do you foresee ads helping accelerate your journey towards profitability?

Our view is that if we build a great product that people love, and we can figure out a way for people to make use of the technology, there will be a market that forms and the business case will take care of itself.

So we almost never think about what we do from the perspective of whether it is profitable or not. We do try to think about it as derived from our mission, which is: How do we make sure that as many people as possible have access; that the technology is safe; that it’s broadly beneficial; and that it has real-world impact that we can measure and understand. If that trajectory is positive and people like what we build, then the rest takes care of itself.

Q. Talking about use cases, I was speaking to the CEO of Hexaware, which is the seventh largest IT company in India, and he said they’re looking at AI not to replace human beings, but to do things that could not be done earlier. That is their approach to AI. What do you think of it—is that the way AI will develop?

This has obviously been the long assumed outcome of what AI will bring.

Certainly, it feels likely that jobs in the future will change. However, what we’re starting to see now at the margin is the type of change we maybe didn’t expect—which is this combination of people and AI together being able to do meaningfully more than a person could do by themselves.

Codex is the example I love here. The entire field of software engineering feels like it’s undergoing a change with tools like Codex, where you now have two things happening.

One is that you have sophisticated engineers using tools like Codex; they are capable of doing far more, far faster. It’s almost as if every engineer now has a team of software engineers working for them. You can imagine the kind of leverage you get as a business if you’re able to harness that power.

The second thing is the number of people in the world who are now, for the first time ever, building software products without any background in building software products.

When you think about what this means for the shape of change, you almost never price that in. What I mean by that is: It’s easy to say, “Well, if you had an AI software engineer, it would replace software engineering jobs.” What you don’t realise is:

A) People who are software engineers are doing far more software engineering, and
B) A whole group of people who have never done software engineering are now doing it for the first time.

So the shape of these things is always hard to predict. Technological change opens up entirely new ways for the world to work; ways that we’re never quite good at predicting. So I try to stay open-minded.

Q. Safety and security concerns always come up with AI. What’s your view on this, and what specific steps is OpenAI taking to ensure safety and security across its products?

We take both safety and privacy very seriously. From a privacy perspective, we try to give users as much control as we can in terms of how their data is used, and to be really specific about where data is used and in what ways.

An example is that we recently rolled out ChatGPT Health as a product in beta. We’re specific about how we think about that category. When people communicate with an AI system about their health, it has to meet an extreme privacy standard, and we’re very explicit about how we build that into the product.

On the safety side, it is core to our mission that safety is thought through as we make progress in research.

We do a lot of research not just into core capability advancement, but also into what we call alignment and safety work—constantly testing and evaluating our models for capabilities we may not expect, emergent behaviours that may be unsafe, things the models become capable of doing that we need to have a societal conversation about as we adopt these systems.

A recent example is our latest model: GPT 5.3 Codex. We had to think through the fact that this is a model highly capable of writing code, and there’s a cybersecurity consideration there.

At the same time, we also think the same model can help fortify cyber systems and improve cybersecurity in a defensive way. So there’s a balance to these things, and we want to make sure the good applications are harnessed as much as anything else. It’s critical to our work, and it’s present in every internal conversation.

Q. Do you also run the startup fund?

I do not formally run it at this point.

Q. But you used to?

I’m affiliated with it, as I helped our team get it off the ground.

Q. What does the fund look for when deciding to invest in a startup, especially an Indian startup?

We love what we call AI-native companies: companies rethinking an application, an industry, or a domain from first principles, in a way that’s true to the question, “If you had a powerful AI system that could make a change here, how would that product or company work?”

We’ve been fortunate to back companies that have been successful in doing that across a lot of domains. A focus area for the fund is companies working at the intersection of AI and scientific research. We like that multidisciplinary, cross-cutting space where, for the first time, we’re seeing AI systems contribute to science.

First Published: Mar 27, 2026, 14:40

(This story appears in the Mar 20, 2026 issue of Forbes India.)
