The first is that we’re hyper-focussed on productive work. Every day I wake up asking myself, ‘is what we’re doing helping to make real work better?’ We think about legal reasoning, medical research, coding, physics, and so on. Would you trust an AI tool trained primarily to please you, or one trained to be a genuinely strong reasoning partner? We’re hyper-focussed on the latter.
The second part of our DNA is safety. From day one, we’ve invested a very large share of our resources into safety. We want the model to feel confident saying, “I don’t know the answer,” or “I see where your line of questioning is going and it may not be helpful”. It’s what your best friend or mentor would do. So having that capability built in gives us a strong foundation.
Third, we’re very curious about emergent behaviours, partly from a safety angle, but also because we ask, “How do we expose this?” Whatever tools we use internally for research, we want to make them available to everyone. That’s how tools like Claude Cowork came to be.
Culturally, we’re driven by bottom-up innovation. People on the front lines, in finance, legal, engineering, everywhere, are constantly thinking about how increasingly capable AI can help with real work. Some of our best ideas have come from developers, yes, but equally from finance and legal teams.
Q. With all the chatter around Claude and “consciousness”, how do you respond to the question of whether it shows anything close to that?
There’s no good definition of consciousness, so we have to think carefully about what the word even means. We want Claude to be broadly applicable and able to do useful, productive work. We want it to recognise being asked to do wrong things and push back. We want to ensure it’s a great reasoning partner.
There isn’t a proper definition for consciousness, AGI [artificial general intelligence], or anything like that. What matters to us is that the system improves itself in ways that align with our values.
Q. Claude Cowork’s launch sparked a huge market reaction globally, especially in India, given its IT services strength. Did you anticipate that and how do you see the agentic AI workspace evolving from here?
I’m not an expert in stock market dynamics. What I can say is that we remain hyper-focussed on productive work. Our mental model is: How does the AI become a great assistant for things you want to do? And how quickly can it help you get them done?
If you talk to people using Cowork and the plugins, they’re all saying: “Finally, I’m getting to do the work I’m actually trained for.” Before this, they were stuck in random bureaucracy, drafting documents, or reading the same things over and over again. Something that used to take 10 weeks is now getting done much faster.
Expectations have gone up. Most people say they’re busier, but in a good way.
Q. Vibe coding has taken off, especially with Claude Code. How fast is adoption growing, and do you see traditional typed-out coding fading away?
Vibe coding is more of a side effect; what we actually set out to do was problem-solve.
We imagined having something with the problem-solving skills of a great coder, and the reasoning abilities of a genius, and then asking, ‘What could that do?’ That’s the capability we’ve been trying to build and use internally, and we wanted to make that available to everyone. Vibe coding just happened to emerge as a side effect, and it took off.
February 4 was the first anniversary of launching Claude Code, and it’s been extraordinary. It’s the fastest-growing software business in history; it hit a $2.5-billion revenue run rate in less than a year. I wish I had a crystal ball for how things will evolve. Honestly, every three months, as capabilities improve, you realise: ‘Oh, there’s a new way to do this’.
Q. How do you plan to maintain Anthropic’s edge? Is your India focus primarily on developers rather than a B2C-led approach with Claude?
A large part of our strength, and what we’re concentrating on, is enterprise and B2B work. India is our second-largest country in terms of Claude usage. Second, India has one of the highest concentrations of technical talent. One in three developers who join GitHub is from India. And India is also the most optimistic about AI: Surveys show that around 75 percent of people in India believe AI will be beneficial. So, we’re thinking about long-term institutional partnerships with people who can help scale AI and SaaS services.
We’re also thinking about private and public collaborations. For example, some of the initiatives, like Aadalat.ai, are about building an entire courtroom support system that helps lawyers, judges, court staff, and citizens. It also addresses accessibility, which is crucial in a country of 1.4 billion people with 22 languages and over a thousand dialects.
Q. The Super Bowl ad recently went viral. If ads aren’t the way forward, what is the sustainable business model for a company like Anthropic?
We’re already sustainable. We already have a very viable business model. Let me start with why ads are generally a bad idea. If you go to your best friend for advice, imagine if they began by saying, ‘Let me sell you something’. That’s what ads introduce. They come with a lot of issues: How your data is used, stored, and monetised, and the addictive behaviours they encourage. Over time, even products you once loved become filled with this kind of slop. That is not the future we want.
In contrast, look at everyday products you use, like your phone. They’re ubiquitous, and yet they support very successful business models through partnerships with telecom companies, hardware manufacturers, and others. The value delivered at the last mile supports the entire ecosystem. That’s how we think about it. We focus on delivering real value, and that value can pay for making the technology widely available.
Q. With regard to AI infrastructure, how does Anthropic think about efficiency and frugality?
For productive work to succeed, two things have to progress simultaneously: Intelligence has to improve, and the cost of intelligence has to come down. If you look back over the last two or three years, the kinds of tasks these systems can perform, and the cost at which they can perform them, have changed dramatically. Costs have dropped significantly.
So, these are two sides of the same coin. Efficiency and frugality aren’t just about cutting costs; they’re about reducing the overall cost per task by improving intelligence and making problem-solving faster.
We’re also very mindful about how we deploy: What chip architectures we use, which clouds we deploy on. It’s one of the most important things to get right if we want to scale responsibly.