We’re human beings—we bring things to the table that AI doesn’t: ServiceNow CTO

ServiceNow CTO Pat Casey on how agentic AI is reshaping workflows and hiring, why governance matters as AI agents start talking to each other, and the thoughts that keep him up at night

Last Updated: Dec 19, 2025, 15:42 IST | 11 min read
Pat Casey, CTO, ServiceNow

Tech giant ServiceNow is accelerating its AI-first strategy with bold moves on multiple fronts. Fresh off its Zurich platform release—the company’s latest upgrade introducing governable, secure multi-agent AI at enterprise scale—ServiceNow has doubled down on innovation with acquisitions like Moveworks, an AI-powered conversational automation firm, and is reportedly in talks to acquire cybersecurity leader Armis.

At the same time, the company is investing heavily in talent development through ServiceNow University in India and partnerships with the All India Council for Technical Education (AICTE) and Nasscom FutureSkills Prime, aiming to train over a million learners in emerging tech and AI skills. India is central to this strategy. Hyderabad is ServiceNow’s single largest engineering site worldwide, with just under 4,000 employees. In fact, roughly half of the company’s global engineering workforce—between 46 percent and 53 percent—operates out of India, underscoring the country’s role in building next-generation enterprise workflows.

In a wide-ranging conversation, Pat Casey, chief technology officer at ServiceNow, talks about how agentic AI is reshaping workflows and hiring, and why governance matters as AI agents start talking to each other. He also shares candid views on the risks of over-trusting AI, and the wildcard technology that could upend everything: Quantum computing. Edited excerpts:

Q. From hiring to productivity, how has AI impacted your workforce globally, and in India?

First, we’ve made some big investments in AI productivity tools for our engineers, deployed to about 7,000 people, to make them more productive. Our internal numbers show that we’re anywhere between 10 and 18 percent more productive in this post-AI era compared to pre-AI. We measure that by looking at whether we’re producing code that gets to our customers; that’s the key metric. And that’s broadly similar to what other tier-one engineering firms are seeing. When you’re talking about 7,000 people being 10 percent more productive, that’s the equivalent of hiring over 700 people.
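To put that arithmetic in concrete terms, here is a minimal sketch of the headcount-equivalent calculation Casey describes. It is purely illustrative, not ServiceNow's internal methodology; the 7,000-engineer and 10 to 18 percent figures are the ones quoted above.

```python
# Headcount-equivalent of an AI productivity gain (illustration only).
engineers = 7_000
gain_low, gain_high = 0.10, 0.18  # 10 to 18 percent more productive

# A workforce that is X percent more productive does the work of
# roughly engineers * X additional people.
equivalent_hires_low = engineers * gain_low
equivalent_hires_high = engineers * gain_high

print(f"Equivalent extra hires: {equivalent_hires_low:.0f} to {equivalent_hires_high:.0f}")
# Equivalent extra hires: 700 to 1260
```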

Q. In terms of hiring itself, especially with tech talent, has that changed? Are you looking for a different kind of people when hiring, not just in India but globally?

It has changed. Two things: First, I don’t care what position I’m hiring you for—even if it’s as a front-end engineer, where all you will do is build beautiful things on screen and great user interfaces—your ability to use AI productivity tools is now one of the skills I look for. Can you use those tools? We interview against that. In some cases, we give people tests on it.

From an engineer’s perspective, the craft of engineering involves a variety of tools you need to master to be good at your craft. If you’re a carpenter, you need to know how to use saws, angles and all that fun stuff. In this day and age, being able to use AI code assistance is part of the craft of engineering. So I want to make sure we’re hiring engineers who know how to use all the modern tools.

What I tell people is this: If you’re that rare human being who is so productive without using the tools, that’s fine—I care about your productivity. It’s not that you must use the tool, but I believe that by using the tools and the standards everyone else in the industry uses, you’ll maximise your chances.

Q. With AI tools and technologies evolving almost daily, how critical is continuous upskilling?

Yes, we’re self-interested, but we also genuinely think it’s good for the community. We’re really invested in training people to have ServiceNow skills—whether you apply them as a ServiceNow employee, work for a partner, or just work for a firm that uses our platform. We think it’s good for us and good for everybody.

We’ve got a global programme that aims to train 3 million people worldwide. About a third of that—roughly a million people—we aim to train here in India on basic, functional ServiceNow-relevant skills. In addition, we have university outreach programmes where we structure coursework that helps set someone up to be successful as a ServiceNow employee, or equips them with skills the industry needs.

Interestingly, I find AI tool use among new grads is already there. They’ve figured it out—probably because they’re self-interested and lazy. And I say that in a good way: Most good engineers are self-interested and lazy. They realised they can get their coursework done faster using AI tools. To my mind, that’s not cheating—that’s learning how to do something efficiently, which is useful in the real world.


Q. We seem to be in the agentic age now. How key is domain specificity? Are there certain key domains you’re focusing on in India, or where you’ve seen more uptake from customers?

In India, our marketplace really revolves around three key use cases. First, we do huge volumes of IT service management (ITSM) solutions—for example, Tech Mahindra runs about 150,000 people on ServiceNow ITSM.

Second, HR is commonly used, mostly for HR case management. We’re not going to run your payroll—that’s not our business—but we’ll help you handle cases like, “Why did my pay stub show $93 instead of $100? Where did the $7 go?” HR looks that up and explains it. That kind of case management is a key use case for us.

The third is customer service. In India, the biggest part of our business is ITSM, but we’re seeing strong growth in HR and customer service as well. Customer service is a marketplace with some big players, but it’s being heavily disrupted by AI. That’s causing organisations to say, “Just because we’ve done something the same way for 20 years doesn’t mean it’s still the best way.” That opens up conversations for us—conversations that probably wouldn’t have happened pre-AI, when people thought, “It’s fine the way it is.” We’re not always the selected vendor, but those discussions are happening now.

Q. What defines ServiceNow’s competitive edge, especially in this agentic era?

First, I think objectively our agentic workflows, reasoning models and ability to deliver accurate answers are very strong compared to the industry. If you have a specific reasoning problem and want to build a solution, I believe you can get that solution live with higher quality on ServiceNow than with any competitor. That’s one key differentiator.

Second, and more fundamentally, we’re a workflow system. If you’re a business, a government, or any large entity with workflows, many of those workflows already run on ServiceNow. That makes deployment dramatically easier. If you already run a workflow on ServiceNow with human agents, we can simply assign that task to an AI worker. The AI worker will do its best job or kick it back to a human if needed. If the workflow is already in ServiceNow, going live with our agentic capabilities is far easier than bolting on a third-party solution.
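As a rough illustration of the pattern Casey describes, a task that used to go to a person can instead be routed to an AI worker, with an escalation path back to a human when the agent declines or is not confident. The sketch below is hypothetical; it is not ServiceNow's API, and all class and function names are made up for the example.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    description: str
    assignee: str = "unassigned"
    resolution: Optional[str] = None

def route_task(task: Task,
               ai_agent: Callable[[Task], tuple[Optional[str], float]],
               human_queue: list[Task],
               confidence_threshold: float = 0.8) -> Task:
    """Assign a workflow task to an AI worker first; escalate to a human
    if the agent declines or its confidence is below the threshold."""
    answer, confidence = ai_agent(task)
    if answer is not None and confidence >= confidence_threshold:
        task.assignee = "ai_agent"
        task.resolution = answer
    else:
        # The AI "kicks it back" to a person, as described above.
        task.assignee = "human"
        human_queue.append(task)
    return task

if __name__ == "__main__":
    def toy_agent(task: Task) -> tuple[Optional[str], float]:
        # Stand-in agent: handles password resets, declines everything else.
        if "password" in task.description.lower():
            return "Reset link sent", 0.95
        return None, 0.0

    queue: list[Task] = []
    print(route_task(Task("Password reset for user 42"), toy_agent, queue))
    print(route_task(Task("Why was my expense report rejected?"), toy_agent, queue))
    print(f"Tasks escalated to humans: {len(queue)}")
```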

Because we’re a workflow platform, we also have the infrastructure to support notifications, approvals, assignments—everything you need for large-scale workflows. That’s why we do everything from citizen services to HR to insurance underwriting. If it’s a workflow, if it’s a process, ServiceNow is good at it.

Q. How do you measure the return on investment for customers adopting GenAI-powered workflows?

On the product side, we actually have a solution for this called AI Control Tower. We saw customers coming to us saying, “I’ve got some AI from you, some I built myself, some from competitors—is it working? Can you give us a product to help measure that?” So AI Control Tower does three things: It inventories what you have, lets you define value metrics for each, and then reports against those metrics.

There are three main models customers use. For first-generation solutions—where AI helps you do your job—people typically calculate time savings per activity. For example, if drafting an email saves five minutes, and we drafted a million emails, that’s five million minutes saved. That’s the state of the art for most vendors and customers.

For newer agentic workflows, people look at the economic value of the task. If it costs $35 to work a case and the AI handled 20,000 cases, that’s $35 times 20,000.

The third model is deflection. The usual approach is: If I didn’t deflect it, what would it have cost to run the case? Then calculate the percentage deflected. For example, pre-AI you deflected 70 percent of inbound requests; post-AI you’re deflecting 90 percent. That 20 percent delta is attributed to AI, and you cost out those avoided calls.
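A back-of-the-envelope version of the three measurement models Casey outlines (time savings, per-task economic value, and deflection) might look like the sketch below. This is purely illustrative, not AI Control Tower's actual calculation; the per-case cost, case counts and deflection rates are the figures quoted in the interview, and the inbound-request volume is a hypothetical number added for the example.

```python
# Three simple ROI models for AI workflows, using the figures quoted above.

# 1. Time savings per activity (first-generation "assist" AI).
minutes_saved_per_email = 5
emails_drafted = 1_000_000
minutes_saved = minutes_saved_per_email * emails_drafted        # 5,000,000 minutes

# 2. Economic value of tasks handled end to end (agentic workflows).
cost_per_case = 35                  # dollars to work a case manually
cases_handled_by_ai = 20_000
agentic_value = cost_per_case * cases_handled_by_ai             # $700,000

# 3. Deflection delta: share of inbound requests no longer handled by people.
inbound_requests = 100_000          # hypothetical volume, for illustration
deflection_pre_ai = 0.70
deflection_post_ai = 0.90
extra_deflected = (deflection_post_ai - deflection_pre_ai) * inbound_requests
deflection_value = extra_deflected * cost_per_case              # avoided-call cost

print(f"Minutes saved: {minutes_saved:,}")
print(f"Agentic value: ${agentic_value:,}")
print(f"Deflection value: ${deflection_value:,.0f} on {extra_deflected:,.0f} avoided cases")
```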

And yes, we’re in a phase where lots of people deployed AI very fast. We’re seeing customers say, “I tried 10 things—three are home runs, five are fine, two we shouldn’t have bothered with.” That’s fair in early days, and AI Control Tower helps identify what’s working so you can do more of that.

Q. You run about 90 percent of your workloads on physical hardware instead of public cloud. Why stick to that model in an era dominated by hyperscalers?

We’re not necessarily sticking to it. Right now, north of 90 percent of our workload runs on physical data centres, but our medium-term strategy is to shift to about 50-50—half on hyperscalers, half on our own data centres. We have good relationships with Microsoft, Amazon and Google, and our goal is to get to 50-50 by 2030. It’s a gradual ramp.

As for why we’re where we are now—honestly, tradition. When ServiceNow was founded, hyperscalers didn’t exist. I always say we’re the last SaaS company launched pre-hyperscaler, so we had to buy servers, get data centre space and hire people who knew how to cable them. At this point, we’re good at it. If I were founding ServiceNow today, I wouldn’t start there—but having started there, we have the institutional expertise to do it well. So shifting to 50-50 makes more sense than going 100 percent cloud.

Q. When it comes to agent-to-agent interactions, is there a fear—or have you seen instances—of hallucinations and biases creeping in?

Hallucination is definitely an aspect of these models, though it’s much better than it was 24 months ago. Let me give you an example: If you asked early versions of ChatGPT, “Did Gandhi get a Nobel Prize?” it would confidently say yes—but in reality, he died before he could receive it. Whether you call that a hallucination or just an error, it was wrong and very sure of itself.

Today, that’s less common. If models were wrong 10 percent of the time before, they’re wrong maybe 2–3 percent now, and trending down. But anything above 0 percent is still a problem. Our view is that human beings can be wrong too, so if you deploy agentic technology, you need the same checks and balances you’d have for a human. If 2 percent of the time an AI gives a wrong answer, what do you do? The same thing you’d do if a human was wrong—escalate, review, correct.

The key is not to over-trust or under-trust AI. Over-trust creates problems; under-trust means you fall behind, because this is where productivity gains are coming from.

Q. What are some thoughts that keep you up at night?

AI is a great enabler for productivity—but it’s also a great enabler for bad actors’ productivity. We’ve already seen cases, and you’ve probably read about some in the press, where bad actors use AI for espionage, attacks, impersonation, or social engineering. It’s a low-cost way to scam someone—even my mother. So I worry about the social impact of amplifying bad actors because, unfortunately, bad people exist.

On the technology side, as an industry, we’re all in this together. If we under-invest, I don’t want to be here three years from now saying, “We’ve got a global data centre shortage and not enough GPUs.” If we over-invest, I don’t want to explain to the board why we spent $3 billion on something we won’t need for years. Finding that balance is hard. Every senior tech executive is having the same discussion. We have data, we’re smart people, we extrapolate, but this isn’t chess—it’s not a perfect information game. Five years ago, the pre-AI world was more predictable. Today, there’s a lot of change, and leaders are making decisions with imperfect information. We’ll make some great calls—and some boneheaded ones.

There are AI maximalists who believe the whole world will change overnight and some super-intelligent entity will run everything for us. I think that’s too much. We’re human beings—we bring things to the table that AI doesn’t.

Q. We’re in the world of GenAI right now. But what other emerging tech excites you?

Excites might be the wrong word—maybe fascinates. Quantum computing. We’ve been talking about it for 30 years. Every year someone says, ‘This is the year of quantum.’ It hasn’t landed yet, but if it does, it will change so many aspects of technology and the world. The most obvious example: E-commerce. Modern key exchange is based on factoring being hard. If quantum makes factoring easy, we’ll need a new way to secure online transactions. People will still want to buy things, but the underlying security model will have to change.

If quantum lands, it will radically transform things. It’s a wild card—if it hits in a particular year, it could upset the entire apple cart. As an engineer, it’s fascinating. Einstein said, “God doesn’t play dice.” I’m more in the Einstein bucket—quantum feels like spooky action at a distance. Is it real? The pragmatist in me says yes—the question is when.

First Published: Dec 19, 2025, 17:01
