
Why artificial intelligence isn't a sure thing to increase productivity

As companies adopt artificial intelligence to increase efficiency, are their employees skilled enough to use those technologies effectively? Prithwiraj Choudhury looks to the US Patent and Trademark Office for a case study

Published: Mar 23, 2018 11:29:37 AM IST
Updated: Mar 23, 2018 11:30:22 AM IST

Image: Shutterstock

Thinking about the fast-approaching era of artificial intelligence, employers rejoice in the productivity gains such tools could bring, while workers are more likely to calculate how much time is left before R2-D2 takes over their jobs.

“Jacques Bughin and co-researchers estimate that in the future, 50 percent of all tasks currently done by humans could be done by machine learning and artificial intelligence,” says Prithwiraj (Raj) Choudhury, assistant professor at Harvard Business School. Overall, that could translate into a boost in global productivity of 1 percent or more.

But it turns out that long before robots replace workers en masse, if they ever do, workers will be using AI-based tools in their jobs, as radiologists already do when they interpret X-rays and as lawyers do when they turn to machine learning to dig out past cases that set precedents for legal arguments.

Choudhury realized there was scant research on the skills workers need to use artificial intelligence-based tools to their full potential. That’s a key piece of information to have as companies consider investing what consulting firm Accenture estimates will be $35 trillion in cognitive technologies in the United States by 2035. Simply adding AI tools does not automatically increase productivity if the people using them can’t apply the technology correctly.

“AI tools might be good at predictions, but, if they are not used properly, there is no value in investing in such tools,” Choudhury says.

Choudhury aims to fill that gap with a new working paper, “Different Strokes for Different Folks: Experimental Evidence on Complementarities Between Human Capital and Machine Learning.” The paper, written with Evan Starr and Rajshree Agarwal of the University of Maryland, suggests that firms must think carefully about the skills they’ll need to hire or train for if they are going to get the most bang for the buck from their new AI tools.

Choudhury has spent his career researching human capital, looking inside companies such as Microsoft, Infosys, and McKinsey to analyze what makes knowledge workers most productive. A few years ago, he began looking at the United States Patent and Trademark Office (USPTO), which has adopted innovative practices around remote work for its employees.

“I found the US patent office fascinating,” Choudhury says. “It is not only a large organization with more than 10,000 people, but also an organization that shapes the innovation system. What they do matters for the entire US economy.”

In the course of writing a Harvard Business School case on the patent office, he discovered the agency was implementing a sophisticated new machine learning program called Sigma-AI in an attempt to cut the time necessary to review patent applications.

Patent examiners can use Sigma-AI to make sure applications propose truly novel ideas, and not designs or techniques previously used in other patents—known as prior art. “That means searching through hundreds of thousands of documents,” says Choudhury.

The office aims to provide applicants with at least an initial answer within 10 months. With patent applications up nearly 20 percent in five years, however, there is currently a backlog of half a million applications, resulting in delays of an additional six months or more.

In the past, employees have used a Google-like Boolean search tool to identify prior art, hunting for specific keywords to pull up past cases. The new machine learning tool automates this process, Choudhury says. “The document is fed into this tool, and then it spits out what it thinks would be the relevant documents for an examiner to look at.”
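The distinction is roughly keyword matching versus similarity ranking. Below is a minimal, hypothetical sketch in Python, with TF-IDF cosine similarity standing in for the machine learning ranker; the sample documents and function names are illustrative and not part of the USPTO's actual Sigma-AI system.

```python
# Illustrative sketch only: TF-IDF cosine similarity stands in for the
# machine learning ranker; the USPTO's Sigma-AI is far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_art = [
    "A hinged lid for a beverage container with a locking tab",
    "Method for ranking documents by keyword frequency",
    "Self-sealing valve for a reusable drink bottle",
]

# Boolean-style search: the examiner must guess the right keywords.
def boolean_search(query_terms, documents):
    return [doc for doc in documents
            if all(term.lower() in doc.lower() for term in query_terms)]

# ML-style search: the full application text is fed in and documents
# come back ranked by similarity, with no keywords required.
def similarity_search(application_text, documents, top_k=2):
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([application_text] + documents)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(scores, documents), reverse=True)[:top_k]

print(boolean_search(["lid", "locking"], prior_art))
print(similarity_search("A resealable cap for a water bottle", prior_art))
```

The practical difference for the examiner: the first approach succeeds or fails with the choice of keywords, while the second returns a ranked list even when the application and the prior art use different vocabulary.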
Is a computer science background necessary?

Choudhury and his fellow researchers wanted to find out whether having a background in computer science and engineering (CS&E) would improve patent workers’ ability to use the artificial intelligence-based tool and make them more productive.

To ensure that prior experience working in the office wouldn’t skew the results, the researchers “recruited” patent examiners who were a completely blank slate: MBA students from HBS. For the experiment, they gave each of 221 students a patent application with five relatively obscure claims for which prior art existed. Half of the students were randomly assigned to use the Boolean search tool and half to use the machine learning tool.

They also gave half of each group access to expert advice to help them better craft their searches. That advice, to the researchers’ surprise, turned out to be crucial to students getting the right answer.
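The setup amounts to a two-by-two randomized design: search tool (Boolean versus machine learning) crossed with expert advice (given or withheld). A hypothetical sketch of that assignment, with made-up participant labels and a purely illustrative randomization scheme:

```python
# Hypothetical sketch of the 2x2 random assignment described above;
# participant labels and the seed are illustrative, not from the study.
import random
from collections import Counter

random.seed(42)  # reproducible illustration

participants = [f"student_{i}" for i in range(1, 222)]  # 221 MBA students
random.shuffle(participants)

assignments = {}
for idx, student in enumerate(participants):
    tool = "machine_learning" if idx % 2 == 0 else "boolean"
    advice = "expert_advice" if (idx // 2) % 2 == 0 else "no_advice"
    assignments[student] = (tool, advice)

# Roughly a quarter of participants land in each of the four cells.
print(Counter(assignments.values()))
```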

“Without the advice, no one gets the silver bullet—it doesn’t matter if you use the Boolean or machine learning,” Choudhury says. “That’s a validation of human expertise of a real patent examiner that is formed from years of experience.” Chalk one up for humanity.

For those who did get the advice, the researchers found that productivity rose or fell depending on the workers’ backgrounds. Those with CS&E experience did better with the machine learning tool, while those without CS&E experience did better with the Boolean tool.
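In regression terms, that pattern is an interaction between the assigned tool and a participant's CS&E background. A hedged sketch of how such an interaction could be specified, using hypothetical column and file names rather than the paper's actual analysis:

```python
# Hypothetical specification; column and file names are invented here,
# and this is not the working paper's actual estimation strategy.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant with
#   found_prior_art : 1 if the participant identified the prior art
#   ml_tool         : 1 if assigned the machine learning tool
#   cse_background  : 1 if the participant has CS&E experience
df = pd.read_csv("experiment_results.csv")  # hypothetical file

# The ml_tool:cse_background interaction captures "different strokes
# for different folks": the ML tool paying off mainly for CS&E users.
model = smf.ols("found_prior_art ~ ml_tool * cse_background", data=df).fit()
print(model.summary())
```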

The researchers did not set out to determine which tool was better; in any case, that’s beside the point, says Choudhury. The reality is that many companies are already adopting AI technology in the hope that it will improve productivity. Yet, says Choudhury, “in the vast majority of situations, it will be used by people without computer science experience.”

That’s akin to asking someone with a humanities background to use macros in Excel: they may figure it out eventually, but they won’t be as productive as someone with a background in statistics. If firms do not compensate for employees’ lack of computer science experience, they risk the failure of the very technology they’ve adopted to improve their operations.

“If someone’s past experience has been entirely in the world of older technology, and suddenly a machine learning tool is thrust upon them, they will be less productive, even if the tool is a great tool,” Choudhury says.

That’s not to say that companies necessarily need to hire computer scientists. It may be that with extensive training, employees without such backgrounds can learn to use machine learning tools efficiently. Choudhury is currently preparing to run a more ambitious experiment this fall with 1,000 subjects, giving those without CS&E experience hands-on training to see if it improves their abilities.

“We will see if in the second stage, these people will catch up and the productivity gap narrows,” Choudhury says.

[This article was provided with permission from Harvard Business School Working Knowledge.]
