With $1 billion from Microsoft, an AI Lab wants to mimic the brain

OpenAI, created by Sam Altman with Elon Musk, will spend most of that $1 billion on computing power, and under the new contract, Microsoft will eventually become the lab's sole source of computing power

By Cade Metz
Published: Jul 23, 2019

Sam Altman, left, who runs the company OpenAI, and Satya Nadella, the chief executive of Microsoft, which is investing $1 billion in OpenAI, at the Microsoft campus in Redmond, Wash., July 15, 2019. Altman's 100-employee company wants to create a machine that can do anything the human brain can do. Skeptics wonder if it is possible. (Ian C. Bates/The New York Times)

SAN FRANCISCO — As the waitress approached the table, Sam Altman held up his phone. That made it easier to see the dollar amount typed into an investment contract he had spent the last 30 days negotiating with Microsoft.

“$1,000,000,000,” it read.

The investment from Microsoft, signed early this month and announced Monday, signals a new direction for Altman’s research lab.

In March, Altman stepped down from his daily duties as the head of Y Combinator, the startup “accelerator” that catapulted him into the Silicon Valley elite. Now, at 34, he is the chief executive of OpenAI, the artificial intelligence lab he helped create in 2015 with Elon Musk, the billionaire chief executive of the electric carmaker Tesla.

Musk left the lab last year to concentrate on his own AI ambitions at Tesla. Since then, Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing. Now he has landed a marquee investor to help it chase an outrageously lofty goal.

He and his team of researchers hope to build artificial general intelligence, or AGI, a machine that can do anything the human brain can do.

AGI still has a whiff of science fiction. But in their agreement, Microsoft and OpenAI discuss the possibility with the same matter-of-fact language they might apply to any other technology they hope to build, whether it’s a cloud-computing service or a new kind of robotic arm.

“My goal in running OpenAI is to successfully create broadly beneficial AGI,” Altman said in a recent interview. “And this partnership is the most important milestone so far on that path.”

In recent years, a small but fervent community of artificial intelligence researchers has set its sights on AGI, backed by some of the wealthiest companies in the world. DeepMind, a top lab owned by Google’s parent company, says it is chasing the same goal.

Most experts believe AGI will not arrive for decades or even centuries — if it arrives at all. Even Altman admits OpenAI may never get there. But the race is on nonetheless.

In a joint phone interview with Altman, Microsoft’s chief executive, Satya Nadella, later compared AGI to his company’s efforts to build a quantum computer, a machine that would be exponentially faster than today’s machines. “Whether it’s our pursuit of quantum computing or it’s a pursuit of AGI, I think you need these high-ambition North Stars,” he said.

Altman’s 100-employee company recently built a system that could beat the world’s best players at a video game called Dota 2. Just a few years ago, this kind of thing did not seem possible.

Dota 2 is a game in which each player must navigate a complex, three-dimensional environment along with several other players, coordinating a careful balance between attack and defense. In other words, it requires old-fashioned teamwork, and that is a difficult skill for machines to master.

OpenAI mastered Dota 2 thanks to a mathematical technique called reinforcement learning, which allows machines to learn tasks by extreme trial and error. By playing the game over and over again, automated pieces of software, called agents, learned which strategies are successful.
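The trial-and-error loop described above can be illustrated with a toy example. The sketch below is not OpenAI's Dota 2 system, which involved tens of thousands of chips and a far richer algorithm; it is a minimal tabular Q-learning agent, with made-up parameter names and a made-up corridor task, that learns by repeated play which action leads to a reward:

```python
import random

# Toy reinforcement-learning sketch (illustrative only): an agent
# learns, over many episodes of trial and error, to walk right along
# a 5-cell corridor to reach a reward at the far end.

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # q[state][action] estimates the long-term reward of each action
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # explore occasionally; otherwise exploit the best estimate so far
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # nudge the estimate toward reward plus discounted future value
            q[state][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# after training, the greedy choice in every non-terminal cell is "right"
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The same principle scales up: replace the corridor with a game state, the two actions with a game's full move set, and the table with a neural network, and the agent still improves the same way, by playing over and over and reinforcing whatever worked.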

The agents learned those skills over the course of several months, racking up more than 45,000 years of game play. That required enormous amounts of raw computing power. OpenAI spent millions of dollars renting access to tens of thousands of computer chips inside cloud computing services run by companies like Google and Amazon.

Eventually, Altman and his colleagues believe, they can build AGI in a similar way. If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence.

Altman painted the deal with Microsoft as a step in this direction. As Microsoft invests in OpenAI, the tech giant will also work on building new kinds of computing systems that can help the lab analyze increasingly large amounts of information.

“This is about really having that tight feedback cycle between a high-ambition pursuit of AGI and what is our core business, which is building the world’s computer,” Nadella said.

That work will likely include computer chips designed specifically for training artificial intelligence systems. Like Google, Amazon and dozens of startups across the globe, Microsoft is already exploring this new kind of chip.

Because AGI is not yet possible, OpenAI is starting with narrower projects. It built a system recently that tries to understand natural language. The technology could feed everything from digital assistants like Alexa and Google Home to software that automatically analyzes documents inside law firms, hospitals and other businesses.

The question is how seriously we should take the idea of artificial general intelligence. Like others in the tech industry, Altman often talks as if its future is inevitable.

“I think that AGI will be the most important technological development in human history,” he said during the interview with Nadella. Altman alluded to concerns from people like Musk that AGI could spin outside our control. “Figuring out a way to do that is going to be one of the most important societal challenges we face.”

But a game like Dota 2 is a far cry from the complexities of the real world.

Artificial intelligence has improved in significant ways in recent years, thanks to many of the technologies cultivated at places like DeepMind and OpenAI. There are systems that can recognize images, identify spoken words, and translate between languages with an accuracy that was not possible just a few years ago. But this does not mean that AGI is near or even that it is possible.

“We are no closer to AGI than we have ever been,” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, an influential research lab in Seattle.

Geoffrey Hinton, the Google researcher who recently won the Turing Award — often called the Nobel Prize of computing — for his contributions to artificial intelligence over the past several years, was recently asked about the race to AGI.

“It’s too big a problem,” he said. “I’d much rather focus on something where you can figure out how you might solve it.” The other question with AGI, he added, is: Why do we need it?

©2019 New York Times News Service
