How can governments capitalize on AI's benefits while minimizing its dangers? New research examines several policies—and identifies a promising approach
Policy wonks have had their eye on neural-network algorithms—colloquially known as “artificial intelligence,” or AI—since at least 2016, when Google’s DeepMind famously used the technology to beat one of the world’s best human players at the ancient game of Go.
In 2023, soon after OpenAI released its powerful ChatGPT app to the public, AI regulation went from niche concern to front-page news. The European Union hastily revised its AI Act to account for the sudden appearance of software that could generate college essays, computer code, and misinformation with equal ease. OpenAI CEO Sam Altman beseeched Congress to rein in his industry ahead of the 2024 presidential election. And the Biden administration issued an executive order requiring AI developers to share “safety test results and other critical information with the U.S. government.”
While observing this flurry of regulatory activity, Kellogg finance professor Sergio Rebelo and his coauthors, João Guerreiro of UCLA and Pedro Teles of the Portuguese Catholic University, noted that the new frameworks each “tended to emphasize one solution.” That solution might revolve around banning certain high-risk AI applications outright (as in the EU’s revised AI Act); requiring tech companies to safety-test their AI software and report the results to the government (as Biden’s executive order called for); or holding AI developers legally liable for harms caused by their algorithms (as per the AI Liability Directive, a proposed law in the EU). The researchers wondered which, if any, of these approaches was likely to be successful.
“We wanted to try to think about these concerns in a systematic way by creating an economic model that would capture what the main issues are while screening out the details of how a specific algorithm works,” says Rebelo. In other words, by using mathematics to intentionally simplify reality, the researchers wanted to bring the key problems facing AI regulators into clearer focus. The reason for this simplification, he adds, is that the specific social effects of any particular AI algorithm are simply too uncertain to anticipate in any realistic detail. “We really are in a brave new world,” he says. “We don’t really know what our algorithms might bring.”
By building this inherent uncertainty about AI into their model, the researchers were able to clarify—in broad strokes—which regulatory approaches are most likely to work. The bad news: according to their model, none of the approaches currently being considered by the U.S. and EU fits the bill. At least, not alone. But in a certain combination? “That can get close to an optimal solution,” Rebelo says.
[This article has been republished, with permission, from Kellogg Insight, the faculty research & ideas magazine of Kellogg School of Management at Northwestern University]