
Without shared values and goals, tech regulations won't work

The world needs to harmonise goals and values to rein in the potential harm technology can cause

Published: May 15, 2023 01:03:14 PM IST
Updated: May 15, 2023 01:35:35 PM IST

Given the diversity of political and economic goals, values and agendas among global powers today, regulatory harmonisation is not only illusory, but it may also be dangerous. Image: Shutterstock

Regulatory harmonisation—the practice through which regulators align policies and procedures across markets—has been a trend since the end of World War II. It is heralded as a tool that enhances trade, ensures product safety, fosters innovation and even increases mutual dependence, thus promoting world peace. The European Union is an evolving example of what can be achieved through harmonisation. It also lays bare harmonisation's limits.

For it is no longer clear that harmonisation is always desirable, or indeed realistic, in the realm of technology even as tech regulations grow worldwide. Some of the biggest names in tech argue that technological progress should be paused. Countries are now imposing restrictions on one another’s innovations. The United States, for example, prohibits semiconductor chipmakers from selling advanced chips to China; Italy, among other countries, has blocked access to ChatGPT; and in China, which has long shut out Western tech platforms, access to the chatbot is reportedly banned.

We live in a world shaped by rising nationalism and widening inequalities. China and the West are on a potential collision course of decoupling and conflict while Russia is waging a brutal war against Ukraine, to name but two fault lines dividing us.

This dynamic and increasingly volatile context presses us to address a critical question: How can we build a digital world that is safe and beneficial for all? Is it enough to call for, say, China and the US to adopt the EU’s rules on digital services and artificial intelligence, while China and the EU adopt American financial regulations?

We don’t think so. We argue that calls for regulatory harmonisation to “tackle collectively” the risks posed by technology are misguided if the goals of the intended regulation and the values that are key to successful implementation are not examined.

The reason is simple: Given the diversity of political and economic goals, values and agendas among global powers today, regulatory harmonisation is not only illusory, but it may also be dangerous.

Instead of striving for regulatory harmonisation at the global level, we need to enrich the conversation to first include alignment of goals and values, and whether and how technology helps to manage—or exacerbates—our differences.

Any continued push for harmonisation without agreement on goals and values will prove counterproductive and risky. This is the debate that needs to take place regarding global technology—and it needs to take place now. Let us elaborate on why.

The growing dangers of technology

Despite its promises, technology is a double-edged sword. We face dire consequences if we don’t get global technology regulations right. It’s no surprise that the World Economic Forum’s global risk report for 2023 warns that technology will “exacerbate inequalities” and cybersecurity threats will “remain a constant concern” for the future. Meanwhile, the United Nations Human Rights Office reports that new technologies—specifically spyware, surveillance technology, biometrics, and AI—“are being misused worldwide to restrict and violate human rights”.

Leading tech figures such as Elon Musk and former Google CEO Eric Schmidt are convinced that humanity’s very survival is at stake, with the latter warning that AI poses a threat as dangerous as nuclear war.


A second risk of harmonising regulations pertains to implementation. International regulators often work together to craft similar guidelines and technical requirements, but not all jurisdictions achieve the same outcomes. All too often, companies and organisations lobby for terms that serve their own interests, and the shared goals and values required for collective commitment to the regulation are missing. Enforcement and implementation also tend to be uneven across regions, countries and even among regions within the same country. In this respect, Switzerland offers one example of effective regulation: the Swiss government delegates most regulatory authority to the cantons. At the local level, goals and values are more easily shared and understood, so people are less likely to violate or circumvent laws.

Conversely, lawbreakers rationalise their actions by accusing regulators of failing to understand their goals or their ways of working. Take the financial sector, for example. Prudential regulation aims to ensure the stability of both financial institutions and the economy at large by mandating risk-management controls at the macro level. Yet bankers repeatedly find creative ways to increase their financial gains—personal or corporate—while keeping risks, partially or completely, off their balance sheets.


The global financial crisis and, more recently, the Silicon Valley Bank collapse and the demise of Credit Suisse are stark examples of how well-intended regulations can fail. They also reflect the perennial gap between the intention and spirit of laws and their impact on different actors, each driven by its own goals and values.

A narrow focus on rules defeats the purpose

There is another, perhaps bigger, problem with aligning regulations: Laws can be copied, but the copy leaves the spirit behind. Worse, harmonised regulation may become a lawyers’ game of meeting the letter of the law while pursuing goals that violate its spirit.

Here’s how this problem could play out on the global stage: Nations adopt the regulations of others to spur trade and investment, only to drop those rules once they have sufficient size and clout. In other words, it is plausible that nations turn their backs on international cooperation after becoming major economic and geopolitical powers, and use the mutual dependency engendered by that cooperation against their former partners.


If that happens, regulatory harmonisation will have created a new and fragile global power balance. This could lead to unpredictable and frightening consequences, including the weaponisation of AI systems as Trojan horses.

Align regulatory goals and values before rules

To mitigate these problems and ensure that regulations are effective across diverse markets, we need to foster trust and commitment by agreeing on the values and goals that will drive the laws and regulations as well as the implementation of those laws and regulations. To start the process with rules is going about it backwards.

We should never forget that regulations are only mechanisms or instruments—a means to an end—so it is only logical to start by discussing and agreeing on that end. If people believe in what the regulations are trying to achieve and in the values that underpin them, they will be much more likely to comply with and trust those regulations. And regulators can trust the people in return. This principle holds across the board—for governments, multilateral organisations and companies.


Goals set clear perimeters for what regulations are meant to achieve—they are fundamental to effective governance. For example, the EU’s Digital Services Act aims to protect online users from disinformation and harmful or illegal content, and to increase oversight of online platforms while fostering innovation. These goals are not country- or region-specific; hence it should not surprise us that all EU countries adopted the Act—never an easy feat for the bloc.

Values capture the main underlying drivers of behaviours, both of the regulators and the regulated. They must align with goals if the goals are to be achieved. For technology, values may range from privacy and freedom of expression to innovation and safety. The OECD AI Principles serve as a good example.

Unmanaged diversity is dangerous

A century ago, British mathematician and philosopher Bertrand Russell, amid China’s civil war, extolled in The Problem of China what he saw as Chinese virtues: Respect for both individual dignity and public opinion, a love for science and education, and an aptitude for patience and compromise.


Russell cautioned the West against expecting China to bend to their will—advice that is eerily relevant today in the context of global cooperation in regulating tech. “If intercourse between Western nations and China is to be fruitful, we must cease to regard ourselves as missionaries of a superior civilisation.”

He also posed a question: “If China does copy the model set by all foreign nations with which she has dealings, what will become of all of us?”

The question is remarkably prescient. As Schmidt argues in a recent commentary on technology and geopolitics in Foreign Affairs, we are locked in a global competition not only among nations but also among systems. “At stake is nothing less than the future of free societies, open markets, democratic government, and the broader world order,” he writes.

Schmidt’s comments reflect the rise of unilateralism across the globe which, in our view, is ill-suited to dealing with the threat posed by AI. Instead, the world’s leading powers—the US and China, and perhaps the EU—ought to engage one another in defining shared goals and values to counter a threat to the survival of humankind that is second only to climate change.


Schmidt is right that diversity, despite being celebrated in recent years, can be dangerous if not managed well. By diversity, we mean wide differences in values and goals in tech regulation. The answer to averting a tech-driven armageddon is neither a pause in technological innovation nor regulatory harmonisation in isolation. Instead, alignment of and commitment to global goals and values will be the paramount drivers of cooperation and effective regulatory implementation.

The United Nations was formed towards the end of World War II. Growing divergence of goals and values among UN members today poses a grave risk to the organisation’s mission, as it has become a forum for states to fuel nationalism and further their own goals.

We should not wait for a tech-driven crisis to acknowledge the need to align our goals and values. We should do so proactively, by establishing—for a start—new tech-specific global organisations where such alignment can be built. It will make the world a safer place.

About the authors:
Theodoros Evgeniou is a Professor of Decision Sciences and Technology Management at INSEAD. He has been working on machine learning and AI for more than 25 years.
Ludo Van der Heyden is the INSEAD Chaired Professor of Corporate Governance and Emeritus Professor of Technology and Operations Management. He is the founder of the INSEAD Corporate Governance Centre. Professor Van der Heyden is also Chairman of a software company in natural resource estimation and is a regular adviser to boards and leadership teams across the world.

This article was first published in INSEAD Knowledge.

[This article is republished courtesy of INSEAD Knowledge (http://knowledge.insead.edu), the portal to the latest business insights and views of The Business School of the World. Copyright INSEAD 2023]
