
AI and human ethics: On a collision course

AI systems can give rise to issues related to discrimination, personal freedoms, and accountability, but there can be ways to correct them

Jasodhara Banerjee
Published: Aug 2, 2021 11:46:29 AM IST
Updated: Nov 17, 2021 02:33:53 PM IST


Illustration: Chaitanya Surpur 

As the use of artificial intelligence (AI) becomes increasingly popular among private companies and government agencies, there are growing concerns over a plethora of ethical issues arising from its use. These concerns range from various kinds of bias, including those based on race and gender, to questions of transparency, privacy and personal freedoms. There are still more concerns related to the gathering, storing, security, usage and governance of data—data being the foundation on which an AI system is built.

To better understand the root of these issues, we must, therefore, look at this foundation of AI. Consider the systems used to predict weather, to see how data helps. If, today, there are accurate systems to predict a cyclone that is forming over an ocean, it is because these systems have been fed data about various weather parameters gathered over many decades. The volume and range of this data enable the system to make increasingly precise predictions about the speed at which the cyclone is moving, when and where it is expected to make landfall, the velocity of wind and the volume of rain. If there were inadequate or poor-quality data to begin with, the prediction mechanism could not have been built.

Similarly, in AI systems, algorithms—the set of steps that a computer follows to solve a problem—are fed data in order to solve problems. The solution that an algorithm comes up with depends solely on the data it has received; it does not, and cannot, consider possibilities outside of that fixed dataset. So if an algorithm receives data only about white, middle-aged men who may or may not develop diabetes, it does not even know of the existence of non-white, young women who might also develop diabetes. Now imagine if such an AI system were developed in the US or in China, and deployed in India to predict the rate of diabetes in any city or state.
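
To make the point concrete, here is a minimal sketch in Python (not from the article; it assumes numpy and scikit-learn are available, and the two "demographic groups" and their risk figures are invented purely for illustration). A classifier trained on data drawn almost entirely from one group tends to perform worse on the group it rarely saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, risk_shift):
    """Simulate a toy diabetes-risk dataset for one demographic group.
    risk_shift captures the fact that the link between features and
    outcome is not identical across groups."""
    x = rng.normal(size=(n, 2))                    # e.g. standardised age and BMI
    logits = 1.5 * x[:, 0] + 0.5 * x[:, 1] + risk_shift
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return x, y

# Training set skewed towards group A (95%) with very little of group B (5%)
xa, ya = make_group(1900, risk_shift=0.0)
xb, yb = make_group(100, risk_shift=1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, balanced samples from each group
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    xt, yt = make_group(5000, risk_shift=shift)
    print(name, "accuracy:", round(model.score(xt, yt), 3))
# The model, fitted almost entirely to group A, typically scores worse on group B.
```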

“We get training datasets, and we try to learn from them and try to make inferences from it about the future,” says Carsten Maple, professor of Cyber Systems Engineering at the University of Warwick’s Cyber Security Centre (CSC), and a Fellow of the Alan Turing Institute in the UK. “The problem is, when we get this training data, we don’t know how representative it is of the whole set. So, if all we saw are white faces and think we have a complete set, then the machine will also see the same thing. This is what Timnit [Gebru, a computer scientist who quit her position as technical co-lead of the Ethical Artificial Intelligence Team at Google last December] had pointed out. Companies and governments should be able to say whether or not the datasets are representative.”

Maple’s mention of Gebru refers to the ethical issues of AI and the companies that use it, which were highlighted by her departure from Google and the consequent backlash against the company. Gebru, an alumna of Stanford University specialising in algorithmic bias and data mining, was co-leader of a group that studied the social and ethical ramifications of AI. She is known for co-writing a ground-breaking research paper showing that facial recognition technologies are less accurate at identifying people of colour and women, and consequently discriminate against them. Of Ethiopian origin, Gebru also co-founded Black in AI, a group that champions diversity within the tech industry.

Last year, Gebru submitted an internal research paper at Google that surveyed the drawbacks of ‘large language models’, which are key to the company’s search business. The drawbacks included large environmental and financial costs, the risk of dangerous biases, the models’ inability to learn underlying concepts, and their potential to deceive people. The paper did not go down well within the company, and she was asked to retract it or remove her name, along with those of other team members, from its list of authors.

Although the events and circumstances surrounding her departure from Google remain disputed, Gebru left the company last December, following which almost 2,700 Google employees and more than 4,300 academics and others signed a letter condemning her alleged firing. What this episode highlighted was the powerful and ubiquitous, and yet unregulated, nature of research in AI, and the risks to those who study its possible social harms.

AI describes a whole suite of technologies, and the most significant impact it has is in terms of efficiency, says Jessica Fjeld, assistant director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society, at the Harvard Law School in the US. “For instance, in any form of social service, if there are a thousand different people who are processing applications for social benefits, then there will be a thousand different approaches to the same set of rules. If these people are replaced with a single AI system, it will be as if one person can do the work of all the people. Applicants who are waiting for benefits will experience a faster and smoother process, but whatever biases existed among those who were assessing the applications will now be amplified a thousand times.”

An April 2020 paper titled ‘Ethics of Artificial Intelligence and Robotics’, published in the Stanford Encyclopedia of Philosophy, identifies the main ethical debates around AI as those pertaining to privacy and surveillance, manipulation of behaviour, opacity of AI systems, bias in decision-making systems, human-robot interaction, automation and employment, autonomous systems, machine ethics, artificial moral agents, and singularity.

Fjeld adds that the implementation of AI systems can affect basic human rights and personal freedoms, such as the right to privacy (through the harvesting of personal data), the right to freedom of speech and expression (through the moderation and censoring of content and opinions expressed online), and the right to free movement and assembly (through facial recognition cameras). It is therefore imperative that companies implementing these technologies at least respect these freedoms, if not protect them as governments do.

Given the enormous significance that datasets hold in the designing of AI tools, it is essential to ensure they are as inclusive as possible. “We do have approaches that can be taken [to make data more inclusive], including things like generating synthetic data,” says Anjali Mazumder, AI and Justice and Human Rights Theme Lead at The Alan Turing Institute. “The question is, what are we trying to do with the dataset? So, if we’re trying to determine a public service that would create a differentiated outcome, then we have to try to improve the representativeness of the data.”

Mazumder gives the example of biobanks—which store biological samples for use in research—in countries like the US and the UK, where minority groups are under-represented. This affects the tools that are built on this data, making them less accurate for certain populations.

Generating synthetic data, too, has its own pitfalls. First, it does not do away with issues of privacy, because the synthetic data is derived from an actual dataset and is not truly anonymous; second, if the original dataset is not representative, the synthetic data created from it will not be representative either. Consequently, any biases in the original dataset will also be reflected in the synthetic one.

Maple adds that there are ways in which more representative synthetic datasets can be created by using available statistics; for instance, from statistics about gender, race, age and income distributions in a given geography. However, he adds, “There has been a real lack of interest in some companies about whether the data they are using is representative of the people they are trying to serve.”
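
As a rough illustration of the idea Maple describes, the sketch below (hypothetical, assuming Python with numpy; the census-style figures are made up) draws synthetic records whose marginal distributions for gender, age and income match published statistics. A real pipeline would also have to preserve the correlations between attributes, which is where much of the difficulty, and the privacy risk, lies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical published statistics for a region (assumed figures, for illustration)
gender_levels, gender_probs = ["female", "male"], [0.52, 0.48]
age_levels, age_probs = ["18-30", "31-50", "51+"], [0.35, 0.40, 0.25]
income_levels, income_probs = ["low", "middle", "high"], [0.45, 0.40, 0.15]

def synthesize(n):
    """Draw n synthetic records whose marginals match the published statistics.
    Attributes are sampled independently here; richer methods (copulas,
    iterative proportional fitting, generative models) preserve joint structure."""
    return [
        {
            "gender": rng.choice(gender_levels, p=gender_probs),
            "age_band": rng.choice(age_levels, p=age_probs),
            "income": rng.choice(income_levels, p=income_probs),
        }
        for _ in range(n)
    ]

sample = synthesize(10000)
share_female = sum(r["gender"] == "female" for r in sample) / len(sample)
print("female share in synthetic data:", round(share_female, 3))  # close to 0.52
```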

According to Fjeld, biases can be minimised in three broad ways. First, it has to be the right dataset, and we have to make sure that it means what we think it means. Second, there are better and worse datasets for different purposes, and we have to account for the biases that they might carry. Third, in addition to choosing the right dataset and understanding its advantages and disadvantages, we should remember that there are humans involved in the process; they should test the process not just for speed and accuracy, but also for bias-free outputs. And since these systems learn iteratively, there should also be audits while they are in place.
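
The kind of audit Fjeld describes can start very simply. The sketch below is a hypothetical check, not a method prescribed in the article: it compares a model’s approval rates across groups and flags the result if the ratio falls below the commonly cited four-fifths threshold used in disparate-impact analysis. The group labels and decision log are invented.

```python
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group_label, decision) pairs,
    where decision is 1 for approve and 0 for reject."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, decision in records:
        total[group] += 1
        approved[group] += decision
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(records):
    """Ratio of the lowest to the highest group approval rate.
    Values below 0.8 are often treated as a warning sign (the 'four-fifths rule')."""
    rates = approval_rates(records)
    return min(rates.values()) / max(rates.values()), rates

# Toy audit data: (group, decision) pairs logged while the system runs
log = [("men", 1)] * 70 + [("men", 0)] * 30 + [("women", 1)] * 45 + [("women", 0)] * 55
ratio, rates = disparate_impact(log)
print(rates)                                            # {'men': 0.7, 'women': 0.45}
print("disparate impact ratio:", round(ratio, 3),
      "-> review" if ratio < 0.8 else "-> ok")
```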

In a paper titled ‘Governing AI safety through independent audits’, recently published in a Nature journal, Maple and other researchers propose an independent audit of AI systems, which would embody the ‘AAA’ governance principles of prospective risk assessments, operation audit trails and system adherence to jurisdictional requirements.

The paper adds that such a system “is intended to help pre-empt, track and manage safety risk while encouraging public trust in highly automated systems. Such an audit would furnish managers, manufacturers, lawmakers and insurers with operational data, expectations and an operational baseline for highly automated systems so they can enable human responsibility and control”.
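
One of the three ‘AAA’ components, the operation audit trail, can be pictured concretely. The sketch below is a hypothetical illustration, not taken from the paper: the function names, record fields and file format are all assumptions. It logs each automated decision so that an auditor could later reconstruct what the system decided, on what inputs, and under which model version.

```python
import hashlib
import json
import time

def audit_record(model_version, inputs, decision, jurisdiction):
    """Build one audit-trail entry for an automated decision. Hashing the
    inputs lets auditors verify what the system saw without storing raw
    personal data in the trail itself."""
    return {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "jurisdiction": jurisdiction,
    }

def append_to_trail(path, record):
    """Append the record as one JSON line; entries are only ever added."""
    with open(path, "a") as trail:
        trail.write(json.dumps(record) + "\n")

entry = audit_record("credit-model-1.3", {"income": 54000, "age_band": "31-50"},
                     decision="reject", jurisdiction="EU")
append_to_trail("decisions.log", entry)
```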

Having auditing checks in place is not just about protecting the rights and freedoms of individuals, but also about protecting companies themselves from possible reputational, financial and legal risks. For instance, in July 2019, the Los Angeles city attorney’s office filed a lawsuit against a subsidiary of IBM for deceptively mining the private location data of users of the Weather Channel app and selling it to advertising and marketing companies. In November the same year, an investigation was opened into Goldman Sachs for using an AI algorithm that allegedly discriminated against women by giving them lower credit limits than men on the Apple Card.

In a paper titled ‘A Practical Guide to Building Ethical AI’ published last October in the Harvard Business Review, Reid Blackman, founder and CEO of Virtue, an ethical risk consultancy, highlighted why AI ethics is a business necessity and not merely an academic curiosity. As more companies employ AI tools to make business and policy decisions, Blackman suggested the following ways in which they can navigate ethical quandaries and mitigate risks: identifying existing infrastructure that a data and AI ethics programme can leverage; creating a data and AI ethical risk framework tailored to their specific industry; changing how they think about ethics by taking cues from successes in the health care sector, which deals with issues of privacy, self-determination and informed consent; optimising guidance and tools for product managers to make better decisions; building organisational awareness; incentivising employees to play a role in identifying AI ethical risks; and monitoring impacts and engaging stakeholders.
