The myth of algorithmic transparency

Critiques of AI decision-making systems often highlight the systemic impact of algorithmic biases. In a world where algorithms determine who has access to opportunities and information, they may perpetuate discrimination in who receives healthcare, legal protection, or government employment.

Published: Aug 25, 2022 03:45:36 PM IST
Updated: Aug 25, 2022 04:01:11 PM IST


Introducing algorithms into existing business and governance mechanisms can yield substantial efficiency gains. Algorithms grow more capable every day, teaching themselves to make quick decisions and leaving humans 'out of the loop'. The Wall Street Journal recently ran an experiment on TikTok to gauge the power of its recommendation algorithm. The journalists introduced hundreds of bot accounts to the platform, each programmed with interests that were never disclosed to the algorithm; the bots simply paused on and rewatched certain kinds of videos. Within an hour, TikTok's AI had used each bot's behaviour patterns to infer quite precisely what the account had been programmed to engage with, and was serving content aligned with those topics.
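To make that inference concrete, here is a toy sketch in Python of how engagement signals such as watch time and rewatches might be aggregated into an interest ranking. Every name, weight, and data point is an illustrative assumption; this is not a description of TikTok's actual system.

```python
from collections import defaultdict

def infer_interests(events, rewatch_weight=2.0):
    """Rank topics by engagement: seconds watched, with rewatches
    weighted more heavily. events is an iterable of
    (topic, seconds_watched, was_rewatched) tuples."""
    scores = defaultdict(float)
    for topic, seconds, rewatched in events:
        weight = rewatch_weight if rewatched else 1.0
        scores[topic] += weight * seconds
    return sorted(scores, key=scores.get, reverse=True)

# A bot programmed to linger on one kind of video quickly "reveals"
# its assigned interest through behaviour alone.
bot_session = [
    ("comedy", 3, False),
    ("sadness", 45, True),
    ("news", 5, False),
    ("sadness", 60, True),
]
print(infer_interests(bot_session))  # ['sadness', 'news', 'comedy']
```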

However, with substantial opportunity come foundational challenges around algorithmic fairness, accountability, and transparency (FAT). Computer scientists portray algorithmic innovation as a boon to businesses; however, like all good things, algorithms come with flaws and hidden costs. The Guardian reported that a 'good morning' message posted by a Palestinian man was mistranslated by Facebook as 'attack them' in Hebrew, resulting in his arrest. It also reported that Google Photos tagged a photograph of African American people as 'gorillas'. Critiques of AI decision-making systems often highlight the systemic impact of algorithmic biases. In a world where algorithms determine who has access to opportunities and information, they may perpetuate discrimination in who receives healthcare, legal protection, or government employment. For example, the COMPAS risk-assessment system used by judges across the US courted controversy when an investigation revealed that it systematically rated African American defendants as higher risk than white defendants with similar criminal histories, resulting in harsher outcomes. For positive impact, algorithmic innovation must be complemented by trustworthiness.
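The COMPAS controversy centred on exactly this kind of disparity, which a simple audit can surface. The sketch below uses entirely synthetic numbers, not COMPAS data: it compares false positive rates across two groups, that is, the share of people in each group who did not reoffend yet were flagged as high risk.

```python
def false_positive_rate(outcomes):
    """outcomes: list of (actually_reoffended, flagged_high_risk) pairs.
    Returns the share of non-reoffenders wrongly flagged high risk."""
    negatives = [flagged for actual, flagged in outcomes if not actual]
    return sum(negatives) / len(negatives) if negatives else 0.0

def fpr_by_group(records):
    """records: list of (group, actually_reoffended, flagged_high_risk)."""
    groups = {}
    for group, actual, flagged in records:
        groups.setdefault(group, []).append((actual, flagged))
    return {g: false_positive_rate(o) for g, o in groups.items()}

# Synthetic audit data: 1 = yes, 0 = no.
records = [
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]
print(fpr_by_group(records))
# {'group_a': 0.67, 'group_b': 0.33} (approx.): group_a bears twice the wrong flags
```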

This makes FAT paramount when algorithmic decision-making systems directly impact human lives. Sharing code on public platforms may seem like a step in the right direction, but scholars believe it may be ineffective, giving unscrupulous humans another excuse to 'hide behind the computer' when explaining away questionable decisions. On the one hand, AI algorithms are expensive to build, and companies keep them secret for competitive advantage and, in some cases, for more sensitive reasons such as national security; making code publicly available can also open it up to gaming and manipulation. On the other hand, algorithms are incomprehensible to non-experts. Much as looking under the bonnet of a car makes nobody an auto mechanic, inspecting complex code makes it no easier to explain how a model prioritises specific attributes when making decisions. Moreover, as algorithms encounter new data, they continually retrain themselves and modify their structure and behaviour, making an in-depth study of a model's rationale nearly impossible.

While scholars debate the pros and cons of algorithmic transparency at length, some industry experts argue that transparency offers false hope because it is unsustainable and obscures the politics underlying organisational decisions. They argue that transparency means little when only code is released without training data, and companies are under no obligation to share data they hold privately. The outcome of algorithmic decision-making is shaped by training data, much as a child's behaviour is shaped by her environment. When training data is inconclusive, biased, or misguided, algorithmic decisions may be incorrect, illogical, or unfair. Transparency for its own sake is therefore useless, because it makes nobody more accountable, fair, or wise.
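A minimal sketch of why code alone reveals so little: the 'model' below is deliberately trivial and all the data is invented. The same published code, trained on two different datasets, makes opposite decisions about the same applicant.

```python
def learn_cutoff(approved_scores, rejected_scores):
    """Learn an approval threshold: the midpoint between the mean score
    of past approvals and the mean score of past rejections."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(approved_scores) + mean(rejected_scores)) / 2

# Identical code, two different (invented) training histories.
cutoff_a = learn_cutoff(approved_scores=[70, 75, 80], rejected_scores=[40, 45, 50])
cutoff_b = learn_cutoff(approved_scores=[90, 95, 100], rejected_scores=[55, 60, 65])

applicant = 72
print(applicant >= cutoff_a)  # True:  approved under model A (cutoff 60.0)
print(applicant >= cutoff_b)  # False: rejected under model B (cutoff 77.5)

# Reading learn_cutoff tells you nothing about which outcome the deployed
# model produces; that is determined entirely by its training data.
```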


This raises philosophical questions: Is it more important to identify why wrong decisions were made, or to grapple with unfair results? If we cannot even enumerate the thousands of input variables a model uses to make decisions, how do we understand its processes? Should we worry more about FAT in the process or in the outcome? While scholars continue to debate these questions, we must begin to think about how well we articulate our values as organisations. Sure, we believe in fairness, but can we define it clearly in different business contexts? We want justice in our organisations, but do our hiring and promotion systems make certain groups suffer? We believe in efficiency through AI, but do we clearly define the boundaries between algorithms and humans in organisational decision-making? Much like policymakers, we as business professionals must clearly articulate the values that drive our organisations, whom to hold accountable for upholding them, and the penalties for non-compliance. We cannot rely on others to clarify what drives us: in the absence of clear organisational direction, engineers may build algorithms that perpetuate their own biases while the organisation suffers the consequences.


Anjana Karumathil is an Associate Professor of Practice at IIM Kozhikode. She has a PhD in organisational behaviour from IIM Bangalore, an MBA from the University of Strathclyde, and an undergraduate degree in engineering from the National Institute of Technology, Calicut. She has 15 years of industry experience across organisations including Deloitte Consulting & TCSL.
