
Responsible AI: Putting humans at the centre of artificial intelligence

Organisations need to adopt basic principles for responsible artificial intelligence to gain competitive advantage; they must also design systems that can effectively deal with biases

Published: Aug 5, 2021 11:19:57 AM IST
Updated: Aug 5, 2021 01:17:24 PM IST


With huge advances in data handling and computational power, and a growing need to handle complexity, real-life use cases leveraging artificial intelligence (AI) have surged over the past few years. AI has measurably improved companies' financial performance, customer experiences and product quality.


At the same time, the need to ensure responsible use of AI has also increased. Organisations are recognising the need to develop and operate AI systems with fairness, without any racial, gender or other biases, and to take care of safety, privacy and society at large. These elements are giving rise to one of the most important debates in the world of AI today—how to ensure ‘responsible AI’ or RAI.

Private organisations, governments and international bodies are coming together to measure and analyse the technical and societal impact of AI systems, and are drafting principles and regulations to curb these biases. Most companies are yet to achieve RAI adoption. In this article, we investigate the tangible actions that companies should take while implementing RAI programs.

AI Biases

With an increase in the number of AI-based use cases, we are also witnessing the bias these systems can show in decision-making. For example, there have been instances where AI-based hiring systems preferred men over women for technical roles like software development. There have also been instances where AI-based healthcare systems assigned higher risk scores to white patients than to equally sick patients with darker skin tones, because the systems used the cost of healthcare as a proxy for health needs and historically less was spent on the care of dark skin-toned patients.

These risks are even higher for a country like India, with its diverse cultures, where AI systems may inherently absorb regional or caste biases. Companies across the globe have recognised the need to develop and operate AI systems fairly and without bias, while ensuring the safety and privacy of society at large.
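Biases like those in the hiring example above are often surfaced by comparing selection rates across groups. As a minimal, purely illustrative sketch (the data and function names below are invented, not from any system discussed in this article), one common heuristic is the disparate impact ratio, with the "four-fifths rule" flagging ratios below 0.8:

```python
def selection_rate(decisions):
    """Fraction of candidates receiving a positive decision."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two groups.

    Values far below 1.0 suggest group_b is favoured over
    group_a; the common 'four-fifths rule' flags ratios < 0.8.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring decisions (1 = shortlisted, 0 = rejected)
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 2 of 10 shortlisted
men   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 6 of 10 shortlisted

ratio = disparate_impact_ratio(women, men)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.33, well below 0.8
```

Real RAI programmes use richer metrics (equalised odds, calibration across groups), but a simple rate comparison like this is often the first check applied to a deployed model's decisions.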

Principles and Regulations for Responsible AI

In April 2021, the European Union (EU) published a draft law to regulate AI. The Artificial Intelligence Act calls for a proportionate, risk-based approach, based on the degree of risk involved in any use case. It also requires AI systems to be trained on high quality data.

Similar efforts are underway in other nations, including India, the US and Singapore.

While regulations will take time and further dialogue to be implemented, organisations need to adopt basic principles for RAI to gain competitive advantage. Competitive advantage resides at the intersection of data science, technology, people and deep business expertise. This potential can be realised only when AI is woven into processes and ways of working, all done responsibly and with humans at the core. These principles need to encapsulate:

• Accountability

• Fairness and equity

• Transparency and explainability

• Safety, security and robustness

• Data and privacy governance

• Social and environmental impact

Most important, organisations must commit to design systems that put humans at the centre of AI, empowering and preserving the authority and well-being of those who develop, deploy and use these systems. This central principle will bind together all the other principles, and will ensure true RAI.

We are in the early stages of the AI revolution. In this ever-evolving era of AI, companies still have a long road ahead to establish the processes and bodies that define, implement and track RAI principles.

RAI maturity: Current State

BCG collected and analysed data from 1,000 large organisations to evaluate their progress in implementing RAI programs. It then categorised organisations into four distinct stages of maturity:

1. Lagging (14 percent): Starting to implement RAI with a focus on data and privacy

2. Developing (34 percent): Expanding across remaining RAI dimensions and initiating RAI policies

3. Advanced (31 percent): Improving data and privacy, but lagging behind in human-related activities

4. Leading (21 percent): Performing at a high level across all RAI dimensions

A striking finding from this exercise was that perception is far removed from reality: most firms overestimate their RAI maturity (55 percent of all organisations are less advanced than they believe). Also noteworthy is that most C-suite executives and boards of directors are concerned about the risks posed by a lapse in AI systems. Risk mitigation was the second-most-cited reason to adopt RAI, the first being achieving business benefits.

Organisations should treat RAI as an opportunity to strengthen relationships with stakeholders—benefiting customers and society at large while achieving business benefits in parallel.

What next? Action Items for Organisations

As organisations move towards implementing RAI programs, they must define metrics to track the success of these programs, cutting across multiple dimensions of the organisation:

1. Leadership involvement: Adopting an RAI program is no less than a cultural transformation, so active involvement of leadership in communicating and participating in RAI programs is essential.

2. Adoption coverage: Measure how far the program has spread, for example, the percentage of use cases covered by RAI programs.

3. Workforce training: Train the workforce on RAI principles, tools and execution.

4. Efficacy: Measure whether the program works, for example, whether its measures flag AI failures effectively, and assess the dollar savings resulting from RAI adoption.


We are at the early stages of AI's maturity curve, and few companies have understood how to define and track RAI practices. Many companies have yet to embark on the journey of adopting these practices, and many others need to take concrete actions towards becoming leaders in RAI maturity. Both AI systems and, thereby, RAI programs are evolving rapidly. This will certainly be an exciting space to watch over the next few years.

About the writers: Sumit Sarawgi is Managing Director and Senior Partner at BCG; Sumit Arya is Gamma Lead Data Scientist at BCG.
