Prashant Mehta is the Group Vice-President & Global Service Line Lead - Systems Integration & Data at Publicis.Sapient.
Artificial Intelligence (AI) is a source of both enthusiasm and skepticism, albeit in different measures. With humans and machines joining forces now more than ever before, AI is no longer confined to innovation labs and is being hailed for its immense transformational possibilities. However, businesses need to overcome certain challenges before they can realise the true potential of this emerging technology. The key lies in leveraging the right opportunities in AI.
The "black box" problem
Organisations using AI often cannot demonstrate clearly what their systems do and why they do it. No wonder AI is called a "black box". People are skeptical about it because they fail to understand how it makes decisions. Provability – the level of mathematical certainty behind AI predictions – remains a grey area for organisations: there is no way for them to prove or guarantee that the reasoning behind an AI system's decision-making is sound. The solution lies in making AI explainable, provable, and transparent. Organisations must embrace Explainable AI as a best practice.
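A perturbation-based attribution captures the intuition behind explainability tools such as LIME and SHAP: measure how much the model's score changes when each input feature is removed. The loan-scoring model, its weights, and the applicant below are entirely invented for illustration – a minimal sketch, not a production explainer:

```python
def loan_score(features):
    # Stand-in "black box": a hand-weighted linear scorer (weights invented).
    weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
    return sum(weights[k] * v for k, v in features.items())

def explain(model, features):
    """Attribute the score to each feature by zeroing it out
    and recording how much the score drops."""
    base = model(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        contributions[name] = base - model(perturbed)
    return contributions

applicant = {"income": 2.0, "debt": 1.5, "years_employed": 4.0}
print(explain(loan_score, applicant))
```

A loan officer can now see, per applicant, which features pushed the score up or down, rather than being handed an opaque number.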
Data privacy and security
Most AI applications rely on huge volumes of data to learn and make intelligent decisions. Machine Learning systems feast on data – often sensitive and personal in nature – to learn from it and improve themselves. This makes them vulnerable to serious issues such as data breaches and identity theft. There is some good news: growing awareness among consumers about the number of machine-made decisions based on their personal data has prompted the European Union (EU) to implement the General Data Protection Regulation (GDPR), designed to ensure the protection of personal data. Besides, an emerging method, 'Federated Learning', is set to disrupt the AI paradigm: it empowers data scientists to develop AI models without compromising users' data security and confidentiality.
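The core idea of Federated Learning can be sketched as federated averaging (FedAvg): each client computes a model update on data that never leaves its own device, and the server sees only the averaged updates, never the raw records. The one-parameter linear "model" and the toy client datasets below are purely illustrative:

```python
def local_update(weight, data, lr=0.1):
    """One gradient step on (x, y) pairs that stay on the client device."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(weight, clients):
    """The server averages the clients' updated weights
    without ever seeing their underlying data."""
    updates = [local_update(weight, data) for data in clients]
    return sum(updates) / len(updates)

# Two clients whose private data both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 3))  # converges toward the true slope, 2.0
```

Real deployments add secure aggregation and differential privacy on top, but the division of labour – raw data on-device, only updates shared – is exactly this.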
Bias in AI
An inherent problem with AI systems is that they are only as good – or as bad – as the data they are trained on. Bad data is often laced with racial, gender, communal or ethnic biases. Proprietary algorithms are used to determine who is called for a job interview, who is granted bail, or whose loan is sanctioned. If the bias lurking in these algorithms goes unrecognised, it can lead to unethical and unfair consequences. For instance, Google Photos uses AI to identify people, objects and scenes, but it risks producing wrong results – as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
In the future, such biases could become even more pronounced, as many AI systems will continue to be trained on bad data. The need of the hour, therefore, is to train these systems on unbiased data and to develop algorithms that can be easily explained. Microsoft is developing a tool that can automatically identify bias in a series of AI algorithms – a significant step towards automating the detection of unfairness that may find its way into Machine Learning. It is a great opportunity for businesses to leverage AI without inadvertently discriminating against specific groups of people. Organisations can also use approaches like 'path-specific counterfactual fairness', proposed by DeepMind researchers Silvia Chiappa and Thomas Gillam, to remove biases.
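One simple audit for this kind of bias is a demographic-parity check: compare the model's approval rate across groups and flag a large gap. The groups and decisions below are invented for illustration, and this is just one of several fairness criteria – not the DeepMind counterfactual method itself:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical loan decisions tagged with the applicant's group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap is a signal to investigate, not proof of bias
```

A gap alone does not establish discrimination – the groups may differ on legitimate features – which is precisely why causal approaches like path-specific counterfactual fairness exist.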
Scarcity of training data
It is true that organisations have access to more data today than ever before. However, datasets relevant enough for AI applications to learn from are rare. The most powerful AI systems are trained with supervised learning, which requires labeled data – data organised so that machines can ingest and learn from it. Labeled data is limited, and in the not-so-distant future the automated creation of increasingly complex algorithms, largely driven by deep learning, will only aggravate the problem. There is a ray of hope, though: organisations are increasingly investing in design methodologies that help AI models learn despite the scarcity of labeled data. 'Transfer Learning', 'Unsupervised/Semi-Supervised Learning' and 'Active Learning' are just a few examples of next-generation approaches that can help resolve this.
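Self-training, a basic semi-supervised technique, shows how a small labeled set can be stretched: unlabeled points the current model classifies confidently are pseudo-labeled and folded back into the training pool. The 1-nearest-neighbour "model" and the distance-based confidence proxy below are deliberately crude toy choices:

```python
def nearest_label(x, labeled):
    """1-nearest-neighbour prediction from (value, label) pairs."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

def self_train(labeled, unlabeled, confidence=1.0):
    """Pseudo-label unlabeled points that lie close to an already-labeled one."""
    labeled = list(labeled)
    for x in unlabeled:
        nearest = min(labeled, key=lambda p: abs(p[0] - x))
        if abs(nearest[0] - x) <= confidence:   # crude confidence proxy
            labeled.append((x, nearest[1]))      # adopt the pseudo-label
    return labeled

# Two labeled examples stretched over five unlabeled points.
labeled = [(0.0, "cat"), (10.0, "dog")]
unlabeled = [0.8, 1.5, 9.2, 8.6, 5.0]
grown = self_train(labeled, unlabeled)
print(len(grown))  # the ambiguous point 5.0 is left unlabeled
```

The same loop structure underlies active learning, except there a human is asked to label the points the model is *least* confident about.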
The way ahead
Accumulating data is just the first step for organisations towards building effective marketing campaigns. They must also be able to interpret the numbers and identify the relationships within them, which calls for distinguishing between correlation and causality. The future belongs to organisations that can blend the predictive capabilities of AI-driven machines with the prowess of human intuition and judgment.
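The correlation-versus-causality point can be made concrete with a classic confounder example: two quantities that are both driven by a third correlate strongly even though neither causes the other. The ice-cream and sunburn figures below are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temperature = [15, 20, 25, 30, 35]
ice_cream = [t * 2 + 1 for t in temperature]   # driven by temperature
sunburns  = [t * 3 - 5 for t in temperature]   # also driven by temperature
print(round(pearson(ice_cream, sunburns), 3))  # near-perfect correlation, no causal link
```

A purely correlational model would happily "predict" sunburns from ice-cream sales; it takes human judgment (or explicit causal modelling) to recognise temperature as the real driver.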