Published: Jan 31, 2020 10:18:33 AM IST
Technology has witnessed a sea change over the past decade. Almost every corporation today is transforming digitally, and at the heart of this transformation is AI.
Given access to considerably more data across a very wide spectrum of variables, and AI's ability to affect lives positively or adversely, more responsible AI practices have become the need of the hour. Yet regulation of AI remains at a nascent stage.
At Forbes India’s AI Innovation Round Table – Envisioning Innovation for an Alternative Future – hosted in December 2019, the discussion centred on various AI-related themes and pertinent issues, including regulation aimed at making AI fairer, more mindful of privacy, and more secure.
The panel comprised thought leaders Anup Purohit, Group President and CIO, Yes Bank; Sudipta Ghosh, Partner – Data & Analytics Leader, PwC; Sudip Mazumder, Deputy Head – Digital, L&T Next; Peter Gartenberg, MD & President, Indian Subcontinent, Blue Prism; Dr Subodh Deolekar, Lead Research Engineer + Assistant Professor, Redx WeSchool + Research & Business Analytics; Shreesh Dubey, Senior VP, Product Manager, iCertis; Prasad Rajappan, Founder and MD, Zing HR; Rangarajan Vasudevan, Founder/CEO, The Data Team; and Nitin Agarwal, Group CIO and CTO, Edelweiss.
Setting the tone for the discussion, Nitin Agrawal pointed out that, willingly or unwillingly, everyone uses technology in some way or another, and that FinTech players have been providing a range of AI solutions to corporates, for everything from KYC to fraud detection. “We've divided our AI initiatives into multiple domains, such as customer lifecycle, risk engine analytics, pricing of insurance products, fraud analytics, process optimisation, etc.,” he shared. With his blend of experience in both engineering and IT, Sudip Mazumder could appreciate the multitude of roles technology could play. “As far as L&T is concerned, we have a huge IT set-up and are still digitalising,” he said.
As Microsoft has been a partner in the AI journeys of many corporates, Aparna explained the three pillars of AI from Microsoft’s perspective: innovation, empowerment, and responsible usage of AI. With respect to innovation, she admitted, “We cannot sell something and expect our clients to go away and use it. We are in a place where we're all learning together.” She went on to emphasise that while applications of AI can be very powerful, empowering human ingenuity requires ethical and fair practices while working with data. This is where individuals and organisations need to invest time in learning and skilling themselves on platform innovation and practices.
Concurring that responsibility has become a necessity, Rangarajan Vasudevan suggested that since regulatory guidelines were not yet in place, building AI accountability standards is extremely important.
Moving on to what could go wrong with AI, Peter Gartenberg said, “The big issue that we keep seeing in applied-AI scenarios is scope for bias, which gets built into algorithms. This is really a problematic area. Confirmation bias can creep into AI, and this is apparent in a lot of social media apps, which results in the amplification of negatives simply because the AI behind it is biased towards attracting viewers. This could cause a lot of social damage. So, we like to focus on things that we think are augmenting people's capabilities and thereby contributing to various activities of social good.”
Taking the discussion towards the ethics of AI, Nitin Agrawal said, “It's very important to have a framework that manages accidents from AI, which can occur at various levels. Uncontrolled AI can really break down systems. So, a governance framework to manage the possibility of AI accidents has to be developed.” His concern was that if we allow uncontrolled experiments and mass scale deployment, it could result in either mass scale reversals or a situation from which recovering would be difficult.
A workable solution, according to him, would be for industry bodies to work with industry-specific regulators. He explained why universal, mass-scale regulation would not be possible in this area, saying, “For instance, self-driving cars will require a very different set of regulations as compared to AI in stock market trading and the creation of financial instruments.”
Giving the rest of the panel something to think about, Sudip Mazumder wondered if regulation may actually throttle innovation. “If you look at AI, it is not a science, it's an art,” he said. “The last part, which involves coding, is a science. But unless you imagine, how do you code?” He was also concerned that there was a fine line between regulation and overregulation.
His second concern was with the concept of an industry-by-industry regulator, since the impact of AI overlaps multiple segments. “Globally, we can see that the social impact is getting out of hand and political scenarios have been impacted seamlessly by the social impact of AI. Every country is now facing that challenge,” he said.
Coming back to setting standards within industries to self-regulate the use of AI, Rangarajan Vasudevan said, “There's a lot of work that needs to happen, in terms of tempering, tailoring and putting boxes around this new technology. And, I think we can also be responsible by making sure that our solutions actually have those kinds of boundary boxes around them. But a lot of it is left to enterprises that have to be mature enough to decide what to actually pick and use.”
Peter Gartenberg suggested that governments could play a big role by bringing about transparency in the identities of the users of data. “There’s a lot of discussion around blockchain being able to give people control over their own data via their own identity. That's an area where applying regulation would actually create more innovation.”
Lending his perspective, Rangarajan Vasudevan said, “I think there are two sides of the coin. One is the education of individuals. It's very important for individuals to understand when they choose technology for free, what they are giving in return. On the other hand, organizations have to earn the trust of the customers. This is especially true in the Indian context, where regulation pertaining to privacy is nascent. This trust has to be extremely transparent to end users in terms of how their data is being used for their own benefit.”
The participants then shared their insights on how AI had changed the way they worked or envisioned work. Anup Purohit opined, “A collaboration between humans and AI is very important for it to actually prosper. So, for instance, in banking, the use of AI cannot be 100 percent. While there are areas of banking that are completely automated using AI, there are significant areas that are manual, or what we call ‘in assisted mode’, in banking.” He shared that his bank uses machine learning algorithms to come up with ‘next best actions’ (NBA) to assist relationship managers and other personnel in various ways.
Sudipta Ghosh talked about the benefits that could accrue from using AI in mundane tasks, like HR appraisals, and even in gauging the suitability of candidates for various verticals. He also discussed the potential of using AI for succession planning. Nitin Agrawal observed that there was really no area that was escaping the impact of AI. Peter Gartenberg stated that his company measured the success of any AI solution against the value it would generate for customers. Often, success was benchmarked against the experiences his company’s solutions created for its customers’ customers.
Coming from the education sector, Subodh Deolekar offered an unusual perspective: companies were his customers and students his products. “We try to fine-tune these products by empowering them with AI and machine learning techniques because, eventually, they will be responsible for implementing AI analytics.” He also felt that training was required for existing employees to close the gap between industry requirements and what the education sector provides. “Most companies will agree that we need to train their employees so they can actually start implementing AI and machine learning.”
Coming from an engineering company whose employees had to comply with numerous specifications from various manuals, Sudip Mazumder said, “Why do I need a book today when there is an NLP engine with voice-bots and chat-bots? We actually created a bot that can answer questions on the contents of the IHC booklet, interactively. It could eliminate the need for such books completely.”
Aparna talked about the use of AI in the products we use every day, like PowerPoint and Excel and how features like Ideas and Translator are infusing intelligence and enhancing one’s experience.
Prasad Rajappan explained the benefits of using AI-enabled power equipment and the analytics it made possible.
Shreesh Dubey brought the discussion back to the ethical dilemma related to AI and how the power of AI should be managed so that it did not get out of control. He revealed that his company saw lots of opportunity and potential in creating a contract to this effect.
Sudip Mazumder pointed out that AI was touching the lives of the masses and facilitating progress, and that regulation should not stop that. “We need governance to take out the rogue element. But we'll have to work out a path that is good for the masses. AI is not supposed to be in the ivory tower; it should be put to use for the masses and they should benefit from it,” he elaborated.
“Today, AI definitely has a lot more potential to help solve problems,” agreed Shreesh Dubey. “So, where should you start with control?” He believed that regulation should start where the benefits of AI actually reach individuals. “I think industry and regulation must work to ensure that things do not go out of control.”
Bringing the discussion to a close, Anup Purohit pointed out that everybody was on the same page that AI should be allowed to progress and bring prosperity. “AI is going to be a way of life and human and machine collaboration is going to be the key for its success,” he said. He noted five ‘T’s, which emerged as important facets during the discussion – Task (a mixed undertaking by both humans and technology), Trust (humans need to trust the machine, which should be built keeping humans in mind), Technology (which must be flexible and evolving), Talent (the need for reskilling and AI education of freshers) and finally, Teams (which will be smaller and more focused).