Published: Jan 24, 2020 05:19:35 PM IST
The rise of AI - and its increasing acceptance in various facets of our lives - brings with it the question of machine ethics and machine morality. There is a severe lack of clarity around AI ethics, and no unified approach that can be used to create a regulatory framework. To begin with, how do we define AI? While there are outlines of what AI is, what it does, and what it should do, there is no standard definition of AI. As the technology is still evolving, its definitions are correspondingly dynamic. If we haven't yet been able to come to an agreement on how to define AI, how do we create systems to regulate and monitor the technology? Is it the technology we want to regulate, or how the technology is built and eventually used? How do we distill arbitrary principles into a framework of AI ethics?
Microsoft is leading the way in implementing a robust, responsible governance process around AI. It has identified six principles that should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. While these themes seem, at a glance, to provide a large-enough umbrella for now, discrepancies will eventually emerge as the AI sphere grows. These themes circle back to strengthening our social fabric and empowering people responsibly, but how do we ensure compliance? What about accountability? And who do we hold accountable?
Experts believe that the process of formulating ethics and regulations must begin with how we train AI. Ethical practices must be embedded in the collection, usage, and security of the data that helps AI learn and evolve, and must account for the biases we unintentionally instill in the technology. AI must be shaped into a progress-oriented tool that augments human capabilities, in a morally sound environment, for the greater common good. To make AI responsible, we, as creators and teachers of AI, need to adopt responsible practices and hold ourselves accountable for instilling the right qualities in the technology.
There is an increasing push for research into responsible AI, with calls for ethics panels to be set up to monitor current practices and recommend a way forward, ensuring AI drives meaningful innovation that empowers people responsibly. At the same time, regulators must understand the deeper implications of a knee-jerk reaction to the problem at hand. Without proper context, AI ethics regulations could be at risk of being crafted from a paranoia-driven point of view, which could be detrimental to the technology's growth and advancement in the right direction - as an incredibly powerful tool for innovation and empowerment.