The link between gender, equality, human rights and digital privacy became clear to Ivana Bartoletti very early in her career, which is why she decided to bring together the legal knowledge of human rights and the technical knowledge of information security and coding. She has a master’s in human rights law from London Metropolitan University and started her career working in the human rights space, before focussing on responsible technology, AI and digital privacy. Bartoletti, global chief privacy officer at technology company Wipro, speaks with Forbes India about why we need people from different backgrounds working in artificial intelligence (AI) and privacy, because these fields are increasingly about not just the technology but also a part of who we are as individuals. While AI can be transformational in improving accessibility, there is a need to correct biases in AI, and companies, governments and consumers each have a role to play. Edited excerpts:
Q. You’ve made a transition from a human rights background to working with privacy issues, and you say that privacy is linked to equality. Can you elaborate?
I started in the human rights field, very much in women’s rights. And something about data really interested me, especially the power of data collection. There was a time when we thought data was neutral and could inform us about the world. But in reality, data is not neutral. It is a picture of society, right? And even what you decide to collect and what you decide not to collect is a choice that you make. So to an extent, the link between equality, gender, human rights and the digital became quite clear to me. For example, if you take the medical field, a lot of women say that when they go to the doctor, what they experience is often not recognised. That is because a lot of the data collected about diseases, especially heart disease, has historically been collected from men. So to me it became quite clear that there is a strong link between the power around data collection, and gender and equality. That’s where I started to study all of it, bringing together the legal knowledge of human rights and the technical knowledge of information security and coding. That’s how both things merged.
Q. The conversation around digital rights and privacy has changed a lot over the past few years. Now, there’s also AI that’s come into the picture in a big way. How do we start making sense of how all of this is affecting our lives?
AI does offer a great opportunity, and it is really important that we bear this in mind. The fact that we can identify correlations between things that we wouldn’t be able to identify without these technical capabilities, I find that incredible. The fact that you have tools like generative AI that can summarise human knowledge. There is a huge potential to bring knowledge, expertise and advantages to people who could not access them before. This also means that with this amount of data that we collect, companies have a huge responsibility, and citizens and consumers can have a greater role in deciding about the data. This is important for a country like India, for example, because these technologies can diffuse through the economy, bringing education and health care to places that are more rural. You could have support for women and vulnerable people in a way that you did not have before. So that is all positive. And I would like to see a generation of women and younger people championing AI for growth, accessibility and democratisation. But there are also risks of bias that come with AI, risks of encoding into software a lot of the inequalities that we have in society now.
Q. In India, the conversation around AI has become more ‘mainstream’ because of platforms like ChatGPT. There’s also a lot of fear creeping in about how AI will take our jobs sooner or later, or how much we can trust it. What roles do companies and the government have to play here?
You say correctly that ChatGPT is the first time that people are seeing AI in their daily lives in this way. Before, it was all in films, like robots or machines taking over. Now, on the one hand you have tech people who say AI is going to destroy jobs and the world as we know it. And a lot of people would feel, ‘Is that correct?’ and say that we need to push the pause button on AI. Catastrophic appeals to stop AI are not helping, and it is unlikely that AI will lead to the demise of the world. What we know is that there are risks that we have to deal with now. It’s time to push the accelerator on governance, guidance and rules.
This governance comes at many levels. Companies have a duty to create an environment of trust around what they produce, what they sell and what they use in-house for their own employees. And I am absolutely confident that organisations that are able to bring together innovation in AI with governance are going to have a competitive advantage globally. Because consumers are understanding that these technologies are not neutral and have to be used well. So companies have a responsibility to develop a generative AI strategy. We’ve done it at Wipro. How are you going to develop and deploy this technology? How are you going to make sure that the output is fair? How are you going to choose your definition of fairness? How are you making sure that there is privacy by design in everything that you do, and that you’re not breaching copyright when you’re using artificial intelligence systems?
Government has the crucial role of understanding where the risks in these technologies are, how they can be mitigated, and how it can foster and reward good behaviour by companies. This is happening in different ways in different countries. Everywhere, governments are thinking about how to harness the value of these technologies and what rules to put in place. Guidelines and support for research are important. Supporting companies in understanding how to bring together privacy and AI is a crucial role for governments.
Q. Are companies understanding the nuances of how AI can play out in the daily course of work and decision-making? What are the conversations you’re having at Wipro?
[At Wipro] we have a taskforce on generative AI from a privacy and ethics standpoint, which I lead. We have established policies and governance around the use of generative AI: how to use it, how to deploy it, what to do and what not to do. We are also looking at, for example, guidelines to support developers on fairness in machine learning, how to identify bias, how to cleanse or massage the data so that when you put it into an AI system, it doesn’t carry biases. That’s the work I do in my company. But I can tell you that privacy laws and guidelines around the world are converging around transparency of systems and fairness. There’s also increased research on how to respond to these issues with technology.
Then there is also the development of new talent in this field. When I started working in privacy, we were cheering for privacy on the sidelines. Now, people talk about these issues at their kitchen tables. So there is also a growing number of professions around privacy. Privacy used to be just for lawyers; now it sits very much at the intersection of law and technology. AI is not just about technology. So we need a new generation of people coming into this field from different backgrounds to build this profession. Because the more diverse the people in this field are, the better it will be.
Q. How do people protect themselves against compromised data, particularly health care data? In India, there was a breach in our vaccination portal, CoWin. What are the implications of such data leaks?
People have seen, globally, the importance of data security. There is a general awareness that data matters and that it is not just a piece of information but about us as individuals. It cannot be separated from us. So the first thing is awareness. Unlike before, we now understand that data is not something we can simply give out. It’s like a part of your body. Second is being cautious about when you share data and who you share it with, and sharing only what is necessary. Third, consumers have to demand that businesses keep their information safe. That also puts the onus on companies to do the right thing. If I am a consumer, I would choose to buy from a company only if I trust the way it handles my data. All of this is going to change the culture moving forward, and it’s a matter of companies doing the right thing, and also of consumers and citizens understanding that we live increasingly in a world that is both physical and digital at the same time.
Q. Your initiative called ‘Women Leading in AI’ talks about why it’s important to have women in decision-making or leadership roles when it comes to shaping the future of AI. Could you talk more about that, and other intersectionalities that we can focus on?
We need more women of colour in coding and programming, that’s for sure. But that’s not enough. We also need more women around policy in AI, because you can have the best AI system in the world, but ultimately, you can still use it for the wrong thing. The fact that you’ve got the capability to develop something doesn’t mean you’ve got to do it.
You’ve got to deal with AI in terms of risks, and countries are doing this. India is doing this. But we also need more women in company leadership, deciding what we’re going to produce before deciding how. For example, AI can be absolutely brilliant in helping with climate change, but AI also causes pollution. So what we are going to produce is important. I would want to see more women, more diversity, in leadership because I want to see more ethical choices about which products we are going to invest in, and what the strategy around AI is.