India developing unique AI regulation model: Ashwini Vaishnaw
The minister for electronics and information technology has listed three reasons why India needs its own foundation model
India has adopted “a techno-legal approach” to regulating artificial intelligence (AI), which, according to Ashwini Vaishnaw, minister for electronics and information technology, could provide a model to the world for the safe development of this transformational and disruptive technology. It differs from the purely legal approach being taken in some other countries.
“The world is looking at India as a potential leader in developing this new model, in contrast to the purely legal approach followed in some geographies. India believes that AI safety is best ensured through developing technologies that help keep AI safe. Technical solutions will augment legal provisions. A purely legal approach to protecting society from the harms that might come with the misuse of AI will likely not be as effective as a techno-legal approach,” the minister told Forbes India in an interview.
Asked whether Indians had grown accustomed to using global AI models such as ChatGPT and Llama, which could pose a challenge to the adoption of India's own foundation model, he said this was just the first chapter of AI. “A lot more is going to come in the coming years. So, I would not want to close that door by saying, ‘Oh, somebody has created something, so why create a foundational model of our own?’”
The minister listed three reasons why India needs its own foundation model. First, the country’s cultural heritage and linguistic strengths must be reflected in the models. Second, the biases that exist in many parts of the world need to be kept out. Third, these technologies are already gaining strategic importance, and it is therefore important for India to have its own models.
Safety is one of the pillars of India’s AI mission, under which teams have been empanelled to develop safety technologies such as deepfake detection, bias detection and model unlearning. “This approach allows us to tap into a large talent pool and develop solutions for AI safety,” Vaishnaw said.