In an exclusive interview, the minister discusses why India needs its own foundational model, outlines a roadmap for applying AI at scale, highlights progress on the semiconductor mission, explains why the world is looking at India as the next semiconductor nation, and more
Ashwini Vaishnaw, Minister for Railways, Information & Broadcasting, and Electronics & IT
Image: Amit Dave / Reuters
A day after India’s cabinet approved a new semiconductor plant—a joint venture between HCL Group and Taiwan's Foxconn—Minister for Railways, Information & Broadcasting, and Electronics & IT Ashwini Vaishnaw sat down for an interview with Forbes India. He spoke about two major developments expected before the end of this year: producing the first made-in-India chip and building the country’s foundational AI (artificial intelligence) model. He says many countries want to regulate AI using only the legal system. By contrast, India is considering a techno-legal approach, which means deploying technological solutions to ensure AI remains within safety parameters. Edited excerpts:
Q. Can you walk us through the progress of India's AI mission, including the development of foundational models?
The AI mission is comprehensive, with multiple parts. Our first objective was to provide high-quality GPU compute power to startups, students, academia and researchers. To achieve this, we created a PPP (public-private partnership) model, empanelling GPU providers and offering their services to those who need them. We have secured 18,000 GPUs in the first round and received applications for over 15,000 GPUs in the second round, exceeding our initial target of 10,000 GPUs.
The second part of our mission involves developing foundational models. We have developed models with 2 billion and 7 billion parameters, as well as smaller models for specific problem sets. Building on this experience, the IndiaAI Mission approved Sarvam, a startup that will work on building India’s foundational model. Sarvam has made good progress and is taking a unique approach to developing the model.
The third important part is the AI Safety Institute, where we are taking a techno-legal approach to regulating AI. We have empanelled teams to develop technologies for AI safety, such as deepfake detection, bias detection and machine unlearning. This approach allows us to tap into a large talent pool and develop solutions for AI safety.