Q. What kind of work is being done at Microsoft Research in Bengaluru?
This lab is on fire. The portfolio here spans making our cloud services more relevant to customers, thinking out of the box about the problems of huge wealth disparity, and developing technologies that can help people with special needs.
There is a room here called ‘the room of enablement’, where demonstrations and presentations are made. The most impressive work there was a project to build tools that let visually impaired people become good programmers; the designers are working on audio interfaces.
Right down the hall there is another room where work on languages is underway. In India, when people speak in English, they effortlessly swap words for their equivalents in their native tongues. The technical term for this is code-switching, and speech recognition systems such as Microsoft’s Cortana, Amazon’s Alexa, Google Home or Apple’s Siri aren’t built to handle it well, as they are focussed on one language at a time—typically English.
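A first step towards handling code-switched speech is identifying, token by token, which language each word belongs to. The sketch below shows the idea with tiny, invented vocabularies standing in for real lexicons; it is an illustration of the problem, not Microsoft's system.

```python
# Toy token-level language identification for code-switched text.
# The two vocabularies are hypothetical stand-ins for real lexicons.

HINDI = {"accha", "bahut", "nahi", "kal", "milte", "hain"}
ENGLISH = {"the", "meeting", "is", "tomorrow", "ok", "see", "you"}

def tag_tokens(utterance):
    """Label each token 'hi', 'en', or 'unk' by vocabulary lookup."""
    tags = []
    for tok in utterance.lower().split():
        if tok in HINDI:
            tags.append((tok, "hi"))
        elif tok in ENGLISH:
            tags.append((tok, "en"))
        else:
            tags.append((tok, "unk"))
    return tags

print(tag_tokens("the meeting is kal ok"))
# [('the', 'en'), ('meeting', 'en'), ('is', 'en'), ('kal', 'hi'), ('ok', 'en')]
```

Real systems replace the dictionary lookup with learned models that use context, since many words are ambiguous across languages.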
I saw some fabulous, deep studies in this area coming out of this lab. It’s safe to say this lab has essentially defined a rising field: how people mix languages in a single utterance, and how to build systems that understand it.
Q. How far are you from programming with thought?
First of all, there’s been a great deal of work towards what is called ‘intentional programming’ of very high-level specifications, which is to start not with thought, but [spoken] natural language expression of what you’d like the program to do. The idea is to look at this as a language translation problem.
The other direction is program synthesis, which is being done in this lab. This is the idea that you watch somebody give examples. Say I have a list of names in an Excel column and I want their initials in the next column; after a worked example or two, the system right away writes the code to do this.
However, this may not be enough because the examples may be ‘weakly constraining’ [meaning they may not provide all the information the program needs]. Therefore, we could come up with a creative behind-the-scenes search of many programs and then use machine learning to learn how to prioritise which program is the right one [to build the list].
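The search-then-rank idea can be sketched in a few lines. This is a deliberately tiny stand-in, in the spirit of the Excel-initials example, not Microsoft's actual synthesiser: enumerate a small space of candidate string programs, keep those consistent with the user's examples, and rank the survivors (here by name length, a crude proxy for a learned ranker).

```python
# Minimal programming-by-examples sketch: search candidate programs,
# filter by the user's (input, output) examples, then rank.
# The candidate space and ranking rule are invented for illustration.

CANDIDATES = {
    "first_letter": lambda s: s[0].upper(),
    "initials":     lambda s: "".join(w[0].upper() for w in s.split()),
    "upper_all":    lambda s: s.upper(),
}

def synthesise(examples):
    """Return names of candidates consistent with all examples,
    'simplest' (shortest name) first -- a stand-in for a learned ranker."""
    ok = [name for name, prog in CANDIDATES.items()
          if all(prog(inp) == out for inp, out in examples)]
    return sorted(ok, key=len)

# One example is weakly constraining: two programs survive.
print(synthesise([("Ada", "A")]))
# A second example disambiguates.
print(synthesise([("Ada", "A"), ("Alan Turing", "AT")]))  # ['initials']
```

The weak-constraint problem from the paragraph above shows up directly: the single example `("Ada", "A")` is matched by both the first-letter program and the initials program, and only more examples (or a learned prior) can decide between them.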
The lab here in Bengaluru is doing work on reasoning out the larger conversation about the program itself. The first step is, you want to see an example from a human being. The second phase is to take lots of data—examples from many human beings over many years—and figure out what they mean by just a few examples. The third part is for the machine to be able to say, ‘You know, can I come back to you and pose a question about whether you mean this or not?’ This is active learning.

Q. Are there entirely new paradigms of building artificial intelligence—computers that can teach themselves?
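That third, question-posing step can be illustrated concretely: when two synthesised programs both fit the examples, find an input on which they disagree and put it back to the user. The candidate programs and probe inputs below are made up for illustration.

```python
# Toy active-learning step: ask the user about an input on which
# two surviving candidate programs disagree.

cand_a = lambda s: s[0].upper()                              # first letter only
cand_b = lambda s: "".join(w[0].upper() for w in s.split())  # initials

def distinguishing_input(prog_a, prog_b, probes):
    """Return a probe input where the two candidates differ, else None."""
    for inp in probes:
        if prog_a(inp) != prog_b(inp):
            return inp
    return None

probe = distinguishing_input(cand_a, cand_b, ["Ada", "Ada Lovelace"])
print(f"Did you mean {cand_a(probe)!r} or {cand_b(probe)!r} for {probe!r}?")
# Did you mean 'A' or 'AL' for 'Ada Lovelace'?
```

The user's answer eliminates one candidate, which is exactly the clarifying question described above.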
One of the pillars of Microsoft Research in artificial intelligence (AI) is work towards more general AI. How do we address what I would call the existing mysteries of human intellect, the ones we’ve not been able to solve or make good progress on in computer science? One of these is what you just alluded to: the ability of even toddlers to learn by just watching, without labels.
With machine learning, supervised learning means having huge volumes of data, all labelled. Like, say, 100 symptoms and indications of a particular disease; the algorithm crunches it and pops out a classifier that can diagnose that disease, though only narrowly.
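The labelled-data-in, classifier-out loop can be shown with a toy: a nearest-neighbour rule stands in for a real learning algorithm, and the symptom vectors and labels are entirely invented.

```python
# Sketch of the supervised set-up: labelled symptom vectors in,
# a (very narrow) classifier out. Data and labels are invented.

TRAIN = [  # (symptom vector, diagnosis label)
    ((1, 1, 0), "flu"),
    ((1, 0, 0), "cold"),
    ((0, 1, 1), "allergy"),
]

def classify(symptoms):
    """Predict the label of the closest training example (1-nearest-neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAIN, key=lambda ex: dist(ex[0], symptoms))[1]

print(classify((1, 0, 1)))  # -> cold (closest to the second example)
```

The narrowness is visible in the sketch itself: the classifier can only ever answer with one of the three labels it was shown, which is the "naïve savant" quality described later in the interview.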
There is another area of promise called ‘reinforcement learning’, made famous by the AlphaGo team. It’s more like you take actions in the world, and you receive thousands of signals that are rewards or punishments. Sometimes the reward actually comes only at the end—winning the game—yet it has to guide the moves you made 4,000 steps earlier. These are models where a smart system learns how to take a sequence of steps.
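How an end-of-game reward reaches back to earlier moves is the crux of reinforcement learning, and tabular Q-learning shows it in miniature. A five-state corridor (keep moving right to win) is an invented stand-in for a real game; the only reward arrives in the final state, and repeated play propagates its value backwards.

```python
# Tabular Q-learning on a 5-state corridor: reward only at the end,
# learning propagates it back to earlier states. Toy illustration.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                 # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice((-1, +1)) if random.random() < eps else \
            max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0          # reward only at the end
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) from every state.
policy = {s: max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Note how states far from the goal acquire smaller (discounted) values: the single terminal reward has been smeared backwards into a judgement about every earlier move.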
A third area is when we don’t have enough data; [for these situations] we want to run scenarios trillions of times, and learn from them. Some psychologists believe that human beings run rich game-like scenarios in their minds, which generates data. One of the magical mysteries of unsupervised learning is that it’s actually supervised, but through simulations.
Another mystery is the notion of common sense reasoning, which is actually different for different people. If you really look at it, our common sense understanding of the world around us—like how gravity holds us down to the ground or how liquid takes the form of its container—is really massive, and what we learn in universities is just spit-polish on top. That knowledge is missing in our computing systems.
We often say, ‘Wouldn’t it be great to imbue our computer systems with the common sense of a five-year-old?’ One of the visions in this movement towards a more general AI is called integrative AI: How do we bring together all this expertise we’re developing—machine learning, vision, translation, image recognition—into one system, and build a symphony of an intellect?
As human beings, we often feel like we are unitary, like we are a single intellect. In reality, we are made of many competencies somehow coordinated in a beautiful way. And this is the goal of many of our projects: to build systems that have that symphony of intellect, well coordinated, so that they appear fluid and can do many diverse things.
I started with your question of how we go from a batch of data to classifiers, and, to be honest, what we get out of those systems at the end is very narrow intelligence. They are almost what we call naïve savants, even idiot savants: brilliant, but extremely narrow. Unravelling the mystery of the breadth of human intellect will take time; I think it’s our great-grandkids who will have systems with that breadth.
(This story appears in the 16 March, 2018 issue of Forbes India.)