AlterEgo is a non-invasive, wearable peripheral neural interface that lets users converse silently with machines, AI assistants, services, and other people. Image: Shutterstock
In 2018, Arnav Kapur, a Delhi-born student at the Massachusetts Institute of Technology (MIT), developed a device that has the potential to change the relationship between humans and machines. ‘AlterEgo’, as MIT describes it, is a non-invasive, wearable, peripheral neural interface that allows users to “converse in natural language with machines, artificial intelligence (AI) assistants, services, and other people without any voice—without opening their mouth, and without externally observable movements—simply by articulating words internally”.
In a recent video interview doing the rounds on the internet, Kapur is seen with the device placed behind his ear, answering difficult questions such as “the largest city in Bulgaria and its population” in the blink of an eye. The answer came after a Google search that Kapur conducted through the device, without speaking or typing into a search bar.
According to MIT, the primary focus of this project is to help support communication for people with speech disorders, including conditions like ALS (amyotrophic lateral sclerosis) and MS (multiple sclerosis). “Beyond that, the system has the potential to seamlessly integrate humans and computers—such that computing, the internet and AI would weave into our daily life as a ‘second self’ and augment our cognition and abilities.”
How does AlterEgo work?
The wearable device records neural signals as the user internally articulates words, that is, says them silently to themselves. That information is transmitted to machines and the internet to fetch answers. Without speech, typed keywords, or any visible action, the user can send and receive information discreetly. The system returns feedback as audio through bone conduction, closing the loop: the interaction feels like speaking internally to oneself, and it does not interfere with the user's normal hearing.
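To make the closed loop concrete, here is a minimal, purely illustrative sketch of the capture-decode-query-feedback cycle described above. All function names, the toy nearest-neighbour "decoder", and the hard-coded answers are assumptions for illustration; MIT's actual system uses trained neural networks on real neuromuscular signal data.

```python
# Hypothetical sketch of a closed-loop silent-speech pipeline.
# Everything here is illustrative, not AlterEgo's real implementation.

def decode_internal_speech(signal_samples):
    """Map a captured signal reading to a word.

    Stand-in for a trained classifier: picks the label whose stored
    template is closest to the input (toy nearest-neighbour match).
    """
    templates = {
        "weather": [0.9, 0.1, 0.2],
        "time":    [0.1, 0.8, 0.3],
    }

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(templates, key=lambda w: dist(templates[w], signal_samples))

def answer_query(word):
    """Stand-in for sending the decoded query to a service or the internet."""
    knowledge = {"weather": "Sunny, 24 C", "time": "14:05"}
    return knowledge.get(word, "No answer found")

def bone_conduction_feedback(text):
    """Stand-in for the audio feedback stage (bone-conduction output)."""
    return f"[audio] {text}"

# One pass through the loop: capture -> decode -> query -> feedback.
captured = [0.85, 0.15, 0.25]  # pretend sensor reading
word = decode_internal_speech(captured)
print(bone_conduction_feedback(answer_query(word)))  # -> [audio] Sunny, 24 C
```

The point of the sketch is the loop structure: no stage requires speech, typing, or visible movement, and the reply comes back privately as audio rather than on a screen.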
The device can also carry out tasks such as ordering a pizza without any app or phone. Kapur says the idea behind the device is for a user to have the entire internet in their head, eventually becoming an expert on any subject.
Man behind the machine
Kapur is an avid learner across disciplines, drawing on their different perspectives to tackle real-world problems. After his schooling in Delhi, he moved to the US to pursue his master's and PhD at MIT, where he is currently studying Media Arts and Sciences at the MIT Media Lab. His achievements include a 3D-printable drone, a gene expression measurement platform, and Drishti, a device to aid the visually impaired. He also contributed to the development of a lunar rover designed for moon missions and image transmission to Earth, and collaborated on a cutting-edge art installation displayed at the Tate Modern in London and the alt-AI conference in New York.
According to MIT, Kapur's work explores whether AI and computing could be woven into the human experience as a direct extension of our cognition, rather than accessed through external devices. In this way, computers would extend human ability manifold, instead of diminishing or displacing humans in our environment.