Engineers and doctors from IIT-Jodhpur and AIIMS-Jodhpur recently collaborated to develop 'Talking Gloves'. The device uses the principles of artificial intelligence (AI) and machine learning (ML) and is expected to hit the market next year.
In what could be a significant breakthrough for persons with speech disability, doctors and engineers from the All India Institute of Medical Sciences (AIIMS) Jodhpur and Indian Institute of Technology (IIT) Jodhpur, respectively, have developed low-cost ‘Talking Gloves’.
Priced at less than Rs 5,000, this device uses the principles of artificial intelligence (AI) and machine learning (ML) to automatically generate speech that is “language independent” and facilitates conversations between persons who are mute and those who aren’t. What the device essentially does is help a person with a speech disability convert hand gestures into text or pre-recorded voice. This allows that person to communicate their message independently.
Leading the charge in the design and development of the Talking Gloves is Sumit Kalra, Assistant Professor at the Department of Computer Science and Engineering, IIT Jodhpur.
He was ably assisted by fellow innovators Dr Arpit Khandelwal from IIT Jodhpur, and Dr Nithin Prakash Nair (Senior Resident, Otorhinolaryngology or ENT), Dr Amit Goyal (Professor and Head, Department of Otorhinolaryngology) and Dr Abhinav Dixit (Professor, Department of Physiology), from AIIMS Jodhpur.
They have even obtained a patent for this innovation.
“Various preordain situations result in a disease or injury to people and deprive them of their natural ability to communicate verbally…Sign language is a mode of communication for patients who are affected since birth… [However] In recent years, the technological advancement in the field of electro-medical devices for life support, implantable biomedical devices, and wearable medical devices have successfully provided artificial abilities to the afflicted people related to any kind of disability or impairment,” notes their 2019 patent application.
In this regard, the patented innovation from researchers at IIT Jodhpur and AIIMS Jodhpur is a significant step towards advancement in this field.
“The language-independent speech generation device will bring the people back to the mainstream in today’s global era without any language barrier. Users of the device only need to learn once and they will be able to verbally communicate in any language with their knowledge. Additionally, the device can be customised to produce a voice similar to the original voice of the patients which makes it appear more natural while using the device,” said Prof Sumit Kalra, in a press release issued by IIT Jodhpur earlier this week.
The initial push for this innovation came from Dr Amit Goyal and Dr Abhinav Dixit who have long thought about solving the problem of rehabilitation for patients suffering from the loss of vocal ability. They observed that speech rehabilitation is a difficult process and many times the person goes through a lot of stress. “The success rate [of speech rehabilitation] is also as low as 33 per cent. Even with existing methods and digital devices, the produced voice quality is also not good. This motivated them to look out for a suitable solution,” says Prof Kalra, speaking to The Better India.
Dr Goyal and Dr Dixit, along with Dr Nithin Prakash Nair, met with Professor Sumit Kalra and Dr Arpit Khandelwal from IIT Jodhpur in October 2018 to discuss possible solutions. Following long hours of brainstorming sessions, they came up with the current solution.
“It took eight months to develop the first prototype and demonstrate the feasibility. The patent application was filed in September 2019. The major challenge was to come up with a script that is language-independent and a suitable set of sensors and methods to generate the speech,” recalls Prof Kalra.
How Does It Work?
A first set of sensors in the device, worn on a combination of the thumb, fingers and/or wrist of the user’s dominant hand, generates electrical signals from the movements of the fingers, thumb, hand and wrist. A second set of sensors on the other hand generates electrical signals in the same way.
These electrical signals are received at a signal processing unit. “The magnitude of the received electrical signals is compared with a plurality of predefined combinations of magnitudes stored in a memory by using the signal processing unit,” notes the patent application.
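To make this comparison step concrete, here is a minimal sketch of how a signal processing unit might match a vector of sensor magnitudes against predefined combinations stored in memory. All gesture labels, magnitude values, and the tolerance threshold below are invented for illustration; the patent does not specify these details.

```python
# Illustrative only: matching a reading of sensor magnitudes against
# predefined combinations "stored in a memory", as the patent describes.
# Gesture names and magnitude values are hypothetical.

STORED_COMBINATIONS = {
    # gesture label -> expected sensor magnitudes (thumb, index, middle, wrist)
    "gesture_ka": (0.9, 0.1, 0.1, 0.2),
    "gesture_ma": (0.1, 0.8, 0.8, 0.1),
    "gesture_aa": (0.5, 0.5, 0.1, 0.9),
}

def match_gesture(reading, tolerance=0.15):
    """Return the stored gesture whose magnitudes are closest to the
    reading, or None if no combination falls within the tolerance."""
    best_label, best_dist = None, float("inf")
    for label, expected in STORED_COMBINATIONS.items():
        # Euclidean distance between the reading and the stored combination
        dist = sum((r - e) ** 2 for r, e in zip(reading, expected)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= tolerance else None

print(match_gesture((0.88, 0.12, 0.08, 0.22)))  # prints: gesture_ka
```

The tolerance check means a gesture the wearer did not intend (sensor magnitudes far from every stored combination) is rejected rather than mapped to the nearest label.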
By using AI and ML algorithms, these combinations of signals are translated into phonetics corresponding to at least one consonant and a vowel.
“In an example implementation, the consonant and the vowel can be from Hindi language phonetics. A phonetic is assigned to the received electrical signals based on the comparison. An audio signal is generated by an audio transmitter corresponding to the assigned phonetic and based on trained data associated with vocal characteristics stored in a machine learning unit. The generation of audio signals according to the phonetics having combination of vowels and consonants leads to the generation of speech and enables the mute people to audibly communicate with others. The speech synthesis technique of the present subject matter uses phonetics, and therefore the speech generation is independent of any language,” it adds.
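The phonetics step described above can be sketched in a few lines: each recognised gesture maps to a consonant-vowel phonetic unit, and a sequence of units forms the speech to be voiced. The mapping below is invented for illustration; in the actual device this mapping is learned with AI/ML, and the phonetics may come from Hindi.

```python
# Illustrative only: assigning a consonant-vowel phonetic to each
# recognised gesture, then concatenating the units into a phonetic
# string for a speech synthesiser. The gesture-to-phonetic table here
# is hypothetical.

GESTURE_TO_PHONETIC = {
    "gesture_na": ("n", "a"),   # consonant + vowel
    "gesture_ma": ("m", "a"),
    "gesture_te": ("t", "e"),
}

def gestures_to_speech(gestures):
    """Join the consonant-vowel pair for each gesture into a phonetic
    string that an audio transmitter could voice."""
    units = []
    for gesture in gestures:
        consonant, vowel = GESTURE_TO_PHONETIC[gesture]
        units.append(consonant + vowel)
    return "".join(units)

print(gestures_to_speech(["gesture_na", "gesture_ma", "gesture_te"]))  # prints: namate
```

Because the output is a phonetic string rather than words in a dictionary, the same gesture vocabulary can voice any language whose sounds the phonetic units cover, which is the sense in which the device is "language independent".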
“Finger movements from both hands are used to map to phonetic syllables and given as input to the speech generation module to produce the sound. The speech generation module can be customised to select the accent based on the user preference. For prototype development, we have taken regular gloves available in the market and fit very lightweight sensors on them. In general, any regular gloves material can be used for this purpose,” Prof Kalra says.
Speaking of the specific advantages the ‘Talking Gloves’ have for the differently-abled, Kalra notes, “The existing methods to generate speech are either symbolic or language-dependent. Symbolic methods have limitations in terms of their scope of what can be spoken out and language-dependent methods are limited by language alone. With the proposed method, one can speak any language with any combination of words without any such limitations.”
The team is working to further improve the device’s durability, weight, responsiveness, and ease of use. The product will be commercialised through a startup incubated by IIT Jodhpur, and potential users and customers can expect the first version of the Talking Gloves to launch in the market by the end of 2022.
(Edited by Yoshita Rao)