
Indian Student Builds an AI Model to Translate Sign Language into English in Real-Time

Artificial Intelligence (AI) has been used to develop a variety of translation models that improve communication between users and overcome language barriers across regions. Companies such as Google and Facebook use AI to build advanced translation models for their services. Now, a third-year engineering student from India has built an AI model that can detect American Sign Language (ASL) and translate it into English in real-time.

Indian Student Develops AI-based ASL Detector

Priyanjali Gupta, a student of Vellore Institute of Technology (VIT), shared a video on her LinkedIn profile showing a demo of an AI-based ASL detector. Although the AI model can detect signs and translate them into English in real-time, it supports only a few words and phrases at the moment. These include hello, please, thanks, I love you, yes, and no.

Gupta built the model using the TensorFlow Object Detection API, applying transfer learning through a pre-trained model called ssd_mobilenet. This means she was able to reuse an existing trained network and adapt it to her ASL detector. It's also worth mentioning that the AI model doesn't actually translate ASL into English in a linguistic sense. Instead, it identifies an object, in this case a hand sign, and determines how similar it is to the signs it was trained to recognize.
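Gupta's exact code isn't reproduced in this article, but the final step of an object-detection pipeline like this, turning the detector's raw class IDs and confidence scores into English words, can be sketched in plain Python. All names, labels, and thresholds below are illustrative assumptions, not taken from her project:

```python
# Hypothetical post-processing step for an SSD-style detector
# fine-tuned on ASL signs. The label map mirrors the six words the
# article says the demo supports; IDs and threshold are assumptions.

LABEL_MAP = {1: "hello", 2: "please", 3: "thanks",
             4: "I love you", 5: "yes", 6: "no"}

def translate_detections(classes, scores, threshold=0.5):
    """Map raw detector outputs (class IDs and confidence scores)
    to English words, keeping only confident detections."""
    return [LABEL_MAP[c] for c, s in zip(classes, scores)
            if s >= threshold and c in LABEL_MAP]

# Example: one confident "hello" and one low-confidence "no";
# only the confident detection survives the threshold.
print(translate_detections([1, 6], [0.91, 0.30]))  # ['hello']
```

In a real-time setup this function would run on every camera frame, with the detector (e.g., a fine-tuned ssd_mobilenet) supplying the class IDs and scores.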

In an interview with Interesting Engineering, Gupta said that her biggest inspiration for creating such AI models was her mother, who nudged her to "do something" after she joined the engineering course at VIT. "She taunted me. But it made me think about what I could do with my knowledge and skills. One fine day, amid conversations with Alexa, the idea of inclusive technology struck me. That triggered a set of plans," she told the publication.

In her post, Gupta also credits a 2020 video by YouTuber and data scientist Nicholas Renotte, which details the development of an AI-based ASL detector.

Although Gupta's post on LinkedIn garnered many positive responses and praise from the community, one AI vision engineer pointed out that the transfer learning method used in her model relies on a network "trained by other experts" and is a relatively easy task in AI. Gupta accepted the point and wrote that "building a deep learning model solely for sign detection is indeed a hard problem but not impossible."

"Currently, I'm just an amateur student, but I'm learning, and I'm sure that sooner or later our open-source community, which is far more experienced and learned than me, will find a solution, and maybe we can have deep learning models solely for sign languages," she adds.

You can visit Priyanjali's GitHub page to learn more about the AI model and access relevant resources for the project.


