AR could play a role in helping people understand sign language
Students in New York have developed an app that uses AR and machine learning to translate sign language into spoken words, and vice versa. Many deaf people still live fairly isolated lives; everyday tasks such as making appointments or going out are often complicated by how few hearing people know sign language.
The ARSL app uses computer vision models to translate sign language into spoken language, and vice versa, in real time. The prototype gained funding through the Verizon Connected Futures challenge, which was set up to form a pipeline between NYC universities and the company in order to promote next-gen technology.
The ARSL team is headed by Zhongheng Li, who was inspired by his experience of growing up with two deaf parents. Speaking of his mother, who moved to the US from Hong Kong, Li noted her frustration that there is no universal sign language.
You can see the team’s presentation video here.
A lot to learn
There are, in fact, hundreds of different sign languages in use around the world. This often makes translating between sign languages a difficult task, let alone translating from signs to spoken language.
The ARSL app is currently limited to a single use case: making an appointment at a medical clinic. Expanding that functionality is a complex task. Sign language is an intricate system of movement and facial expressions, and words often come in a different order than they would in English. Add in the individual subtleties that each person brings to signing, and a machine attempting to learn and translate has a lot to handle.
Li and his team hope that the funding they’ve received will help them add a range of other scenarios to the app, with broader communication at small medical clinics at the top of the list; larger medical facilities and hospitals are likely to have sign language interpreters on hand at all times, while small clinics are not. It seems, though, that a full sign language translation app is still some way off.