SLAIT’s real-time sign language translation promises more accessible online communication

Sign language is used by millions of people around the world, but unlike Spanish, Mandarin or even Latin, there's no automatic translation available for those who can't use it. SLAIT claims to offer the first such tool available for general use — one that can translate around 200 words and simple sentences to start, using nothing but an ordinary computer and webcam.

People with hearing impairments, or other conditions that make vocal speech difficult, number in the hundreds of millions and rely on the same common tech tools as the hearing population. But while emails and text chat are useful and of course very common now, they aren't a replacement for face-to-face communication, and unfortunately there's no easy way for signing to be turned into written or spoken words, so this remains a significant barrier.

We’ve seen attempts at automatic sign language (usually American/ASL) translation for years and years: in 2012 Microsoft awarded its Imagine Cup to a student team that tracked hand movements with gloves; in 2018 I wrote about SignAll, which has been working on a sign language translation booth using multiple cameras to give 3D positioning; and in 2019 I noted that a new hand-tracking algorithm called MediaPipe, from Google’s AI labs, could lead to advances in sign detection. Turns out that’s more or less exactly what happened.

SLAIT is a startup built out of research done at the Aachen University of Applied Sciences in Germany, where co-founder Antonio Domènech built a small ASL recognition engine using MediaPipe and custom neural networks. Having proved the basic notion, Domènech was joined by co-founders Evgeny Fomin and William Vicars to start the company; they then moved on to building a system that could recognize first 100, and now 200, individual ASL gestures and some simple sentences. The translation occurs offline and in near real time on any relatively recent phone or computer.

Animation showing ASL signs being translated to text, and spoken words being transcribed to text back.

They plan to make it available for educational and development work, expanding their dataset so they can improve the model before attempting any more significant consumer applications.

Of course, the development of the current model was not at all simple, though it was achieved in remarkably little time by a small team. MediaPipe offered an effective, open-source method for tracking hand and finger positions, sure, but the crucial component for any strong machine learning model is data, in this case video data (since it would be interpreting video) of ASL in use — and there simply isn’t a lot of that available.
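MediaPipe's value here is that it reduces each video frame to a compact set of hand-landmark coordinates, which is what makes real-time classification feasible on an ordinary laptop. As a rough illustration only — SLAIT uses custom neural networks, and the sign labels and landmark values below are entirely hypothetical — a minimal nearest-centroid classifier over flattened landmark vectors might be sketched like this:

```python
import math

# MediaPipe Hands reports 21 (x, y, z) landmarks per detected hand.
NUM_LANDMARKS = 21

def flatten(landmarks):
    """Flatten a list of 21 (x, y, z) tuples into a 63-dim feature vector."""
    return [coord for point in landmarks for coord in point]

class NearestCentroidSignClassifier:
    """Toy stand-in for a trained network: one centroid per known sign."""

    def __init__(self):
        self.centroids = {}  # sign label -> mean feature vector

    def fit(self, examples):
        """examples: {label: [feature_vector, ...]} of training landmarks."""
        for label, vectors in examples.items():
            dim = len(vectors[0])
            self.centroids[label] = [
                sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
            ]

    def predict(self, vector):
        """Return the label whose centroid is nearest to this frame's vector."""
        return min(
            self.centroids,
            key=lambda label: math.dist(vector, self.centroids[label]),
        )
```

The point of the sketch is only that per-frame landmark vectors are small enough (63 numbers per hand) to classify at video frame rates; a production system would classify sequences of frames, since most signs are motions rather than static poses.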

As they recently explained in a presentation for the DeafIT conference, the team first evaluated an older Microsoft database, but found that a newer Australian academic database had more and better quality data, allowing for the creation of a model that is 92 percent accurate at identifying any of 200 signs in real time. They have augmented this with sign language videos from social media (with permission, of course) and government speeches that have sign language interpreters — but they still need more.

Animated image of a woman saying "deaf understand hearing" in ASL.

A GIF showing one of the prototypes in action — the consumer product won't have a wireframe, obviously. Image Credits: Slait.ai

Their intention is to make the platform available to the deaf and ASL learner communities, who hopefully won't mind that their use of the system will also contribute to improving it.

And naturally it could prove an invaluable tool in its present state, since the company's translation model, even as a work in progress, is still potentially transformative for many people. With the number of video calls going on these days, and likely for the rest of eternity, accessibility is being left behind — only some platforms offer automatic captioning, transcription, or summaries, and certainly none recognize sign language. But with SLAIT's tool, people could sign normally and participate in a video call naturally, rather than using the neglected chat function.

“In the short term, we’ve proven that 200 word models are accessible and our results are getting better every day,” said SLAIT’s Evgeny Fomin. “In the medium term, we plan to release a consumer facing app to track sign language. However, there is a lot of work to do to reach a comprehensive library of all sign language gestures. We are committed to making this future state a reality. Our mission is to radically improve accessibility for the Deaf and hard of hearing communities.”

From left, Evgeny Fomin, Dominic Domènech, and Bill Vicars. Image Credits: Slait.ai

He cautioned that it will not be totally complete — just as translation and transcription in or to any language is only an approximation, the point is to provide practical results for millions of people, and a few hundred words goes a long way toward doing so. As data pours in, new words can be added to the vocabulary, and new multi-gesture phrases as well, and performance for the core set will improve.

Right now the company is seeking initial funding to get its prototype out and grow the team beyond the founding crew. Fomin said they have received some interest but want to make sure they connect with an investor who really understands the plan and vision.

When the engine itself has been built up to be more reliable by the addition of more data and the refining of the machine learning models, the team will look into further development and integration of the app with other products and services. For now the product is more of a proof of concept, but what a proof it is — with a bit more work SLAIT will have leapfrogged the industry and provided something that deaf and hearing people both have been wanting for decades.



from Startups – TechCrunch https://ift.tt/3nlGn5G
