Google DeepMind announces SignGemma: AI for Sign Language

A step towards inclusive Artificial Intelligence

Google DeepMind announced SignGemma, its most capable model yet for translating sign language into spoken language text. The open model will join the Gemma model family later this year, marking a step towards inclusive artificial intelligence.

SignGemma is designed to translate various sign languages into spoken language text. While the model has been trained to be massively multilingual, it has been primarily tested and optimised for American Sign Language (ASL) and English.

Earlier, DeepMind launched Gemma 3n, a model that lets developers generate text from audio, image, video and written input. This enables live, interactive applications that respond to what users see and hear in real time, as well as audio-based tools for speech recognition, translation and voice-controlled experiences.
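For developers who want to try the open Gemma models, the sketch below shows one plausible way to prompt a multimodal checkpoint with an image and a text question, assuming it is served through the Hugging Face transformers "image-text-to-text" pipeline; the model ID and image URL are illustrative placeholders, not details confirmed in the announcement.

```python
# Minimal sketch: prompting an open multimodal Gemma checkpoint with an image and text.
# Assumes the Hugging Face transformers library; the model ID and URL are illustrative.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",           # chat-style multimodal pipeline
    model="google/gemma-3n-E2B-it", # illustrative Gemma 3n checkpoint ID
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/scene.jpg"},  # placeholder image
            {"type": "text", "text": "Describe what is happening in this image."},
        ],
    }
]

# Generate a short text response conditioned on both the image and the prompt.
result = pipe(text=messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```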

Further, Google, in collaboration with Georgia Tech and the Wild Dolphin Project (WDP), launched DolphinGemma, an AI model developed to analyse and generate dolphin vocalisations. The model is trained on decades of underwater video and audio data from WDP's long-term study of Atlantic spotted dolphins in the Bahamas.

MedGemma, a recent addition to the Gemma 3 family, is a specialised collection of models designed to advance medical AI applications. It supports tasks such as clinical reasoning and the analysis of medical images, helping accelerate innovation at the intersection of healthcare and artificial intelligence.

In February this year, NVIDIA, the American Society for Deaf Children and creative agency Hello Monday launched Signs, an interactive web platform built to support ASL learning and the development of accessible AI applications.

These efforts could significantly improve accessibility for individuals who use sign language as their primary mode of communication. By enabling smoother and faster translation of sign language into spoken or written text, they could also support fuller participation in daily life, including work, education and social interactions.


Merin Susan John is a journalist at Analytics India Magazine, reporting on the intersection of AI and human capital. She can be reached at merin.john@aimmediahouse.com