Google's new AI tool translates sign language into text; it is currently in the testing phase, with a full launch expected by the end of the year.
Google has unveiled SignGemma, its most advanced AI model for translating sign language, at Google I/O 2025. The tool is currently in testing and open to developers and select users, with wider availability expected by the end of the year.

In short
- Google's SignGemma AI supports real-time sign language translation
- The AI tool currently works best with American Sign Language and English
- Google claims SignGemma is its most capable AI model for understanding sign language
Sign language is essential for many people with speech loss. They use it to communicate with those around them, yet many people do not understand it. Now AI is set to help here too. Google is working on an AI model called SignGemma that will translate sign language into text. The company says it is the most capable artificial intelligence model to date designed for this task. The new AI model is currently in its testing phase and is slated for public launch by the end of the year.
Google first unveiled SignGemma during the keynote at Google I/O, where Gemma product manager Gus Martins described it as the company's "most capable sign language understanding model". Martins said that, unlike previous efforts at sign language translation, SignGemma stands out for its open-model approach and its focus on giving users accurate, real-time translation. While the tool is trained to handle various sign languages, Google says the model currently performs best with American Sign Language (ASL) and English.
"We're thrilled to announce SignGemma, our groundbreaking open model for sign language understanding, is set to release later this year," Martins said. "It's the most capable sign language understanding model ever, and we can't wait for developers and deaf and hard-of-hearing communities to take this foundation and build with it."
Google highlighted that, with this tool, it aims to bridge the communication gap for millions of deaf and hard-of-hearing individuals worldwide.
Meanwhile, to ensure the tool is both effective and respectful of its user base, Google is taking a collaborative approach to its development. The company has extended an open invitation to developers, researchers, and members of the global deaf and hard-of-hearing communities to participate in early testing and provide feedback.
"We're thrilled to announce SignGemma, our groundbreaking open model for sign language understanding," reads the official post from DeepMind on X. "Your unique experiences, insights, and needs are important as we prepare for launch, to make SignGemma as useful and impactful as possible."
SignGemma arrives as Google sharpens its focus on accessibility while expanding its AI portfolio. At Google I/O 2025, accessibility took center stage with the announcement of several new AI-powered features designed to make technology more inclusive for all. One of the highlights was the expanded integration of Gemini AI with Android's TalkBack, which will now give users AI-generated descriptions of images and let them ask follow-up questions about what is on their screen.
Google also introduced Chrome updates, including automatic optical character recognition (OCR) for scanned PDFs, enabling screen-reader users to access, search, and interact with text in documents that were previously inaccessible. For students, Chromebooks are getting a new accessibility tool that allows users to control their device with facial gestures and head movements.