Daily CSR
Daily news about corporate social responsibility, ethics and sustainability

Lenovo’s AI Revolution: Bridging Communication Gaps for the Deaf and Hard of Hearing


At Lenovo’s Tech World event, an innovative demonstration took place. A software developer named Gabriel, using Libras, the official Brazilian Sign Language, warmly greeted Lenovo’s chairman and CEO, Yuanqing Yang, known as YY. Although YY does not know Libras, he understood Gabriel perfectly, thanks to AI technology: a camera tracked Gabriel’s hand movements, and an AI engine provided real-time text and voice translations, effectively eliminating the language barrier.

This interaction highlighted a revolutionary accessibility solution developed by Lenovo researchers, which has the potential to significantly impact the lives of many, including the 2.3 million individuals in Brazil who are deaf or hard of hearing.

Hildebrando Lima, Lenovo’s director of research and development in Brazil, said the scalable solution represents a new AI-powered paradigm for accessibility and inclusion. The technology is designed to facilitate interactions where a sign language interpreter might not be available, such as in retail spaces or hospitals, promoting autonomy and connection.

Behind the scenes at the Tech World event, Lenovo edge servers supplied the necessary computing power to run the AI and interpret the complex data captured as Gabriel signed his greeting. While cloud computation is an option, edge servers offer superior speed and reliability at the exact location where the AI is needed.

Parts of the demonstration were a proof of concept; Gabriel’s AI voice, for example, was chosen by his family from 13 custom options. The underlying technology, however, is well advanced, having been in development for four years. Numerous deaf and hard of hearing Libras users have already contributed thousands of hours of anonymized video data to build the training set and advance the AI.

The idea for this core accessibility R&D initiative originated from an internal team discussion at Lenovo in 2019, when a software developer proficient in Libras highlighted several everyday accessibility challenges and urged Lenovo to do more to enhance independence and quality of life for the deaf community.

“As a company, we are committed to delivering smarter technology for all, and that means prioritizing inclusivity and considering the diversity of our customers and communities,” Lima said. “We embraced the challenge.”

The Lenovo team in Brazil embarked on a project to create a real-time translation chat tool. It allows deaf or hard of hearing individuals to sign to a device’s camera, and an algorithm instantly translates the Libras into written or spoken Portuguese. With the widespread use of generative AI and multilingual datasets, translations can now be made into numerous other languages.

However, real-time video capture and translation between languages involve a massive amount of data, including the individual gestures for each word and the syntax of each sentence. Just as spoken languages like English have distinct regional accents, movements and signing styles vary among individuals within Libras.

Lima highlighted the numerous challenges of video capture alone: skin color, background color, lighting, clothing, the speed of the signer’s gestures, hand positions relative to the body, and the varying depth perception of different cameras.

To address the data challenge, Lenovo partnered with the Brazilian innovation center CESAR, pooling their expertise in capturing and cataloging video to lay the groundwork for the AI. Together, Lenovo and CESAR have compiled a dataset of thousands of Libras videos to train the core algorithm to recognize and contextualize individual gestures. Lenovo then spearheaded the development of the AI at the core of the solution.

The AI recognizes hand positions and the articulation points of the signer’s fingers. After processing these movements and gestures, it can accurately identify the flow of a sentence and quickly convert the sign language into text.
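The article does not describe Lenovo’s actual model, but the recognition step can be illustrated with a minimal sketch. Assuming (hypothetically) that hand landmarks have already been extracted from video frames as normalized (x, y) coordinates, a gesture becomes a sequence of landmark frames that can be matched against labeled templates; the two-landmark frames and the tiny "hello"/"thanks" vocabulary below are purely illustrative:

```python
import math

# Each gesture is a sequence of hand-landmark frames; each frame is a list of
# (x, y) coordinates normalized to the [0, 1] range of the camera image.
# Real hand trackers report ~21 landmarks per hand; two suffice for a sketch.
TEMPLATES = {
    "hello": [[(0.2, 0.8), (0.3, 0.7)], [(0.4, 0.6), (0.5, 0.5)]],
    "thanks": [[(0.8, 0.2), (0.7, 0.3)], [(0.6, 0.4), (0.5, 0.5)]],
}

def distance(seq_a, seq_b):
    """Mean Euclidean distance between two equal-length landmark sequences."""
    total, count = 0.0, 0
    for frame_a, frame_b in zip(seq_a, seq_b):
        for (xa, ya), (xb, yb) in zip(frame_a, frame_b):
            total += math.hypot(xa - xb, ya - yb)
            count += 1
    return total / count

def classify(observed):
    """Return the label of the template closest to the observed sequence."""
    return min(TEMPLATES, key=lambda label: distance(observed, TEMPLATES[label]))

observed = [[(0.21, 0.79), (0.31, 0.69)], [(0.41, 0.61), (0.49, 0.52)]]
print(classify(observed))  # prints: hello
```

A production system would replace the nearest-neighbor match with a trained sequence model, but the input representation, landmarks over time, is the same idea.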
The team also worked with Lenovo’s Product Diversity Office (PDO), which aims to ensure that Lenovo products are accessible to everyone, regardless of physical attributes or abilities. The PDO’s inclusive design experts helped identify potential areas of concern, such as skin tone, hair style, corrective lenses, and limb differences, and ensured that product testing took these factors into account.

At a recent internal event in Brazil focused on inclusion in Lenovo workspaces, a member of the Lenovo R&D team learned about a deaf individual who had struggled to communicate with her parents throughout her childhood. She faced significant challenges and relied heavily on sign-language interpreters, who were not always available, especially at home.

“Imagine being unable to talk easily to your friends or parents for your entire childhood, or with your colleagues at work,” said Lima. “It’s the kind of intimate, family, education, and workplace inclusion scenario where this solution can change so much.”

The Lenovo R&D team clarified that the solution is not designed to replace the need for more people to learn Libras or other sign languages; rather, it aims to bridge existing communication gaps. Furthermore, the AI could potentially be used to expedite sign language learning, using computer vision to monitor the accuracy of gestures and guide users in making adjustments. If implemented on wearable technology or through augmented reality, individuals could have immersive learning experiences with the AI serving as a mentor.
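How such learning feedback might work can be sketched under the assumption that both the learner’s gesture and a reference gesture are available as landmark sequences; the threshold value and function names below are illustrative, not part of Lenovo’s system:

```python
import math

def frame_error(learner, reference):
    """Mean landmark distance between learner and reference, frame by frame."""
    errors = []
    for frame_l, frame_r in zip(learner, reference):
        dists = [math.hypot(xl - xr, yl - yr)
                 for (xl, yl), (xr, yr) in zip(frame_l, frame_r)]
        errors.append(sum(dists) / len(dists))
    return errors

def feedback(learner, reference, threshold=0.05):
    """Return the indices of frames where the learner's hand strays too far
    from the reference gesture, so those moments can be replayed and corrected."""
    return [i for i, err in enumerate(frame_error(learner, reference))
            if err > threshold]

reference = [[(0.2, 0.8)], [(0.4, 0.6)], [(0.6, 0.4)]]
learner   = [[(0.21, 0.79)], [(0.55, 0.45)], [(0.61, 0.41)]]
print(feedback(learner, reference))  # frames needing correction: [1]
```

In an augmented-reality setting, the flagged frame indices could drive on-screen cues showing where in the sign the learner drifted.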

The R&D team at Lenovo collaborated with Lenovo’s Infrastructure Solutions Group to identify an edge computing solution. Relying solely on the cloud, and therefore on a high-speed internet connection, is feasible in some cases but not always reliable. Potential users in a hospital or airport, where time is crucial, would not want to depend on an unpredictable connection. Edge computing aligns with Lenovo’s pocket-to-cloud portfolio, which brings AI to the source of the data and into the hands of users.
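The edge-versus-cloud trade-off described above amounts to a routing decision: prefer the on-site edge server, and fall back to the cloud only when it can still meet a real-time latency budget. The backend names and the 100 ms budget in this sketch are hypothetical, not figures from Lenovo:

```python
def choose_backend(edge_latency_ms, cloud_latency_ms, max_latency_ms=100.0):
    """Pick an inference backend for real-time translation.

    A latency of None means the backend is unreachable. The edge server is
    preferred because it sits where the data is captured; the cloud is a
    fallback only while its round-trip time stays within the budget.
    """
    if edge_latency_ms is not None and edge_latency_ms <= max_latency_ms:
        return "edge"
    if cloud_latency_ms is not None and cloud_latency_ms <= max_latency_ms:
        return "cloud"
    return "unavailable"

print(choose_backend(8.0, 90.0))    # prints: edge
print(choose_backend(None, 90.0))   # prints: cloud
print(choose_backend(None, 400.0))  # prints: unavailable
```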

The next phase of the project involves scaling beyond internal testing. More data points will be required to launch a real-time sign language translation interface on a larger scale. The team is considering self-learning algorithms and other technologies to speed up development, particularly as the user base and datasets expand.

Lenovo is also investigating how to customize the translation solution for specific industry sectors, such as finance or retail, fine-tuning the datasets to deliver the best possible user experience. As the solution evolves and inspires more inclusive technology, the more than 430 million deaf and hard of hearing individuals worldwide stand to benefit from the potential of AI.