How Signia Improved Machine Learning in Hearing Aids

Machine learning isn't quite what it seems. While the term brings up ideas of artificial intelligence and expensive technology, the reality is far simpler. It is also a driving force behind the steady improvement of hearing aids.

'Machine learning' is a popular buzzword, especially in tech circles. It sounds high-tech and full of potential, immediately conjuring images of artificial intelligence and androids. But while AI still feels futuristic, machine learning has been with us for decades. In fact, it shapes the way many of us use technology, from our computers to our hearing aids.

Humble beginnings

One of the first machine learning programs, written in the 1950s, was designed to do one simple task: play checkers. It did not have to play well, at least not at first. Over time, it played enough games to 'learn' which moves worked. This learning curve made the machine increasingly good at the game, and it eventually went on to beat a skilled human player.

This same concept has been applied to other games, from chess to online multiplayer titles. There are hundreds of programs and experiments that involve training a machine to play a game, and this research has helped innovators apply machine learning to nearly everything - including hearing aids.
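To make that idea concrete, here is a toy sketch in Python of the same learn-from-outcomes loop: a 'player' that discovers which of three opening moves wins most often, simply by playing lots of games and keeping score. The moves and win rates are invented purely for illustration; this is not the historical program.

```python
import random

# Hypothetical win probabilities for three opening moves.
# The "player" does not know these; it must learn them by playing.
TRUE_WIN_RATE = {"A": 0.3, "B": 0.5, "C": 0.7}

wins = {move: 0 for move in TRUE_WIN_RATE}
plays = {move: 0 for move in TRUE_WIN_RATE}

def play_game(move: str) -> bool:
    """Simulate one game; returns True on a win."""
    return random.random() < TRUE_WIN_RATE[move]

def observed_rate(move: str) -> float:
    # Unplayed moves get an optimistic 1.0 so each gets tried at least once.
    return wins[move] / plays[move] if plays[move] else 1.0

for game in range(10_000):
    # Mostly exploit the best-known move, occasionally explore others.
    if random.random() < 0.1:
        move = random.choice(list(TRUE_WIN_RATE))
    else:
        move = max(TRUE_WIN_RATE, key=observed_rate)
    plays[move] += 1
    wins[move] += play_game(move)

# After enough games, move "C" dominates the play counts.
for move in TRUE_WIN_RATE:
    print(move, round(observed_rate(move), 2), plays[move])
```

The program starts out clueless, yet ends up preferring the strongest move, having learned only from the outcomes of its own games. That is machine learning at its simplest.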

The rise of machine learning in hearing aids

Machine learning was included in hearing aids as early as 2006, with a hearing aid model called the Centra. Over time, this hearing aid was programmed to learn its user's preferences and tailor its default gain to them. The gain could still be changed by the wearer, but the hearing aid was designed to settle into the wearer's preferred settings on its own.

This reduced the likelihood that the hearing aid would need manual adjustment, which created a more comfortable and seamless listening experience for the wearer. While the idea was simple, the Centra offered important evidence of how machine learning could benefit hearing aid manufacturers and wearers alike. Hard-of-hearing wearers could enjoy a more lifelike sound, and hearing aid developers gained valuable insight into how machine learning improved their devices.
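As a rough illustration of how a 'trainable' hearing aid like the Centra might learn a wearer's preferred default gain, here is a minimal Python sketch. The learning rule, learning rate, and adjustment values are assumptions made for illustration, not Signia's actual algorithm: every time the wearer changes the volume, the stored default drifts a little toward that choice.

```python
LEARNING_RATE = 0.2  # how strongly each adjustment pulls the default (assumed)

def update_default_gain(default_gain_db: float, chosen_gain_db: float) -> float:
    """Move the stored default a small step toward the wearer's chosen gain."""
    return default_gain_db + LEARNING_RATE * (chosen_gain_db - default_gain_db)

default_gain = 20.0  # dB, the fitted starting point (hypothetical)

# Hypothetical wearer adjustments over several days: they keep turning
# the aid down slightly, so the learned default should settle lower.
for chosen in [18.0, 17.5, 18.5, 17.0, 18.0]:
    default_gain = update_default_gain(default_gain, chosen)
    print(f"learned default gain: {default_gain:.2f} dB")
```

After a handful of adjustments, the default converges near what the wearer actually prefers, which is why a trainable aid needs fewer and fewer corrections over time.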

From there, the original concept has been reworked, tweaked, and refined into something better. The artificial intelligence used in hearing aids is quicker and more efficient, and it picks up on wearer preferences faster. Machine learning has also paved the way for other improvements to hearing aid technology, including Signia’s Own Voice Processing (OVP).

The power of machine learning

2017 brought an entirely new feature to the table. Many people with hearing loss cite the sound of their own voice as a downside to using hearing aids. Signia addressed this issue by developing OVP for a more natural-sounding own voice.

In order to function properly, the artificial intelligence involved in OVP has to 'learn' the wearer's voice. While that kind of learning could take weeks in previous hearing aid models, the voice recognition in OVP takes mere seconds.

The revolutionary solution works by using bilateral data sharing, known as Ultra HD e2e wireless communication. The two hearing aids work together to create a scan of the wearer's head, which is used to identify when the wearer is speaking. From there, the algorithm separates the wearer's voice from all the other sounds around them, including other voices, and processes it instantaneously to create a more natural-sounding own voice.
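Signia's actual OVP algorithm is proprietary, but the broad idea of detecting one's own voice with two cooperating hearing aids can be sketched. The toy Python below assumes a deliberately simplified model: the wearer's own voice reaches both ears at nearly equal and relatively high levels (the mouth sits midway between them and close by), so frames that are loud and symmetric are treated as own voice and amplified more gently. Every feature choice and threshold here is invented for illustration.

```python
import numpy as np

def frame_level_db(frame: np.ndarray) -> float:
    """Root-mean-square level of one short audio frame, in dB."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    return 20 * np.log10(rms)

def looks_like_own_voice(left: np.ndarray, right: np.ndarray,
                         level_threshold_db: float = -20.0,
                         symmetry_threshold_db: float = 2.0) -> bool:
    """Classify a frame pair shared between the two aids - the kind of
    bilateral exchange that e2e wireless communication makes possible."""
    left_db, right_db = frame_level_db(left), frame_level_db(right)
    loud_enough = min(left_db, right_db) > level_threshold_db
    symmetric = abs(left_db - right_db) < symmetry_threshold_db
    return loud_enough and symmetric

def process(left: np.ndarray, right: np.ndarray,
            normal_gain: float = 2.0, own_voice_gain: float = 1.2):
    """Apply gentler amplification to frames flagged as the wearer's voice."""
    gain = own_voice_gain if looks_like_own_voice(left, right) else normal_gain
    return left * gain, right * gain

# Example: a loud, symmetric frame (own voice) versus an asymmetric one
# (a talker off to one side of the head).
rng = np.random.default_rng(0)
own = rng.normal(0, 0.3, 512)
print(looks_like_own_voice(own, own * 0.95))  # True: loud and symmetric
print(looks_like_own_voice(own, own * 0.2))   # False: one side much quieter
```

Once own-voice frames are identified, they can be routed through a separate processing path, which is how a wearer's voice can be made to sound natural without sacrificing amplification of everything else.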

Before OVP, many hard-of-hearing people declined hearing aids because their own voice sounded unnatural and odd. OVP has made it possible for them to enjoy speaking again.

Machine learning is ubiquitous in our computers, phones, and hearing aids. As our devices get smarter, we benefit more from using them.
