When spending time outdoors, many hearing aid wearers still struggle with a problem they did not have before wearing hearing aids: fluctuating, annoying wind noise caused by turbulent airflow around the hearing aid microphones. Besides mechanical housing optimizations, most hearing aids already include some type of dedicated wind noise reduction algorithm. The challenge for such algorithms is to detect wind noise quickly and accurately and to reduce its annoyance.
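One common way to detect wind noise quickly, sketched below under assumptions of our own (the actual eWindScreen detector is not disclosed here), exploits the fact that turbulence is largely uncorrelated between two closely spaced microphones, whereas acoustic sound arrives at both with high correlation. The function name, frame length, and threshold are illustrative choices, not product parameters.

```python
import numpy as np

def detect_wind(mic_front: np.ndarray, mic_rear: np.ndarray,
                frame_len: int = 256, threshold: float = 0.4) -> np.ndarray:
    """Flag frames as wind-dominated when the two microphone signals
    are poorly correlated: turbulent airflow is uncorrelated between
    microphones, unlike sound arriving through the air."""
    n_frames = min(len(mic_front), len(mic_rear)) // frame_len
    flags = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        a = mic_front[i * frame_len:(i + 1) * frame_len]
        b = mic_rear[i * frame_len:(i + 1) * frame_len]
        denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12
        corr = np.abs(np.sum(a * b)) / denom  # normalized cross-correlation
        flags[i] = corr < threshold  # low correlation -> likely wind
    return flags
```

Because the decision is made per short frame, such a detector can react within tens of milliseconds, which is what allows a wind noise algorithm to be "speedy" in practice.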
As already indicated in the philosophy of the AX platform, we want our hearing aid processing to move towards augmented processing: shaping all the signals in the hearing aid wearer’s surroundings so that the wearer can focus on the relevant ones. When exposed to wind noise, even at a reduced level, the random fluctuations in the noise tend to dominate perception, making the noise noticeable and annoying. According to Zwicker’s psychoacoustic annoyance model, the more a noise fluctuates, the more annoying it is perceived to be.
Taking on this challenge, the upgraded eWindScreen introduces a precise wind smoothing algorithm that reacts faster to fluctuations in the wind noise. The upgraded eWindScreen thereby not only limits the gain for wind noise but also reduces the fluctuations at each ear. This smooths the wind noise signal in both ears, so the wind noise is not only softer but also no longer dominated by fluctuations. Perceptually, this reduction in fluctuation strength keeps the wind noise in the background, where it loses its capacity to draw the wearer’s attention and thus becomes less annoying. This allows the wearer to better engage in outdoor communication or other outdoor activities.
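The principle of limiting gain while smoothing fluctuations can be sketched as an asymmetric one-pole smoother on a per-frame attenuation value: a fast attack clamps sudden wind bursts almost immediately, while a slow release keeps the attenuation steady between bursts, flattening the level fluctuations. This is a minimal illustration with made-up parameter values, not the actual eWindScreen implementation.

```python
import numpy as np

def smooth_wind_gain(wind_levels_db: np.ndarray,
                     max_reduction_db: float = 12.0,
                     attack: float = 0.7, release: float = 0.1) -> np.ndarray:
    """Per-frame wind-reduction gains (in dB, <= 0) from estimated
    wind levels. Fast attack reacts quickly to wind bursts; slow
    release avoids pumping, so the output level stays smooth."""
    gains = np.empty_like(wind_levels_db)
    state = 0.0  # current attenuation in dB (0 = no reduction)
    for i, level in enumerate(wind_levels_db):
        # target attenuation grows with the estimated wind level,
        # capped so speech audibility is preserved
        target = np.clip(level, 0.0, max_reduction_db)
        coef = attack if target > state else release
        state += coef * (target - state)
        gains[i] = -state
    return gains
```

With a burst of wind the attenuation builds up within a frame or two, then decays only gradually once the burst passes, so the residual wind noise stays at a steady, low level rather than flickering with each gust.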
The most important need hearing aid wearers must address, and continuously focus on, is the ability to engage in conversations with other people in noisy environments. Like everyone else, hearing aid wearers want to contribute actively to a conversation, not only listen passively. Until now, hearing aids have primarily used information about the environment to resolve sound sources, attenuate the noise, and focus on the people speaking. In situations where both the hearing aid wearer and others are speaking in a noisy environment, the automatic steering in the hearing aid may be disturbed by the wearer’s own voice being louder than the other voices and, as a result, provide non-optimal support by not maintaining consistently optimal microphone directionality. Besides uncomfortable or distracting sound fluctuations, this may in some cases also lead to a drop in speech understanding. AX Soundscape Processing works in partnership with the Augmented Focus™ split processing to analyze the acoustic environment and optimize sound accordingly. With this improvement to AX Soundscape Processing, AX hearing aids leverage Own Voice Processing 2.0 in a new way, calculating even more precise levels of support for external voices at every moment when the wearer is actively communicating in demanding environments. Note that to benefit from this upgrade, OVP 2.0 must be trained in Connexx.
Signia's strong focus on mastering the full acoustic scene, taking all sound sources into consideration, inspired the inclusion of own voice detection in the analysis of the entire sound scene. By extending the use of the own voice detection function of Signia’s unique Own Voice Processing 2.0, the application and focus of all the speech enhancement and noise attenuation algorithms can be adapted more precisely to the given communication situation.
Own Voice Processing 2.0 separates the wearer’s own voice from the background noise and optimizes the processing of own voice without disturbing the processing of the background. In the improved AX Soundscape Processing, however, Own Voice Processing 2.0 does not only improve the wearer’s perception of their own voice. It now also serves as an additional sensor, contributing to the analysis of the entire sound scene. Knowing whether it is the wearer or someone else who is talking allows the right amount of support – from the advanced AX noise reduction and directionality systems – to be provided when the wearer engages actively in conversations in demanding sound environments. The own voice detection thus helps to provide optimal processing not only of the wearer’s own voice but of the entire sound environment, including the voices the wearer wants to attend to. As a result, the wearer experiences stable, optimal support that allows them to engage in conversations in challenging sound environments.
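The idea of using own voice detection as a sensor for steering can be illustrated with a simple control loop, assuming hypothetical names, thresholds, and adaptation rates of our own: while the own-voice flag is raised, the steering state is held so the louder own voice cannot drag directionality and noise reduction away from their targets; otherwise, the state adapts smoothly toward a target set by the noise level.

```python
from dataclasses import dataclass

@dataclass
class SteeringState:
    directionality: float = 0.0   # 0 = omnidirectional, 1 = fully directional
    noise_reduction_db: float = 0.0

def steer(state: SteeringState, own_voice: bool,
          noise_level_db: float, step: float = 0.2) -> SteeringState:
    """Hold processing steady while the wearer speaks; otherwise
    adapt smoothly toward targets set by the ambient noise level."""
    if own_voice:
        return state  # freeze: the wearer's voice must not drive steering
    # illustrative targets: go directional in loud noise, scale NR with level
    target_dir = 1.0 if noise_level_db > 65.0 else 0.0
    target_nr = min(max(noise_level_db - 55.0, 0.0), 12.0)
    new_dir = state.directionality + step * (target_dir - state.directionality)
    new_nr = state.noise_reduction_db + step * (target_nr - state.noise_reduction_db)
    return SteeringState(new_dir, new_nr)
```

In this sketch, alternating turns between the wearer and an external talker leave the support settings stable across the conversation, rather than letting them oscillate each time the wearer speaks.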