Automatic Steering: The Director of the binax Soundtrack

AudiologyOnline, June 5, 2015

Learning Objectives

  • The participant will be able to describe the function of automatic steering in hearing aids.
  • The participant will be able to name the different acoustic situations differentiated by binax hearing aids.
  • The participant will be able to describe the functionality of Spatial Configurator.


The number of bilateral fittings has been growing continuously worldwide, according to EuroTrak 2012 and JapanTrak 2012 (Hougaard, Ruf, & Egger, 2013). This should not be unexpected, as there are many advantages to using two hearing aids (see Mueller, Ricketts, & Bentler, 2014, for a summary). One of these benefits is binaural summation, which increases loudness perception by 2-6 dB when an individual is fitted bilaterally. This allows gain to be reduced in each individual hearing aid. With bilateral fittings, wearers also experience better environmental balance and sound quality. Localization and spatial perception of the momentary acoustic situation are improved, as is the signal-to-noise ratio in noisy situations (Beilin & Herbig, 2014).

In addition to these well-known benefits of bilateral instruments, Siemens e2e wireless systems, which have been commercially available for over a decade, allow the hearing instruments to synchronize user interventions (such as program selection and volume adjustment) and to keep several features (such as the directional system and automatic situation detection) and their effects on signal processing bilaterally balanced.

But modern hearing aid technology can do even more. The “binax” platform, the next generation from Siemens, creates a real binaural audio link between the two hearing instruments. This e2e wireless 3.0 system has a significantly higher transmission capacity which makes it possible to exchange not only control signals, but audio signals too, without a significant increase in battery drain (Lotter, Schatzle, & Fischer, 2014).

Utilizing this improved signal processing system, binax significantly reduces two of the biggest wearer problems: understanding speech in background noise and listening outside in wind. Built upon the experience and success of the 48 channel Siemens micon platform, binax features enhanced directivity to the front (Narrow Directionality), true directionality to either side or the back (Spatial SpeechFocus) and improved wind noise reduction (eWindScreen binaural) with localization cues maintained (Herbig, 2014).

binax Features

Narrow Directionality

Monaural (i.e., traditional) directional technology in hearing aids utilizes the inputs of two microphones, emphasizing sounds coming from the frontal direction relative to the back. The new Narrow Directionality uses the inputs of all four microphones on both sides (Figure 1).


Figure 1. e2e wireless 3.0 is able to link the microphones in both hearing instruments in a bilateral fitting. In this way, both hearing aids utilize the input of 4 microphones.

By using the additional signals from the other side for directional processing, a more precise mapping of the environment can be achieved (Kamkar-Parsi, Fischer, & Aubreville, 2014), resulting in a narrower focus beamwidth (Figure 2).


Figure 2. Compared with a standard monaural directional microphone, Narrow Directionality has a narrower focus beam, so that sounds outside of what is immediately in front of the wearer can be attenuated.
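
The physics behind this narrowing can be illustrated with a toy model. The sketch below is not the Sivantos algorithm; it is a plain delay-and-sum beamformer on a hypothetical linear array (the microphone spacings and the 2 kHz test frequency are illustrative assumptions), showing why four microphones spanning the head produce a much narrower frontal beam than two microphones on a single device:

```python
import cmath
import math

SPEED_OF_SOUND = 343.0  # m/s

def beamwidth_deg(mic_positions, freq_hz):
    """Return the first off-axis angle (degrees) at which a broadside
    delay-and-sum beamformer steered to the front drops below -3 dB,
    or 90 if it never does within the quarter plane."""
    k = 2 * math.pi * freq_hz / SPEED_OF_SOUND  # wavenumber
    for deg in range(1, 91):
        s = math.sin(math.radians(deg))
        resp = abs(sum(cmath.exp(1j * k * x * s) for x in mic_positions))
        if resp / len(mic_positions) < 1 / math.sqrt(2):  # below -3 dB
            return deg
    return 90

# Two mics of a single device, about 1 cm apart (monaural directionality)
monaural = [-0.005, 0.005]
# Four mics: two per device, the devices about 16 cm apart across the head
binaural = [-0.085, -0.075, 0.075, 0.085]

print(beamwidth_deg(monaural, 2000))  # stays wide: never reaches -3 dB
print(beamwidth_deg(binaural, 2000))  # markedly narrower focus beam
```

The larger acoustic aperture across the head, not the mere count of microphones, is what tightens the beam in this simplified model.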

The user benefit of the improved directionality for understanding speech in different background-noise situations has been tested and proven. In fact, two independent clinical studies have shown that hearing-impaired wearers fitted with binax Narrow Directionality may perform even better than their normal-hearing counterparts in certain demanding environments (Powers & Froehlich, 2014). A few years ago we could not have imagined that hearing instrument technology would not only approximate, but even outperform, normal hearing.

Spatial SpeechFocus

e2e wireless 3.0 not only makes possible features that focus better on the frontal direction, but also allows for beamforming to the back or to either side of the hearing aid user. Spatial SpeechFocus automatically offers true directionality to the side when speech is detected as coming directly from either side (Figure 3). This means that regardless of where the speaker is located in relation to the user, Spatial SpeechFocus provides directionality towards the speaker and attenuates noise from the other directions. This is particularly useful in situations where the wearer cannot turn his head to face the direction of a non-frontal sound source (e.g., driving a car and talking with passengers), but needs properly functioning noise isolation and listening focus in any of the four directions around him (Kamkar-Parsi, et al., 2014).


Figure 3. Spatial SpeechFocus offers true directionality towards whichever side speech comes from without compromising spatial and localization cues.

eWindScreen binaural

If wind noise appears on either side, it is attenuated immediately by using the less noisy input from the contralateral side (Figure 4). With the fine resolution offered by the 48 signal processing channels, it is sufficient to intervene only in the channels affected by the wind noise, so speech understanding is improved while spatial hearing is preserved in windy conditions (Berghorn, Wilson, & Pischel, 2014).


Figure 4. eWindScreen binaural works by streaming the noise affected frequency channel(s) from the side with less wind to the side with stronger wind. The effect is immediate reduction of wind noise without compromising speech intelligibility or the ability to localize.
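
The per-channel substitution described above can be sketched in a few lines. This is a simplified illustration (the function name, the use of channel energies as the signal, and the plain "take the cleaner side" rule are assumptions, not the shipped algorithm):

```python
def ewindscreen_binaural(ipsi, contra, wind_flags):
    """Toy sketch of binaural wind-noise reduction: in each frequency
    channel flagged as wind-affected on this side, substitute the
    contralateral channel (represented here by channel energies) when
    it is cleaner; unaffected channels pass through untouched, which
    preserves spatial cues there.

    ipsi, contra: per-channel values for this ear / the other ear
    wind_flags:   True where wind noise is detected on this side."""
    out = []
    for i_ch, c_ch, windy in zip(ipsi, contra, wind_flags):
        if windy and c_ch < i_ch:   # the other side is cleaner here
            out.append(c_ch)        # stream it across the e2e link
        else:
            out.append(i_ch)        # keep the local channel
    return out

# Wind hits only the two lowest channels of the left device
left  = [9.0, 8.5, 1.2, 1.0]   # channel energies, wind in ch 0-1
right = [1.5, 1.4, 1.3, 1.1]
flags = [True, True, False, False]
print(ewindscreen_binaural(left, right, flags))  # [1.5, 1.4, 1.2, 1.0]
```

Note that only the flagged channels change; the untouched channels keep their original interaural differences, which is what preserves localization.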

Automatic Steering

These binaural features (as well as several monaural ones) provide real user benefit in many acoustic environments. It is a realistic expectation of a pair of modern hearing instruments that they emphasize the user's preferred sound sources at all times, in all circumstances, and do all processing without any user intervention, by implementing adaptive and intelligent algorithms. Such a higher level of automatic control must, of course, continuously monitor the environment around the user and steer every feature in order to maintain user satisfaction. This adaptation needs to be smart enough to always transmit the sounds of interest, to keep the less important components at a comfortably reduced but audible level, and to maintain the localization cues.

But what are these sounds of interest? What are the user expectations? Can we create some simple rules? The questions are not as simple as we might think, because ideally we would like the hearing aid technology to perform similarly to how our normal hearing system works. There may be changes in the acoustic scenery all the time: alternating types of background noise, multiple sound sources from different directions, and so on. And of course, the target that we want to hear can be very different. Sometimes we want to listen to a talker in a restaurant as clearly and sharply as possible; at other times, we just want to enjoy the background sounds of a forest or the lively atmosphere of a crowded marketplace. The automatic binax steering algorithm needs to control all the features in 48 channels in order to emphasize the information the user wants to hear in the best and most comfortable manner (Figure 5).


Figure 5. Some moments from the day of a hearing aid user, with different targets, background noises, multiple sound sources, etc. There are different user expectations in every situation that need to be met continuously by the activated features. The automatic steering provides not only six possible feature sets according to the acoustic situation detection; further fine tuning of the features is applied continuously by the monitoring subsystems and contralateral information to meet user expectations at all times.

The binax hearing aids can detect six general situation classes: Quiet, Noise, Speech in Quiet, Speech in Noise, Car, and Music. The devices monitor several environmental inputs (such as overall sound level, background noise type, location of the speaker, etc.) to decide which situation class to use. As the binax platform has multichannel processing with a very high number of channels, the environmental signals are monitored with fine frequency resolution, which results in a precise assessment of the attributes of the given environment. Every situation class has its own user-expectation philosophy; for example, in "Speech in Noise," users generally need good speech understanding with reduced environmental noise.
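
As a rough illustration, the top-level decision can be thought of as a rule-based mapping from monitored inputs to the six classes. The thresholds, priorities, and boolean detector inputs below are hypothetical; the real detector works on fine-grained multichannel spectral and temporal analysis:

```python
def classify_situation(level_db, speech, noise, car, music):
    """Hypothetical sketch of the six binax situation classes.
    level_db: overall input level; the remaining arguments stand in
    for the outputs of dedicated detectors (speech presence,
    background noise, car acoustics, music)."""
    if car:                      # car acoustics dominate the decision
        return "Car"
    if music:
        return "Music"
    if speech and noise:
        return "Speech in Noise"
    if speech:
        return "Speech in Quiet"
    if noise or level_db > 65:   # noticeable noise floor, no speech
        return "Noise"
    return "Quiet"

print(classify_situation(72, speech=True, noise=True, car=False, music=False))
# -> Speech in Noise
```

Each detected class then sets an optimized starting point for the features, which the monitoring subsystems refine further, as described below.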

But the automatic steering of binax is much more complex than the pure classification approach used in most hearing aids up to now. There are numerous variations within a situation class, as shown in Figure 5. The "office conversation at 11:15 AM," the "business lunch at 12:30 PM," and the "meet friends at 10:00 PM" are all typical scenarios for speech in noise, yet the desired directionality is very different. Thus, after recognizing the main situation class, the work of the automatic steering is not yet done. Several monitoring subsystems are utilized to fine-tune the features in order to adapt to the numerous variations within a situation class. These subsystems control the beamwidth of the directionality, the level of the noise reduction, the activation of binaural features, etc. In addition, the information from the contralateral hearing aid (control signals and audio signals) is used to improve the effectiveness of the environmental monitoring and to enable the binaural features. Together, this complex system provides not just six possible feature sets, but thousands of different feature combinations, delivering a high level of user benefit in every slightly or significantly different acoustic situation.

The “Speech in Noise” Situation with binax

To better understand how the Automatic Steering works in a challenging situation, let's take a closer look at the Speech in Noise situation. Here, not only static environmental noises, but also sudden transient noises and noise from other speakers can reduce speech understanding. If the user arrives at a noisy party, the acoustic situation detection recognizes the Speech in Noise situation class, the features are first set to an optimized starting position for understanding speech in noise, and the monitoring subsystems then begin to continuously fine-tune these settings.

In less noisy conditions, the monaural multichannel adaptive automatic microphone system operates in both hearing aids to emphasize the frontal speaker by attenuating sounds from other directions. In addition, to reduce interfering speech, the subsystem for Directional Speech Enhancement, a directional noise reduction system, detects the incoming signals from several directions and actively reduces every signal that is not coming from the front (see Ramirez, Jons, & Powers, 2013, for a summary). If the sound environment becomes more challenging (e.g., more people arrive at the party and everybody gets louder and moves closer together), the subsystems recognize that standard directional technology can no longer provide enough speech emphasis, so the binaural Narrow Directionality starts to fade in. As Narrow Directionality has numerous options, the intensity of the frontal focus and the number of binaurally involved channels are also controlled by a monitoring subsystem according to the actual sound environment.
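
The gradual fade-in can be pictured as a blend weight driven by the estimated listening difficulty. The sketch below is a minimal assumption-laden model (the function name, the use of broadband SNR as the sole driver, and the 0-10 dB corner points are all illustrative), not the actual control law:

```python
def narrow_directionality_mix(snr_db, full_on_snr=0.0, off_snr=10.0):
    """Hypothetical blend weight (0..1) for fading the binaural Narrow
    Directionality in as the estimated SNR worsens. Above off_snr the
    monaural directional system suffices (weight 0); below full_on_snr
    the narrow binaural beam is fully active (weight 1); in between,
    the weight ramps linearly so the transition is gradual."""
    if snr_db >= off_snr:
        return 0.0
    if snr_db <= full_on_snr:
        return 1.0
    return (off_snr - snr_db) / (off_snr - full_on_snr)

print(narrow_directionality_mix(12.0))  # easy scene: 0.0, monaural only
print(narrow_directionality_mix(5.0))   # halfway: 0.5 blend
print(narrow_directionality_mix(-3.0))  # very noisy: 1.0, fully narrow
```

A continuous weight like this, rather than a hard on/off switch, is what lets the feature engage without an audible step.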

In parallel, further noise reduction features (such as eWindScreen binaural and stationary noise reduction) are used and controlled automatically, based on inputs from the acoustical environment.  This processing then provides continuous sound comfort and improved speech understanding in all variations of the Speech in Noise situation.

The “Car” Situation with binax

Another example of the complexity of the Automatic Steering system is the "Car" situation. Based on spectral and temporal analysis of the input signal, the binax hearing instruments can detect acoustically whether the user is traveling in a car. Herbig et al. (2014) reported 96% correct identification for the detection of the Car situation by Siemens hearing instruments. This situation is unique, as it usually is not possible for the driver to face his passengers when listening to them. Moreover, most of the disturbing signals come not from behind the driver but from the front, and these will not be attenuated by standard directional technologies.

To meet these special conditions, the Spatial SpeechFocus is activated automatically in the Car situation. Similar to other situation classes, a dedicated monitoring subsystem within the Car situation coordinates this binaural feature. Depending on the origin of the speech signal, the Spatial SpeechFocus detects the direction of the dominant speaker and activates the desired directionality (front, back or either side). By the coordinated automatic steering of the two hearing aids, the user will always be able to hear the speaker in the noisy car. Further subsystems control the amount and type of noise reduction in every channel to provide appropriate continuous sound comfort.
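
The core of the direction decision can be sketched as picking the candidate beam with the most speech energy. The function name, the four-beam dictionary, and the plain argmax rule are illustrative assumptions, not the shipped detector:

```python
def speechfocus_direction(beam_speech_levels):
    """Toy sketch of Spatial SpeechFocus direction selection: estimate
    the speech energy in four candidate beams (front, back, left,
    right) and steer the directionality toward the dominant speaker."""
    return max(beam_speech_levels, key=beam_speech_levels.get)

# Driver listening to a rear-seat passenger over road noise
levels = {"front": 0.20, "back": 0.90, "left": 0.30, "right": 0.25}
print(speechfocus_direction(levels))  # back
```

Because both instruments are steered in a coordinated way over the e2e link, the selected beam is consistent at the two ears.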

Fast Recognition & Unnoticeable Transition

In order to assure listener comfort, the recognition of the need for a different feature set should be as fast as possible; on the other hand, the transition should be slow and gradual enough that the user cannot hear any artifacts associated with the changes. The timing of the transition between two states has been optimized individually for every feature type in order to keep the changes imperceptible. For example, the amount of noise reduction is adjusted within milliseconds, but completely fading from one microphone mode to the other may take a few seconds.
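
Per-feature time constants of this kind are commonly realized with simple one-pole smoothers. The sketch below (the alpha values and frame loop are illustrative assumptions) shows how two features tracking the same target can move at very different speeds:

```python
def smooth(current, target, alpha):
    """One-pole smoother: take one update step toward the target.
    Larger alpha means a faster (shorter time constant) transition."""
    return current + alpha * (target - current)

# Per-feature smoothing speeds: noise reduction reacts within
# milliseconds, a microphone-mode crossfade takes seconds.
ALPHA_NOISE_REDUCTION = 0.5   # fast
ALPHA_MIC_MODE = 0.02         # slow, artifact-free fade

nr, mic = 0.0, 0.0            # both features start disengaged
for _ in range(10):           # 10 update frames after a scene change
    nr = smooth(nr, 1.0, ALPHA_NOISE_REDUCTION)
    mic = smooth(mic, 1.0, ALPHA_MIC_MODE)

print(round(nr, 3), round(mic, 3))  # nr ~0.999 (done), mic ~0.183 (fading)
```

The detection itself can flip the target instantly; it is only the smoothing stage that spreads the audible change over time.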

User Benefits of Automatic Steering

The real user benefits of the integrated binaural features were recently investigated. Thirty participants (mean age 70.5 years) were involved in the study. They had, on average, a typical downward-sloping hearing loss ranging from 30 dB in the lows to 70 dB in the highs. The users were fitted bilaterally and evaluated binax instruments with Automatic Steering for 14-28 days in their real-life situations. The binaural features, however, were activated for only 50% of the trial period. All subjects were requested to use the hearing aids in the following noisy situations:

  • Conversation in a loud and crowded restaurant or bar with at least four other guests.
  • Conversation as driver or front seat passenger in a car.

At the end of the home trials, the subjects rated their satisfaction with subjective speech intelligibility for these challenging listening situations (Figure 6).


Figure 6. The satisfaction with subjective speech intelligibility of the binax hearing aids without and with activated binaural features, using the Automatic Steering. The binaural features provide a remarkable improvement in speech intelligibility.

Even though the test subjects had different hearing losses, lifestyles, and user expectations, a substantial improvement was noted for the tested noisy scenarios when the binaural features were activated. In the "loud and crowded restaurant," the average improvement in satisfaction with speech intelligibility was 12%.

The Role of the Hearing Care Professional

It is important to mention the role of the hearing care professional (HCP) in this full process. During “First Fit” the HCP can choose between traditional fitting formulas as well as the binax fit formula, which is optimized for the binax instruments. The selected fitting strategy gives the basis for the gain settings of every situation.

The HCP has several opportunities to tune the parameters of the automatic steering system (Figure 7). For example, he or she can determine the effective strength of the noise reduction, which, technically, is reflected in the maximum and minimum attenuation levels applied depending on the particular acoustic situation. The HCP can also influence which directional microphone strategies are accessible to the automatic steering. For example, in a pediatric fitting it may be desirable to deactivate the Narrow Directionality option, so that the child hears more of the surrounding environment.


Figure 7. The HCP has several points of intervention in the Connexx 7 fitting software to control and adjust the advanced automatic steering system, such as Microphone Modes, eWindScreen binaural or situation dependent gain offset (Sound Equalizer).

The individual gain offsets for every situation can be checked and modified by the HCP in the Sound Equalizer. For example, if the user is generally satisfied with the Universal program (and so with the automatic steering), but needs to have more bass for music listening, the HCP can change the individual gain offset of the Music situation, without modifying anything in the other situations.

The everyday use statistics of the recognized classes are continuously logged into the memory of the hearing aid. Via datalogging, the hearing care professional can obtain accurate and objective information about the circumstances of hearing aid wearing, which can be a good starting point for further adjustments during follow-up visits.

Especially during the first fitting, there is no need to set up dedicated programs for the user on a "just in case" basis. These only present an unnecessary challenge for users, who then need to remember them and what exactly each special program is supposed to do. The binax platform provides a new user control possibility (Spatial Configurator) to override the functionality of the hearing aids within the Universal program, without any extra effort by the HCP.

The Role of the User

Despite the several adaptive functions available on today's hearing aids, approximately 50% of users still want to have control over their hearing aids; in fact, the lack of such control is one of their top complaints regarding hearing aid features (Kochkin, 2012). Given that they are the ones spending time in specific situations with the hearing aids, it is reasonable to expect that they know best how and what they want to hear. Having control over the sounds produced by the hearing aids gives users a greater sense of security and makes the devices more comfortable to use.

With the launch of the binax platform, a new control tool was created for users (in addition to the standard volume control and program change): the Spatial Configurator, which enables the user to manually manage the directional behavior of the hearing aids in a given environment. There are occasional ambiguous situations where the user wants to listen differently from how he normally does (intentional listening). Although such acoustically ambiguous situations are not frequent, it is important to ensure that the user can listen to his momentarily preferred sounds, even when the automatic directional system would select a different strategy in that situation. The Spatial Configurator provides user control over the following attributes of the directional system: Span and Direction (Figure 8).


Figure 8. In ambiguous situations the user might want to override the automatically selected microphone strategy in order to listen to what he intends. Spatial Configurator provides an easy to use user control both for span and direction.

Spatial Configurator

Spatial Configurator Span

The term "Span" refers to the width of the focus area around the hearing aids. Suppose, for example, that the user walks into a park to relax, but a couple in front of him is talking, and their conversation is automatically enhanced by the directional microphones. The user can override this frontal operation with Spatial Configurator Span by widening the sensitivity angle of the hearing instruments.

There are two options for controlling the Span. If the user has the easyTek wireless accessory and the easyTek App, these provide an easy-to-use interface with helpful visual feedback about the selected Span level (Figure 9). Alternatively, Spatial Configurator Span can be assigned to the rocker switch of the hearing aid, so the user can simply "zoom in" or "zoom out" of the sound field with the upper or lower switch.


Figure 9. Spatial Configurator Span control via the easyTek App. The user can override the automatic steering and dictate his listening preferences.

Controlling the Span is not just a simple manipulation of the directional system. Several other features are adjusted simultaneously in the background to create a real "zoom in/zoom out" experience. For example, when the user zooms to the front, the noise reduction is increased and the non-speech-related amplification is reduced, so that the sound of a frontal speaker can be picked up more effectively, even in a noisy environment.
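
The coupling of several settings to one Span level can be sketched as a single mapping function. All the numbers below (span levels 0-4, the beamwidth, noise reduction, and gain offset ranges) are made-up illustrative values, not product parameters:

```python
def apply_span(level):
    """Hypothetical mapping from a Spatial Configurator Span level
    (0 = widest, 4 = narrowest) to the coupled feature settings that
    create the zoom experience: as the beam narrows, noise reduction
    strengthens and non-speech amplification is trimmed."""
    beamwidth_deg = 180 - 35 * level     # 180 deg .. 40 deg
    noise_reduction_db = 2 * level       # 0 .. 8 dB of attenuation
    non_speech_gain_db = -1.5 * level    # 0 .. -6 dB gain offset
    return beamwidth_deg, noise_reduction_db, non_speech_gain_db

print(apply_span(0))  # widest: (180, 0, -0.0)
print(apply_span(4))  # fully zoomed in: (40, 8, -6.0)
```

Driving all three settings from one control is what makes the rocker switch feel like a single "zoom" rather than three separate adjustments.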

In a recent study, 36 subjects were asked to test the Spatial Configurator Span in several acoustically ambiguous situations. Consistent with the findings of Kochkin (2012), 50% of the subjects liked having a manual control. These subjects rated how useful they found this new user control feature (Figure 10).


Figure 10. The Spatial Configurator was tested by users in order to measure the benefit and usability of this new user control option.

They found the Spatial Configurator helpful for manually controlling the focus area of the binax devices. Using the rocker switch to manually control the Span was rated clear and simple by 86% of them.

Spatial Configurator Direction

Since the most common use case for Spatial SpeechFocus is when the wearer is driving and cannot turn to face the speaker, in the Universal program, it is only activated when a Car situation is detected. But of course, there are other use cases for this feature, such as when the wearer is walking with someone and carrying on a conversation at the same time (focus to the side), or if the wearer is wheelchair bound and needs to hear the caretaker pushing the wheelchair (focus to back). Spatial Configurator Direction allows the wearer to dictate the focus of the directional microphones, whether it is the front, back, or either side (Figure 11).


Figure 11. Spatial Configurator Direction control via the easyTek App. It allows the wearer to override the automatic steering and dictate the direction of the microphone focus.

The user interface for both the Span and the Direction control of the Spatial Configurator is intuitive and easy to interpret and operate. When the user leaves the ambiguous situation (e.g., going back to work after a walk in the park), he can return steering control to the automatic system by pressing the "A" button in the App. In addition, the binax devices include a subsystem that checks, 15 minutes after the Spatial Configurator is activated, whether the user is still in the same acoustic environment. If the user has already left that situation, the device automatically takes back control, without the need for any user intervention.
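
The auto-revert logic amounts to a timed override with an environment check. The class below is a minimal sketch under stated assumptions (the class name, the exact 15-minute threshold behavior, and a boolean "same environment" signal are all illustrative):

```python
class SpatialConfiguratorOverride:
    """Sketch of the manual-override timeout: roughly 15 minutes after
    the user engages the Spatial Configurator, the device checks
    whether the acoustic environment is still the one in which the
    override was made; if not, automatic steering silently resumes."""
    TIMEOUT_S = 15 * 60

    def __init__(self):
        self.active = False
        self.activated_at = None

    def activate(self, now_s):
        """User engages the manual override at time now_s (seconds)."""
        self.active = True
        self.activated_at = now_s

    def check(self, now_s, same_environment):
        """Periodic check; returns True while the override is active."""
        if (self.active and now_s - self.activated_at >= self.TIMEOUT_S
                and not same_environment):
            self.active = False   # hand control back to the automatic
        return self.active

sc = SpatialConfiguratorOverride()
sc.activate(0)
print(sc.check(600, True))    # 10 min, too early to revert -> True
print(sc.check(900, True))    # 15 min but same environment -> True
print(sc.check(900, False))   # 15 min and scene changed -> False
```

Reverting only when the environment has actually changed avoids snatching control back from a user who deliberately chose an unusual focus for the scene he is still in.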


Summary

The binax platform creates a real binaural link between the two hearing instruments in order to achieve better directionality, environmental awareness, and noise reduction. The new features provide better speech understanding and sound comfort in certain situations. Users often have high expectations of modern hearing instruments: they expect them to perform in every situation just as they would like. The notion is that users can forget they are wearing hearing aids, without having to manually change programs or adjust the volume when the acoustic environment changes.

Automatic Steering is a high level of automatic control over the hearing aid features, which applies feature and gain changes, similar to manual program changes and volume adjustments, but fully automatically within Program 1, "Universal." The binax instruments use improved Automatic Steering with integrated binaural features. This advanced steering does not monitor just a few typical acoustic attributes (such as noise level and speech presence), and it does not provide just a few possible feature combinations. With e2e wireless 3.0 and 48 signal processing channels, a very precise, real-time analysis of the environment can be achieved. Several monitoring subsystems work in parallel to continuously fine-tune the feature settings, in order to meet the momentary user expectation in the given situation. The result of this complex system is thousands of possible feature combinations provided to the user every day, with unnoticeable transitions between the settings.

For ambiguous listening situations, where the user wants to listen to something different from the automatically selected target, a new user control option has been introduced (Spatial Configurator) to temporarily override the automatic focusing system.

And last but not least, none of the binaural features of the binax platform requires remarkable power consumption. The practical benefits of the improved Automatic Steering therefore come without compromising battery life; it can be used all the time without any manual activation.


References

Beilin, J., & Herbig, R. (2014). Binaural processing for understanding speech in background noise (White paper). Retrieved from

Berghorn, K., Wilson, C., & Pischel, C. (2014). Reducing the annoyance of wind noise with binax eWindScreen binaural. (Sivantos white paper – in submission).

Herbig, R. (2014). Addressing critical hearing aid user needs with binaural features: The practical benefits of siemens binax. (White paper). Retrieved from

Herbig, R., Barthel, R., & Branda, E. (2014). A history of e2e wireless technology. Hearing Review, 21(2), 34-37.

Hougaard, S., Ruf, S., & Egger, C. (2013). EuroTrak + JapanTrak 2012: Societal and personal benefits of hearing rehabilitation with hearing aids. Hearing Review, 20(3), 16-35.

Kamkar-Parsi, H., Fischer, E., & Aubreville, M. (2014). New binaural strategies for enhanced hearing. Hearing Review, 21(10), 42-45.

Kochkin, S. (2012). MarkeTrak VIII: The key influencing factors in hearing aid purchase intent. Hearing Review, 19(3), 12-25.

Lotter, T., Schätzle, U., & Fischer, T. (2014). e2e 3.0 – Energy efficient inter-aural audio transmission. (White paper). Retrieved from

Mueller, H. G., Ricketts, T., & Bentler, R. A. (2014). Modern hearing aids. San Diego, CA: Plural Publishing.

Powers, T. A., & Froehlich, M. (2014, Nov). Clinical results with a new wireless binaural directional hearing system. Hearing Review. Retrieved from

Ramirez, P., Jons, C., & Powers, T. A. (2013). Optimizing noise reduction using directional speech enhancement. Hearing Review, 11(7), 14.

Sauer, G., Dickel, T., & Lotter, T. (2014). Acoustic Wireless Control – connecting smart phones to hearing instruments. (White paper). Retrieved from


György Várallyay, Ph.D.

Dr. György Várallyay is an Audiology and Training Manager for Sivantos GmbH in Erlangen, Germany. Dr. Várallyay received his M.Sc. and Ph.D. in Engineering from the Budapest University of Technology and Economics, Hungary. With 15 years of experience in biomedical research and 10 years in the hearing aid industry, he is currently responsible for competitive benchmarking and analysis of hearing aid technology, and for audiology training in the EMEA and LA regions. He is a member of the Complex Committee on Acoustics at the Hungarian Academy of Sciences.


Sebastian Pape, Dipl.-Ing.

Sebastian Pape is Head of the Audiology Expert Team and a project manager within the global Sivantos research and development team. Recent projects he has led include the development of the Siemens proprietary fitting formula and the audiological optimization of several signal processing features. Mr. Pape earned his Dipl.-Ing. degree at the Technical University of Ilmenau. Prior to joining Sivantos, he gained extensive experience working with a variety of innovative recording and sound reproduction technologies, always in order to achieve the best possible listening experience.


Carol Meyers, Au.D.

Dr. Carol Meyers is an Educational Specialist for Signia.