AI headphones tune out the crowd, amplify a single voice with just a glance

A new type of headphones allows users to isolate a single voice in a crowd by looking at the speaker for just a few seconds.


Noise-cancelling headphones have become very good at creating an auditory blank slate. Selectively letting certain sounds from a wearer’s environment back through that erasure, however, still challenges researchers.

The latest version of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers – sensing when they’re in conversation – but the user has little control over whom to listen to or when this happens.

Now, researchers have developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to ‘enrol’ them. 


The system, called ‘Target Speech Hearing’, then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time, even as the listener moves around in noisy places and no longer faces the speaker.

“We tend to think of AI now as web-based chatbots that answer questions,” says senior author Shyam Gollakota, a University of Washington professor in the Paul G. Allen School of Computer Science & Engineering. 

“But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices, you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”

To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. 

The sound waves from that speaker’s voice should then reach the microphones on both sides of the headset simultaneously; the system tolerates a margin of error of 16 degrees.
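
In signal-processing terms, facing the speaker drives the time difference of arrival between the two ear microphones towards zero. The following is a minimal sketch of such a direction check, not the team’s released code; the 0.18 m microphone spacing and the cross-correlation approach are illustrative assumptions:

```python
import numpy as np

def is_facing_speaker(left, right, sample_rate=16000,
                      max_angle_deg=16.0, mic_spacing_m=0.18,
                      speed_of_sound=343.0):
    """Return True if the dominant voice arrives at both ear microphones
    at (nearly) the same time, i.e. the wearer is facing the speaker."""
    # Lag (in samples) that maximises the cross-correlation of the channels.
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    delay_s = lag / sample_rate

    # A source max_angle_deg off-axis adds a path difference of roughly
    # mic_spacing * sin(angle) between the two microphones.
    max_delay_s = mic_spacing_m * np.sin(np.radians(max_angle_deg)) / speed_of_sound
    return abs(delay_s) <= max_delay_s
```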

The headphones send that signal to an onboard embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns.
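
Conceptually, this enrolment step boils the few seconds of captured audio down to a compact ‘voiceprint’ embedding of the target speaker. A sketch of the idea, assuming a hypothetical pretrained speaker encoder rather than the team’s actual networks:

```python
import torch
import torch.nn.functional as F

class TargetSpeakerEnrolment:
    """Keep a running 'voiceprint' embedding for the enrolled speaker."""

    def __init__(self, encoder: torch.nn.Module):
        self.encoder = encoder   # hypothetical pretrained speaker encoder
        self.embedding = None

    @torch.no_grad()
    def enrol(self, clip: torch.Tensor) -> torch.Tensor:
        """clip: (2, n_samples) binaural audio from the 3-5 second glance."""
        new = F.normalize(self.encoder(clip.unsqueeze(0)).squeeze(0), dim=0)
        if self.embedding is None:
            self.embedding = new
        else:
            # Re-running enrolment refines rather than replaces the voiceprint.
            self.embedding = F.normalize(self.embedding + new, dim=0)
        return self.embedding
```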

The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.
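
The playback stage can then be pictured as a streaming loop: short chunks of binaural audio go in, and only the enrolled voice comes out, conditioned on the stored voiceprint. Again a hypothetical sketch, not the published implementation:

```python
import torch

CHUNK = 1280  # 80 ms of 16 kHz audio per hop (illustrative)

@torch.no_grad()
def stream_target_speech(mic_stream, extractor, voiceprint):
    """Yield audio chunks containing only the enrolled voice.

    mic_stream: iterable of (2, CHUNK) binaural tensors from the headset mics
    extractor:  hypothetical causal network conditioned on the voiceprint
    """
    state = None  # recurrent state carried across chunks keeps output continuous
    for chunk in mic_stream:
        clean, state = extractor(chunk.unsqueeze(0),
                                 voiceprint.unsqueeze(0), state)
        yield clean.squeeze(0)  # hand straight to the headphone output
```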

The team tested the system on 21 subjects, who on average rated the clarity of the enrolled speaker’s voice nearly twice as high as that of the unfiltered audio.

This work builds on the team’s previous ‘semantic hearing’ research, which allowed users to select specific sound classes – such as birds or voices – that they wanted to hear and cancel other sounds in the environment.

Currently, the TSH system can enrol only one speaker at a time, and it can enrol a speaker only when no other loud voice is coming from the same direction as the target speaker’s. If a user isn’t happy with the sound quality, they can run another enrolment on the speaker to improve the clarity.

The team is working to expand the system to earbuds and hearing aids in the future.

The code for the proof-of-concept device is available for others to build on. The system is not commercially available.
