Personalizing Audio: Four Ways AI/ML Technology Can Improve Experiences

December 5, 2024 | Xperi | Sven Mevissen
Director, Streaming & Content Technologies, Partnerships

Today’s audio experiences — whether at home, in vehicles or on personal devices — are more immersive than ever before, and the quality of the audio seemingly improves with each passing year. My colleague Matt Byrne wrote about this last year, with a particular focus on how the cinema experience of audio has been coming into the home theatre, and why home theatre sound is better than ever. This evolution extends beyond adding audio channels and replicating the cinema experience. In fact, much of the audio technology being developed today — and certainly in the future — is focused on creating personalized experiences, so that each individual’s preferences, needs and environmental conditions are met. Let’s explore how innovations in AI and machine learning (AI/ML) are reshaping the future of audio technology.

Optimizing the audio experience

Historically, most devices that reproduce audio have offered only a handful of fairly generic sound settings. At most, these devices allow the listener to tweak the treble and bass frequencies, adjust the balance between speakers (assuming there is more than one speaker in the system) and little else. However, everyone hears differently, and there is tremendous variation in playback devices and listening environments, all of which should be taken into account.

What if the user could have an audio experience that was always optimized for their specific needs? For example, when a user moves from a quiet office to a busy train, or when their home’s air conditioning system turns on during a crucial movie scene, the audio technology should be able to automatically adjust to maintain optimal audio quality.

These capabilities go beyond traditional hardware improvements, though. This is about conditional and environmental awareness: leveraging technologies like AI/ML to process audio content so it is clearer, richer and better sounding, regardless of where the listener happens to be, while dynamically adapting to the surrounding environment so the audio can be optimized accordingly.

Leveraging AI/ML technologies not only means better sound in the moment, but also means that these audio systems can learn and adjust to different environments more effectively over time. More than just making static tweaks to balance or tone, these systems are advanced enough to make real-time adjustments in response to room and device conditions, and to select the optimal settings and sound modes too. This results in a listening experience that is tailored for both the individual user and their environment, anticipating changes and adapting as they arise.
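To make the idea of real-time environmental adaptation concrete, here is a minimal sketch of one building block such a system might use: estimating the ambient noise level from a microphone frame, smoothing that estimate over time so the system doesn’t react to every short sound, and mapping it to a bounded playback boost. All class and parameter names here are hypothetical, and real products use far more sophisticated models; this only illustrates the adapt-to-conditions loop described above.

```python
import math

def rms_dbfs(frame):
    """RMS level of a mono audio frame (floats in [-1, 1]) in dBFS."""
    if not frame:
        return -float("inf")
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20 * math.log10(rms) if rms > 0 else -float("inf")

class NoiseAdaptiveGain:
    """Illustrative sketch: raise playback gain as ambient noise rises,
    smoothing the noise estimate so the gain doesn't pump on brief sounds."""

    def __init__(self, floor_db=-60.0, max_boost_db=12.0, smoothing=0.9):
        self.floor_db = floor_db          # noise level treated as "quiet"
        self.max_boost_db = max_boost_db  # cap on automatic boost
        self.smoothing = smoothing        # exponential smoothing factor
        self.noise_db = floor_db

    def update(self, mic_frame):
        """Feed one microphone frame; return the playback boost in dB."""
        level = max(rms_dbfs(mic_frame), self.floor_db)
        # Exponentially smoothed noise estimate (slow reaction by design).
        self.noise_db = (self.smoothing * self.noise_db
                         + (1 - self.smoothing) * level)
        # 0 dB boost in a quiet room, rising with noise up to the cap.
        boost = (self.noise_db - self.floor_db) * 0.3
        return min(max(boost, 0.0), self.max_boost_db)
```

In a quiet room the boost stays at 0 dB; as sustained noise (a train, an air conditioner) raises the smoothed estimate, the boost climbs toward the cap, then falls back when the environment quiets down.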

Accessibility and inclusion

The advancement of audio technology can be particularly significant for accessibility. Many of us have experienced watching TV with an elderly parent or grandparent whose hearing is impaired, and often the TV is turned to a high volume. Traditionally, this has meant an uncomfortable compromise that leaves everyone unsatisfied. But as more personalization is factored into the audio experience, we may begin to see new ways of enhancing the individual experience without sacrificing the collective one. For instance, Apple recently announced their next-generation AirPods Pro, which claim to offer a clinical-grade Hearing Aid capability. If this leads more people to wear headphones like these in place of traditional hearing aids, then two people watching TV together could soon each wear wireless headphones and receive a personalized audio stream, optimized for their individual needs and preferences.

Age-related hearing loss is only one of the more common ways hearing experiences differ. Some individuals are sensitive to certain frequencies, for instance, while others have congenital hearing loss. AI-powered audio devices and systems can help in a variety of these cases by intelligently attenuating certain frequencies, or by providing speech-to-text services for people who need a visual component to augment their experience.
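The "attenuating certain frequencies" part of this is classic signal processing. As a rough sketch, a peaking-EQ biquad filter (coefficients from the well-known Audio EQ Cookbook formulas) can cut a band around a frequency a listener finds uncomfortable; the 4 kHz center and −12 dB cut below are arbitrary example values, not anything specific to a shipping product, and a real personalized system would choose them from a hearing profile.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Audio-EQ-Cookbook peaking EQ biquad coefficients.
    A negative gain_db cuts the band centered at f0 (Hz), sample rate fs."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    a_c = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    a0 = a_c[0]
    return [x / a0 for x in b], [x / a0 for x in a_c]  # normalized so a[0] == 1

def biquad(samples, b, a):
    """Direct-form I filtering of a mono sample list."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Feeding a tone at the center frequency through this filter attenuates it by roughly the configured cut, while content well outside the band passes through largely unchanged; a personalized device would simply apply a bank of such filters tuned to the listener.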

The undercurrent of all of this is that each experience is personalized for the individual user, and that the technologies can leverage AI/ML behind the experience to adapt it dynamically to changing situations.

Benefits beyond sound quality

AI/ML, while powering technologies that enable a personalized audio experience, can bring other benefits as well. For instance, many of these upcoming devices will have small form factors, and AI/ML can help optimize power consumption to preserve battery life. It can also improve thermal management by reducing heat generation, provide early indicators of faulty hardware or potential issues, and generally increase the longevity of a device.

DTS Clear Dialogue

While we’ve touched on a few ways that audio technology will advance in the coming year as AI/ML expands, there is a recent example of how this audio innovation is already taking place. We recently announced our DTS Clear Dialogue solution, which uses AI/ML techniques to help deliver a personalized audio experience to TV viewers. There are a number of reasons why a solution like DTS Clear Dialogue is needed, but the primary driver is that dialogue on TV has become increasingly difficult to understand — and, due to factors such as hearing impairment and device limitations, everyone hears dialogue differently. As my colleagues Samara Winterfeld and Martin Walsh wrote separately earlier this year, this area is fertile ground for innovative solutions that allow users to easily personalize their home audio experience.

Why does all of this matter?

There are several reasons why these considerations are important. First of all, most manufacturers are motivated to differentiate their product lines in order to address a range of use cases, thereby expanding their existing customer base and gaining new types of customers. Because of this motivation, it is likely that we will soon see new and improved devices capable of running the advanced processing that AI/ML requires. And of course, as we discussed, these technologies help make audio experiences more inclusive for underserved users, such as those with hearing impairments, various forms of neurodivergence or noise sensitivities.

Consumers have been demanding more personalized experiences, and manufacturers are answering that demand with a range of solutions. There is still work to be done to make these audio experiences more inclusive for more people, but the trend is moving toward leveraging AI/ML technologies to deliver a greater degree of personalization for audio.

To learn more about DTS audio solutions, click here.