How Samsung’s Audio Lab is Using AI in Speakers

  • Samsung Research’s audio lab is located in Los Angeles, USA
  • The first product that Samsung’s audio lab worked on was the Radiant 360 loudspeaker

Thanks to technologies such as artificial intelligence, machine learning and the Internet of Things, consumer electronics are getting smarter by the day. These new, improved smart devices present brands with opportunities both in terms of business and in creating value-for-money products.

Market Research Future (MRFR) has published a research report on the global IoT in consumer electronics market that projects strong expansion at a 24.16 per cent CAGR (Compound Annual Growth Rate) between 2017 and 2023. In terms of value, the market is estimated to be worth US$124 billion.

In a blog post on Samsung’s website, Allan Devantier, vice president and head of Samsung’s audio lab, shed light on how the consumer electronics giant is using AI to make its portfolio of audio products smarter and next-generation ready. Located in Los Angeles, the 18,000-square-foot R&D lab is equipped with multiple anechoic chambers, listening rooms and other state-of-the-art applied research facilities.

“The first product we worked on was the Radiant 360 loudspeaker – it was actually the first mass-market speaker to feature 360-degree sound. We were the industry leader, and now 360-degree speakers have grown very popular in the market,” stated Devantier.

Q-Symphony solution announced at this year’s CES

According to the company, Samsung’s OTS (Object Tracking Sound) technology improves upon soundbar technology, which delivers sound only in fixed directions. The lab developed TVs that incorporate speakers not just on the bottom and sides of the screen, but in the top left and right sections as well. This allows OTS to use deep learning to recognize what kind of content is being displayed on the screen and deliver multichannel sound accordingly.
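The idea of content-aware multichannel delivery can be sketched in a few lines. This is purely illustrative and not Samsung’s implementation: the content labels, speaker groups and gain values below are all hypothetical, standing in for whatever a trained classifier and tuned profiles would provide.

```python
# Illustrative sketch (not Samsung's code): a content label chosen by a
# classifier selects a gain profile across the TV's speaker groups
# (bottom, sides, top-left, top-right), approximating content-aware
# multichannel routing. All labels and gain values are hypothetical.

PROFILES = {
    "sports": {"bottom": 0.8, "sides": 1.0, "top_left": 0.9, "top_right": 0.9},
    "movie":  {"bottom": 1.0, "sides": 0.9, "top_left": 0.7, "top_right": 0.7},
    "news":   {"bottom": 1.0, "sides": 0.6, "top_left": 0.4, "top_right": 0.4},
}

def route_channels(content_label: str, master_gain: float = 1.0) -> dict:
    """Return per-speaker-group gains for the recognised content type."""
    # Fall back to a neutral default profile for unrecognised content.
    profile = PROFILES.get(content_label, PROFILES["movie"])
    return {group: gain * master_gain for group, gain in profile.items()}

gains = route_channels("sports")
```

In a real system the label would come from a deep-learning model analysing the video and audio streams frame by frame; the lookup-table stage shown here is only the final, simplest step of that pipeline.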

On the blog, Devantier explained, “Our focus starts at the ear of the listener then expands out to the listening environment – the transducers (woofers and tweeters), amplifiers, and digital signal processors (DSPs).”

AI to reshape the audio industry

According to Samsung, AI is set to reshape the audio industry, and the company is working to anticipate the changes it will bring about. Asked to outline what he sees as AI’s role in the audio industry, Devantier said, “I think we can use AI to help our loudspeakers play louder, with more perceived bass and less perceived distortion. AI will also help make old recordings sound better and allow our loudspeakers to adapt to different listening environments.”

AI is also empowering future solutions by enabling features like Active Voice Amplifier (AVA). Samsung’s AVA feature makes it possible for the TV to recognise the acoustic environment of the viewing space and provide sound that is optimised not only for the content being displayed on the screen, but also for the space the consumer is in.

In 2020, sensors embedded in QLED solutions will make optimised viewing possible even in rooms with high levels of noise. For instance, if a user is watching something and a member of their family switches on a blender in the adjoining kitchen, the system will adjust in real time to ensure that the dialogue remains audible to the viewer despite the unexpected peripheral noise.
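The noise-compensation behaviour described above can be sketched minimally. This is an assumed model, not Samsung’s actual algorithm: the threshold, the linear boost rule and the cap are all hypothetical parameters chosen for illustration.

```python
# Illustrative sketch (assumed behaviour, not Samsung's code): when the
# measured ambient noise exceeds a threshold, raise the dialogue level so
# speech stays audible, capped at a maximum boost. All values hypothetical.

def dialogue_gain(noise_db: float, threshold_db: float = 55.0,
                  max_boost_db: float = 6.0) -> float:
    """Return the extra dialogue gain in dB for the measured room noise."""
    if noise_db <= threshold_db:
        return 0.0  # quiet room: no boost needed
    # One dB of boost per dB of noise over the threshold, capped.
    return min(noise_db - threshold_db, max_boost_db)

dialogue_gain(50.0)  # quiet room
dialogue_gain(58.0)  # blender switched on nearby
dialogue_gain(80.0)  # very loud room: boost is capped
```

A production system would derive the noise estimate from the TV’s built-in sensors and apply the gain only to the speech band or centre channel, rather than to the full mix as this toy function implies.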

As far as how his lab will contribute to delivering better audio going forward, Devantier says it will continue to meet the needs of customers who are looking for “high quality and portable sound” while improving and expanding its OTS technology. “Maybe one day we will develop a loudspeaker that sounds great, plays very loud and is nearly invisible or exceptionally small and portable,” Devantier concludes. “Until that time, we will continue to produce solutions that make incremental improvements to sound quality.”