OPINION: If there’s been one buzzword that’s dominated tech (or culture in general) this year, it’s artificial intelligence. The way it’s marketed, most brands would have you think it’s a brand new concept – except it’s been around for years.
When it comes to headphones, AI has already been strutting its stuff, so asking whether it’s the next frontier is a slightly odd question – it’s been there in the background for a while. The real question to ask is: does it work? Like most things, sometimes it does, other times it doesn’t.
There are various versions of artificial intelligence about that do different things. Machine learning is the version you’ve most likely come across in headphones without ever noticing. It’s a statistical model that observes data and applies what it has learned to new tasks. In headphones, machine learning is often used for calls: it can learn your voice and how it sounds, then try to focus on it and clear away the noise around it. But machine learning is just one part of the picture, and as I mentioned before, it doesn’t always work.
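To make that a little more concrete, here’s a minimal sketch of the general idea: learn a profile of the wearer’s voice, then keep the frequencies where that voice typically sits and push everything else down. The function names, frame sizes and the crude averaged-spectrum “model” are illustrative assumptions on my part – real earbuds use trained models and far more sophisticated signal processing than this.

```python
# Illustrative toy only: a crude stand-in for the kind of voice model a headset might learn.
import numpy as np

def spectral_profile(voice_clip, frame=512, hop=256):
    """Average magnitude spectrum of the wearer's voice - the 'learned' model here."""
    window = np.hanning(frame)
    frames = [voice_clip[i:i + frame] * window
              for i in range(0, len(voice_clip) - frame, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def enhance_call_audio(mixture, voice_profile, frame=512, hop=256, floor=0.1):
    """Keep frequency bins where the learned voice typically sits, attenuate the rest."""
    window = np.hanning(frame)
    # Per-bin gain: 1.0 where the voice profile peaks, down to `floor` elsewhere.
    gain = floor + (1.0 - floor) * (voice_profile / voice_profile.max())
    out = np.zeros(len(mixture))
    norm = np.zeros(len(mixture))
    for i in range(0, len(mixture) - frame, hop):   # overlap-add processing
        spec = np.fft.rfft(mixture[i:i + frame] * window)
        out[i:i + frame] += np.fft.irfft(spec * gain) * window
        norm[i:i + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

The point of the sketch is simply that the “learning” happens ahead of time, from recordings of your voice, and the per-call work is just applying that learned model – which is also why it falls over when your voice or surroundings don’t match what it learned.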
Another version is deep learning, which relies on neural networks that process data in a way loosely modelled on the human brain. It’s a version of AI that’s become popular in TVs (especially for upscaling lower-quality sources) but, as far as I know, isn’t used in headphones.
I imagine the power and resources required would be too much for a pair of headphones to handle, though researchers in Washington have developed deep learning algorithms that let users pick which sounds they want to hear, preserving some sounds and cancelling out others.
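The broad shape of that idea, very roughly, is a network that separates incoming audio into sound classes and only lets the listener-selected ones through. The sketch below uses a random placeholder where the trained network would go, and the class names are invented for illustration – it shows the structure of the approach, not the actual research system.

```python
import numpy as np

SOUND_CLASSES = ["speech", "birdsong", "siren", "traffic"]  # made-up example classes

def class_masks(spectrogram):
    """Stand-in for a trained separation network: one soft mask per sound class,
    normalised so the masks sum to one in every time-frequency bin."""
    rng = np.random.default_rng(0)  # random placeholder, NOT a real model
    raw = rng.random((len(SOUND_CLASSES),) + spectrogram.shape)
    return raw / raw.sum(axis=0)

def keep_selected_sounds(spectrogram, keep):
    """Pass through only the sound classes the listener has chosen to hear."""
    masks = class_masks(spectrogram)
    combined = sum(masks[SOUND_CLASSES.index(name)] for name in keep)
    return spectrogram * combined

# e.g. preserve speech and birdsong, cancel sirens and traffic:
# quieter_world = keep_selected_sounds(spec, ["speech", "birdsong"])
```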
That hasn’t made its way to any commercially available headphones yet. AI is also used to cancel noise, but again, it’s nothing especially new – adaptive noise-cancellation algorithms that detect the noise around you and automatically change the level of suppression have been around for years.
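Stripped to its essentials, that adaptive behaviour is just a feedback loop: measure how loud the world is through the outer microphones, map that to a suppression strength, and smooth the changes so they aren’t audible. The thresholds, levels and smoothing factor below are invented purely to illustrate the loop, not taken from any real product.

```python
import numpy as np

def ambient_level_db(outer_mic_samples):
    """Rough loudness of the outside world from the feedforward mics, in dBFS."""
    rms = np.sqrt(np.mean(np.square(outer_mic_samples)) + 1e-12)
    return 20 * np.log10(rms)

def update_suppression(level_db, current_strength, smoothing=0.9):
    """Pick a target ANC strength for the measured noise level, then smooth it
    so the change isn't an audible jump. Thresholds are illustrative guesses."""
    if level_db > -20:        # loud: traffic, aeroplane cabin
        target = 1.0
    elif level_db > -40:      # moderate: office, cafe
        target = 0.6
    else:                     # quiet room
        target = 0.3
    return smoothing * current_strength + (1 - smoothing) * target

# Called once per audio block, e.g.:
# strength = update_suppression(ambient_level_db(mic_block), strength)
```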
The other version of AI is generative AI, but that’s about creating new content – text, video, audio, images; ChatGPT and others of its ilk – and doesn’t have much to do with headphones, so I’ll ignore it here.
The likes of Samsung and Google would have you believe that AI is transformative to the experience of using their new headphones, but I’d say it’s mostly marketing to spice up and sell customers on artificial intelligence. Most of the ‘AI’ features you’ll find on the Galaxy Buds 3 Pro or Pixel Buds Pro 2 aren’t in the headphones themselves; they require a compatible Samsung Galaxy smartphone or a Google Pixel phone with a Tensor chip to work.
What about the future?
So is AI the next frontier for headphones? To say that it could be is admittedly a weak answer, but at present any emerging AI features are not powered by the headphones themselves but by the smartphones they’re tethered to. Plus, machine learning is hit and miss with calls – Bose uses it across its headphones and I’d still say it’s the weakest aspect of their overall performance.
What about adaptive AI noise-cancelling or sound? I honestly can’t hear the difference between adaptive and ‘standard’ modes. If there were a switch you could flick to jump immediately between the two, I’m not sure you’d be able to hear it, and in some cases over the years, firmware updates have been known to make noise-cancelling worse instead of better.
So my view on AI in headphones is that we’re still in the trial and error stages, and it could still use significant improvement. Right now, like AI in a lot of fields, it’s marketing bingo – either trying to make something that’s already been available look more flashy, or plumping up something that’s really not as smart as we’re led to believe. The AI frontier is already here, but that doesn’t mean it’s the shiny new future some would have you think it is.
The post Sound & Vision: Is AI the next frontier for headphones? appeared first on Trusted Reviews.