AudioTelligence raises $8.5M Series A to bring its ‘autofocus for sound’ to voice assistants


AudioTelligence, a startup that spun out of University of Cambridge-founded CEDAR Audio, has raised $8.5 million in Series A funding for its “autofocus for sound”.

Leading the round is Octopus Ventures, with participation from existing investors Cambridge Innovation Capital, Cambridge Enterprise, and CEDAR Audio.

Founded in 2017 and based in Cambridge, U.K., the company has developed data-driven “blind” audio signal separation technology that removes background noise, enabling the listener, whether human or machine, to hear the person speaking more clearly.
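AudioTelligence hasn’t published the details of its algorithm, but the general idea of “blind” source separation can be sketched with a toy example: given only the mixed signals picked up by two microphones, recover the underlying sources without knowing the mixing in advance and without training on the voices involved. The snippet below is a minimal illustration using scikit-learn’s FastICA as a stand-in; the signals, sample rate and mixing matrix are invented for the example and have nothing to do with the company’s implementation.

```python
# Illustrative sketch only: AudioTelligence's method is proprietary. This uses
# independent component analysis as a generic example of "blind" separation.
import numpy as np
from sklearn.decomposition import FastICA

rate = 16_000                                   # assumed sample rate (Hz)
t = np.arange(rate * 2) / rate                  # two seconds of audio

# Two stand-in "sources": a voice-like tone and background babble.
speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
babble = np.random.default_rng(0).normal(scale=0.7, size=t.shape)
sources = np.column_stack([speech, babble])

# Each microphone hears a different, unknown mix of the sources.
# Nothing below assumes the mics are "matched" or pre-calibrated.
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
mics = sources @ mixing.T                       # shape: (samples, n_mics)

# Blind separation: recover the sources from the mixtures alone,
# with no prior training on the voice or the noise.
ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(mics)             # columns ≈ separated sources (up to scale/order)

print(estimated.shape)                          # (32000, 2)
```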

Its potential commercial applications are wide-ranging, from voice assistants operating in noisy environments to smart speakers, smart TVs and set-top boxes, where broadcast sound and other background noise can interfere with a device’s ability to perform speech recognition.

Another obvious use case, which the company is also exploring, is hearing assistance for people who struggle to hear in noisy crowds. In fact, the original impetus for the tech was to solve the so-called “cocktail party problem,” founder and CEO Ken Roberts told me in a video call last week, during which I was shown a live demonstration of AudioTelligence’s tech working in a very noisy cafe. As controlled as that demo may have been, the results were impressive nonetheless.

Roberts also explained that AudioTelligence intends to pursue a licensing strategy rather than build direct-to-consumer hardware of its own, and recently demonstrated the tech’s capability at CES, where it saw a lot of interest from OEMs and others (90 business leads in 4 days, apparently). Furthermore, I’m told that tests with an undisclosed home assistant platform showed that the sentence recognition rate in noisy conditions jumped from 22% to 94%.

As for what might set AudioTelligence’s background noise removal tech apart from existing solutions, Roberts said it doesn’t require “matched” microphones, which makes it cheaper and easier to implement, and it doesn’t require the user to train the algorithm beforehand. This, the company claims, means AudioTelligence is able to recognise new background sounds and new voices in real time and adjust its “focus” accordingly.

In addition, the company says the tech offers high performance with very low latency, low enough to retain lip sync, which is crucial for hearing assist applications.
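For a sense of scale, a frame-based audio pipeline adds at least one block of buffering delay, so the processing block size bounds how low the latency can go. The figures below are generic assumptions for illustration, not AudioTelligence numbers.

```python
# Rough buffering-latency arithmetic for a frame-based audio pipeline.
# Sample rate and block sizes are illustrative assumptions only.
sample_rate_hz = 16_000                     # common rate for speech processing
for block in (128, 256, 1024, 4096):        # samples per processing frame
    latency_ms = block / sample_rate_hz * 1000
    print(f"{block:>5}-sample blocks add {latency_ms:5.1f} ms of buffering delay")
```

Roughly speaking, once audio drifts more than a few tens of milliseconds from the video or from the speaker’s visible lips, the mismatch starts to become noticeable, which is why the smaller block sizes matter for hearing assist use.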

“Our solution doesn’t need calibrating or training, and the code is production ready,” says Roberts. “This means existing devices can be easily upgraded to AudioTelligence with no more than a software update”.

Meanwhile, AudioTelligence plans to use the new capital for further “breakthrough” product development and to support new partnerships with technology providers. This will see the startup triple its headcount over the next three years.
