Voice technology is everywhere, integrated into devices all around us, from voice-activated alarm clocks to voice-controlled vacuum cleaners! As this market grows, cutting-edge voice devices need to be scalable, built on a proven foundation, and supported by a broad software ecosystem.
Chris Shore, Director of Embedded Solutions at Arm, recently sat down with one of the leaders and visionaries in voice-based engineering: Ken Sutton, President and CEO of Yobe, Inc.
Picture this: you’re in a loud room full of conversations, such as a dinner party. You want your Amazon Echo to turn the music down or change the song, but the wake word isn’t picked up because your voice is lost in the noise. When Ken demonstrated his Yobe-enabled device, however, he could stand across the room with music blaring next to it, and his voice was picked up cleanly.
This is one example of what Yobe is pioneering, and why we are excited that our DSP software engineers are working closely with them.
Yobe’s story…
Yobe was co-founded in 2014 by Ken Sutton, and S. Hamid Nawab, PhD (MIT), an internationally renowned researcher and professor of electrical and computer engineering at Boston University.
Today, Yobe is revolutionizing the field of signal processing with its “Signal Processing That Thinks” platform. VISPR (Voice Identification System for User Profile Retrieval) was Yobe’s first product launch. In its commercial form, it addresses one of the most persistent challenges of voice technology in the market today: the inability of smart, connected devices to accurately identify, track, and personalize voice interactions in uncontrolled, noisy, everyday environments.
Yobe is solving this problem by enabling voice-based products to identify, track, and personalize a voice from just the wake word, reducing wake-word and speech recognition errors by up to 85% in high-noise environments.
How do you define success at Yobe?
We will consider ourselves successful when the voice interface between humans and their connected devices is seamless. For this to happen, voice technologies need to work effectively in any auditory environment. Issues of poor speech recognition and speaker identification will need to be addressed before there can be true interface personalization. We see Yobe enabling multiple use cases where voice plays an important role, whether that is smart device user experience, business productivity, or even mission-critical applications that rely on voice commands.
We believe that Yobe’s Artificial Intelligence (AI)-assisted signal processing will help eliminate the current challenges of identifying, tracking, and separating voices, enabling improved pattern recognition and overall sound quality, more accurate speech commands for far-field applications, and better speaker identification platforms.
Can you tell us about some of the projects you’re currently working on?
Yobe has successfully configured its generic AI-Assisted Signal Processing algorithms to run on an Arm Cortex-M7 processor, enabling wake-word recognition in acoustically challenging environments such as the one described above. These algorithms employ a blend of rule-based abductive reasoning and shallow machine learning strategies to control advanced signal processing operations, which use the CMSIS library for intensive FFT and vector multiply/add operations.
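For readers curious what those CMSIS-DSP calls look like in practice, here is a minimal sketch of one audio frame passing through a windowing and FFT stage on a Cortex-M7. This is not Yobe’s implementation: the frame length, Hann window, and wrapper names (dsp_init, dsp_process_frame) are illustrative assumptions; only the CMSIS-DSP functions themselves (arm_mult_f32, arm_rfft_fast_f32, arm_cmplx_mag_f32) are the library’s real API.

```c
/* Hypothetical sketch, not Yobe's code: window + FFT + magnitude for one
 * audio frame using CMSIS-DSP on a Cortex-M7. Frame length and buffer
 * names are illustrative assumptions. */
#include "arm_math.h"

#define FRAME_LEN 512U  /* assumed frame length; must be a supported FFT size */

static float32_t hann[FRAME_LEN];      /* precomputed Hann window            */
static float32_t windowed[FRAME_LEN];  /* windowed time-domain frame         */
static float32_t spectrum[FRAME_LEN];  /* packed complex FFT output          */
static float32_t mag[FRAME_LEN / 2];   /* magnitude of each frequency bin    */

static arm_rfft_fast_instance_f32 rfft;

void dsp_init(void)
{
    /* Build the window once; the per-frame vector multiply reuses it. */
    for (uint32_t n = 0; n < FRAME_LEN; n++) {
        hann[n] = 0.5f - 0.5f * arm_cos_f32(2.0f * PI * n / (FRAME_LEN - 1));
    }
    arm_rfft_fast_init_f32(&rfft, FRAME_LEN);
}

void dsp_process_frame(float32_t *frame)
{
    /* Element-wise vector multiply: apply the window to the frame. */
    arm_mult_f32(frame, hann, windowed, FRAME_LEN);

    /* Forward real FFT; output is FRAME_LEN/2 packed complex bins. */
    arm_rfft_fast_f32(&rfft, windowed, spectrum, 0);

    /* Magnitude spectrum, ready for downstream analysis. */
    arm_cmplx_mag_f32(spectrum, mag, FRAME_LEN / 2);
}
```

On a Cortex-M7 these routines exploit the core’s SIMD and floating-point hardware, which is why the interview singles out CMSIS for the FFT and vector multiply/add workloads.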
We chose Arm because of the combination of performance and the wide product offering. Also, as a software-centric company, we have seen value in Arm’s willingness to offer internal support: Arm’s software engineering team, managed by Laurent Le Faucher, has made libraries and technical support available to us as we worked through implementing our software on Arm-based microcontrollers.
What are Yobe’s plans for the future?
As Arm furthers its expansion into the voice and low-power markets, we will continue to build and strengthen our partnership. We believe there is a massive opportunity to grow our signal processing solutions across different hardware platforms. Potential front-end applications for the future include automatic speech recognition (ASR), automatic voice recognition (AVR), and wake-word recognition (WWR).