Core Building Blocks
A first-of-its-kind AI solution that uses unsupervised learning to correlate audio data and grow smarter over time. Rather than relying on modeling or extensive training data, our unique design uses inferencing AI to adapt to a dynamically changing auditory scene.
A voice-biometric-driven tracking methodology that effectively isolates a voice from noise and from other voices. A byproduct of this approach is the ability to extract perceptual data (identity, emotion, direction, range, gender) from an individual voice within a noisy signal.
Signal Enhancement IP
IP that repairs and enhances the signal of interest for a better listening experience, improved automatic speech recognition (ASR) performance, effective metadata extraction, and accurate speaker identification. In addition, our signal-enhancement method does not introduce unnatural artifacts.
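Yobe's enhancement IP is proprietary and not described here; purely as a generic illustration of the category, the sketch below shows one of the simplest signal-enhancement ideas, a time-domain noise gate that attenuates frames whose energy falls below an estimated noise floor. All parameter values (frame length, margin, attenuation) are hypothetical defaults chosen for the example.

```python
import math

def frame_energy(frame):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def noise_gate(samples, frame_len=160, noise_frames=10, margin=2.0, attenuation=0.1):
    """Attenuate low-energy frames of `samples` (a list of floats in [-1, 1]).

    The noise floor is estimated from the first `noise_frames` frames,
    which are assumed to contain noise only -- a simplifying assumption
    for this sketch, not a property of any real product.
    """
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    floor = sum(frame_energy(f) for f in frames[:noise_frames]) / noise_frames
    out = []
    for f in frames:
        # Pass frames well above the floor; suppress the rest.
        gain = 1.0 if frame_energy(f) > margin * floor else attenuation
        out.extend(s * gain for s in f)
    return out
```

A gate like this illustrates the trade-off the text alludes to: crude suppression is easy, but doing it without audible artifacts at frame boundaries is the hard part that dedicated enhancement IP addresses.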
The Yobe product platform adds value to the entire voice ecosystem, fundamentally changing the effectiveness of edge-based voice technologies.
Enhanced Human-to-Machine Voice Interaction
A platform-agnostic (Google, Amazon, Watson, etc.), edge-based solution for enhanced ASR/NLU capabilities in high-noise environments.
Adaptive Biometric Identification
On-the-edge “Voice Print” generation (voice-biometric identification) for user authorization and profile retrieval in high-noise environments.
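Yobe's voice-print method is proprietary; as a generic sketch of how biometric verification of this kind typically works, a voice print can be represented as an embedding vector, and a candidate speaker accepted when their embedding is close enough to the enrolled one. The embedding vectors and the 0.8 threshold below are hypothetical, chosen only for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_print, candidate_print, threshold=0.8):
    """Accept the candidate if their voice-print embedding is close
    enough to the enrolled one. Threshold is an illustrative value."""
    return cosine_similarity(enrolled_print, candidate_print) >= threshold
```

On-edge operation would mean both the embedding extraction and this comparison run on the device, so no raw audio or biometric template needs to leave it.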
Enabling Generative AI
A live CES demonstration of our edge-based solution for Digital Human platforms.
A Convergence of Yobe Capabilities
VISPR (Voice Isolation for Sonic Perceptual Recognition) is a first-of-its-kind auditory processing system that adds a layer of “perceptual awareness” to voice platforms.