EMEET's Major Breakthrough on Voice IA Algorithm


The basics of the Voice IA Algorithm were explained in our previous article, which covered its fundamental principles. Now let's look at what new features the latest version of the algorithm adds. The latest Voice IA Algorithm enhances noise cancellation and delivers clearer audio output for users. It is tempting to assume that the noise is mostly just the echo from the speaker and is therefore easy to cancel out, but this is not true. Beyond echo, a call also picks up reverberation and background noise, and these are the additional noises the algorithm has to remove. Let's take a look at how the latest Voice IA Algorithm filters them out.

How does the algorithm filter out the noise?

When audio is picked up by the speakerphone, there is more than one signal present, including:

1. The human voice

2. Echo from the speaker

3. Background noises

4. Reverberation

These signals are not balanced against each other; their frequency content differs and changes over time. If the algorithm simply removed a fixed frequency band, the noise cancellation would not work reliably. Instead, to filter out the unwanted noise, the Voice IA Algorithm runs a series of calculations to maximize noise reduction while enhancing the voice, as the toy sketch below illustrates.
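To see why removing a fixed band is not enough, here is a minimal Python/NumPy sketch. The 50 Hz and 65 Hz hum frequencies, sample rate, and band width are invented for the example: a filter that zeroes out a fixed band removes noise at the expected frequency but misses the same noise once it drifts.

```python
import numpy as np

fs = 8000                          # sample rate (Hz), chosen for illustration
t = np.arange(fs) / fs             # one second of samples

def band_energy(x, f, fs, width=5):
    """Energy of x within +/-width Hz of frequency f (via FFT)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs > f - width) & (freqs < f + width)].sum()

def notch(x, f0, fs, width=5):
    """Zero out a fixed +/-width Hz band around f0 (a crude fixed filter)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(freqs > f0 - width) & (freqs < f0 + width)] = 0
    return np.fft.irfft(spec, len(x))

hum_50 = np.sin(2 * np.pi * 50 * t)   # noise at 50 Hz
hum_65 = np.sin(2 * np.pi * 65 * t)   # the same noise after drifting to 65 Hz

# A filter fixed at 50 Hz removes the first hum but misses the second.
print(band_energy(notch(hum_50, 50, fs), 50, fs))  # near zero: noise removed
print(band_energy(notch(hum_65, 50, fs), 65, fs))  # large: noise untouched
```

Because real echo, reverberation, and background noise shift like this from moment to moment, the algorithm has to adapt rather than cut a fixed band.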

An adaptive filter is a digital filter that can automatically adjust its coefficients according to the input signal during digital signal processing. The audio is passed through the adaptive filter to remove most of the noise, as sketched below.
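As a rough illustration of how such a filter works, here is a generic textbook NLMS (normalized least mean squares) echo canceller in Python with NumPy. This is a minimal sketch, not EMEET's actual implementation; the function name, tap count, and step size are illustrative. The filter continuously adjusts its coefficients so that its estimate of the echo, predicted from the far-end speaker signal, matches what the microphone picks up, and subtracting that estimate leaves the near-end voice.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, n_taps=128, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimates the echo of `far_end`
    contained in `mic` and returns the echo-reduced signal."""
    w = np.zeros(n_taps)               # filter coefficients, adapted per sample
    x_buf = np.zeros(n_taps)           # most recent far-end samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        echo_est = w @ x_buf           # filter's current echo estimate
        err = mic[n] - echo_est        # residual = near-end voice + noise
        out[n] = err
        # Update coefficients in the direction that shrinks the error.
        w += mu * err * x_buf / (x_buf @ x_buf + eps)
    return out

# Toy usage: the mic hears an attenuated, delayed copy of the speaker output
# plus a quiet near-end "voice" (a slow sine, invented for the demo).
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
echo = 0.6 * np.concatenate([np.zeros(10), far[:-10]])
voice = 0.3 * np.sin(2 * np.pi * 5 * np.arange(4000) / 4000)
cleaned = nlms_echo_cancel(far, echo + voice)
print(np.mean((echo + voice) ** 2))        # power before cancellation
print(np.mean((cleaned - voice) ** 2))     # residual error after cancellation
```

Beyond this single adaptive stage, the audio goes through a few further steps: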

1. Initial Separation - When the audio reaches the speakerphone, the Voice IA Algorithm conducts an initial separation into subband features and vocal segment characteristics.

2. Encoder-Decoder - The encoder then extracts the key features of the audio, which are mostly the human voice, and the decoder restores the audio from those key features.

3. Subband Processing - The audio is first processed at this stage for rough filtering, which removes the most prominent background noises. The mechanism is that thousands of example noise frequency profiles are stored, and the algorithm cross-matches the incoming frequencies against them to identify which components are noise and which are the human voice.

4. Depth Filter - Once the audio has been partially processed, it is passed to the depth filter for more detailed filtering. Like a DNN (Deep Neural Network), this stage runs the audio through multiple in-depth calculations that extract the remaining small noises and enhance the human voice. A simplified end-to-end sketch of these stages follows this list.
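The sketch below strings these stages together in a highly simplified form (Python with NumPy). Everything here is invented for illustration, not EMEET's model: the STFT parameters, the single noise template, and the tiny untrained mask network. Audio is split into subbands, frames matching a stored noise profile are attenuated (rough subband filtering), and a small encoder/decoder-style layer then estimates a per-band mask for finer filtering.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Split audio into overlapping windowed frames and transform to subbands."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)   # shape: (frames, bands)

def subband_gate(spec, noise_templates, atten=0.1):
    """Rough filtering: attenuate frames whose subband magnitude profile is
    close to a stored noise template (stand-in for frequency cross-matching)."""
    mag = np.abs(spec)
    for tpl in noise_templates:
        sim = mag @ tpl / (np.linalg.norm(mag, axis=1)
                           * np.linalg.norm(tpl) + 1e-8)
        spec[sim > 0.9] *= atten       # frames dominated by that noise profile
    return spec

def depth_filter(spec, w1, w2):
    """Finer filtering: a tiny two-layer network predicts a 0..1 mask per
    subband (a stand-in for the deep-filtering stage)."""
    h = np.maximum(np.abs(spec) @ w1, 0)           # encoder-like feature layer
    mask = 1 / (1 + np.exp(-(h @ w2)))             # decoder-like mask output
    return spec * mask

rng = np.random.default_rng(0)
audio = rng.standard_normal(8000)                   # stand-in for captured audio
spec = stft(audio)
n_bands = spec.shape[1]
templates = [np.abs(rng.standard_normal(n_bands))]  # invented noise profile
w1 = rng.standard_normal((n_bands, 32)) * 0.01      # untrained demo weights
w2 = rng.standard_normal((32, n_bands)) * 0.01
cleaned = depth_filter(subband_gate(spec, templates), w1, w2)
print(cleaned.shape)                                # (frames, bands)
```

In a real system the mask network would be trained on paired noisy and clean speech, and the gated spectrogram would be inverted back to a waveform; the sketch only shows how the rough subband stage and the finer learned stage hand off to each other.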

Results

This video shows the difference in audio before and after applying the Voice IA Algorithm.

Conclusion

With the latest Voice IA Algorithm installed, the speakerphone can effectively cancel the majority of background noise, echo and reverberation, providing immersive and effective audio output for conference meetings. Currently, the latest Voice IA Algorithm is available in the EMEET OfficeCore M3 Speakerphone and the EMEET OfficeCore M2 Max Speakerphone, and more products will receive it over time.
