Google Assistant will get better at recognizing what we say – here’s how

The Google Assistant is already quite capable, but Mountain View is not satisfied: as announced during I/O 2022 at the beginning of May, Google continues to work on better integration with various devices and on an increasingly natural and fluid experience for the more than 700 million people (data provided by Big G) who use it every day.

Colleagues at 9to5Google, analyzing the code of the latest update of the Google app, found some strings that refer to “personalized speech recognition”. Basically, the intention is to implement an option that would soon let you train your Assistant: the app would start storing audio on the phone in order to adapt the speech recognition model more precisely to the way the user speaks and to their frequent requests. Considering that the Assistant has to deal with languages and dialects from all over the world, it is a useful tool for making it more reliable.

Apparently the code also contains the message that will appear on screen if you choose to forgo this feature and disable it:

If you turn it off, the Assistant will be less accurate in recognizing names and other words you say frequently. All audio used in order to improve speech recognition for you will be deleted from this device.

In short, it would be an expansion of the federated learning (also known as collaborative learning) implemented starting from March 202: a machine learning technique that trains an algorithm on data stored across decentralized devices or servers, without that data ever being exchanged.
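The idea behind federated learning can be sketched with a toy federated-averaging loop: each simulated device trains on its own private data, only the resulting model weights travel to a central server, and the server averages them into a new global model. Everything here (the linear model, the learning rate, the number of rounds) is illustrative, not Google's actual implementation:

```python
# Toy federated averaging (FedAvg) sketch: the model is a simple
# linear regression (w, b); each "device" holds private data that
# is never sent to the server -- only the trained weights are.
import random

def local_update(weights, data, lr=0.1, epochs=5):
    """Train locally on one device's data; only weights leave the device."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def federated_average(updates):
    """Server step: average the weights received from each device."""
    n = len(updates)
    w = sum(u[0] for u in updates) / n
    b = sum(u[1] for u in updates) / n
    return (w, b)

# Simulate three devices whose private data follows y = 2x + 1.
random.seed(0)
devices = [
    [(x, 2 * x + 1) for x in (random.uniform(0, 1) for _ in range(20))]
    for _ in range(3)
]

global_model = (0.0, 0.0)
for _ in range(50):  # communication rounds
    updates = [local_update(global_model, data) for data in devices]
    global_model = federated_average(updates)

print(global_model)  # the averaged model should approach w=2.0, b=1.0
```

The key property this illustrates is that the server only ever sees aggregated weights, never the raw audio or text a device trained on; the "personalized" variant described above would simply keep the per-device adaptation step local to the phone.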

We expect that when Google officially introduces the feature, in one of the next updates, it will explain in more detail how it works; the timing, however, is currently unknown. But considering there are already substantial traces in the app code, it shouldn't be far off.