Personal Data: If you’re worried about facial recognition, you’ll love voice recognition and what it can reveal about you…
Speech recognition tools and the collection of personal data have been on the rise for a number of years. But your voice reveals much more information than you can imagine.
Atlantico: After facial recognition, voice recognition tools like Siri or Alexa, developed by Apple and Amazon respectively, have seen significant growth. But these systems also make it possible to collect personal data. What can our devices learn through their voice recognition capabilities?
Anthony Poncier: Much more than you can imagine. Voice analysis makes it possible to recognize emotions, a state of health, and so on. As more and more phones include these features and many people own a connected speaker, the question of the security of personal information arises. A few years ago, insurance companies set up voice-analysis systems to determine whether reports of theft or loss were true or false. The software could recognize the texture of a voice, signs of nervousness… The issue of data wasn’t as important then as it is today, and technology was ahead of legislation, which is no longer the case.
Siri’s terms of service have changed. Theoretically, 80% of the data stays on the device because Apple has prioritized security. With connected speakers like those using Alexa, the system developed by Amazon, the situation is very different: the devices can pick up voices and ambient noise before transmitting them. In short, if you go somewhere your voice can be picked up, there’s a good chance it will be reused.
For what purposes can the data collected by speech recognition tools be used? How can you protect yourself against this?
This data can be used to commit fraud. An insurance company could theoretically learn that a person is ill, which can affect their insurance premium. So the question is what risks consumers are exposed to and whether they are aware of them. Organizations also need to communicate about the level of information collected. After all, since we regularly see leaks of credit card data or passwords, we can rightly wonder what would happen if recorded voices fell into the wrong hands.
In theory, the legal framework is the best consumer protection. In Europe, the GDPR covers voice data, so we are fairly well protected; it remains to be seen whether that is enough. Could similar legislation come to the United States? It’s entirely possible. The United Kingdom, for example, which is no longer a member of the European Union, is also asking itself a number of questions on this subject.
Finally, individuals need to be aware of these issues and of when they are consenting to data collection.
Now that it’s possible to recreate a voice using software, should we be concerned about the abuses this could lead to? Would it be better simply to abandon speech recognition technologies?
It is indeed very worrying. We see this with deepfakes, which achieve impressive accuracy. In fact, it’s even easier to fake a voice than to produce a convincing video deepfake, since the latter also requires a moving image.
In absolute terms, it might be better not to have systems that use our voice. But everyone sees things from their own perspective. As voice interfaces mature in the United States and people use their keyboards less and less, one wonders whether individuals really want to turn back, especially as the line between private and public life becomes increasingly blurred and so many people agree to give out their personal information. In the end, usage will decide. As with any new system, that usage may take the technology in new directions. Hence the need to know how this information is used and how it might be diverted from its primary purpose.
“The worst enemy of democracy is laziness,” it is sometimes said. Our voices may well save us time, because speaking is easier and faster than writing. But nothing obliges us to use them, and people’s laziness is often the first enemy of their privacy.