When a voice assistant becomes a spy – dangers and solutions
The arrival of voice assistants in the homes of millions of citizens has been as unexpected as it has been rapid. The phenomenon started with smartphones, which have featured voice search facilities for a few years now. The idea is to help the user find answers to their questions quickly, without needing to type anything on the display.
A direct consequence of these new methods of interaction has been domotics, otherwise known as smart living. Thanks to the integration of dedicated systems and the ability to talk to increasingly receptive and intelligent voice assistants, the whole house can now be managed using voice commands.
Washing machines which choose the best cycle for each load on their own, fridges which order the shopping automatically according to which items are missing, smart lighting and temperature control are just some examples of how voice assistants are changing people’s lifestyles.
The risks posed by voice assistants
A futuristic lifestyle is certainly attractive, but voice assistants can also pose a concrete risk to the security and privacy of those who use them. Considering the rapid spread of domotics, some renowned universities have decided to conduct more detailed research into the dangers which can result from the abuse of these devices.
The research results were of great interest because they exposed certain vulnerabilities of these voice assistants that any ill-intentioned individual with the right technical skills could exploit to their own advantage.
In his novel, George Orwell imagined a Big Brother who spied on the characters via telescreens and other everyday devices; in the case of voice assistants, this is no longer science fiction.
Voice assistants are always listening: it has been shown that even when the code word which activates them has not been spoken, they can pick up conversations, often of a private nature.
Another vulnerability (which the multinationals who produce voice assistants have been working on for some time) involves their difficulty in recognising the voice giving the commands. The assistants, in fact, respond regardless of who is speaking and are often activated erroneously by voices on TV or radio.
Ultrasound waves can also pose a risk. The human ear can perceive a broad range of frequencies, but it will never be as receptive as a sophisticated microphone. Some experiments have shown that it is possible to take control of voice assistants using ultrasonic commands inaudible to humans, a technique known as a Dolphin Attack (although it should be noted that it requires close proximity to the device).
Precautions to take in order to defend your privacy
Despite these flaws (which will no doubt be fixed in due course), voice assistants offer a genuine improvement in people’s everyday lives. In the meantime, it is possible to continue using them, provided some steps are taken to protect your privacy.
- All physical devices containing a microphone can be manually deactivated. It is sufficient to remember to switch off the voice assistant when it is not in use.
- Protecting online purchases with a dedicated password is another excellent way of avoiding unpleasant surprises.
- The use of a trusted antivirus is highly recommended to protect PCs, smartphones and tablets.
- Should the code word which activates the voice assistant be similar to the name of a family member or other words which are often repeated near the device, it is advisable to change it to a less common word.
There is no point demonising imperfect technology; the important thing is to use it with awareness. By following these small suggestions, the risks can be considerably reduced.
Translated by Joanne Beckwith
