Since the beginning of the year, multiple Nest security camera users have reported strangers hacking into their devices and issuing voice commands to Alexa.

Voice assistant technology, security experts say, comes with some uniquely invasive risks.

Google recently integrated Google Assistant support into Nest control hubs, widening its exposure to this new class of risk. Though Google has blamed weak user passwords and a lack of two-factor authentication for the attacks, even accounts protected by strong passwords can still be vulnerable to hacking.

Over the past couple of years, researchers at universities across the world have successfully used hidden audio files to make AI-powered voice assistants like Siri and Alexa follow their commands. But with these successes comes the possibility that hackers and fraudsters could hijack freestanding voice assistant devices as well as voice-command apps on phones. They could also potentially use these entry points to open websites, make purchases, and even turn off alarm systems and unlock doors — all without humans hearing anything amiss.

At the heart of this security issue is the fact that the neural networks powering voice assistants have much better “hearing” than humans do. People can’t pick out every sound in the background, but AI systems can, and that makes the assistants vulnerable to “silent” commands. One risk is that hackers could bury malicious commands in white noise. Another example of this sort of attack comes from researchers at Ruhr University Bochum in Germany, who successfully encoded commands in the background of louder sounds at the same frequency. In their short demonstration video, both humans and the popular speech-recognition toolkit Kaldi can hear a woman reading a business news story. Embedded in the background, though, is a command only Kaldi can recognize: “Deactivate security camera and unlock the front door.” In theory, this strategy could be used through apps or broadcasts to steal personal data or make fraudulent purchases.
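To make the underlying idea concrete, here is a minimal, hypothetical sketch in Python (NumPy only) of how a quiet “command” signal could be mixed into louder carrier audio at a level a human listener is unlikely to notice, while a machine-listening system with finer hearing may still pick it up. The signal names, sample rate, and gain are illustrative assumptions, not the Bochum team’s actual method, which shapes the hidden signal far more carefully.

```python
# Toy illustration only: NOT the Ruhr University Bochum attack, just the
# general idea of mixing a quiet "command" signal into louder carrier audio.
# All names and amplitudes below are hypothetical.
import numpy as np

SAMPLE_RATE = 16_000  # Hz, a common rate for speech-recognition models

def embed_quiet_signal(carrier: np.ndarray, command: np.ndarray,
                       command_gain: float = 0.02) -> np.ndarray:
    """Mix `command` into `carrier` at a low relative level.

    A human listener mostly perceives the louder carrier, while a
    machine-listening system may still pick up the embedded content.
    Real attacks shape the perturbation with psychoacoustic masking
    models; this sketch just applies a flat gain.
    """
    n = min(len(carrier), len(command))
    mixed = carrier[:n] + command_gain * command[:n]
    # Keep the result in the valid [-1, 1] range for normalized audio.
    return np.clip(mixed, -1.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE        # two seconds of audio
    carrier = 0.5 * rng.standard_normal(len(t))         # stand-in "white noise"
    command = 0.8 * np.sin(2 * np.pi * 440.0 * t)       # stand-in spoken command
    stego_audio = embed_quiet_signal(carrier, command)
    print("mixed samples:", stego_audio.shape, "peak:", float(np.abs(stego_audio).max()))
```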

A third approach is what researchers at Zhejiang University in China call a DolphinAttack: encoding commands in ultrasonic frequencies, outside the range of human hearing, and broadcasting them at target devices. The Zhejiang researchers have used this technique to get a locked iPhone to make phone calls in response to inaudible commands.
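As the Zhejiang paper describes it, the attack amplitude-modulates a voice command onto an ultrasonic carrier; nonlinearities in microphone hardware then shift some of that energy back into the audible band, where the speech recognizer can process it. The short Python sketch below (NumPy only) illustrates that signal chain with a toy sine-wave “command”; the carrier frequency, sample rate, and nonlinearity model are illustrative assumptions, not values from the paper.

```python
# Conceptual sketch of the DolphinAttack signal chain: a voice command is
# amplitude-modulated onto an ultrasonic carrier, and a microphone's
# nonlinear response demodulates it back into the audible band.
# Carrier frequency, modulation depth, and sample rate are assumptions.
import numpy as np

SAMPLE_RATE = 192_000   # Hz; high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000     # above the ~20 kHz limit of human hearing

def modulate_ultrasonic(voice: np.ndarray, depth: float = 0.8) -> np.ndarray:
    """Amplitude-modulate a baseband 'voice' signal onto an ultrasonic carrier."""
    t = np.arange(len(voice)) / SAMPLE_RATE
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
    # Standard AM: (1 + depth * message) * carrier. The transmitted signal
    # contains no audible energy, but microphone nonlinearity can recover
    # the baseband message.
    return (1.0 + depth * voice) * carrier

def simulate_mic_nonlinearity(signal: np.ndarray) -> np.ndarray:
    """Toy second-order nonlinearity: squaring the input shifts part of the
    modulated energy back down to the original (audible) voice frequencies."""
    return signal + 0.5 * signal ** 2

if __name__ == "__main__":
    t = np.arange(int(SAMPLE_RATE * 0.5)) / SAMPLE_RATE
    voice = np.sin(2 * np.pi * 300.0 * t)     # stand-in for a spoken command
    inaudible = modulate_ultrasonic(voice)
    recovered = simulate_mic_nonlinearity(inaudible)
    print("transmitted (inaudible) samples:", inaudible.shape)
    print("energy after mic nonlinearity:", float(np.mean(recovered ** 2)))
```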

In response to these risks, Amazon, Google, and Apple are working to harden their voice assistants against these vulnerabilities. A paper presented by the Zhejiang researchers recommends that device microphones be redesigned to reject input outside the frequency range that humans can hear. The authors also suggest using machine learning to recognize the frequencies most likely to be used in inaudible-command attacks.
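As one illustration of what the software side of such a defense might look like, the hypothetical sketch below low-pass filters raw microphone captures so that content above the range of human hearing never reaches the recognizer. The cutoff, filter order, and sample rate are assumptions for demonstration; real mitigations would also involve microphone hardware changes and learned detectors, as the paper suggests.

```python
# Minimal defense sketch: attenuate ultrasonic content before audio reaches
# the speech recognizer. Cutoff, filter order, and sample rate are
# illustrative assumptions, not the Zhejiang authors' design.
import numpy as np
from scipy.signal import butter, sosfiltfilt

SAMPLE_RATE = 192_000        # Hz; sampling rate of the raw microphone capture
AUDIBLE_CUTOFF_HZ = 20_000   # approximate upper limit of human hearing

def reject_ultrasonic(audio: np.ndarray) -> np.ndarray:
    """Low-pass filter the capture so only the human-audible band is kept."""
    sos = butter(8, AUDIBLE_CUTOFF_HZ, btype="low", fs=SAMPLE_RATE, output="sos")
    return sosfiltfilt(sos, audio)

if __name__ == "__main__":
    t = np.arange(int(SAMPLE_RATE * 0.1)) / SAMPLE_RATE
    audible = np.sin(2 * np.pi * 1_000 * t)       # normal speech-band content
    ultrasonic = np.sin(2 * np.pi * 25_000 * t)   # inaudible carrier content
    cleaned = reject_ultrasonic(audible + ultrasonic)
    # Most of the 25 kHz energy is removed while the 1 kHz tone survives.
    print("residual energy vs. clean speech:", float(np.mean((cleaned - audible) ** 2)))
```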

But in addition to these technical fixes, scientists and lawmakers will need to create a national legislative or regulatory framework for voice data and privacy rights.

As the IoT creates more use cases for voice recognition, and as the number of players in the space increases, the risk of voice-data breaches will rise too. Technology can help only up to a point: fraud-prevention professionals could, for example, build and maintain clean, two-way databases of consumer voice data to ensure that companies can recognize legitimate customer contacts.