Published: Sat, May 12, 2018
Sci-tech | By Brandy Patterson

Attackers can use audio frequencies beyond human hearing to exploit smart speakers

The problem, according to researchers, is that these smart assistants may follow commands that humans cannot hear.

According to the New York Times, researchers in both China and the United States have carried out a series of experiments proving that it is possible to send silent commands, undetectable to the human ear, to voice assistants like Siri, Alexa and Google Assistant.

These commands can be hidden in white noise played over loudspeakers or YouTube videos, as students from the University of California, Berkeley, and Georgetown University demonstrated two years ago.

Researchers say that scammers could use this technology to ask your device to shop online, wire money or unlock doors.

So far, there's nothing to suggest that such subliminal messages have been deployed outside of a student research setting, but one of the Berkeley paper's authors says he believes it's just a matter of time: "My assumption is that the malicious people already employ people to do what I do," he told the New York Times. Makers of these devices have not ruled out the possibility of such attacks happening in future, but Apple, Amazon and Google have all responded to the research by noting the security mitigations they have in place.

Apple said its smart speaker, the HomePod, is designed to prevent commands from doing things like unlocking doors; it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.

A technique called DolphinAttack shows that subliminal messages can even be embedded in sounds inaudible to the human ear. Researchers were able to execute the command "Okay Google, browse to evil.com" hidden within the spoken phrase "without the dataset the article is useless". The scope of DolphinAttack is currently limited, but researchers at the University of Illinois have already demonstrated ultrasound attacks from 25 feet away. Using only an external battery, an amplifier, and an ultrasonic transducer, they found that the devices can perceive silent commands that we are incapable of detecting.
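The core trick behind such ultrasonic attacks is amplitude modulation: the voice command is shifted onto a carrier above the roughly 20 kHz limit of human hearing, and nonlinearities in a microphone's hardware demodulate it back into the audible band, where the assistant hears an ordinary command. A minimal sketch of the modulation step (the carrier frequency, modulation depth and sample rate here are illustrative choices, not the researchers' actual signal chain):

```python
import numpy as np

def modulate_ultrasonic(voice, fs, carrier_hz=25_000, depth=0.5):
    """Amplitude-modulate a baseband voice signal onto an
    ultrasonic carrier (hypothetical parameters)."""
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # DSB-AM: the voice rides on the carrier's envelope, so a
    # nonlinear microphone response recovers it at baseband.
    return (1 + depth * voice) * carrier

fs = 96_000  # sample rate high enough to represent a 25 kHz carrier
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 440 * t)  # stand-in for a recorded command
signal = modulate_ultrasonic(voice, fs)
```

All of the resulting signal's energy sits around 25 kHz, so a human listener hears nothing even though the command is present in the envelope.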

More recently, Mr Carlini and his colleagues at Berkeley have incorporated commands into audio recognised by Mozilla's DeepSpeech speech-to-text software, an open-source platform.

The commands aren't discernible to humans, but the Echo or Home speakers pick them up, the research suggests. The researchers used this loophole to embed commands into music files, including a four-second clip from Verdi's Requiem.
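Attacks of this kind treat the audio waveform itself as an optimization variable: gradient descent searches for the smallest perturbation that nudges the model's transcription toward the attacker's command while staying too quiet for a listener to notice. The sketch below illustrates that loop against a toy linear stand-in for the model; DeepSpeech itself is not used, and every name and parameter here is illustrative:

```python
import numpy as np

# Toy stand-in for an adversarial-audio optimization loop. A real
# attack backpropagates through a speech-to-text network; a random
# linear "model" keeps this sketch self-contained.
rng = np.random.default_rng(0)
n = 64
W = rng.standard_normal((8, n)) / np.sqrt(n)  # stand-in "model"
x = rng.standard_normal(n)                    # benign audio frame
target = rng.standard_normal(8)               # attacker's desired output

delta = np.zeros(n)   # perturbation added to the audio
lam = 0.01            # weight on keeping the perturbation quiet
lr = 0.1
for _ in range(500):
    err = W @ (x + delta) - target
    # Gradient of 0.5*||err||^2 + 0.5*lam*||delta||^2 w.r.t. delta
    grad = W.T @ err + lam * delta
    delta -= lr * grad

residual = np.linalg.norm(W @ (x + delta) - target)
```

The penalty term `lam` plays the role of the audibility constraint: raising it trades attack success for a quieter perturbation, which is the same trade-off the Berkeley work navigates against a real network.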

But Carlini explained their goal is to flag the security problem - and then try to fix it.
