If you buy one of those intrinsically insecure, always-on "smart speakers" from Google, Amazon, Apple or the other players, you're installing a constantly listening presence in your home that, by design, hears every word you say, and that is very likely to suffer at least one catastrophic breach allowing hackers (possibly low-level dum-dums like the ransomware creeps who took whole hospitals hostage this year and then asked for a mere $300 to give them back, because they were such penny-ante grifters) to gain access to millions of these gadgets, along with machine-learning models that will help them pluck blackmail material, credit card numbers, and humiliating disclosures out of the stream of speech they capture.
I don't own one of these and I've turned off the "voice assistant" on my mobile devices.
Writing in Gizmodo, Adam Clark Estes explains what a fantastically dumb fad these gadgets represent, and how they normalize surveillance.
One use case I've heard of sounds like it justifies all these risks: helping people with dementia by serving as an infinitely patient interlocutor who can answer questions like "what day is it?" and "where am I?" over and over again without losing its temper.
Which brings us back to security and surveillance. I’m not here to be Tin Foil Hat Man and convince you that companies like Amazon are spying on your every move and compiling data sets based on your activity so that they can more effectively serve you ads or sell you products. I am here to say that smart speakers like the Echo do contain microphones that are always on, and every time you say something to the speaker, it sends data back to the server farm. (By the way: If you enabled an always-listening assistant on your smartphone, now’s a good time to consider the implications.) For now, the companies that sell smart speakers say that those microphones only send recordings to the servers when you use the wake word. The same companies are less explicit about what they’re doing with all that data. They’re also vague about whether they might share voice recordings with developers in the future. Amazon, at least, seems open to the idea.
We do know that Amazon will hand over your Echo data if the gadget becomes involved in a homicide investigation. That very thing happened earlier this year, and while Amazon had previously refused to hand over customer data, the company didn’t argue with a subpoena in a murder case. It remains unclear how government agencies like the FBI, CIA, and NSA are treating smart speakers, too. The FBI, for one, would neither confirm nor deny wiretapping Amazon Echo devices when Gizmodo asked the agency about it last year.
Sinister ambitions of governments and multinational corporations aside, you should also worry about the threat of bugs and hackers going after smart speakers. Anything that’s connected to the internet is potentially vulnerable to intrusions, but as a new category of devices, smart speakers are simply untested in the security arena. We haven’t yet experienced a major hack of smart speakers, although there’s plenty of evidence to suggest that they’re hardly bulletproof. Not long after its launch, the Google Home Mini experienced a bug that led to the device recording everything happening in a technology reporter’s house for dozens of hours. You can chalk that up to a very bad screw up on Google’s part, but it’s a tear in the fabric of trust that should encase these kinds of gadgets.
Don't Buy Anyone an Echo [Adam Clark Estes/Gizmodo]
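To make the wake-word gating Estes describes a little more concrete, here is a toy sketch of that data flow in ordinary Python. The device processes every chunk of audio locally, but only the chunks captured after it thinks it heard the wake word are sent anywhere. It is purely illustrative (strings stand in for audio, and the names, like send_to_server and POST_WAKE_CHUNKS, are invented for the example rather than taken from any vendor's firmware), but it shows why "always listening" and "always uploading" are different claims, and why a bug in the gating logic, like the stuck-open recording on the Google Home Mini, collapses the distinction.

# Illustrative sketch only: the wake-word gating model described above,
# with strings standing in for audio. Not any vendor's actual firmware.
WAKE_WORD = "alexa"          # hypothetical wake word
POST_WAKE_CHUNKS = 3         # how much audio to forward once the wake word is heard

def send_to_server(chunk):
    # Stand-in for the upload step, the only path by which audio leaves the device.
    print("uploaded to server:", repr(chunk))

def run_device(audio_stream):
    # Listen to every chunk locally, but forward only the chunks that
    # follow a locally detected wake word.
    forwarding = 0
    for chunk in audio_stream:
        if forwarding > 0:
            send_to_server(chunk)
            forwarding -= 1
        elif WAKE_WORD in chunk.lower():   # on-device detection; nothing uploaded yet
            forwarding = POST_WAKE_CHUNKS

run_device([
    "private dinner conversation",   # heard by the device, never uploaded
    "Alexa, what's the weather",     # wake word detected locally
    "in Boston tomorrow",            # uploaded
    "unrelated private remark",      # also uploaded: still inside the capture window
])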
Sunday, December 10, 2017

Sunday X-Mas advice: (Virtually) No one should ever own an Echo or any other "voice assistant" product

from Boing Boing
Labels: echo, surveillance, voice assistant