This week I read through everything I've ever said to Alexa, and it felt a bit like reading an old diary. Except the things I thought I'd told Alexa in private were stored on Amazon's servers, and some were probably read by an Amazon employee. It's all in the name of making Alexa better, the company keeps saying. But to many people, having strangers review your seemingly private voice commands looks less like quality control and more like surveillance. Alexa, these people say, is a spy hiding in a listening device.
The debate over whether Alexa or any voice assistant spies on us is an old one at this point, and it isn't going away. Privacy advocates have filed complaints with the Federal Trade Commission (FTC) arguing that these devices violate federal wiretap law. Journalists have explored the threats posed by always-on microphones paired with artificially intelligent voice assistants. Skeptical technology bloggers like me have argued that these things are more powerful than people realize, and fraught with potential privacy violations. Recent reports about Amazon employees reviewing some Alexa commands suggest the situation is worse than we thought.
It's starting to feel like Alexa and other voice assistants are bound to spy on us, because that's how the systems were designed to work. They rely on machine learning and artificial intelligence to improve themselves over time. The young technology behind them is still error-prone, and even if it were perfect, the data-hungry companies that built it are constantly dreaming up new ways to monetize their users. And when imperfect technology and powerful companies collide, a government struggling to keep up with what's happening makes regulation seem like a distant solution.

The situation isn't hopeless, though. This technology could actually be great, if we pay more attention to what's happening. That, again, is the complicated part.
Never-ending mistakes
One major problem with Alexa and other voice assistants is that the technology tends to fail. Devices like the Echo are equipped with always-on microphones that are supposed to record only when you want them to listen. While some devices require pressing a physical button before Alexa starts listening, many are designed to start recording once you speak a wake word. Anyone who has spent time using Alexa knows it doesn't always work that way. Sometimes the software hears a stray noise, mistakes it for the wake word, and starts recording.
This false-positive problem became obvious the moment I started reading through my Alexa command history on Amazon's website. Most entries are dull: "Hey Alexa;" "Show me an omelet recipe;" "What's up, Alexa." But every so often I saw an entry that read: "Audio was not intended for Alexa." These are the things Alexa heard that it shouldn't have: commands sent to Amazon's servers and flagged because the machine decided the wake word wasn't actually said, or that Alexa had recorded audio when the user didn't give a command. In other words, they're errors.
At face value, voice assistants picking up stray audio is an inevitable shortcoming of the technology. The very sophisticated computer program that can understand everything you say sits behind a much simpler one on the device, which is trained only to hear the wake word and then pass whatever commands follow along to the smarter computer. The problem is that the simple computer often gets it wrong, and people don't always realize there's a recording device in the room. That's how we get Echo-based nightmares like the Oregon couple whose device accidentally sent a recording of an entire conversation to a contact. Amazon says it is working on improvements to reduce the error rate, but it's hard to imagine the system will ever be perfect.
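The two-stage design described above can be sketched in toy form. To be clear, nothing here reflects Amazon's actual implementation: the string-similarity scoring, the threshold value, and the cloud check are all invented for illustration. The point is only that a deliberately cheap first stage will sometimes fire on words that merely sound like the wake word, and the smarter second stage is what produces entries like "Audio was not intended for Alexa."

```python
# Toy sketch of a two-stage wake-word pipeline (illustrative only).
# Stage 1 is a cheap on-device check; stage 2 is the "smarter" cloud
# check that can overrule it after the audio has already been sent.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
ON_DEVICE_THRESHOLD = 0.6  # deliberately loose, so near-misses slip through


def on_device_score(heard: str) -> float:
    """Crude stand-in for an acoustic wake-word confidence score (0.0-1.0)."""
    return SequenceMatcher(None, WAKE_WORD, heard.lower()).ratio()


def on_device_detect(heard: str) -> bool:
    """Stage 1: the simple local check that decides whether to start recording."""
    return on_device_score(heard) >= ON_DEVICE_THRESHOLD


def cloud_review(heard: str) -> str:
    """Stage 2: the smarter check, which can reject what stage 1 let through."""
    if heard.lower() == WAKE_WORD:
        return "command accepted"
    return "Audio was not intended for Alexa"


for utterance in ["alexa", "alexis", "election"]:
    if on_device_detect(utterance):  # mic starts recording; audio leaves the house
        print(utterance, "->", cloud_review(utterance))
    else:
        print(utterance, "-> ignored locally")
```

Note that by the time stage 2 rejects "alexis," the recording has already been made and transmitted, which is exactly the privacy gap the article describes.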
"It's a scary thing: your home has a microphone and you don't have the ultimate control when it is activated," said Jeremy Gillula, Director of Technical Projects at Dr.Fhristian Frontier Foundation (EFF). "From my point of view, it's problematic from a privacy perspective."
Events like these are flukes, though they're more common than most people realize. What's perhaps worse than the glitches is the very deliberate workflow that exposes users' interactions with voice assistants to strangers. Bloomberg recently reported that members of Amazon's Alexa team have access to the geographic coordinates of Alexa users, data collected to improve the voice assistant's capabilities. That revelation came just weeks after Bloomberg also reported that thousands of people employed by Amazon around the globe listen to users' Alexa commands in order to train the software. They sometimes stumble onto compromising recordings, and in some cases, Amazon employees discuss what people say.
Amazon has pushed back hard on these reports. A company representative told me that Amazon only annotates "an extremely small number of interactions from a random set of customers to improve the customer experience," and that those recordings are stored in a protected system that uses multi-factor authentication to "limit" the number of closely monitored employees who can access it. Bloomberg, however, reports that thousands of workers review these commands.
But for Alexa and other artificially intelligent voice assistants to work, some human review is necessary. This training helps prevent future mistakes and build better features. Amazon is hardly the only company using people to review voice commands: Google and Apple also employ teams that review what users say to their voice assistants, both to train the software to understand people better and to develop new features. Sure, the human element of these seemingly automated services is creepy, but it's also an essential part of how these technologies improve.
"After all, in really serious cases, you need a man to tell you what's going on," Dr. Carnegie Mellon University, a computer scientist, said. Alex Rudnicky said. Rudnicky has been developing speech recognition software since the 1980s and has led a team of teams competing with Alexa to sponsor an artificial intelligence competition. Although he claims that people are needed to improve the handling of natural languages, Rudnicky also believes that it is unlikely that a voice team will be able to track one individual.
"When you're one of the 10 million," said Rudnicky, "it's hard to say that someone will find it and track you back and find out about you that you don't want them to know."
Still, that doesn't mean the idea of a stranger reading your mundane thoughts, or knowing your location history, isn't a bit creepy. A voice assistant might record me by accident at any time, because the systems simply aren't smart enough to wake up with 100 percent accuracy. And the fact that Amazon keeps directories of all those Alexa recordings, accidental or otherwise, available for review makes me uneasy.
The privacy problem nobody wants to fix
In recent conversations, half a dozen technology and privacy experts told me that stronger privacy laws are needed to address some of these problems with Alexa. The amount of personal data your Echo collects is bound only by the rules Amazon sets for itself, and the United States lacks a strong federal privacy law along the lines of Europe's General Data Protection Regulation (GDPR). In other words, the companies that build voice assistants more or less make their own rules.
So I found myself asking some questions. Who's looking out for users? Why can't I opt in to letting Amazon record my commands, rather than digging through privacy settings looking for ways to stop my data from being sent to Amazon? And why are my choices so limited?
In Alexa's privacy settings, you can opt out of letting Amazon use your recordings to develop new features and improve transcriptions. You cannot opt out of Amazon saving your recordings for other purposes.

Settings like these put the burden of protecting privacy on the user. So why can't these companies simply anonymize our interactions with their voice assistants entirely?
Apple, at least, seems to be trying. When you talk to Siri, your commands are encrypted before being sent to the company, tagged with a random Siri identifier. That identifier isn't linked to your Apple ID, which is why you can't open your privacy settings on your iPhone and see everything you've said to Siri. And not every Siri feature requires the device to send information to Apple's servers, which further reduces exposure. Apple does use recordings of Siri commands to train its software, because you have to train artificially intelligent software to make it better. The fact that Apple doesn't tie specific commands to specific users may help explain why so many people think Siri is terrible. Then again, Siri might be your best bet if you want a voice assistant that takes privacy seriously.
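The pseudonymization idea can be sketched very roughly. This is not Apple's protocol (the field names, the identifier scheme, and the omission of transport encryption are all simplifications for illustration); it only shows the core trade-off: the server sees a random tag instead of an account, so requests can cluster for training without naming the user.

```python
# Toy sketch of pseudonymous request tagging (illustrative only; not
# Apple's actual Siri protocol). The account identity stays on the
# device; the server only ever sees a random tag.
import uuid


class Assistant:
    def __init__(self, account_id: str):
        self.account_id = account_id           # stays on the device
        self.device_tag = str(uuid.uuid4())    # random; never linked to the account

    def build_request(self, command: str) -> dict:
        # In a real system this payload would be encrypted in transit; omitted here.
        return {"tag": self.device_tag, "command": command}


phone = Assistant(account_id="user@example.com")
req = phone.build_request("set a timer for ten minutes")
# The outgoing request carries the random tag but nothing that names the account.
```

The downside the article alludes to follows directly: since the server can't map the tag back to an account, it also can't show you a per-account history of your requests, and it can't personalize as aggressively as a system that ties every command to your identity.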
This is the point in the debate where Tim Cook would like to remind you that Apple is not a data company. Companies like Google and Amazon turn your personal data into products they can sell to advertisers, or use it to sell you more stuff, he says. It's the same argument we saw from the Apple CEO when he wrote a Time magazine column this year announcing his support for federal privacy legislation.
The idea is starting to gain a little traction. In January, the Government Accountability Office published a report calling on Congress to pass comprehensive internet privacy legislation. The report joined a chorus of privacy advocates who have long argued that the United States needs its own version of the GDPR. In March, the Senate Judiciary Committee heard testimony from several experts pushing for federal privacy laws. Whether Congress will act, however, remains unclear.
"The technology of speech has been so good, it is important to worry about privacy," said Professor of Electrical Engineering at Washington University, and a speaker of speech technology. Mari Ostendorf. "And I think companies may be more concerned about the US government."
One would hope that Amazon might at least reconsider its approach to privacy and voice assistants. Because right now, it seems the public is only beginning to grapple with the countless ways devices like the Echo can record our lives without permission or share our personal data with strangers. The latest controversies over Alexa only scratch the surface, since a world full of always-on microphones is a full-blown privacy nightmare.
The problem is that companies with data-driven business models have every incentive to gather as much information as possible about their users. Every time you use Alexa, Amazon gets a sharper picture of your interests and behaviors. When I asked for specifics on how Amazon uses this data, the company gave me a vague example.
"If a customer uses Alexa to make a purchase or interact with other Amazon services, such as Amazon Music," said an Amazon representative, "we can use the fact that the customer has acted the same way it would be if the customer did it using our website or any of our apps, for example, to provide product recommendations. ”
There's evidence that this sort of recommendation could become more sophisticated in the future. Amazon has patented technology that would interpret your emotions based on the tone and volume of your voice. According to the patent, a hypothetical Alexa-like system could tell whether you're happy or sad and serve up "highly targeted audio content, such as audio advertisements or promotions." You could argue that the only thing keeping Amazon from releasing an ad-supported Alexa is the chance that Echo owners would revolt. The government probably wouldn't stop it.
A frightening future
A future without much oversight could get very Philip K. Dick, very quickly. I recently spoke with Dr. Norman Sadeh, a professor of computer science at Carnegie Mellon, who painted a gloomy picture of what the future might look like without a better privacy regime.
"At the end of the day, all these speakers connect to one unit," Sadeh said. "So Amazon could use voice recognition to identify you, and so it could create very broad profiles of who you are, what you do, what your habits are, all the other attributes you don't always want to reveal to them."
He's suggesting that Amazon could build a business out of knowing who you are and what you like based on your voice alone. And unlike most dystopian visions of what facial recognition might enable, voice recognition could work without ever seeing you. It could work over phone lines. In a future where internet-connected microphones sit in an ever-growing number of rooms, such a system could always be listening. Several researchers I spoke to raised this dystopian idea and lamented its seeming inevitability.
A system like that is hypothetical for now, but if you think about it, all the pieces are in place. There are tens of millions of devices across the country, in homes and public spaces, equipped with always-on microphones. We've given them permission to listen to and record what we say at certain times. These artificially intelligent machines are also error-prone, and they will only improve by listening more, sometimes with humans correcting their behavior. Without government oversight, who knows how the system will evolve from here.
We wanted a brighter future than this, didn't we? Talking to computers seemed very cool back in the 1990s, and it was definitely an essential part of the Jetsons lifestyle. But for now, the inescapable truth seems to be that Alexa and other voice assistants are bound to spy on us, whether we like it or not. In a sense, the technology was designed to work that way, and without oversight, it's likely to get worse.
Maybe it's foolish to think that Amazon and the other companies building voice assistants care about privacy. Maybe they really are working to fix the problems caused by glitchy technology, and maybe they're working to ease the unease people feel when they learn that devices like the Echo record them, sometimes without their knowledge. Heck, maybe Congress is working on laws that would hold these companies accountable.
None of this means the future of voice computing has to be so dystopian. Talking to our gadgets could change the way we interact with technology in profound ways, if everyone is on board with how it works. That's not the case right now. And ironically, the fewer of us who help develop technology like Alexa, the worse Alexa will be.