Contributed by Future of Privacy Forum
Is your Smart TV listening to your conversations? Are your children’s toys spying on your family? Questions like these are increasingly raised as the next generation of internet-connected devices enters the market. Such devices, often dubbed “always on,” include mobile phones, televisions, cars, toys and home personal assistants, many of which are powered and enhanced by speech recognition technology.
The spread of voice integration into everyday appliances enables companies to collect, store, analyze and share growing amounts of personal data. But what kinds of data are these devices actually collecting, when are they collecting it, and what are they doing with it?
As it turns out, the answers are not so straightforward. Voice control is an increasingly popular interface for engaging with our devices: consider the Amazon Echo, Mattel’s oft-demonized “Hello Barbie,” or Apple’s familiar personal assistant Siri, which can be activated by spoken command (“Hey, Siri”). Dubbing all of these “always on” is inaccurate. In reality, many of these devices use the microphone as an environmental sensor, listening for a spoken “wake phrase” that triggers the device to activate and begin sending data to the cloud. Other devices are truly always on: home security cameras, baby monitors, wearable microphones, or speech-to-text translators for people with hearing impairments. These different categories of devices raise different questions around data privacy.
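The wake-phrase pattern described above can be sketched in a few lines of Python. This is purely an illustration of the control flow, not any real device’s API: the wake phrase, the string-based audio “chunks,” and every function name here are assumptions made for the example. The key privacy-relevant property is that pre-wake audio only ever touches a small local buffer, while post-wake audio is the only data that would leave the device.

```python
# Hypothetical sketch of a speech-activated device's listening loop.
# Audio arrives as chunks; before the wake phrase is heard, chunks go
# into a short rolling buffer that stays on-device and is overwritten.
# Only after the wake phrase is detected are chunks "uploaded."

from collections import deque

WAKE_PHRASE = "hey device"    # assumed wake phrase for this example
LOCAL_BUFFER_CHUNKS = 3       # rolling on-device buffer, never transmitted

def make_listener():
    """Return a chunk processor that tracks which chunks would be uploaded."""
    local_buffer = deque(maxlen=LOCAL_BUFFER_CHUNKS)
    state = {"awake": False}
    uploaded = []

    def process_chunk(chunk: str):
        if state["awake"]:
            uploaded.append(chunk)        # only post-wake audio goes out
            if chunk == "<silence>":      # command over: go back to sleep
                state["awake"] = False
        else:
            local_buffer.append(chunk)    # pre-wake audio stays local
            if WAKE_PHRASE in chunk:
                state["awake"] = True
        return uploaded

    return process_chunk

# Usage: background chatter is discarded locally; only the spoken
# command after the wake phrase is "sent to the cloud."
listen = make_listener()
for c in ["chatter", "more chatter", "hey device",
          "turn off lights", "<silence>", "chatter"]:
    sent = listen(c)
print(sent)  # → ['turn off lights', '<silence>']
```

A manually activated device would replace the wake-phrase check with a physical button press, and a truly always-on device would skip the gating entirely, which is exactly why the three categories carry such different privacy stakes.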
In a new white paper, Always On: Privacy Implications of Microphone-Enabled Devices, the Future of Privacy Forum discusses speech recognition as an area where consumer expectations are rapidly shifting and where the collection and use of voice data run up against unique social and legal barriers.
The paper highlights three categories of microphone-enabled devices: manually activated, speech activated, and always on. Manually activated devices are the most familiar: the user must push a button or flip a switch to turn on the microphone. Many Smart TVs work this way, recording no audio unless the user pushes and holds a button on the remote. Speech activated devices have drawn more attention for their use of the microphone as an environmental sensor that listens for “wake phrases.” Always on devices, such as home security cameras and baby monitors, record or transmit continuously.
Ultimately, privacy-conscious companies would be wise to keep consumer expectations in mind when designing a device’s default settings. Of course, those expectations will evolve; one day we might find it strange that we ever couldn’t talk to our cars or tell the lights downstairs to turn off. Most likely, expectations will shift more gradually in some areas than others. As devices that have traditionally lacked microphones, like televisions and toys, begin to accept voice commands, they should feature enhanced notice around the use of voice data.
As we enter this new world of voice engagement, product design should confront questions like: Should the device ship with speech recognition pre-enabled? Should it display obvious visual cues when transmitting data? Should it process audio locally or externally, in the cloud? Local processing, for instance, might better protect users from unintended interception or breach, while cloud processing allows for powerful, inexpensive computing that many buyers may find compelling.
And finally, not all of these issues will be about appealing to privacy-conscious consumers. Voice data has unique social and legal significance and the increasing collection of voice data may implicate state and federal laws and regulations. Although speech recognition does not require analyzing audio to detect the identity of individuals, there is a growing body of laws and regulations around biometric identification. Sector-specific laws and regulations will also apply on the basis of the content of the voice communications, meaning that Smart TVs and other internet-connected devices may have special implications in hospitals, assisted living facilities, or public employer workplaces.
Similarly, consumers buying home security cameras or wearable microphones, like a “life logging” hobbyist recording daily life from a camera worn around the neck, would be well served to know their state’s laws on consent to wiretapping. Is it possible, for example, that your neighbor could one day sue you for wiretapping because you forgot to mention that your living-room security camera was on? Alongside the shifting landscape of privacy norms, these are the kinds of legal questions that may shape the conversation around microphone-enabled devices in the years to come.