[A video of this talk is available at http://bit.ly/NISTIoTTalk]

Thank you. Good afternoon. It’s a pleasure to be able to address all of you today. My name’s Gilad Rosner. I’m an information policy and privacy researcher, and I founded the non-profit Internet of Things Privacy Forum, whose mission is to produce guidance, analysis, and best practices to help industry and government reduce privacy risk and innovate responsibly in the domain of connected devices.

For the last year my research partner, Erin Kenneally, a cybersecurity attorney at the Department of Homeland Security, and I have been conducting research on IoT privacy risks so as to better understand what’s at stake and then suggest interventions, investments, and future research. The work was funded by generous grants from the Hewlett Foundation and will be published later in the year; I’d like to share some of the findings with you, preliminarily. I was asked to speak about the privacy threat landscape, so I’ll focus on the risks and harms that Erin and I have been writing about. I have a few wordy slides, but that’s because I’m a big believer in what one scholar long ago called “thick description”. The research is based on two workshops, one on each coast, plus one-on-one interviews and an extensive literature review – a total of 40 experts. Our respondents came from industry, government, academia, advocacy groups, and standards development organizations. The data was then coded and analyzed using thematic content analysis techniques.

Firstly, I’d like to define the Internet of Things for the purposes of this talk. Many definitions have been proposed over the years, and I’m not trying to add a new one, just to synthesize a few that I’ve seen. For me, the IoT means devices that are not full-fledged computers. They are purpose-built items versus generic computing platforms, like your laptop or your tablet or your phone. They have sensors like cameras and microphones, infrared detectors, accelerometers, and much more specialized sensors. They can communicate over networks. They bridge the physical world with the electronic one. And in the consumer space they tend towards being unobtrusive. Ultimately, I see the term Internet of Things going away – remember “mobile computing”? In the same way, more and more devices will come to have these characteristics such that it will be unremarkable.

Tracking and analysis

Now I’m going to launch into the privacy risks, threats, and harms that emerged from our workshops, interviews, and literature. The first one is that the IoT allows the tracking and analysis that happens online to occur in the physical world. Modern Internet technologies allow people’s online activities to be tracked at the most granular level – the webpages you visit, which ones you linger on, where your mouse is on the screen, what you buy, where your computer is, your age range, your gender, what you typed and then deleted, and the list goes on. Inferences from collected data, plus the merging of purchased third-party data, yield deep, deep profiles about us, augmented in real time. Such tracking has been endemic to the online world of desktops, laptops, and mobile phones, but it’s making the leap to the offline world. This trend has been happening gradually, as we can see with the use of in-store retail tracking technologies, CCTV, and Wi-Fi detection. But the increasing incorporation of cameras, microphones, and other sensors into consumer goods will fully enable the mature tracking practices that we see online to make the leap into our physical, social world. As you might suspect, what will propel this forward is marketing.

The government affairs manager of ESOMAR, a worldwide trade association for the market research industry, told one researcher that

…the future of advertising lies in passive and always-on data collection, and that the Holy Grail is real-time information about customer needs and emotions. …Today this is dependent on advances in mobile and wearable technology, and correlation of geo-location with contextual and behavioral information. The value of passive data collection is instant access to transactions and conversations… Seen this way, biosensors and biometric data promise additional real-time understanding as people move throughout everyday life, the city and retail spaces.

Of course it’s not only marketing that will benefit from a much more sensor-rich environment. The manufacturers of products will get much more useful information about how their products are used, so that they can continually improve them. In many cases all kinds of tracking will be predicated on users giving consent, but it remains an open question whether notice and consent, a central basis of the American privacy regime, is meaningful. Whether you feel that the pervasive, panoptic monitoring of your online activities is problematic, and whether the extension of that monitoring into public, private, and intimate spaces is cause for worry, depends on your politics and how vulnerable you feel.

Diminishment of private spaces

A corollary to increasing offline tracking is the diminishment of private spaces. I’m using the term private spaces here in the sense of having places to retreat to where you can control who is present, who is listening, who is watching – places of solitude, seclusion, and reserve. The most obvious example of a private space is the home, and the most obvious examples of technology with the potential to diminish its private character are the Amazon Echo, Google Home, and other emerging virtual assistants. These devices are always on; however, work by the Future of Privacy Forum shows us that there’s important nuance in the types of always-on technology. Most of these virtual assistants need a gate word or wake word before they start transmitting anything you say to them. But the virtual assistants are growing eyes, as we can see with this latest addition to the Echo family, which was just announced last month. We can also see from Amazon’s marketing material that they clearly intend for some people to place this camera on the nightstand right next to the bed. If the home is a private space, surely the bedroom is more so. One privacy advocate I spoke to related the concern over private spaces to the democratic values of freedom from surveillance and unwanted interference, and freedom of thought.

I think that you can put them in context of the IoT and ask, is for example us having devices in our homes that are maybe surreptitiously recording us or collecting and sharing information that we’re unaware of, or even just the fact that they’re in our homes recording, is that a violation of the idea that we are free from surveillance? From government interference? That in our homes, it is a private space? I think once that fundamentally that idea is challenged, then you have questions of is it possible to ever find a private space, and how necessary is private space to freedom of thought? I would answer that it’s very necessary. It’s at the core.
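As an aside, the gate-word mechanism mentioned a moment ago can be sketched in a few lines of Python. This is a purely illustrative model, not any vendor’s actual implementation: the idea is that audio is buffered and examined locally, and only speech heard after the wake word becomes eligible for transmission off the device. All names here are hypothetical.

```python
# Illustrative sketch of wake-word gating: nothing leaves the device
# until the gate word is detected in the locally buffered audio.
WAKE_WORD = "alexa"  # hypothetical gate word

def gate_audio(frames, wake_word=WAKE_WORD):
    """Return only the utterances spoken after the wake word.

    `frames` stands in for locally buffered speech; everything before
    the wake word is discarded on-device rather than transmitted.
    """
    transmitted = []
    awake = False
    for frame in frames:
        if awake:
            transmitted.append(frame)   # would be streamed to the cloud
        elif wake_word in frame.lower():
            awake = True                # gate word heard: start streaming

    return transmitted

# Everything before "Alexa" stays on the device.
print(gate_audio(["private conversation", "Alexa", "what's the weather"]))
```

The design point is that this gate is a software choice: a manufacturer could just as easily set `awake = True` from the start, which is why defaults and design choices matter so much here.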

As I mentioned, there are multiple models for triggering data collection in the home, and so one can imagine a spectrum from being fully aware that it’s occurring to being completely unaware. The gate word is a useful mechanism for control and awareness, but it is a design choice by product makers, and there are others. Also, the introduction of cameras into the home is rather new, so the defaults and design choices are in their infancy. The point is that sensor-rich devices in the home, in cars, and in public – and the question of how much privacy you have in public is far from settled – introduce third-party observation into places that people have historically seen, or desired, to be private. If you accept that private spaces are essential to the human condition, and you may not, then we must cast a critical eye on the proliferation of sensors in spaces we retreat to and believe to be a secluded domain. A key danger is what the privacy community calls “chilling effects.” Florian Schaub of the University of Michigan explained it this way:

You think you’re being observed, so you behave differently. There are also habituation aspects – you might behave differently at first, and then you kind of become attuned to the technology being there, so you become maybe a bit more laissez-faire in terms of how you behave. You forget that these devices are active, but that doesn’t mean that you’re not leaving digital traces of mundane behavior anymore, and this data can reveal information about your preferences, what you like, what you don’t like, or your health, your family situation, your financial situation, all kinds of different things that people prefer to keep private.

One of the challenges facing the privacy community is detecting and measuring abstract harms like chilling effects, unwanted revelation, or stigmatization. There’s a small body of research that attempts to do this, but the fact is that such harms are difficult to articulate and evidence. This in turn makes them difficult to protect against via legislation and adjudication, especially in the United States, where privacy protection for society tends to be reduced to preventing concrete economic harms and ensuring that people consented. However, if you’re persuaded by the argument that people need private spaces in which to thrive, and that the connected devices appearing in the marketplace have the potential to diminish those spaces, then we must bring policy, technological, and design strategies to bear upon the issue.

Bodily and emotional privacy

The most private of spaces is the body. In our liberal democratic tradition freedom of thought is sacrosanct, and we have a great many laws that prevent or heavily regulate intrusion upon the body, such as forced blood samples or unwanted medical procedures. Medical information about the body is one of the privileged information types, and in the United States its disclosure is governed by HIPAA and other statutes. Privacy scholar Gary T. Marx writes that

Informational privacy encompasses physical privacy. The latter can refer to insulation resulting from natural conditions such as walls, darkness, distance, skin, clothes and facial expression. These can block or limit outputs and inputs. Bodily privacy is one form of this. This is seen in crossing the borders of the body to implant something such as a chip or birth control device or to take something from it such as tissue, fluid or a bullet.

But bodily privacy is not just a matter of the corporeal. Even as far back as 1890, when the oft-cited Warren and Brandeis conceived of the right to privacy, they included thoughts, sentiments, and emotions. Cameras on your nightstand, in your television, in retail stores, in the eyes of your children’s toys, and in the front of your phone, coupled with the biometric sensors of fitness devices and other wearables, are slowly ushering in emotion detection technology. Andy McStay, one of the world’s foremost researchers on the use and privacy issues of emotion detection, stated:

We’re seeing a net-rise of interest in sentiment and emotion capture. The industries are really wide ranging: from automobiles, insurance, health, recruitment, media, basically anywhere where it’s useful to understand emotions… In terms of kinds of industries that are really taking the lead on this, advertising and marketing is one of the obvious ones. Increasingly we’re seeing retail move into that area as well … all sorts of different sectors, ranging literally from sex toys all the way up to national security agencies, and all the marketing and organizational stuff in-between.

One of professor McStay’s research respondents, the chief insight officer at an advertising firm, was dubious of traditional marketing surveys and focus groups and sought better methods to understand customers:

including biometric data about emotions to understand ‘brand levers’, or how to get people to act, click, buy, investigate, feel, or believe. … [T]his objective involves ‘understanding people through all of their devices and interaction points, i.e. wearable devices, mobile and IoT’. In general, the aim is to ‘collect it all’ so to build more meaningful interaction with brands.

Professor McStay noted this executive’s “interest in emotional transparency, or the unfolding of the body to reveal reactions, indications of emotions, feelings about brands, tracing of customer journeys and information that will help create ‘meaningful brands’.”

Given the advertising industry’s growing hunger for the fullest range of human data, the incorporation of emotion detection across a range of industries, the value of emotion data in creating richer gaming and entertainment experiences, and the broadening capabilities of sensors and their penetration deeper into private and public spaces, it seems likely that we will face questions about our emotional privacy.

Professor McStay’s existing and forthcoming work on the subject is far deeper and more nuanced than I can hope to describe in this short presentation, but when interviewing him I asked, “What’s at stake with the collection of intimate information and the datafication of everyday emotional life?” He answered:

In a sense, perhaps it depends on your politics. You could say not very much. But, as a minimum, what’s at stake is the capacity to understand people in richer ways than we have seen before. So certainly in the marketing and advertising context what’s at stake is the capacity to get people to buy more stuff. So, when you talk about what underpins getting people to buy more stuff, essentially what you’re talking about is the controlling and the manipulation of human behavior. In terms of what’s at stake, as a minimum, nobody could disagree that there’s a better than average chance of raising the capacity to manipulate human behavior, typically in a consumer setting.

Children’s Privacy

The introduction of more sensing devices into the human environment has the potential to sweep up increasing amounts of children’s data, in both intentionally consented and unintentional ways. Consider any camera in your home, in retail spaces, or in public. Consider the emergence of Internet-connected toys. You may have heard about a wildly insecure doll named My Friend Cayla, whose security was so bad that a complaint was filed with the FTC and foreign regulators, and Germany banned the doll outright. Not long after, we saw a data breach of Internet-connected toys with microphones that allowed parents and children to exchange voice messages. 2.2 million of those voice exchanges were exposed when they were found sitting unprotected on a server.

Now, those two examples are security issues that lead to potential or actual privacy violations involving children’s data. Given the very weak security posture of emerging devices, it’s inevitable that we’ll see more of these. But there are also more abstract questions of children’s privacy to consider. How does the increase in the monitoring of human activity relate to children? What will awareness of always-on devices do to children’s behavior? If private spaces are under threat, what does that mean for child development? Adults can ostensibly consent to some of the IoT being introduced, but A) is that consent sufficiently meaningful with regard to the collection of children’s data, and B) what of the non-consensual capture of data that occurs in public and retail environments? One of the first toys to capture the public’s attention was Hello Barbie. With this upgrade, Barbie can finally converse with children. Instead of being always on, you must press a button to make Barbie start listening. That’s a good privacy design choice, as is Mattel’s pledge to use strong encryption. Mattel offers parents the ability to review recordings of everything their children say to Barbie, which is required by the US children’s privacy statute, COPPA. Mattel also makes it very easy for parents to share these adorable utterances with others via social media. Children form personal relationships with their toys, and one legal scholar I interviewed wondered how those relationships are impacted by the introduction of microphones, AI, and network connections. She said:

Barbie is an iconic toy that people remember… She is really intended to be a peer… When you… set her up, one of the first things that she says is that you’re one of her best friends and that she feels like she can tell you anything. She, through her design, creates this really intimate and important role in a child’s life, both through those kind of big statements, “You’re my best friend, I can tell you anything,” but also we know that Barbies are kind of powerful. If someone says, “I can tell you anything,” you don’t expect that [the] intimacy comes from a statement like that [to] also usually mean, “I’m recording everything you’re saying and posting it on an Internet platform that can also be viewed by your parents who can share it onto their social media sites.”

Many US lawyers and researchers believe that COPPA is a good law by which to approach children’s privacy issues: it requires security and verifiable parental consent, and gives parents the right to review children’s data and to delete it. The broader questions of the social and psychological impact on children of the introduction of networked toys and a more surveilled environment, however, require much, much more study.

Before wrapping up this abbreviated tour of the privacy risks that my partner and I are researching, I want to introduce a voice of dissent. I have tried to touch upon privacy threats that the law may be ill suited to address. Threats to abstract ideas such as the preservation of liberty, of intimacy, of democracy, of a healthy society are difficult to articulate and evidence. And so somebody may reasonably ask: is this ‘threat inflation’?

Threat inflation: “the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large”

Adam Thierer of George Mason University, both a participant in one of our workshops and the author of a number of papers on IoT subjects, maintains a consistent, deep skepticism of Technopanics: “a moral panic centered on societal fears about a particular contemporary technology… instead of merely the content flowing over that technology or medium.”

A key word in Thierer’s definition of threat inflation is “artificially” and it must be seen in light of another of his statements:

“While cyberspace has its fair share of troubles and trouble-makers, there’s no evidence that the Internet is leading to greater problems for society than previous technologies did.”

It is this arguable point upon which Thierer’s valuable skepticism, and the debate to which it contributes, balances. Thierer and others argue for a variety of non-legislative ways to manage the social changes brought about by the evolutionary advancements of the IoT. In the main, he and his colleagues are concerned with ensuring that innovation is not derailed by heavy-handed, precautionary regulation. I, for one, am not worried about the health of capitalist enterprise; it seems to be doing just fine. But concerns over the preservation of private spaces, over a panoptic world in which children are increasingly observed by third parties, where adults chill their behavior and their speech because of the spectre of monitoring, and where data collection leads to greater degrees of manipulation – these are macro-social possibilities that require all hands on deck: law, transparency, norm entrepreneurship, public discourse, social-psychological research, and a willingness to let the omnipresent capitalist drive to sell products sometimes take a back seat to the greater social good.

Thank you, I’d be happy to take your questions.