why privacy issues matter… to me

Bloody Gmail (and the scarier A9) has me back to thinking about my love/hate relationship with privacy issues and my deep need to unpack the term and insert the issues of vulnerability into the discussion. Privacy is a loaded term. I’ve heard way too many people talk past one another thinking that they’re both talking about privacy issues. It’s a slippery discussion and i leave it to Dourish to fully flesh out why. But i do think that there are important issues that must be teased out in order to have a conversation about privacy, vulnerability or any of our data woes.

Key privacy-related questions

Given XYZ situation, i ask myself two key privacy-related questions:
1) Does XYZ make any person or group of persons feel icky? Who? Why?
2) Are there any rational scenarios in which XYZ can be abused by its creators, potential hackers, or ill-advised governments/coups?

[Note: these are my questions for myself and thus i get to define “rational” – a notably arbitrary definition that falls under the “i know it when i see it” category. The key anecdote that i keep in my head is that at the turn of the century, Holland (and other countries) collected religion as part of their census data. Under the Nazi occupation, that data was horribly, horribly abused. This may not have appeared to be a rational situation in the 1920s, but it is in my scope of the possible now.]

Reasons for the ickiness factor

First, i address the ickiness factor. I immediately strike the purely paranoid from my list of ickiness contenders that must be addressed, but i do not exclude the marginalized. Often, the ‘why’ answer for the marginalized has to do with heightened walls around what is normative and what is not. Given that i’m politically all in favor of challenging normative values, i recognize their plight and pay special attention to it, albeit reflexively so.

Of the groups who fall into the ickiness reaction zone, i’ve identified a few reasons why there’s usually a reaction to XYZ:

  • XYZ makes someone feel at risk of theft, notably identity theft. This reaction usually comes from people who have experienced identity theft, a growing group.
  • XYZ asserts values or normative boundaries that feel uncomfortable. Example: you tend to be hyper-aware of demographic requests when your race/religion/sexuality/gender are not listed; thus, you feel invaded in ways that you wouldn’t feel if you fit the mold perfectly.
  • XYZ opens the possibility of having material available to an undesired audience. This is a control issue. Most frequently, the undesired audience consists of known individuals with whom the individual has a relationship but that relationship does not include the sharing of material required by XYZ.
  • XYZ makes information available to authorities with power over the individual. This is not simply a fear of the paranoids. This is a rational concern for many people who reside in countries whose governments have abused their power, and for individuals who work in companies whose bosses have regulated employees’ behavior.

Vulnerability embedded in ickiness

This ickiness feeling in relation to ‘privacy’ is what i call vulnerability. Something that XYZ has done has made people feel vulnerable to potentially abusive strangers, cultures and cultural norms, known others, and institutions with power. I am particularly interested in rational constructions of vulnerability, especially amongst those who have felt the fire. We already live in a culture of fear – i’m not interested in magnifying it.

Outside of those who live in a fear-for-fear’s-sake mentality, there’s a pretty consistent set of patterns regarding vulnerability:
– New situation raises people’s vulnerability concerns; walls go up
– Situation appears to cause no harm; walls start lowering
– Incentives are used to encourage participation; walls lower faster
– Vulnerability comes to forefront with resultant situation; walls spike

The second step in that pattern is where the concerns slumber and why civil rights activists are essential. People’s innate vulnerability concerns definitely subside over time. Incentives definitely work, particularly when the consequences are not high.

While you may not give out any demographic information just because, you will probably give it for the chance of winning a Porsche. For most people, this isn’t an issue of high vulnerability and the consequences are low, so they don’t need a strong incentive. Take it to the next level. What will it cost to have a bot track your web surfing? Many people will do it… but the necessary incentive is usually more than dreadful odds at winning a Porsche. Take it to the next level. What will it take for you to be willing to turn your personal web surfing data over to your boss, lover or parents? Surfed any porn lately? The incentive (or, more likely, extreme guilt/requirement) must be high because the consequences of having to face your actions are much higher, particularly if you weren’t prepared to turn over your data to those with power over you. Note that for many people, fear of turning over this information to a known undesired audience is far more threatening than having to turn it over to institutions; this is not the case in certain countries, where vulnerability to dreadful governments runs much deeper than vulnerability to known individuals. A lot of it has to do with power and the ability to enforce consequences for undesired behavior.

Why we need civil rights activists, legal changes and architects

Let me dig out of this hole and return to the civil rights activists. As people’s concerns lower, they’re willing to tolerate much more invasive access to data because they only see the incentives and not the consequences. This is rational. We tend to operate on local, not meta, levels in everyday life. The role of the civil rights activist is to go meta and deal with my second question above – can any rational abuse of the data be expected? Their role is to look at the larger picture and protect people from making localized decisions that might harm it.

There are usually two approaches that said activists take:
1) Try to educate the masses.
2) Try to stop XYZ from happening through any means possible.

Education is nice and it works locally through social networks, but i genuinely do not believe that privacy education (which usually works by instilling fears) will overcome the incentives. Furthermore, the incentives will only increase, and living in a culture of fear sucks; even Americans have started to ignore the bloody terrorist warning color markers. Of course, a moment of super-fear and then its slow decline to disregard always puts people on greater guard than before. But i wouldn’t want the education camp to educate by creating situations that instigated super-fear. Leave that to governments.

I should clarify… i’m not entirely opposed to education; i just don’t believe that it’s the solution. Let’s keep it in mind as the social norms part of Lessig’s four-part regulation scheme (law, social norms, the market, and architecture) – valuable as a contributor, but not effective as the sole approach.

Then there are the systemic changes. Going with Lessig, there are three types of systemic change that can be made – the market, the law and the architecture. Personally, i think that the market is the reason things are moving in this direction and thus a bit impossible to swing, so i believe that more effective approaches can be made on the law and architecture side. Architecture is a bit more obvious, except that it is inherently tied into the market (or government). That kinda leaves law. And law continues to become more fubared. One excuse is that it is in bed with the market. Another excuse is that it’s fending off the paranoids.

The reality, i believe, ties into how law negotiates social norms. I wish i remembered the details, but i remember learning once that social practices are often enough to affirm laws. In other words, if a law and the social practices are primarily in cahoots, it is unlikely that the law will change. It is only when there are significant differences that change is likely to occur. Put another way, if people are tolerant of invasive practices, why regulate against them?

This is where i start to believe in the education branch of the civil rights movement. The key shouldn’t be to make people see the world differently, but to stall them long enough that they don’t assimilate to problematic breaches of privacy, so that laws can be changed. Of course, i don’t know how to do this and thus, i suspect that it will take extreme conditions of masses feeling vulnerable to upset the law structure. (It is for this reason that Europe is much slower about giving up privacy… they remember WWII.)

The opportunity for designers and why i’m involved

Bring this back to me. From my perspective, a lot of the architectural decisions that induce vulnerability emerge from naivety, not poor intention. I genuinely believe that many creators really meant to do the right thing. The problem is that their construction of how to do the right thing is about privacy, not vulnerability. They only imagine how to address the data, not how to address people’s relationship with the data. The approaches are fundamentally about creating control or transparency. I’ve never found anyone who really thought through the implications of having all of the data in the first place. And most designers don’t realize the cultural norms that they insert into a system.

Also, control is really really hard when people are trying to manage an external representation of their information. These systems insert new architectures: persistence, searchability, lurkers, etc. Control doesn’t work when people don’t know how to operate the controls.

As for transparency, i am horrified by most people’s reading of Brin. Universal transparency will only heighten vulnerability, particularly at the local level. It is not a solution for most of the situations that i’m concerned with.

So, as i see it, i have two roles as an activist on this issue:
– Educate people to conceptualize vulnerability and go through the exercise of thinking about who a design might affect, how, and why. Encourage them to minimize vulnerability in their design, not simply protect privacy.
– Work directly in domains that are all about vulnerability management and dive deep into the design issues with a conscientious perspective trying to maximize the protections afforded to users.

Dear me that was a rant…

31 thoughts on “why privacy issues matter… to me”

  1. Napsterization

    A9 Search Goes Live… With The Attendent Privacy Issues

    From Amazon. What they say it will do: Search Inside the Book: In addition to web search results we present book results from Amazon.com that include Search Inside the Book. When you see an excerpt on any of the book…

  2. Jay

    Not a rant, darling, a treatise. “Ick factor” calculation is exactly what has been missing from privacy discourse.

    I need to tell you, though, that only some Europeans seem to remember the privacy lessons of WWII. On a trip to Rome in 2001, I was stunned that I had to buy a membership card from ARCIGAY to go to the gay bars. Apparently they had had trouble with bashers, so they started requiring membership to better track clubgoers. And all of this data was stored in a handy database! I was floored. To sum up, fear of anti-gay violence had led gays to create a handy database of… gays.

    Since I didn’t live there, I was relatively unconcerned, but on principle I had to ask one club owner (an American) if anyone ever brought up Mussolini et al. “Nah, that’s ancient history. Never happen again.” And thus the market tempts social norms in a decidedly dicey direction.

  3. Black Phoebe :: Ms. Jen

    Privacy, I believe, and other bits

    Here are the tidbits and links for a Thursday afternoon in April. Enjoy! 1) My birthday party show will be on Sun. April 25th at Alex’s. Come on down. 2) Danah has a good rant/post on privacy. 3) TNH’s Making…

  4. Many-to-Many

    The ickiness factor

    In the process of unpacking my frustration with privacy issues (in the context of Gmail and A9), i started addressing a key concept that i believe applies to all social software: the ickiness factor. Ickiness is the guttural reaction…

  5. Jamie

    I’m one of those computer science types that keep inventing dangerous and flawed technologies. I consider the topic of my PhD particularly dangerous and disruptive (and possibly flawed) – essentially it’s a kind of ambient information-sharing network between familiar (and potentially, complete) strangers. So I’ve spent most of my time reading and trying to understand the social issues, because I’m confident I can solve the techie stuff; the social stuff is another matter.

    Your post struck a chord with me because I’ve been considering privacy issues, primarily by ensuring that the user explicitly publishes particular information rather than broadcasting information without their knowledge. It kinda turns the thing into a form of personal publishing, like blogging. But that made me think: although I try to avoid revealing personal details about myself, my family or friends on my blog, does it still leave me vulnerable? A search of any blog will almost definitely reveal political views and possibly even religious and sexual orientation, often implied and without being explicitly recorded in a row of a database. Certainly, there is more personal publishing happening today than in the past, and it is more widely distributed, better indexed and searchable than ever before (I’m thinking specifically in terms of “Letters to the Editor” in national newspapers etc.). Does this make us feel more vulnerable? Does it make us more vulnerable? [noting your point, that the perception of vulnerability and the actual reality do not necessarily coincide]

    I guess that there may also be a difference between the feeling of vulnerability generated by a third party (such as Google peeking at your mail or officials collecting census information) and that caused by your own actions (writing to newspapers, writing a blog, talking loudly in a pub). Perhaps the former over-generates a feeling of vulnerability and the latter under-emphasises it?

    I’ve no idea, I’m just talking out loud, and I think that’s enough of my waffling. One question: do you have any pointers for the socially-concerned computer scientist to understand these issues of privacy/vulnerability?

  6. Jay Fienberg

    Good and useful post. With your criteria:

    “XYZ makes information available to authorities with power over the individual”

    I might suggest this be extended, or another criterion be added, along the lines of:

    “XYZ makes information available that gives others a means to have power over the individual”

    This is maybe also just a variant on having materials available to undesired audiences. But, it might be useful to highlight other contexts (e.g., corporate marketing) where one’s information is being taken out of the context for which it was initially given and being used to implicitly or explicitly manipulate one or one’s family/friends.

  7. Sean

    Well put danah!

    But RE: “Universal transparency will only heighten vulnerability, particularly that on a local level.”

    Completely universal transparency is a scary concept, but have you run across anyone who really designs for this or advocates it?

    As for non-universal transparency — there are definitely situations in which transparency -reduces- vulnerability, surely you agree? (i.e., when the Rodney King incident is filmed and viewed worldwide it can’t be easily swept under the table. The Sunshine Laws, the press as watchdog, etc.) Therefore “will -only- heighten vulnerability” overshoots the mark perhaps?

    Aside from that bit I think you’re right on the mark and the idea of framing these things as vulnerability issues rather than privacy issues is a very useful one.

    But there’s another important thing to take into account here. It’s true that “control doesn’t work when people don’t know how to operate the controls,” and control doesn’t work when people don’t know what they can and should control. Today most (all?) wi-fi enabled Palm Pilots come out of the box with absolutely no way to securely check e-mail or surf the Web. Unless you know to seek out, purchase and install obscure software, you’ll never know that every password you send across the transom using these popular devices can easily be intercepted by any 12-year-old within range of the access point you’re using; all the kid needs is a $500 laptop, a $30 wi-fi card and free packet-sniffing software. Of course we’d love Palm to at least warn their customers that their data is completely insecure, or preferably ship secure browsers and mail readers on their devices, but don’t hold your breath. Activists should counteract your pattern #2 (or rather, kick off pattern #1: “new situation raises people’s vulnerability concerns; walls go up”) by seeding public wi-fi areas with applications and devices that prominently display all unencrypted passwords being sent over the transom, with a few images grabbed from people’s Web browsing trails thrown in for good measure. But as you say, it’s important not to build a culture of fear here; after raising a clear and discrete concern, you provide a clear and easy solution to that concern. At the same time and place that you educate people about their vulnerabilities, clearly present easy solutions (in this case, links to download secure browsing/e-mail software) that substantially reduce the vulnerabilities.
    Entities “with power over the individual” (the NSA, large corporations, and so on) are very familiar with these vulnerabilities. McDonald’s or anyone else can intercept passwords and can easily monitor, record, forge and censor unprotected wi-fi communications. Sure we can prevent such misdeeds through technical and legal means, but such means tend not to come into play until people know they need them. Such development isn’t profitable without demand.
    We need to raise public awareness that the problems exist, and pranksters can spread this sort of consciousness. Never resort to “super-fear” and hysteria; don’t raise the perception of risk beyond the actual risk. But the fact is there -are- things going on that become scary if people do not take the basic steps to deal with them.

  8. The Pre-Commerce Blog

    A9, and Counting

    Amazon’s A9 search engine got a lot of blogging attention last week (Following John Battelle’s break of the news). Here are some of my thoughts on it: First, it’s a very smart move by Amazon for a number of reasons. There aren’t many …

  9. scott

    dig it, sean.

    brin’s transparency is impossible. if it were possible — and realistic — i suspect it would be a good thing to work towards. but being impossible, the fact that people consider it realistic is scary.

    it’s really a beautiful concept. and it serves a very useful role as a sort of polar notion in the ideaspace of societal privacy — a conceptual touchstone. we can measure our progress against it, even use it as a model under careful circumstances, but to literally work to make it a universal reality is not only a waste of effort, it may very well be dangerous. adults don’t believe in santa claus, tho we may celebrate him with sincerity nonetheless.

  10. zephoria

    Scarily, complete universal transparency is something that i’ve heard from way too many designers who miss the in-between bits. And usually, those who are promoting it are more focused on making sure the public’s data is transparent than the data of those in power.

  11. Abstract Dynamics

    Gmail and the “Emergent” Power of the Internet Oligarchy

    Another one bites the dust. Say good bye to Tim O’Reilly as a thinker, and say hello to another defender…

  12. Life With Alacrity

    Four Kinds of Privacy

    I’ve been thinking about the nature of privacy a lot lately. (Full post includes discussion of my background related to the topic of privacy, and proposes four different kinds of privacy: defensive, human-rights, personal privacy, and contextual privacy.)

  14. Many-to-Many

    Wikis Anonymous

    Brian Lamb has a great article on wikis in academia in EDUCAUSE Review. I didn’t interview for the piece (would have shared how academic communities in Stanford [our very first customer], Berkeley, USC and others are using Socialtext with our…

  17. RocknGo

    Wikis in Academia

    Via: Many-to-Many: Brian Lamb has a great article on wikis in academia in EDUCAUSE Review. I didn’t interview for the piece (would have shared how academic communities in Stanford [our very first customer], Berkeley, USC and others are using Soci…

  18. RocknGo

    Wikis in Academia

    Ross wrote on his blog: Brian Lamb has a great article on wikis in academia in EDUCAUSE Review. I didn’t interview for the piece (would have shared how academic communities in Stanford [our very first customer], Berkeley, USC and others…

  21. Abject Learning

    Espaces Grand Ouverts — Translators Wanted

    I’ve been relieved by the response to my recent EDUCAUSE Review article on wikis. I was terrified at the prospect of SeriousWikiHeads reading the piece and ripping it to shreds, but so far the feedback has been gentle. Among others, Clay Shirky made a …
