Google+ is calling its platform “real-life sharing”. It wants the ways we interact on its platform to approximate our real-life behaviors. This would certainly seem to be a good idea, assuming that what people want is to carry on interacting online in ways that are continuous with their offline interactions. But a number of points need making here. First, the way we interact offline nowadays includes all sorts of computer mediations, such as texting. In other words, it is not as if the “real life” part is unambiguous. Whose real life, exactly, is Google+ trying to mimic? This is a great question, because it gives us an insight into the ideal Google+ user.
This then iterates back into real life. First of all, we may want to adopt the lifestyle of the ideal Google+ user. It is an ideal type that we may think we ought to try and approximate ourselves. Second, maybe we will start to conceive of our friendships in the way that we enact them on social network sites – that is, as a collection of discrete acts of information transfer. Google is certainly right that Facebook is off the mark in terms of the nuances of what we share and with whom (and speaking from personal experience, this is certainly the reason that I would not befriend my mother on Facebook, and find it hard to believe that anyone would (befriend their mother, that is, not mine)). But you can’t help thinking that “real life” is not just sitting in front of your computer screen and sharing your thoughts (and favorite links) about your favorite hobby, but actually going out and doing that hobby (I’m thinking of the Sparks part of the Google+ platform right now). More generally, the interesting question – to me at least – is whether people are starting to understand their own friendships in terms of sharing, and whether this would reflect an adoption of a category from the Internet. After all, we understand ourselves and our lives through social categories and constructs. My hunch is that “sharing” is becoming an increasingly important one.
The latest privacy furore surrounding Facebook involves its new facial recognition feature. According to Facebook’s own blog entry on the feature, when you upload photos of people who are already tagged in other photos, Facebook will suggest their names, making it quicker for you to tag your photos.
Now I’m just as keen on privacy as the next person, but some of the anti-Facebook responses appear to me to be slightly overblown, especially those that think that this facial recognition gives governments greater access to information about their citizens. First of all, if governments can access our Facebook photos, they don’t need Facebook’s facial recognition technology to identify us. I’m sure they have their own. So the problem would lie with the photos being online, not with their being tagged. Secondly, just because someone has tagged a photo as being a photo of me doesn’t make it a photo of me. It may well be a photo of me, but the person tagging may have made a mistake, or may have tagged someone else as me in order to get my attention. Thirdly, to the best of my knowledge, I cannot prevent myself from being tagged by someone else (but I can prevent other people from checking me in with them to places). I can remove the tag if I wish, but it may be too late. And disabling the facial recognition feature doesn’t mean I can’t be tagged anyway, which suggests that the problem, if there is one, is with tagging itself, and not with the new feature.
Most interesting to me, though, as a sociologist of technology, is the fact that Facebook is offering a technological means for doing something that we can (and do) do anyway: Facebook says that 100m tags are added to photos every day. Actor-network theory analyzes how networks of humans and non-humans are configured so as to successfully perform so-called “programs of action”. It looks at those tasks that are undertaken by humans and those undertaken by non-humans, without privileging one type of “actant” over another. Often, non-humans (computers, say) can carry out tasks much quicker than humans: a computer will find me someone’s phone number in the phone book much quicker than I can. Sometimes we are happy for non-humans to be involved in programs of action (such as finding someone’s number in the phone book) and sometimes we are less happy (for instance, cameras that photograph us speeding on the motorway).
I think the way to think about this new feature on Facebook is in terms of actor-network theory, and your view on the feature will be determined by whether you think that having a computer do something you could do yourself (and have been doing yourself) makes the action qualitatively different. Is having a machine help you identify your friends’ faces qualitatively different from you identifying them yourself? Does the fact that the machine can do it far quicker make the very task itself different? If it is OK to tag your friends, does having a computer help you tag them faster make it wrong? If so, how come?
And finally a quick “I’m not naive” disclosure. Of course, the reason Facebook wants to “help” us tag our friends is because Facebook makes money from the information it collects. A tag on a photo is information. I’m not so naive to think that Facebook is taking pity on those who have to spend hours tagging their friends in all their photos. There is real economic value in those tags, and so we should always be suspicious of everything Facebook does. Also, I have no doubt that there are serious privacy threats latent in facial recognition software. The state already knows what I look like (there are photos in my passport and on my driving license). If it wants to, it can film me at a political gathering, run facial recognition software, and thereby know I was at that gathering. That, to me, is much more worrying than Facebook’s offering.
Below is a talk I gave yesterday at the annual conference of the Israeli Law and Society Association. The “Benny and Ayelet” are two academic lawyers who have written a paper arguing for children’s right to privacy vis-a-vis their parents. Parental surveillance of children includes all sorts of filtering and monitoring programs that parents might install on their computers so they know what their kids are getting up to online, for instance. Enjoy!
My aim for today is to try and put the discussion of children’s right to privacy in a broader social context. The first thing that I should note is that my perspective is that of a sociologist, and from this perspective I am not going to take a stance on whether children should or should not have a right to privacy, though I do have something to say about the reasons for parents’ surveillance of their children, some of which are creations of the media and based on empirically unfounded anxieties. My understanding of both rights and privacy, at least as a legal concept, is far inferior to the other speakers’. So the question I ask here is: What is the social context of our current discussion? This is not merely an academic question. If we can identify the trends that are framing our discussion today, then this could give us a clue as to what might happen in the future.
Before I get going, I would like to outline the limitations of my talk: firstly, like Benny and Ayelet, I’m dealing with children’s relations with their parents. [I’m not dealing, for instance, with the activities of companies who place adverts for children on the internet, or who track children as they surf.] Also, I’m mainly restricting myself to children’s exposure to pornography and online grooming, which is when an adult develops a relationship with a child in order to sexually abuse him or her.
My argument is that current parental concerns about what children are doing online are twofold. One aspect is to do with the transgression of the boundary between the inside and the outside. This in turn is the consequence of quite long-term and extremely widespread changes in childhood, specifically, the definition of the outside as dangerous and the domestic as safe. What I want to argue is that surfing is an activity that brings the outside in and takes the inside out. The second aspect is to do with a particularly modern uncertainty about the boundaries between adults and children, or adulthood and childhood, especially in relation to sex and sexuality. In my conclusion, I suggest that current parental practices are embedded in two extremely deep-rooted conceptions of childhood.
So how do I establish these arguments?
As a starting point, we can say that if there is a call to recognize a right to privacy among children, it is because of a perception that their privacy is being invaded. There are all sorts of indicators that this is the case, or, to put it in more morally neutral terms, that today’s young people are subject to more forms of surveillance than children at any time in history (Howe & Strauss, 2000). It has even been suggested that ultrasound technologies are part of this array of surveillance technologies, when fetuses are imaged to check for abnormalities.
This then raises the question, why is their privacy being invaded? What are parents, or adults more generally, hoping to achieve through these practices? In general terms, or more accurately, on their terms, parents want to protect their children. (But it might also be true that they don’t really know why they are doing it – in her study of CCTV in schools in England, Emmeline Taylor (2010) found that sometimes the school didn’t really have a well thought out rationale for placing cameras around the school, it just did it. In this sense it might be an act of imitation; or the principal might think he has to keep up with other schools. This might be the same with parents: they adopt certain practices though without necessarily knowing why. At bottom, though, the motivation is to protect children, very often from themselves.)
So what are the dangers that concern parents? There would appear to be two. The first, and most significant, is online predators and indecent material, especially pornography. This can be inferred from the marketing materials of companies who provide tracking and filtering software for parents. The second is a concern that young people in online environments are behaving in a way that is (1) inappropriate in and of itself; and (2) might have unwelcome unforeseen consequences by leaving digital footprints of youthful indiscretions. I am thinking here of the popular discourse, which has been shown to be inaccurate by study after study (boyd, 2007; Herring, 2007; Livingstone, 2008), that claims that the youth of today has no sense of shame or embarrassment.
One way of accounting for these concerns is in terms of a generation gap. One aspect of this is children’s relationship with technology. As Buckingham has noted: “new technology is often invested with our most intense fantasies and fears. It holds out the promise of a better future, while simultaneously provoking anxiety about a fundamental break with the past. In this scenario, children are perceived both as the avant-garde of media users and as the ones who are most at risk from new developments” (Buckingham, 2007). If we want to understand current concerns about youth and privacy, though, both those of privacy advocates and those of parents, we need to uncover the specific contours of the current day instantiation of this gap.
It is at this point that I shift my emphasis from privacy and technology to the sociology of childhood.
There has been a well-documented movement of children from the outside to the inside, and a change in the social meanings of those categories, which, I argue, is extremely relevant to our discussion. This movement might be said to have been ongoing since the expansion of formal schooling in mid-19th century Europe, and can be seen in the general idea that “the street” is no place for children. In recent decades the process has taken on specific characteristics, which, surprisingly enough, are very well captured by an ethnographic study of childhood in China. In her research, Orna Naftali describes a movement into the home among Chinese children, particularly since the emergence of capitalism. Children, she notes, are being given a room of their own. This is due to growing affluence in China and reduced fertility rates. Both of these are processes that have characterized the West in recent decades as well. There is also a sense in China, according to Naftali, that if a family only has one child, then they want the best for that child (Naftali, 2010). Interestingly, a study of privacy attitudes among Chinese teenagers shows them to have quite similar concerns to those of Western kids; it also shows differing views of what should be considered private among teenagers and their parents (Tang & Dong, 2006). (A study of children in Amsterdam that compares the 1950s with the 1990s discusses the movement of children from the outside to the inside too (Karsten, 2005)).
A related idea has been put forward by European sociologists of late modernity, and especially Ulrich Beck. For Beck, the contemporary era is one characterized by risk. As a result, children become the focus of their parents’ hopes and aspirations. “Late modern constructions of childhood become a form of moral rescue, a means by which adults try and recapture a sense of purpose and belonging”: this is how sociologist of childhood, William Corsaro describes it (Corsaro, 1997, p. 53). Similarly, Chris Jenks suggests that adults’ insecurities are projected onto children. This implies that the less control people feel they have over their own lives, the greater control they will try to assert over their children’s lives (Jenks, 1996).
This idea was also expressed by Wyness, who studied parents and children in Scotland in the early 1990s (Wyness, 1994). Asking whether parents know where their children are, he found that parents are worried about their children being outside, and would rather they bring friends home than hang out with them in the street. The television was part of this trend – a technological device that can help babysit the children.
So – and I am quite aware that I am painting in broad brush strokes here – there is a trend for children to be brought indoors and to be given more space within the home. This is to do with greater affluence and the re-conceptualization of the home as safe, and the outdoors, the “street”, as dangerous. I said at the beginning of my talk that the boundaries between inside and outside are being transgressed. I would now like to return to this notion.
[slide] The reason I say that the boundaries between inside and outside are being transgressed is that today, even when the child is at home, parents do not really know where he is. When the child is on the internet, he escapes the boundaries of the home. The internet undermines society’s attempts to create the domestic sphere as isolated and protected from the world outside. When surfing, the child brings the outside in. This is also an effect that has been attributed to the television (Harden, 2000), but my next point shows that the internet differs from the television quite dramatically.
I said earlier on that surfing the internet takes the inside out. My hunch here is that parental and societal anxiety about children’s self-over-exposure through social network sites can be conceptualized in terms of the inside – the domestic – making its way outside in a manner that is uncontrolled by the adults of the family. The domestic sphere is suddenly much less enclosed than we might think. There isn’t really time to develop this idea fully here, but there is an interesting literature about the importance of the secret in the construction of the family (Brown-Smith, 1998). It is also worth noting that the issue of family secrets has been brought up by researchers in the context of another medium of sharing, or over-sharing, according to some, namely the talk show.
The second aspect that I would like to touch upon as part of the explanation for parents’ invasion of children’s privacy is sexuality. This of course relates to concerns about grooming and inappropriate contact with strangers, or “stranger danger”. Other practices that this relates to are surfing for pornography and sexting – the sending of sexually explicit photographs of oneself over mobile phones. Here, the problem is that the too-early onset of sexuality is seen as disrupting the project that we call raising children.
Childraising has by now been almost thoroughly psychologized. We are all familiar with Piaget and Erikson, even if we don’t know it, and we all talk about stages in development (Corsaro, 1997). We also know, thanks to Carol Gilligan and her feminist critique of Erikson’s theory of moral development, that what is presented as descriptive is very often normative (Gilligan, 1982). Thus, if children disrupt the proper order of things, this is a moral problem as much as anything else. This idea has deep roots in western culture: indeed, sociologists of childhood James, Jenks and Prout remind us that Freud himself thought that children seeing their parents, or other adults, in sexual congress could suffer from hysteria as a result (James, Jenks, & Prout, 1998, p. 39). Going even further back, Postman (1982), in his book on the disappearance of childhood, attributes to the ancient Romans the idea that young people should not be exposed to adult, and especially sexual, behavior.
If parents consider young people’s sexuality to be threatened by the use of the internet and other technologies, then this would help to explain why parents want to keep track of their children’s usage of these technologies – indeed, a recent report found that 65% of American parents have looked through their children’s text messages (Lenhart, Ling, Campbell, & Purcell, 2010).
We should notice here that parents, and society in general, are doing what Marcia Pally has called “image-blaming” (Pally, 1994), namely, holding pornographic images responsible for anti-social behavior. Image-blaming is an effective mechanism: it offers a simple, technological solution – preventing access to the images – to a perceived problem – over-sexed teenagers. Without going into this debate in depth, it at the very least suggests that parents’ attention is directed – by the media – to the wrong place. This is also the conclusion reached by John Holmes, who finds that the online dangers to young people are vastly over-stated. “There is little evidence to suggest young people are at significant risk and, where risks are present, most young people are able to safely negotiate them,” he says, adding that SNS users are “less likely to experience distressing stranger contact than non-users.” (Holmes, 2009, pp. 1177, 1184)
It is also reflective of the dominance of two particular models of childhood, as described by James, Jenks and Prout in their seminal book on the sociology of childhood (James, et al., 1998). For a right to privacy to take hold, I suggest that we would need to see the emergence of a different model of the child.
The two models, or ideal types, of childhood or children that play a role here are those of the innocent child and the evil child. The innocent child is the child whom we are contracted to bring up “in such a manner that their state of pristine innocence remains unspoilt by the violence and ugliness that surrounds them” (p. 14). This is the child whom parents are protecting from pornography and grooming. The view of the evil child, on the other hand, assumes that “evil, corruption, and baseness are primary elements in the constitution of ‘the child’” (p. 10). This is the child whom parents are preventing from accessing pornography in case it sets free the monster within. The point is that these two contrasting models of childhood give rise to identical practices: the strict surveillance of children’s online activities. If a single practice is able to draw on more than one deep social justification, then it may be hard to put an end to it. And this is without explicitly discussing surveillance as an increasingly central component of modern society. Of course, this is to say nothing about whether it is desirable to put an end to parental surveillance of their children; but it is to say that doing so will certainly not be easy.
boyd, d. (2007). Why Youth ♥ Social Network Sites: The Role of Networked Publics in Teenage Social Life. In D. Buckingham (Ed.), Youth, Identity and Digital Media (pp. 119-142). Cambridge: The MIT Press.
Brown-Smith, N. (1998). Family secrets. Journal of Family Issues, 19(1), 20.
Buckingham, D. (2007). Introducing identity. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, 1-22.
Corsaro, W. A. (1997). The sociology of childhood. Thousand Oaks, Calif.: Pine Forge Press.
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, Mass.: Harvard University Press.
Harden, J. (2000). There’s no Place Like Home: The Public/Private Distinction in Children’s Theorizing of Risk and Safety. Childhood, 7(1), 43.
Herring, S. C. (2007). Questioning the generational divide: Technological exoticism and adult constructions of online youth identity. In D. Buckingham (Ed.), Youth, Identity, and Digital Media (pp. 71–92). Cambridge, MA: The MIT Press.
Holmes, J. (2009). Myths and Missed Opportunities — Young people’s not so risky use of online communication. Information, Communication & Society, 12(8), 1174 – 1196.
Howe, N., & Strauss, W. (2000). Millennials rising: The next great generation. New York: Vintage.
James, A., Jenks, C., & Prout, A. (1998). Theorizing childhood. Cambridge: Polity Press.
Jenks, C. (1996). Childhood. London; New York: Routledge.
Karsten, L. (2005). It all used to be better? Different generations on continuity and change in urban children’s daily use of space. Children’s Geographies, 3(3), 275-290.
Lenhart, A., Ling, R., Campbell, S., & Purcell, K. (2010). Teens and Mobile Phones: Pew Internet & American Life Project.
Livingstone, S. (2008). Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3), 393.
Naftali, O. (2010). Caged golden canaries: Childhood, privacy and subjectivity in contemporary urban China. Childhood, 17(3), 297-311.
Pally, M. (1994). Sex & sensibility: Reflections on forbidden mirrors and the will to censor. Available from http://www.marciapally.com/Pages/sxsn.html
Postman, N. (1982). The disappearance of childhood. New York: Delacorte Press.
Tang, S., & Dong, X. (2006). Parents’ and Children’s Perceptions of Privacy Rights In China. Journal of Family Issues, 27(3), 285-300.
Taylor, E. (2010). I spy with my little eye: the use of CCTV in schools and the impact on privacy. Sociological Review, 58(3), 381-405.
Wyness, M. G. (1994). Keeping tabs on an uncivil society: positive parental control. Sociology, 28(1), 193.
I was at the recent OECD Conference on the Evolving Role of the Individual in Privacy Protection: 30 Years after the OECD Privacy Guidelines. One of the issues that was discussed was the right to be forgotten. This is analyzed in an extremely readable book, Delete, by Viktor Mayer-Schönberger. The problem is simply that digital memory is eternal: stupid stuff that we say online, or pictures that perhaps we shouldn’t have uploaded, are there forever. You might take down the picture, but if someone has already saved it and distributed it, it’s out of your control.
A solution that people are beginning to think about is the idea of an automatic self-destruct feature for stuff we put on Facebook and other social networks. In other words, unless you specifically request otherwise, any update or comment you write on Facebook, say, would disappear after a certain period of time. This would make it harder for future employers (or dates) to prejudge you on a youthful indiscretion, as the automatically deleted stuff would not appear when they Google you (and they will Google you).
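The mechanics of such a self-destruct feature are simple enough to sketch. Here is a toy illustration in Python – the field names, the 30-day window, and the “pinned” escape hatch are all my own inventions, not anything any social network actually offers: each post carries a timestamp, and anything older than the expiry window simply stops being shown unless its author explicitly asked to keep it.

```python
from datetime import datetime, timedelta

# Hypothetical expiry window: posts vanish 30 days after creation
# unless the author has "pinned" them (i.e. opted out of deletion).
TTL = timedelta(days=30)

def visible_posts(posts, now):
    """Return only posts that are pinned or younger than the TTL."""
    return [p for p in posts if p["pinned"] or now - p["created"] < TTL]

posts = [
    {"text": "old rant", "created": datetime(2010, 1, 1), "pinned": False},
    {"text": "keep this", "created": datetime(2010, 1, 1), "pinned": True},
    {"text": "recent note", "created": datetime(2010, 6, 20), "pinned": False},
]

print([p["text"] for p in visible_posts(posts, datetime(2010, 6, 25))])
# → ['keep this', 'recent note']
```

The point of the sketch is how little engineering this would take: the obstacle, as the next paragraph notes, is not technical.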
There is, of course, no business case for doing this. The more data Facebook has about us, the more accurate profiles it can sell to advertisers. So there are two ways something like this could happen. The first would be through regulation, the second would be through consumer power. I wouldn’t count on either of them.
I’m doing some research into privacy and technology. I’ve probably mentioned that already. Yesterday, as part of my research, I interviewed a man who has been involved in the internet in Israel since 1994. We had a very interesting chat over a lovely cup of coffee in a very beautiful corner of Tel Aviv (Cafe Ben Ami, if you were wondering).
Our conversation was very interesting, but then it occurred to me that we hadn’t really explicitly defined which aspects of privacy we were talking about. We were mostly talking about how people put more and more stuff online, more and more of which is publicly accessible by other people. In other words, we were talking about privacy in terms of the stuff other people know about you.
This is no doubt interesting, but I’m not sure it’s the main point at all. I think that of more interest is what machines know about you, and how this enables them to target you with certain adverts. For instance, Google’s search logs have been called a “database of intentions”. Put very simply, your past behavior (and your searches are in some ways a proxy for your behavior) might predict your future behavior. If, every Friday, you search for a nice place to go out for dinner that night, how complicated would it be to give you an advert on Friday morning for a restaurant?
This is why Elliot Schrage, vice president for public policy at Facebook, gets it completely wrong in his really quite awkward questions and answers piece in the New York Times. As far as he is concerned, there’s no problem with Facebook sharing your data with other companies because they never share your name or other personally identifiable information. (The issue of de-anonymization is one for another post, so I’ll just put that aside for now.) The point, of course, isn’t that they have my name. The concern isn’t what other people know about me (or at least, that’s not the only concern). The concern is about how knowledge of my past behavior and interests might enable a commercial entity to have too much influence over my future behavior and consumption decisions.
A lot of the talk about privacy nowadays revolves around the ways that the bodies with whom we entrust so much information about our lives might misuse that information.
However, there is another aspect that is to do with the increasing complexity of the tools we use to manage our day to day lives. Sometimes, these tools get so complex that no one really understands exactly how the entire system works. They are so complex that changes to one part of the system are liable to have completely unintended consequences for other parts of the system. And the point is that the system is so big and complex that these consequences cannot all be checked in advance.
Facebook has become such a system. We know this because sometimes it does things that the people in charge of it are completely unaware of. For instance, for a while yesterday there was a security flaw in Facebook that could expose personal information by enabling your Facebook friends to see both your live chats and your pending friend requests.
Facebook was quick to resolve the flaw, but the point is that it was there, and it had been discovered by people from outside Facebook.
What does this mean? Well, it reminds us of another way in which personal information about ourselves that we entrust to a commercial entity might be accessible by strangers. This is not because of malicious intent on the part of Facebook, and I’m not even sure I’d say it was because of their negligence, but rather it is a consequence of the complexity of the systems in which we spend so much of our lives.
When you surf the internet, you are, by and large, anonymous. I don’t mean that your online activities can’t be tracked back to you via your IP address and your ISP, but that the site you are visiting doesn’t know who you are. You can log in to the site, and by doing so you are telling the site who you are. But the default is that you are anonymous.
The significance of the recent changes in Facebook that I discussed in the previous post is that they herald the end of anonymity as we surf. So far there are three sites that know who you are when you access them – Pandora, Yelp, and Docs.com. There will be more. What this means is that you will be taking your Facebook identity with you wherever you go on the internet. You will not be anonymous. Our online and offline identities, which are already merged in Facebook, will be united across the entire internet, with Facebook, a commercial company, acting as the go-between.
You can opt out of this. Right now it’s easy enough – there are only three sites you need to do it for. But as Facebook spreads out to more and more sites, it will become harder for us to control. This means that sites will be able to serve us with more personalized experiences (and more individually-tailored advertisements, of course), which for many people may be a worthwhile benefit. Other people may be slightly anxious that a privately-owned company is becoming the mediator between us and the internet.
This morning I logged onto Facebook and found a message up there at the top of the page. This message was telling me that I can connect with my friends on my favourite websites. Ooh, I thought, lucky me.
Right there in the box is a link that I can click on in order to learn more, and understand my privacy. So I clicked. I learnt that I can expect to see a “Like” button on various sites that I can click on to share the site with my friends. The site gets no information about me. This can be quite useful to people who like to post links to sites they think other people might like to see, and I can’t see any privacy issues here.
The other thing they mention is something about partner sites, including Microsoft Docs. From what I can glean, this is to be a service that competes with Google Docs. You can create documents and share them with your Facebook friends in much the same way that you share photos. So where’s the privacy issue? It says on the Facebook site, “You can easily opt-out of experiencing this on these sites by clicking here or clicking “No Thanks” on the blue Facebook notification on the top of partner sites”. So far so good. But, it goes on: “If you opt-out, your public Facebook information can still be shared by your friends to these partner sites unless you block the application.” What does this mean? What is “the application”?
I did some digging and found out that I can block the Facebook application, Docs. I didn’t install it. Why should I have to block it in order to prevent it from getting information from me?
I think there are three things to note here:
- This is actually normal Facebook app behaviour. They all try to get information about our friends from us. You should check how much of your profile is public, because you have no control over the apps your friends install.
- I don’t like the opt out option. Surely it should be opt in? Shouldn’t it always be opt in?
- How long before someone creates a personal doc and shares it with the world? And when they do, is it their fault for not reading the manual properly?
There’s a great article in the New York Times today: How Privacy Vanishes Online, a Bit at a Time – NYTimes.com.
It’s about how our privacy may be eroding in ways that we cannot really perceive. Partly, this is because of data mining techniques that enable discrete pieces of information about us to be tied together, thus exposing our identity.
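A toy example makes the point. The two datasets below are invented, but the mechanism is the one the article describes: each set looks harmless on its own – one has no names, the other is a public record – yet joining them on shared fields like ZIP code and birth date can pin down an identity.

```python
# Hypothetical data: an "anonymous" social profile (no name) and a
# public record (no username), each unrevealing on its own.
social_profiles = [
    {"user": "alice83", "zip": "10001", "birth": "1983-04-12"},
    {"user": "bob_g", "zip": "94110", "birth": "1979-11-02"},
]

voter_roll = [
    {"name": "Alice Smith", "zip": "10001", "birth": "1983-04-12"},
    {"name": "Carol Jones", "zip": "10001", "birth": "1990-06-30"},
]

def link(records_a, records_b, keys):
    """Join two record sets on shared quasi-identifier fields."""
    matches = []
    for a in records_a:
        for b in records_b:
            if all(a[k] == b[k] for k in keys):
                matches.append((a, b))
    return matches

for profile, record in link(social_profiles, voter_roll, ["zip", "birth"]):
    print(profile["user"], "is probably", record["name"])
# → alice83 is probably Alice Smith
```

ZIP code alone matches two voters; birth date alone proves nothing; together they single out one person. That is the “a bit at a time” of the headline.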
No less interesting than the article are the comments at the end of it. There we can find a few arguments that reflect different views of privacy. In a nutshell, these are:
- There is no privacy any more, so let’s work out how we make transparency work for us
- If you want to preserve your privacy, don’t take any part in social internet sites, and don’t blame others if the information you put out there comes back to haunt you later on
- If you don’t take part in social internet sites, you are putting yourself at a disadvantage in comparison to people who do (you lack information)
- What’s new? Court records and so on have always been publicly available
- The internet isn’t the problem – look how much information the credit card companies have about you
The Microsoft in-house sociologist (lucky thing) was the keynote speaker at the SXSW 2010 conference, and she used her talk to argue that we haven’t all given up on privacy quite yet. This is pleasing to read. It is refreshing to come across a view other than the moral panicky one that we don’t have any privacy, the kids don’t care about their privacy, and that the end of the world is nigh. Her examples, unsurprisingly, are Google Buzz and Facebook, both of which have recently aroused the ire of users by exposing too much of their information. In both cases, user protest forced the companies to backtrack.