Facebook’s new facial recognition feature

June 15, 2011

The latest privacy furore surrounding Facebook involves its new facial recognition feature. According to Facebook’s own blog entry on the feature, when you upload photos of people who are already tagged in other photos, Facebook will suggest their names, making it quicker for you to tag your photos.

Now I’m just as keen on privacy as the next person, but some of the anti-Facebook responses strike me as slightly overblown, especially those that think that this facial recognition gives governments greater access to information about their citizens. First of all, if governments can access our Facebook photos, they don’t need Facebook’s facial recognition technology to identify us. I’m sure they have their own. So the problem would lie with the photos being online, not with their being tagged. Secondly, just because someone has tagged a photo as being a photo of me doesn’t make it a photo of me. It may well be a photo of me, but the person tagging may have made a mistake, or may have tagged someone else as me in order to get my attention. Thirdly, to the best of my knowledge, I cannot prevent myself being tagged by someone else (though I can prevent other people from checking me in with them to places). I can remove the tag if I wish, but it may be too late. And disabling the facial recognition feature doesn’t mean I can’t be tagged anyway, which suggests that the problem, if there is one, lies with tagging itself, and not with the new feature.

Most interesting for me, though, as a sociologist of technology, is the fact that Facebook is offering a technological means for doing something that we can (and do) do anyway: Facebook says that 100m tags are added to photos every day. Actor-network theory analyzes how networks of humans and non-humans are configured so as to successfully perform so-called “programs of action”. It looks at those tasks that are undertaken by humans and those undertaken by non-humans, without privileging one type of “actant” over another. Often, non-humans (computers, say) can carry out tasks much more quickly than humans: a computer will find me someone’s phone number in the phone book much quicker than I can. Sometimes we are happy for non-humans to be involved in programs of action (such as finding someone’s number in the phone book) and sometimes we are less happy (for instance, cameras that photograph us speeding on the motorway).

I think the way to think about this new feature on Facebook is in terms of actor-network theory, and your view on the feature will be determined by whether you think that having a computer do something you could do yourself (and have been doing yourself) makes the action qualitatively different. Is having a machine help you identify your friends’ faces qualitatively different from you identifying them yourself? Does the fact that the machine can do it far quicker make the very task itself different? If it is OK to tag your friends, does having a computer help you tag them faster make it wrong? If so, how come?

And finally a quick “I’m not naive” disclosure. Of course, the reason Facebook wants to “help” us tag our friends is because Facebook makes money from the information it collects. A tag on a photo is information. I’m not so naive as to think that Facebook is taking pity on those who have to spend hours tagging their friends in all their photos. There is real economic value in those tags, and so we should always be suspicious of everything Facebook does. Also, I have no doubt that there are serious privacy threats latent in facial recognition software. The state already knows what I look like (there are photos in my passport and on my driving license). If it wants to, it can film me at a political gathering, run facial recognition software, and thereby know I was at that gathering. That, to me, is much more worrying than Facebook’s offering.

The Internet Never Forgets…

March 9, 2011

…but sometimes we do. So a new startup, Memolane, wants to help us remember. You sign up and then add the services you want to add to your digital memory lane. Memolane then collects pretty much everything you’ve done using those services (Picasa, Foursquare, Twitter, and so on) and puts them in a timeline. The right to be forgotten? Forget that. This is the most amazing instantiation of the internet’s memory that I’ve seen.

And I write this having just created my own memolane. It goes back to 2006, when I first posted a photo online using Picasa. I have to say that I’m sitting here looking at my screen with my jaw on the floor. It is completely remarkable.

The privacy issues here barely need stating: hack this, and you’ve hacked all of my Web 2.0 activities for the last five years. Having said that, the Memolane folk are very clear about privacy being extremely important to them.

I need to go and stare at my timeline for a bit longer and say wow a few more times. Perhaps then I’ll come back with something more intelligent to say. Until then, eat your heart out, Viktor Mayer-Schönberger.

(Thanks to the New York Times for bringing my attention to this.)

Talk entitled “Privacy, Technology and Children: The Problem in Context”

December 20, 2010

Below is a talk I gave yesterday at the annual conference of the Israeli Law and Society Association. “Benny and Ayelet” are two academic lawyers who have written a paper arguing for children’s right to privacy vis-à-vis their parents. Parental surveillance of children includes all sorts of filtering and monitoring programs that parents might install on their computers so they know what their kids are getting up to online, for instance. Enjoy!

My aim for today is to try to put the discussion of children’s right to privacy in a broader social context. The first thing I should note is that my perspective is that of a sociologist, and from this perspective I am not going to take a stance on whether children should or should not have a right to privacy, though I do have something to say about the reasons for parents’ surveillance of their children, some of which are creations of the media and based on empirically unfounded anxieties. My understanding of both rights and privacy, at least as a legal concept, is far inferior to the other speakers’. So the question I ask here is: What is the social context of our current discussion? This is not merely an academic question. If we can identify the trends that are framing our discussion today, then this could give us a clue as to what might happen in the future.

Before I get going, I would like to outline the limitations of my talk: firstly, like Benny and Ayelet, I’m dealing with children’s relations with their parents. [I’m not dealing, for instance, with the activities of companies who place adverts for children on the internet, or who track children as they surf.] Also, I’m mainly restricting myself to children’s exposure to pornography and online grooming, which is when an adult develops a relationship with a child in order to sexually abuse him or her.

My argument is that current parental concerns about what children are doing online are twofold. One aspect is to do with the transgression of the boundary between the inside and the outside. This in turn is the consequence of quite long-term and extremely widespread changes in childhood, specifically, the definition of the outside as dangerous and the domestic as safe. What I want to argue is that surfing is an activity that brings the outside in and takes the inside out. The second aspect is to do with a particularly modern uncertainty about the boundaries between adults and children, or adulthood and childhood, especially in relation to sex and sexuality. In my conclusion, I suggest that current parental practices are embedded in two extremely deep-rooted conceptions of childhood.

So how do I establish these arguments?

As a starting point, we can say that if there is a call to recognize a right to privacy among children, it is because of a perception that their privacy is being invaded. There are all sorts of indicators that this is the case, or, to put it in more morally neutral terms, that today’s young people are subject to more forms of surveillance than children at any time in history (Howe & Strauss, 2000). It has even been suggested that ultra-sound technologies are part of this array of surveillance technologies, when fetuses are imaged to check for abnormalities.

This then raises the question, why is their privacy being invaded? What are parents, or adults more generally, hoping to achieve through these practices? In general terms, or more accurately, on their terms, parents want to protect their children. (But it might also be true that they don’t really know why they are doing it – in her study of CCTV in schools in England, Emmeline Taylor (2010) found that sometimes the school didn’t really have a well thought out rationale for placing cameras around the school, it just did it. In this sense it might be an act of imitation; or the principal might think he has to keep up with other schools. This might be the same with parents: they adopt certain practices though without necessarily knowing why. At bottom, though, the motivation is to protect children, very often from themselves.)

So what are the dangers that concern parents? There would appear to be two. The first, and most significant, is online predators and indecent material, especially pornography. This can be inferred from the marketing materials of companies who provide tracking and filtering software for parents. The second is a concern that young people in online environments are behaving in a way that is (1) inappropriate in and of itself; and (2) might have unwelcome unforeseen consequences by leaving digital footprints of youthful indiscretions. I am thinking here of the popular discourse, which has been shown to be inaccurate by study after study (boyd, 2007; Herring, 2007; Livingstone, 2008), that claims that the youth of today has no sense of shame or embarrassment.

One way of accounting for these concerns is in terms of a generation gap. One aspect of this is children’s relationship with technology. As Buckingham has noted: “new technology is often invested with our most intense fantasies and fears. It holds out the promise of a better future, while simultaneously provoking anxiety about a fundamental break with the past. In this scenario, children are perceived both as the avant-garde of media users and as the ones who are most at risk from new developments” (Buckingham, 2007). If we want to understand current concerns about youth and privacy, though, both those of privacy advocates and those of parents, we need to uncover the specific contours of the current day instantiation of this gap.

It is at this point that I shift my emphasis from privacy and technology to the sociology of childhood.

There has been a well-documented movement of children from the outside to the inside, and a change in the social meanings of those categories, which, I argue, is extremely relevant to our discussion. This movement might be said to have been ongoing since the expansion of formal schooling in mid-19th century Europe, and can be seen in the general idea that “the street” is no place for children. In recent decades the process has taken on specific characteristics, which, surprisingly enough, are very well captured by an ethnographic study of childhood in China. In her research, Orna Naftali describes a movement into the home among Chinese children, particularly since the emergence of capitalism. Children, she notes, are being given a room of their own. This is due to growing affluence in China and reduced fertility rates. Both of these are processes that have characterized the West in recent decades as well. There is also a sense in China, according to Naftali, that if a family only has one child, then they want the best for that child (Naftali, 2010). Interestingly, a study of privacy attitudes among Chinese teenagers shows them to have quite similar concerns to those of Western kids; it also shows differing views of what should be considered private among teenagers and their parents (Tang & Dong, 2006). (A study of children in Amsterdam that compares the 1950s with the 1990s discusses the movement of children from the outside to the inside too (Karsten, 2005)).

A related idea has been put forward by European sociologists of late modernity, and especially Ulrich Beck. For Beck, the contemporary era is one characterized by risk. As a result, children become the focus of their parents’ hopes and aspirations. “Late modern constructions of childhood become a form of moral rescue, a means by which adults try and recapture a sense of purpose and belonging” is how the sociologist of childhood William Corsaro describes it (Corsaro, 1997, p. 53). Similarly, Chris Jenks suggests that adults’ insecurities are projected onto children. This implies that the less control people feel they have over their own lives, the greater control they will try to assert over their children’s lives (Jenks, 1996).

This idea was also expressed by Wyness, who studied parents and children in Scotland in the early 1990s (Wyness, 1994). Asking whether parents know where their children are, he found that parents are worried about their children being outside, and would rather they bring friends home than hang out with them in the street. The television was part of this trend – a technological device that can help babysit the children.

So – and I am quite aware that I am painting in broad brush strokes here – there is a trend for children to be brought indoors and to be given more space within the home. This is to do with greater affluence and the re-conceptualization of the home as safe, and the outdoors, the “street”, as dangerous. I said at the beginning of my talk that the boundaries between inside and outside are being transgressed. I would now like to return to this notion.

[slide] The reason I say that the boundaries between inside and outside are being transgressed is that today, even when the child is at home, parents do not really know where he is. When the child is on the internet, he escapes the boundaries of the home. The internet undermines society’s attempts to create the domestic sphere as isolated and protected from the world outside. When surfing, the child brings the outside in. This is also an effect that has been attributed to the television (Harden, 2000), but my next point shows that the internet differs from the television quite dramatically.

I said earlier on that surfing the internet takes the inside out. My hunch here is that parental and societal anxiety about children’s self-over-exposure through social network sites can be conceptualized in terms of the inside – the domestic – making its way outside in a manner that is uncontrolled by the adults of the family. The domestic sphere is suddenly much less enclosed than we might think. There isn’t really time to develop this idea fully here, but there is an interesting literature about the importance of the secret in the construction of the family (Brown-Smith, 1998). It is also worth noting that the issue of family secrets has been brought up by researchers in the context of another medium of sharing, or over-sharing, according to some, namely the talk show.

The second aspect that I would like to touch upon as part of the explanation for parents’ invasion of children’s privacy is sexuality. This of course relates to concerns about grooming and inappropriate contact with strangers, or “stranger danger”, as well as to surfing for pornography and sexting, the sending of sexually explicit photographs of oneself over mobile phones. Here, the problem is that the too-early onset of sexuality is seen as disrupting the project that we call raising children.

Child-raising has by now been almost thoroughly psychologized. We are all familiar with Piaget and Erikson, even if we don’t know it, and we all talk about stages in development (Corsaro, 1997). We also know, thanks to Carol Gilligan and her feminist critique of Kohlberg’s theory of moral development, that what is presented as descriptive is very often normative (Gilligan, 1982). Thus, if children disrupt the proper order of things, this is a moral problem as much as anything else. This idea has deep roots in western culture: indeed, sociologists of childhood James, Jenks and Prout remind us that Freud himself thought that children seeing their parents, or other adults, in sexual congress could suffer from hysteria as a result (James, Jenks, & Prout, 1998, p. 39). Going even further back, Postman (1982), in his book on the disappearance of childhood, attributes to the ancient Romans the idea that young people should not be exposed to adult, and especially sexual, behavior.

If parents consider young people’s sexuality to be threatened by the use of the internet and other technologies, then this would help to explain why parents want to keep track of their children’s usage of these technologies – indeed, a recent report found that 65% of American parents have looked through their children’s text messages (Lenhart, Ling, Campbell, & Purcell, 2010).

We should notice here that parents, and society in general, are doing what Marcia Pally has called “image-blaming” (Pally, 1994), namely, holding pornographic images responsible for anti-social behavior. Image-blaming is an effective mechanism: it offers a simple, technological solution (preventing access to the images) to a perceived problem (over-sexed teenagers). Without going into this debate in depth, it at the very least suggests that parents’ attention is directed, by the media, to the wrong place. This is also the conclusion reached by John Holmes, who finds that the online dangers to young people are vastly over-stated. “There is little evidence to suggest young people are at significant risk and, where risks are present, most young people are able to safely negotiate them,” he says, adding that SNS users are “less likely to experience distressing stranger contact than non-users.” (Holmes, 2009, pp. 1177, 1184)

It is also reflective of the dominance of two particular models of childhood, as described by James, Jenks and Prout in their seminal book on the sociology of childhood (James, et al., 1998). For a right to privacy to take hold, I suggest that we would need to see the emergence of a different model of the child.

The two models, or ideal types, of childhood or children that play a role here are those of the innocent child and the evil child. The innocent child is the child whom we are contracted to bring up “in such a manner that their state of pristine innocence remains unspoilt by the violence and ugliness that surrounds them” (p. 14). This is the child whom parents are protecting from pornography and grooming. The view of the evil child, on the other hand, assumes that “evil, corruption, and baseness are primary elements in the constitution of ‘the child’” (p. 10). This is the child whom parents are preventing from accessing pornography in case it sets free the monster within. The point is that these two contrasting models of childhood give rise to identical practices: the strict surveillance of children’s online activities. If a single practice is able to draw on more than one deep social justification, then it may be hard to put an end to it. And this is without explicitly discussing surveillance as an increasingly central component of modern society. Of course, this is to say nothing about whether it is desirable to put an end to parental surveillance of their children; but it is to say that doing so will certainly not be easy.

Bibliography

boyd, d. (2007). Why Youth ♥ Social Network Sites: The Role of Networked Publics in Teenage Social Life. In D. Buckingham (Ed.), Youth, Identity and Digital Media (pp. 119-142). Cambridge: The MIT Press.

Brown-Smith, N. (1998). Family secrets. Journal of Family Issues, 19(1), 20.

Buckingham, D. (2007). Introducing Identity. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, 1-22.

Corsaro, W. A. (1997). The sociology of childhood. Thousand Oaks, Calif.: Pine Forge Press.

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, Mass.: Harvard University Press.

Harden, J. (2000). There’s no Place Like Home: The Public/Private Distinction in Children’s Theorizing of Risk and Safety. Childhood, 7(1), 43.

Herring, S. C. (2007). Questioning the generational divide: Technological exoticism and adult constructions of online youth identity. In D. Buckingham (Ed.), Youth, Identity, and Digital Media (pp. 71–92). Cambridge, MA: The MIT Press.

Holmes, J. (2009). Myths and Missed Opportunities — Young people’s not so risky use of online communication. Information, Communication & Society, 12(8), 1174-1196.

Howe, N., & Strauss, W. (2000). Millennials rising: The next great generation. New York: Vintage.

James, A., Jenks, C., & Prout, A. (1998). Theorizing childhood. Cambridge: Polity Press.

Jenks, C. (1996). Childhood. London: Routledge.

Karsten, L. (2005). It all used to be better? Different generations on continuity and change in urban children’s daily use of space. Children’s Geographies, 3(3), 275-290.

Lenhart, A., Ling, R., Campbell, S., & Purcell, K. (2010). Teens and Mobile Phones: Pew Internet & American Life Project.

Livingstone, S. (2008). Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3), 393.

Naftali, O. (2010). Caged golden canaries: Childhood, privacy and subjectivity in contemporary urban China. Childhood, 17(3), 297-311.

Pally, M. (1994). Sex & sensibility: Reflections on forbidden mirrors and the will to censor. Available from http://www.marciapally.com/Pages/sxsn.html

Postman, N. (1982). The disappearance of childhood. New York: Delacorte Press.

Tang, S., & Dong, X. (2006). Parents’ and Children’s Perceptions of Privacy Rights In China. Journal of Family Issues, 27(3), 285-300.

Taylor, E. (2010). I spy with my little eye: the use of CCTV in schools and the impact on privacy. Sociological Review, 58(3), 381-405.

Wyness, M. G. (1994). Keeping tabs on an uncivil society: positive parental control. Sociology, 28(1), 193.

Does WikiLeaks have any privacy issues?

December 12, 2010

The latest round of WikiLeaks excitement is still going on. The press is producing news story after news story from the leaked cables, and now Assange himself is a story, having been arrested in England and refused bail.

While I have heard it argued that there is no privacy issue here, I think there is. It is indeed true that none of the people exposed by the leaks are private people going about their private affairs, but rather public figures busy doing their jobs. However, there are several parallels in the way the latest WikiLeaks episode has been reported and the way that privacy is discussed nowadays.

Firstly, the technological basis of WikiLeaks and today’s concerns about privacy is shared. The diplomatic cables were leakable because they were stored online – not on the World Wide Web, but on a parallel network run by the US. The reasons for storing this data online are obvious: it is cheap; the data is searchable and analyzable; and the data can be accessed from anywhere in the world. A further reason for the US administration’s decision to upload certain cables to this network is to do with 9/11, when the US discovered that its various intelligence branches failed to share knowledge. The current WikiLeaks affair, therefore, was made possible at least partly because of a desire to share information, and blatantly exposes the danger of gathering digital information in a single (virtual) place. This, I would suggest, is the same danger we expose ourselves to when we store personal information, such as our emails, address books, and to-do lists, online.

There is a paradox here that we do not yet really know how to deal with: the digitalization of our lives provides us with new efficiencies, but also makes possible the leaking of our lives. This would appear to be just as true for private individuals as for governments.

Secondly, some of the responses of diplomats and governments to the latest leaks seem to be quite similar to those of people who have experienced some terrible technology-caused embarrassment. I say this without having conducted a systematic study of reactions (because this is a blog, and not an academic article), but my impression is that the phenomenological experience of diplomats whose missives have been exposed is not entirely dissimilar to that of having sent an email to the wrong address, or having been tagged in an inappropriate photograph. Information escapes its proper context and reappears in another one, often with embarrassing consequences. So even if the cables do not include personal information unrelated to the official capacities of their authors, the emotions caused by having their content exposed would appear to be akin to other instances of digital leakage, even if in many other ways they are quite different.

Thirdly, and perhaps slightly tenuously, my attention was grabbed by a description I read of WikiLeaks as lacking in shame and embarrassment, emotions that, the author claimed, “are in short supply these days.” Who else is accused these days of lacking shame and embarrassment if not digital youth, who, according to popular discourse, shamelessly expose every aspect of their intimate lives online? Of course, study after study has shown young people to have a very sophisticated sense of privacy, but the popular conception is that they shamelessly share too much, and that they do not find it embarrassing to publicize their latest bowel movements, for instance. I wonder, then, if it is not too far-fetched to suggest that the above quotation constructs a discursive association between WikiLeaks and Facebook, in that both are symptomatic of a society that lacks shame in its exposure of information that it would be better to keep under wraps.

In general, I think that the questions that the US government is asking itself now are not all that different from those that we face in our everyday lives. Should we digitize the information at our disposal? Who should be able to access it? How easy is it to remove it from one context and publish it in another? In a way, we are all still struggling to find our feet in the digital world.

It’s not just the big corporations…

November 5, 2010

I came across an interesting article in the New York Times “Home and Garden” section the other day about the use of video surveillance by private individuals to identify neighbors who have been throwing bags of dog poo into their garden or scratching their car or otherwise behaving in a non-neighborly manner.

For me, this article was a timely reminder that data collection is not only being carried out by big fat corporations with an economic interest in our data, but also by normal people going about their normal lives. The article discusses the increased availability of small, affordable CCTV systems that can help people solve the small problems of everyday life – the dog that keeps pooping in the garden, the neighbor who leaves his rubbish by your house.

What is the meaning of all this? I think a few things are worth pointing out. First, practices that were originally initiated by the state and law enforcement authorities are being adopted by citizens: we can all be the policemen of our front garden and our driveway and confront our neighbors with video evidence of their misdemeanors. Or, of course, we can take it to the police (effectively doing their job for them).

Second, as the hardware costs come down, we are likely to see more of this: people want to stop their neighbor throwing his bags of dog poo into their garden more than they worry about the broader social implications of covering the streets with private CCTV cameras.

Third, unlike CCTV footage collected by the police, privately recorded images can be shared with the world, and YouTube would appear to be a popular destination for them. In other words, people are able to publicly shame their neighbors by posting their footage onto the internet. (This is an issue discussed at length by Viktor Mayer-Schönberger in his book Delete, which I mentioned in a previous post.)

I’m writing about this issue not because I want to start a campaign against private individuals’ CCTV networks, but because it looks like an interesting extension of debates over CCTV. More accurately, we might say that the same rationale given for the use of CCTV appears here, but writ small: there is a desire to prevent criminal or anti-social behavior; surveillance technologies are adopted that do not require physical presence (many of these systems enable you to access your cameras over your smartphone); and to the person who might be concerned that he is being filmed as he walks past someone’s house one might say, if you’ve nothing to hide, you’ve nothing to worry about – perhaps the most fallacious of arguments in favor of CCTV.

Data with a sell-by date

November 3, 2010

I was at the recent OECD Conference on the Evolving Role of the Individual in Privacy Protection: 30 Years after the OECD Privacy Guidelines. One of the issues that was discussed was the right to be forgotten. This is analyzed in an extremely readable book, Delete, by Viktor Mayer-Schönberger. The problem is simply that digital memory is eternal: stupid stuff that we say online, or pictures that perhaps we shouldn’t have uploaded, are there forever. You might take down the picture, but if someone has already saved it and distributed it, it’s out of your control.

A solution that people are beginning to think about is the idea of an automatic self-destruct feature for stuff we put on Facebook and other social networks. In other words, unless you specifically request otherwise, any update or comment you write on Facebook, say, would disappear after a certain period of time. This would make it harder for future employers (or dates) to prejudge you on a youthful indiscretion, as the automatically deleted stuff would not appear when they Google you (and they will Google you).
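To make the idea concrete, here is a minimal sketch in Python of how such a self-destruct default might work. Everything in it is my own assumption, not any actual platform’s design: the names, the one-year default lifespan, and the `keep` flag by which a user opts a post out of expiry. The point is simply that deletion is the default and preservation requires an explicit request.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed default lifespan; a real service would let users or regulators tune this.
DEFAULT_TTL = timedelta(days=365)

@dataclass
class Post:
    text: str
    created_at: datetime
    keep: bool = False  # True if the user explicitly asked to preserve this post

def visible_posts(posts, now, ttl=DEFAULT_TTL):
    """Return only the posts that were explicitly kept or have not yet expired."""
    return [p for p in posts if p.keep or (now - p.created_at) < ttl]

# Example: an old update expires by default, unless explicitly kept.
now = datetime(2011, 1, 1)
timeline = [
    Post("youthful indiscretion", datetime(2009, 6, 1)),
    Post("recent update", datetime(2010, 12, 1)),
    Post("explicitly preserved", datetime(2008, 1, 1), keep=True),
]
print([p.text for p in visible_posts(timeline, now)])
# prints ['recent update', 'explicitly preserved']
```

Note that the interesting question is not the filtering logic, which is trivial, but the default: whether forgetting or remembering is what happens when the user does nothing.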

There is, of course, no business case for doing this. The more data Facebook has about us, the more accurate profiles it can sell to advertisers. So there are two ways something like this could happen. The first would be through regulation, the second would be through consumer power. I wouldn’t count on either of them.

When does a reasonable expectation of privacy become unreasonable?

September 19, 2010

One of the more interesting questions about the future of privacy revolves, I think, around changing cultural perceptions of the private. By this I mean that what we see as private today, we might not consider as private tomorrow. Examples are readily available: not too long ago, a diagnosis of cancer was something to be kept secret, not shared with neighbors, friends, and sometimes even family. It was private. Today, that is the case to a far lesser extent.

The issue of perceptions is an important one, not least because the law (in the US) sometimes rests on what the reasonable person’s expectations of privacy are. In a recent court case in the United States, the judges ruled that the reasonable person does not expect the authorities to know exactly where he has driven over the period of a month. As a result, they deemed inadmissible evidence gleaned from a GPS device attached to a suspect’s car without a warrant.

This, of course, raises the question as to what cultural or social conditions might precipitate a change in this expectation. If Facebook Places wins over the hearts of all Facebook users, and we become used to logging our location ourselves throughout the day, would this constitute a lessening of the expectation to locational privacy? Would this make the use of GPS devices by the police without a warrant less unexpected by the reasonable person?

The flip side of this is that the private sector carries out completely legal activities that the reasonable person really might not expect, such as his shopping habits on one site influencing the adverts he sees on another site that is unrelated in every way apart from having a contract with the same ad provider. I'm not a lawyer, and I might be missing the point of these reasonable expectations, but it does appear to me that (1) what is reasonable can change over time, and (2) that which we might reasonably expect not to be going on is already going on.

Dystopia or utopia?

September 14, 2010 by · Leave a Comment
Filed under: Uncategorized 

I stumbled across this cartoon. It’s quite amusing. I’m a man, so I first read it from the man’s perspective. When you do that, the cartoon appears to be representing a dystopian future. But then I thought that there might be some women who would find this really quite useful. And then I remembered that there are already websites out there in which women can warn other women about men they met on dating sites.

So while we might disagree as to whether the cartoon is dystopian or utopian, it is not particularly futuristic. All the technologies required for doing this are already in place…

The future of privacy

Thanks to dudelol.com

More about who’s following you around the net

August 1, 2010 by · Leave a Comment
Filed under: Uncategorized 

The Wall Street Journal did a pretty serious investigation into the cookies and other bits and pieces that websites put on our computers as we surf our way around the internet. From reading that article I learnt that the process of selling ad space based on our profile is an extremely quick one and can happen even as the page we are viewing is loading. I also learnt that the companies building our profiles can offer incredibly precise segmentations, but that they do not actually know who we are.

Here, though, is the rub. Let’s say they don’t know our names, but I’m pretty sure that they could work out who I am based on the sports, technology and health pages I look at. The sites I look at to check movie times would give you a pretty good idea of where in the world I live. On the other hand, all of the data processing is done automatically – no one at these companies is actually trying to work out who anyone is based on their profiles. Frankly, it would be a waste of their time.

But what if the data got out? When AOL released hundreds of thousands of searches in what it thought was a generous gesture to the research community, some zealous investigators tracked down individual users. This suggests that the main privacy issue is one of data protection. Unlike with a bank, though, whose security measures we can at least assess, we don't really have a clue what is going on inside these companies. We know that, in the US at least, they are legally obliged to report data breaches and will have to pay hefty fines when they occur, and that this should give them good reason to secure their (our?) data, but perhaps by then the damage will be done.

If you can’t beat ’em, join ’em?

July 18, 2010 by · Leave a Comment
Filed under: Uncategorized 

An article in Friday’s New York Times discusses a new start-up, Bynamite. What the people at Bynamite want is for us to take back control over the information that advertising networks have about us. As they say on their home page:

You should always be in control over what advertisers know about you – you should be able to see it, change it, and delete it. If they won’t give you control, they shouldn’t use your information.

So they tell you what the ad networks know about you, giving you the chance to change that. This all seems very in keeping with recent developments surrounding privacy – being able to know what others know about you and correct errors in that information.

But I’m wondering about the direction of this. One of the features of online advertising is that it commodifies our online behavior – the links we click, the searches we do – and turns it into commercially valuable information. What Bynamite is doing is saying that this is an irreversible process, so you might as well have some input into it. Some input may be better than none, of course, but there is a sense here in which this new start-up is encouraging us to be active players in our own commodification and to help advertisers target us even more accurately.

The truth is, then, that Bynamite is not a company that has anything to do with privacy, except in the rather loose sense of controlling the information that marketers possess about us. The benefit they offer us is that we might see fewer “irrelevant” adverts. Excuse me if I’m underwhelmed…

Update: One of the Bynamite founders took the time to comment on this post. He points out that:

Bynamite opts you out of ad networks that *don’t* give you enough transparency and control. If they won’t show you what they know about you, and give you the power to change their profile, then they can’t use your information. That give-and-take is built into the product, so that if we are successful, it should mean an overall increase in consumer power over the ad industry.
