I stumbled across this cartoon. It’s quite amusing. I’m a man, so I first read it from the man’s perspective. When you do that, the cartoon appears to be representing a dystopian future. But then I thought that there might be some women who would find this really quite useful. And then I remembered that there are already websites out there in which women can warn other women about men they met on dating sites.
So while we might disagree as to whether the cartoon is dystopian or utopian, it is not particularly futuristic. All the technologies required for doing this are already in place…
The Wall Street Journal did a pretty serious investigation into the cookies and other bits and pieces that websites put on our computers as we surf our way around the internet. From reading that article I learnt that the process of selling ad space based on our profile is an extremely quick one and can happen even as the page we are viewing is loading. I also learnt that the companies that are building our profiles can offer incredibly precise segmentations, but that they do not actually know who we are.
Here, though, is the rub. Let’s say they don’t know our names, but I’m pretty sure that they could work out who I am based on the sports, technology and health pages I look at. The sites I look at to check movie times would give you a pretty good idea of where in the world I live. On the other hand, all of the data processing is done automatically – no one at these companies is actually trying to work out who anyone is based on their profiles. Frankly, it would be a waste of their time.
But what if the data got out? When AOL released hundreds of thousands of searches in what it thought was a generous gesture to the research community, some zealous investigators tracked down individual users. This suggests that the main privacy issue is one of data protection. Unlike the bank, however, whose security measures you can assess, we don’t really have a clue what is going on in these companies. We know that they will have to pay hefty fines if there are data breaches, which, in the US at least, they are legally obliged to report, and that this should serve as a good reason for them to secure their (our?) data, but perhaps by then the damage will be done.
An article in Friday’s New York Times discusses a new start-up, Bynamite. What the people at Bynamite want is for us to take back control over the information that advertising networks have about us. As they say on their home page:
You should always be in control over what advertisers know about you – you should be able to see it, change it, and delete it. If they won’t give you control, they shouldn’t use your information.
So they tell you what the ad networks know about you, giving you the chance to change that. This all seems very in keeping with recent developments surrounding privacy – being able to know what others know about you and correct errors in that information.
But I’m wondering about the direction of this. One of the features of online advertising is that it commodifies our online behavior – the links we click, the searches we do – and turns it into commercially valuable information. What Bynamite is doing is saying that this is an irreversible process, so you might as well have some input into it. Some input may be better than none, of course, but there is a sense here in which this new start-up is encouraging us to be active players in our own commodification and to help advertisers target us even more accurately.
The truth is, then, that Bynamite is not a company that has anything to do with privacy, except in the rather loose sense of controlling the information that marketers possess about us. The benefit that they offer us is that we might see fewer “irrelevant” adverts. Excuse me if I’m underwhelmed…
Update: One of the Bynamite founders took the time to comment on this post. He points out that:
Bynamite opts you out of ad networks that *don’t* give you enough transparency and control. If they won’t show you what they know about you, and give you the power to change their profile, then they can’t use your information. That give-and-take is built into the product, so that if we are successful, it should mean an overall increase in consumer power over the ad industry.
I met a hi-tech entrepreneur who told me that the Web is nothing more than a marketing platform. We consume content while being provided with adverts. That, he summarised, is the business model of the internet. As we know, advertisers want to target their ads as precisely as possible. On television they do this by researching or guessing who is watching what. On the internet, advertisers are able to know exactly who is seeing which ads, and this makes internet advertising a very attractive proposition.
But how does it work? Look at this diagram:
The first thing to note is that there are three main actors: websites, who publish adverts; advertising networks, who buy advertising space on websites; and companies, who pay advertising networks to advertise their wares. This is very much the old media model. Where it becomes “new media”, though, is in the way that the advertising networks are able to track our online behavior. When I visit Website 1, I get a cookie from whichever advertising networks advertise on that site. This also happens when I visit Websites 2, 3 and 4. The clever part is that if any of Websites 2, 3 and 4 are also part of an advertising network’s network, then that network knows that I’ve been to those sites too. The bigger the advertising network, the more it knows about my surfing behavior (and hence about my consumption behavior).
It is important to realise that, unlike with television audiences, online advertisers do not have to deal with aggregates of audiences, but that they can individually tailor the adverts that we read. When placing adverts on television, an advertiser can quite confidently predict that the audience of a football match will be mostly male. But when you visit a site in the network of an online advertiser, they don’t have to guess what your interests are; if their network is large enough, they know exactly what you are interested in, which sites you have visited, what you did there, how long you were there, and so on and so on.
It turns out that there is a low-tech solution that can help you prevent seeing photos of yourself appear in other people’s Flickr streams. I’m not sure if you can actually see through them though.
An Israeli researcher, Yair Neuman, has developed an algorithm that analyses blogs and identifies depressed bloggers based on their writing. When clinical psychologists were asked to evaluate the same blogs, their views and the software’s output matched in 80% of the cases. Neuman sees this development as having practical applications: for instance, by enabling mental health workers to identify individuals who might need psychotherapeutic help.
Neuman himself seems well aware of the privacy issues here. He had permission from all of the bloggers whose writings were included in the study to use their blog entries; also, in his description of a usage scenario of his software he says, “Through this software it will be possible to contact a blogger and request a general examination of the contents of his blog. If the blogger agrees, he will know whether he needs to seek professional counseling for any possible distress”.
There are a number of interesting issues here. First off, one might say that this has no place in a blog about privacy as people’s blogs are inherently public. If someone is blogging, they want the world to know what they have to say. However, they may not have intended their writing to be analysed for the sake of making some kind of psychological report.
Second, one might say that a blogger who wants privacy can blog under a pseudonym. Third, one might say that an algorithm that can tell me that someone who wrote “oh my god my life is not worth living i’m so depressed” is depressed is not that clever. We don’t know which texts were analysed, so we’ll have to withhold judgment on that.
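To make the "not that clever" objection concrete, here is what such a naive baseline might look like. Neuman's actual algorithm is not described here, so this word list, the scoring rule, and the example threshold are all invented for illustration; the interesting question is how far beyond something this crude his method goes.

```python
# A deliberately naive baseline: score a post by the fraction of
# mood-laden words it contains. The word list is invented for
# illustration and is not Neuman's method.
NEGATIVE_WORDS = {"depressed", "hopeless", "worthless", "tired", "alone"}

def naive_mood_score(text):
    """Fraction of words that appear in the negative-word list."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?'") in NEGATIVE_WORDS)
    return hits / len(words)

post = "oh my god my life is not worth living i'm so depressed"
print(naive_mood_score(post))  # 1 hit out of 12 words
```

Anything that merely counts explicit statements of distress like this would add little over a human skim; the claim worth scrutinising is whether the algorithm detects distress in posts that never mention it.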
If there is a privacy issue here – and I’m pretty sure that there is – it is related to the technologies I wrote about in a previous post, namely, technologies that claim to infer our psychological state of mind from remotely taken physiological readings. If this technology could create some kind of psychological profile of a blogger, even, or rather especially, when he is not writing about how he feels (but rather about sport, say, or technology and privacy), then it would be a very powerful tool. I am not saying that this is what Neuman’s algorithm can do right now; instead I’m using it to think about possible developments, one of which might be the creation of a psychological profile of a blogger that stretches over years (the internet never forgets, remember).
I’m just reading an interesting article about how the mass media (i.e. adults) talk about how children and youth use technology. You know the story: kids today share every intimate detail with each other about their lives; they have no sense of privacy; when they grow up, there really will be no privacy left in the world…
But Susan Herring makes the excellent point – one of those points which is obvious once you come across it, but which you hadn’t necessarily thought of beforehand – that today’s youth and children are the most monitored and watched over generation ever to have lived. Their schools have CCTV cameras and they pay for their lunch at the canteen electronically (meaning their parents can see what they had for lunch every day this week); their parents install internet monitoring software on their computers; their parents use their kids’ mobile phones and their cars’ GPS systems to track them…
All of which raises the slightly subversive question: when adults say with despair that “the kids” don’t care about their privacy, are the adults not being perhaps a little bit hypocritical?
In the last few weeks I have had the opportunity to meet and chat with some very clever entrepreneurs who are doing all sorts of very clever things with computers and sensors and things. Much of their work is aimed at making us safer (by stopping terrorists at airports, for instance), but you can’t help acknowledging the creepiness factor of it either.
So what have I learnt? Firstly, that there is a lot of work underway that endeavours to link psychological states to physiology. This isn’t all that new – ask anyone who has ever played poker. Secondly, that there is a lot of work underway that endeavours to take physiological measurements in entirely non-intrusive ways. Apparently, you can measure changes in a person’s blood pressure without touching them. You can create heat images of a person’s face without them knowing. You can analyse their voice. Changes in these measurements might indicate psychological states, such as unease and nervousness, but also happiness, love, and so on and so on.
There have always been people who are especially good at reading other people, but this combination of robust linkages between psychology and physiology and the ability to take physiological measurements without the subject’s knowledge opens up all sorts of possibilities.
Without wanting to oversimplify things, this might mean that terrorists will no longer be able to board aeroplanes. I’m sure you can think of a plethora of other applications as well, though, which don’t look so attractive.
Maybe this is all very old news to you, but I’ve been reading a bit about smart meters and smart grids. I don’t know how far down the line they are in Israel, but the EU hopes to see smart meters installed in 80% of homes by 2020.
The smart thing about smart electricity meters (though they could be gas or water meters too) is that they talk to the electricity company about your electricity usage. This means that the electricity company knows exactly how much electricity you use when. In turn, this means that it can have peak and off-peak rates for electricity and charge you accordingly. The benefit of this is that it is expected to encourage people to use electricity more sparingly.
The somewhat surprising privacy implication is that the level of knowledge smart meters can gather is so precise that they can know which devices you are using in your house. Do you cook your food with a microwave oven? How long is your TV on every day? Do you have a big fluorescent light that is growing your home grown weed?
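The way a meter's readings can betray individual appliances is worth sketching. Researchers call this load disaggregation: each appliance has a characteristic power draw, so step changes in the total reading can be matched against known signatures. The wattages, the matching tolerance, and the reading interval below are made-up illustrative values, not real smart-meter specifications.

```python
# Hedged sketch of load disaggregation: infer appliance on/off
# events from step changes in total power draw. All wattages and
# the tolerance are invented illustrative values.
SIGNATURES = {  # appliance -> typical power draw in watts (assumed)
    "microwave": 1100,
    "kettle": 2000,
    "tv": 150,
}

def detect_events(readings, tolerance=50):
    """Match jumps between consecutive meter readings (in watts)
    against known appliance signatures."""
    events = []
    for prev, cur in zip(readings, readings[1:]):
        delta = cur - prev
        for appliance, watts in SIGNATURES.items():
            if abs(abs(delta) - watts) <= tolerance:
                state = "on" if delta > 0 else "off"
                events.append((appliance, state))
    return events

# One reading per minute: ~300 W baseline, microwave on, then off.
readings = [300, 300, 1410, 1410, 310]
print(detect_events(readings))  # [('microwave', 'on'), ('microwave', 'off')]
```

Real disaggregation research handles overlapping appliances and noisier signatures, but even this crude matching shows why minute-by-minute readings are far more revealing than a monthly total.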
Marketers would love this information. So would the government. And so would insurance companies. The thing is, it could also be very useful for each of us to know more about our energy consumption too.
This would seem to be a field in which privacy awareness is high. The Electronic Frontier Foundation has written a clear explanation of the privacy threats that might be realised through a smart grid, as have The Future of Privacy Forum and the Canadian Information and Privacy Commissioner.
This example is a nice reminder of how technologies can serve purposes other than those for which they were created.
It does say in the name of this blog that I am a sociologist. I think that in some of the posts I write more like a privacy advocate, but this time I’ll try to avoid that. Every day I get a few links from Google alerts – I have an alert set up for “the future of privacy”. Just now I found myself reading some kind of business blog. It included the following paragraph:
Privacy is one of today’s “hot button” issues as technological advances have enabled increased access to, accumulation and manipulation of data on an unprecedented scale. Further, there is a growing societal need to share almost everything; Geolocation services like Foursquare and sites like Blippy and Twitter being but a few examples. There also seems to be a generational divide where, as a rule, those under 30 share anything about themselves without thinking twice about it.
This paragraph concisely expresses what would appear to be the dominant views on privacy today. So let’s do some sociology and unpack these views a little bit.
We can’t argue much with the first sentence. That much is undoubtedly true.
However, I cannot accept that there is a “growing social need to share almost everything”. People share information because they “need” to? No, that will not do as an explanation. I’m not going to pretend to know why people share the information they do, but it is not because they “need” to. It might be because they enjoy getting attention, because they find it helps them form and strengthen relationships with people, because of peer pressure, because they think it’s fun, or for a number of other reasons. I don’t know. But I do know that it’s not because they “need” to. If there’s a social need to use Blippy, for instance, then why isn’t the author of the blog I’m discussing using it? Why isn’t everybody using it? (Hint: because lots of people think that their credit card information is none of anyone else’s business.)
The next sentence refers to the “generational divide” between the under-30s and over-30s, whereby the under-30s indiscriminately share any and all information about themselves. This is a very popular conception, but increasingly research is showing it to be misplaced. For instance, a fascinating article in the New York Times discusses the increasing discretion among younger internet users. Meanwhile, a review of teens’ use of the internet suggests that younger people view privacy in a different way from older people, but that this does not imply a thoroughgoing laissez-faire attitude to it. For instance, kids certainly want privacy from their parents. Perhaps paradoxically, it is the kids who use the internet most who are most aware of what they can do to protect their privacy. In any case, lack of knowledge should not be read as a lack of concern.
The idea that the under-30s tell all about themselves is a popular notion, but one that doesn’t really stand up to scrutiny. I expect more and more academic studies to show this in the near future.