
Thinking privacy after Thinking Digital 2014

What better way to launch back into blogging than with a download of what’s running through my head after another exceptional Thinking Digital conference?

This year, alongside many hopeful and inspiring talks, there was a distinct element of ‘digital underbelly’ – the things unsettling the presenters and/or audience in the world of tech. A key theme was privacy, with Aral Balkan (@aral) throwing down the gauntlet against the hegemony of the data-monetisation business model and the fallacy of ‘free’ digital services, and an excellent panel discussion of Snowden and beyond.

So I started trying to work out where I stand on privacy, which is harder than I thought it would be. As a classic British liberal individualist by upbringing, you’d think my position would be clear – privacy good, invasion of privacy (whether by companies or government) bad. I do think privacy is good, but I also find myself wanting to unpick some of the issues exercising the conference, because, as with most things, it’s not quite that simple.

When it comes to the Snowden revelations, two arguments are often used to justify invasion of privacy by government – which has happened on the back of us giving our data to companies: ‘There’s no problem if you’ve got nothing to hide’, and ‘Governments need to protect us, e.g. from terrorist atrocities’.

Let’s take the first one first. It’s flawed on several levels.

  • Actually, everyone has something to hide. Not in a bad way, just that we all have a point where disclosure would become uncomfortable – that’s pretty much why we have a concept of privacy in the first place, and don’t (usually) conduct our intimate lives in full view. Exactly where that line should lie, though, is partly cultural (some countries are much less worried about e.g. the government using their medical data than others), and should be subject to debate.
  • It’s not a very healthy approach to democracy to effectively say that the state should have unlimited access. That said, a perfectly functional democracy involves all sorts of trade-offs in terms of what people consider to be public goods or the role of their state – so again, there’s a debate to be had. At the least, though, there should be effective oversight and control of how far governments are going – the potential for misuse being what it is.
  • Even if you feel that you personally are comfortable with how far your government is going – e.g. you ‘have nothing to hide’ in your emails – what about all of your friends and contacts, and their governments? If your email provider can and does ‘read’ your emails, and any government can potentially piggy-back on that for intelligence, what are you letting your email correspondents in for? And what gives you the right to make that decision for them?
  • Personally, and almost certainly wrongly, I feel less ‘icky’ about my data being parsed by computers and algorithms than e.g. having my mail opened and read by a person. I have ‘nothing to hide’ from a computer which I have no reason to suppose will identify any pattern requiring the intervention of a person – i.e. it will not ‘judge’ me as anything worse than uninteresting. None of that judgement – or lack of it – is transparent to me, though, so who’s to say that it will always be benign?

I find the ‘government surveillance is necessary for security’ argument tricky, too.

  • By definition, there is an absence of transparent information about how useful intelligence-gathering actually is – not to mention a vested interest in making it seem as effective as possible, e.g. by claiming that potential attacks have been ‘foiled’.
  • Even on a common sense basis, there must be a lot of caveats to the belief that access to more information gives better intelligence results. For starters, volume is useless unless you have the tools to do something effective with it, which brings us straight back to a lack of evidence one way or the other.
  • There clearly are instances of intelligence services getting things wrong – including in democratic, reasonably benevolent states. How many mistakes are we willing to live with as ‘for the greater good’? This is both a specific, practical and cultural question about our current circumstances, and a more philosophical one. The death penalty is a bit similar – I am against it for a bucket of reasons, some thought-through and some instinctive: because I don’t consider the loss of innocent life worth it; because I consider the ‘job’ of executioner dehumanising; and because I believe in rehabilitation in all but the most extreme cases. When it comes to surveillance, it seems to me there is a philosophical problem about intention vs action – they are not the same, the first is not a crime, and we’ve all had thoughts that we wouldn’t want to be judged for. ‘Evidence’ about my intentions when I haven’t yet done anything might have legitimate uses – e.g. to increase security at an event in response to a threat or risk – but if it’s used in any way to target me personally, before I have turned thought into illegal action, that seems to me to cross a line in terms of democracy and what is acceptable as ‘collateral damage’ to our civil liberties. The ideal might be that the security services identify someone as a threat, monitor them to gather evidence, then arrest them at the point of action, just in time to prevent an atrocity – but the likelihood of this all coming together, even for the best security services on Earth, seems small.
  • Christian Payne (Documentally) responded to the ‘government protect us’ point during the panel discussion by saying that the questioner was asking him to live in a prison in order to be safe. I’d go a step further and characterise it as living in a prison in order to be free (from terrorism, fear etc.) – i.e. categorically flawed. Are we prepared and democratically able to decide to trade aspects of our privacy for other goods? Yes. Very few of us would argue that there should be no intelligence-gathering or security services at all. But does that mean we should accept almost any level of encroachment in the name of security? No. Governments that have intelligence, even broadly benevolent ones, will want to use it, and we might not like how they do.
  • There will always be individuals who can contrive something nastier or sneakier (or both) than the system can prevent – and really, we might on some level be glad of this: that the human element remains more powerful than the things we have created. It’s a bit like tax – you can’t just endlessly expand the law in the hope of finding and closing every loophole, because someone will always be a step ahead – and you can’t just endlessly erode civil liberties in the hope that everything nasty will go away and we can all live happily ever after.
  • A final, more personal and cultural point – I spent a lot of time in London when IRA bomb scares (and sometimes actual bombs) were more or less a regular occurrence. The attitude of most people then was ‘Oh bother, that station’s shut again, I’ll have to take a different route home’, not ‘let’s give the government unlimited power to make war on terror’. Granted, the level of potential damage was lower, but I think we were freer, and therefore more successful in combating the extremism that feeds terrorism, when we took that attitude.

So, this has all taken rather longer than even I expected. I hope I’ve demonstrated some flaws in two arguments for invasion of privacy.

And yet I do still use Gmail, and other companies, services and apps that monetise my data. Previously, I would have said that’s because they provide great things that I want to use – and that’s still true – but I might not choose them if I had any real alternatives.

I think the critical things are that:

  1. I am using certain services consciously, in the knowledge that I am ‘paying’ in data, and with a willingness to debate whether this is desirable;
  2. I am doing this in the context of a wider debate about the points above and what trade-offs we are comfortable with; and
  3. I have real options.

For me personally, the first point is reality – although for many people it still isn’t, and the conference recommendation to watch Terms and Conditions May Apply is one I would repeat.

The second point is reality in certain circles, but the debate doesn’t seem to go wide enough. And by their nature, things like terms and conditions are not very transparent to most people, stifling a debate which might otherwise happen.

The third point doesn’t seem to be reality unless you have a level of technical capability (and patience with design flaws / difficulties) way above average – I’m not there. So I’ll be looking out for what Aral is doing next; reading up on Pretty Good Privacy (PGP – see the sketch below for a flavour of what it involves); and generally keeping my eyes and mind open.
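Since I mention PGP: I’m no expert, so treat the following as a toy sketch rather than a guide. It uses the python-gnupg library (a wrapper that needs GnuPG itself installed), and every identity and passphrase in it is made up purely for illustration.

```python
# A toy sketch of PGP-style encryption with python-gnupg.
# Assumes GnuPG is installed and `pip install python-gnupg` has been run.
# All identities and passphrases below are illustrative placeholders.
import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")  # isolated keyring for the demo

# Generate a keypair for a hypothetical recipient, alice@example.com.
key_input = gpg.gen_key_input(
    name_email="alice@example.com",
    passphrase="correct horse battery staple",
    key_type="RSA",
    key_length=2048,
)
key = gpg.gen_key(key_input)

# Encrypt a message so that only Alice's private key can read it.
encrypted = gpg.encrypt("Meet me at the conference.", key.fingerprint)
print(str(encrypted))  # ASCII-armoured ciphertext, safe to send over email

# Alice decrypts with her private key and passphrase.
decrypted = gpg.decrypt(str(encrypted), passphrase="correct horse battery staple")
print(decrypted.data.decode())  # "Meet me at the conference."
```

The point, as I understand it, is that the message stays unreadable to anyone in between – including the email provider – which speaks directly to the ‘piggy-backing’ worry above.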

Thanks again to Thinking Digital, and thanks for reading. More on the rest of the conference and on other things soon!

Addendum – This is a really excellent – if scary – blog post by Quinn Norton about why everything is broken… ‘Let me make something clear, because even some geeks don’t get this: it doesn’t matter how good your encryption is if your attacker can just read your data off the screen with you, and I promise they can’.