Date: Sun, 24 May 2015 12:09:32 +0100 (BST)
From: William Waites <ww[at]styx.org>
Subject: <nettime> What should GCHQ do?
Edinburgh, May 24 2015
Back in late April, an invitation was circulated around the School of
Informatics which asked academics for ideas about what projects GCHQ should
fund in the area of ``Cyber Defense''. Presumably the same invitation went
out to various universities and other organisations. I was very much conflicted
about whether to participate. On the one hand, engaging with GCHQ at all seemed
like a bad idea. On the other, it was an invitation to tell them directly
what I think -- at least then it could be said that they had been told. As
it turns out, the event was cancelled at the last minute with no explanation.
If the event had gone ahead, what would I have said? The topic was defense,
keeping infrastructure and such safe from attack. This part of their job
is different from the offensive surveillance (or ``signals intelligence''
in the jargon) programmes. So it stands to reason that projects that would
make their SIGINT job harder would improve our defensive capabilities and
make ``UK interests'' safer. After all, GCHQ are not the only ones
with offensive capabilities, but theirs are reputed to be pretty well developed,
so trying to defend against them seems like a good tactic for improving
everybody's security. If GCHQ were to fund work in that direction, they would
be making a positive contribution to our collective security. That's the
argument in broad strokes.
What, specifically, could this mean? One thing is to figure out how to get
strong encryption used pervasively. The science is well established, we have
good (technical) quality software that does encryption, but an alarming
amount of communication still happens in the clear -- both the content and
the meta-data. Why is this? Originally the answer may have been expense,
doing encryption is computationally more expensive than not doing it. But
that is no longer much of a concern. Computers are fast. Modern computers
even have hardware support for encryption (whether that hardware support can
be trusted is another important question). Another answer is that
using encryption is difficult. But we know how to make simple, pleasant and
natural user interfaces; surely, if serious effort were brought to bear, this
too could be overcome.
The answers probably lie in psychology, sociology and economics. The false
argument that only criminals need privacy, and that they don't deserve it,
still convinces many people. Worse, the intuition of the average user about the
security properties of their actions does not match the reality. This leads
to people typing their lives into Facebook under the mistaken impression
that this is somehow a private communication with their friends. How can
this mismatch between intuition and reality be corrected? If it were, we
could have an informed population with an accurate perception of the on-line
world, less susceptible to many of the threats on the Internet. Surely the
UK's population is a ``UK interest''.
Furthermore such research could similarly improve the safety of others outwith
the UK since the Internet does not recognise the borders of nation-states.
The security of the global population is also in the UK interest since a
home computer somewhere in another country with a virus can be used to attack
something that the UK cares about. Better that the owner of that home computer
is educated and aware and follows good practices by default so it does not
become infected in the first place. Of course this would limit the capabilities
of agencies in the UK to break into that computer (which, shockingly, is now
completely allowed), but that is worth it, because it is delusional to
think that any bug or exploit that allows this will not also be
used by criminals or by countries that the UK considers to be enemies.
The Internet today is incredibly centralised. In the UK, infrastructure
itself is heavily concentrated in London. A small number of large companies
are responsible for the lion's share of traffic and activity. This concentration
is a risk. It was not how the Internet was conceived to operate. The risk
comes because accidents, disasters and bad actors have a relatively small
number of targets. The concentration makes mass surveillance easier but it
also makes revenue generation using advertisements (a common business model
among large Internet companies) possible. The value of such a company is
roughly proportional to the number of ``eyeballs'' it can sell to advertisers,
so there is a strong incentive to gather as many as possible in one place.
It's a lot harder to tailor advertisements if the communication between these
eyeballs is encrypted. Automated analysis of behaviour patterns is more difficult
and injecting ``relevant'' ads based on content is impossible.
And so we have arrived at the economic problem. The business model of advertising
has the same basic requirements as mass surveillance. Thwarting one by
decentralisation and ensuring confidentiality of communications means thwarting
the other. Improving safety and security by encouraging pervasive encryption
means finding a new economic model for the Internet that does not depend
on surveillance, that transcends the Web 2.0 model of capturing users in silos.
Surely this too can be a fruitful direction for research.