17 August 2013
Jon Callas on What Snowden Is Telling Us
More on this thread:
http://lists.randombit.net/mailman/listinfo/cryptography
Date: Sat, 17 Aug 2013 10:50:42 -0700
To: Bryan Bishop <kanzure[at]gmail.com>
Cc: Crypto List <cryptography[at]randombit.net>
Subject: Re: [cryptography] Reply to Zooko (in Markdown)
On Aug 17, 2013, at 12:49 AM, Bryan Bishop
<kanzure[at]gmail.com> wrote:

> On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas
> <jon[at]callas.org> wrote:
>
>> It's very hard, even with controlled releases, to get an exact byte-for-byte
>> recompile of an app. Some compilers make this impossible because they randomize
>> the branch prediction and other parts of code generation. Even when the compiler
>> isn't making it literally impossible, without an exact copy of the exact
>> tool chain with the same linkers, libraries, and system, the code won't be
>> byte-for-byte the same. Worst of all, smart development shops use the *oldest*
>> possible tool chain, not the newest one, because tool sets are designed for
>> forwards-compatibility (apps built with old tools run on the newest OS) rather
>> than backwards-compatibility (apps built with new tools run on older
>> OSes). Code reliability almost requires using tool chains that are trailing-edge.
>
> Would providing (signed) build vm images solve the problem of distributing
> your toolchain?
Maybe. The obvious counterexample is a compiler that doesn't deterministically
generate code, but there's lots and lots of hair in there, including potential
problems in distributing the tool chain itself: copyrighted tools,
libraries, and so on.
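
To make Bryan's suggestion concrete, here's the check a signed build-VM image
would enable, sketched in Python. The filename and digest are hypothetical
placeholders, not anything we ship; in practice the digest would come from a
signed release manifest.

```python
# A minimal sketch (hypothetical names) of verifying a published build-VM
# image before building with it, so everyone compiles with the same toolchain.
import hashlib

EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_file("build-vm.img") != EXPECTED_SHA256:
    raise SystemExit("VM image does not match the published digest; do not build")
print("toolchain image verified; independent builds are now comparable")
```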
But let's not rathole on that, and get to brass tacks.
I *cannot* provide an argument for security that can be verified on its own.
This is Gödel's second incompleteness theorem: a set of statements S cannot
be proved consistent on its own. (Yes, that's a minor handwave.)

All is not lost, however. We can say, "Meh, good enough," and the problem
is solved. Someone else can construct a *verifier* that is some set of policies
(I'm using the word "policy," but it could be a program) that verifies the
software. However, the verifier can only be verified by a set of policies
constructed to verify it. The only escape is to decide at some point,
"meh, good enough."
I brought Ken Thompson into it because he actually constructed a rootkit
that would evade detection and described it in his Turing Award lecture.
It's not *just* philosophy and theoretical computer science. Thompson flat-out
says that at some point you have to trust the people who wrote the software,
because if they want to hide things in the code, they can.
I hope I don't sound like a broken record, but a smart attacker isn't going
to attack there, anyway. A smart attacker doesn't break crypto, or suborn
releases. They do traffic analysis and make custom malware. Really. Go look
at what Snowden is telling us. That is precisely what all the bad guys are
doing. Verification is important, but that's not where the attacks come from
(ignoring the notable exceptions, of course).
One of my tasks is to get better source releases out there. However, I also
have to prioritize it against other tasks, including actual software improvements.
We're working on a release that will tie together some new anti-surveillance
code along with a better source release. We're testing the new source release
process with some people not in our organization, as well. It will get better;
it *is* getting better.
Jon
_______________________________________________
cryptography mailing list
cryptography[at]randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
Date: Fri, 16 Aug 2013 23:04:38 -0700
To: Zooko Wilcox-O'Hearn <zooko[at]leastauthority.com>
Cc: cryptography[at]randombit.net
Subject: [cryptography] Reply to Zooko (in Markdown)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Also at
http://silentcircle.wordpress.com/2013/08/17/reply-to-zooko/
# Reply to Zooko
(My friend and colleague, [Zooko
Wilcox-O'Hearn](https://leastauthority.com/blog/author/zooko-wilcox-ohearn.html)
wrote an open letter to me and Phil [on his blog at
LeastAuthority.com](https://leastauthority.com/blog/open_letter_silent_circle.html).
Despite this appearing on Silent Circle's blog, I am speaking mostly for
myself, only slightly for Silent Circle, and not at all for Phil.)
Zooko,
Thank you for writing and your kind words. Thank you even more for being
a customer. We're a startup and without customers, we'll be out of business.
I think that everyone who believes in privacy should support with their
pocketbook every privacy-friendly service they can afford to. It means a
lot to me that you're voting with your pocketbook for my service.
Congratulations on your new release of [LeastAuthority's
S4](https://leastauthority.com)
and
[Tahoe-LAFS](https://tahoe-lafs.org/trac/tahoe-lafs).
Just as you are a fan of my work, I am an admirer of your work on Tahoe-LAFS
and consider it one of the best security innovations on the planet.
I understand your concerns, and share them. One of the highest priority tasks
that we're working on is to get our source releases better organized so that
they can effectively be built from [what we have on
GitHub](https://github.com/SilentCircle/).
It's suboptimal now. Getting source releases out is harder than one might
think. We're a startup and are pulled in many directions. We're overworked
and understaffed. Even in the old days at PGP, producing effective source
releases took years of effort to get down pat. It often took us four to six
weeks to get the sources out even when delivering one or two releases per
year.
The world of app development makes this harder. We're trying to streamline
our processes so that we can get a release out about every six weeks. We're
not there yet, either.

However, even when source code is an automated part of our software releases,
I'm afraid you're going to be disappointed by how verifiable the releases are.
It's very hard, even with controlled releases, to get an exact byte-for-byte
recompile of an app. Some compilers make this impossible because they randomize
the branch prediction and other parts of code generation. Even when the compiler
isn't making it literally impossible, without an exact copy of the exact
tool chain with the same linkers, libraries, and system, the code won't be
byte-for-byte the same. Worst of all, smart development shops use the *oldest*
possible tool chain, not the newest one, because tool sets are designed for
forwards-compatibility (apps built with old tools run on the newest OS) rather
than backwards-compatibility (apps built with new tools run on older
OSes). Code reliability almost requires using tool chains that are
trailing-edge.
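
Here's the byte-for-byte check itself, sketched in Python so it's clear what
a verifier is actually asking for; the two filenames are hypothetical stand-ins
for an official binary and an independent rebuild:

```python
# A minimal sketch of reproducible-build verification: hash the vendor's
# binary and your own rebuild, and compare. Filenames are hypothetical.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256_file("SilentText-official.apk")  # shipped by the vendor
rebuilt = sha256_file("SilentText-rebuilt.apk")    # built from public source

if official == rebuilt:
    print("byte-for-byte identical: the source matches the shipped app")
else:
    print("builds differ: toolchain drift, timestamps, or signing metadata")
```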
The problems run even deeper than raw practicality. Twenty-nine years
ago this month, in the August 1984 issue of "Communications of the ACM" (Vol.
27, No. 8), Ken Thompson's famous Turing Award lecture, "Reflections on Trusting
Trust," was published. You can find a facsimile of the magazine article at
<https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf>
and a text-searchable copy on Thompson's own site,
<http://cm.bell-labs.com/who/ken/trust.html>.
For those unfamiliar with the Turing Award, it is the most prestigious award
a computer scientist can win, sometimes called the "Nobel Prize" of computing.
The site for the award is at
<http://amturing.acm.org>.
In Thompson's lecture, he describes a hack that he and Dennis Ritchie did
in a version of UNIX in which they created a backdoor to UNIX login that
allowed them to get access to any UNIX system. They also created a
self-replicating program that would compile their backdoor into new versions
of UNIX portably. Quite possibly, their hack existed in the wild until UNIX
was recoded from the ground up with BSD and GCC.
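
A toy sketch of the idea in Python (an illustration of the mechanism, nothing
like Thompson's actual C code): the "compiler" backdoors `login` when it
compiles it, and re-inserts that very logic whenever it compiles a clean
compiler, so auditing the compiler's source finds nothing.

```python
# Toy "trusting trust": compilation here is just source-to-source rewriting.
BACKDOOR = 'if password == "joshua": return True  # hidden master password'

def evil_compile(source: str, target: str) -> str:
    if target == "login":
        # Attack 1: inject a master password into the login program.
        return source.replace(
            "def check(password):",
            "def check(password):\n    " + BACKDOOR)
    if target == "compiler":
        # Attack 2: propagate this injection logic into the new compiler,
        # even though the compiler's *source* is perfectly clean.
        return source + "\n# (evil_compile's two attacks re-inserted here)"
    return source

clean_login = "def check(password):\n    return verify(password)"
print(evil_compile(clean_login, "login"))  # output now contains the backdoor
```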
In his summation, Thompson says:
> The moral is obvious. You can't trust code that you did not totally
> create yourself. (Especially code from companies that employ people
> like me.) No amount of source-level verification or scrutiny will
> protect you from using untrusted code. In demonstrating the
> possibility of this kind of attack, I picked on the C compiler. I
> could have picked on any program-handling program such as an
> assembler, a loader, or even hardware microcode. As the level of
> program gets lower, these bugs will be harder and harder to detect.
> A well installed microcode bug will be almost impossible to detect.
Thompson's words reach out across three decades of computer science, and
yet they echo Descartes from three centuries prior to Thompson. In Descartes's
1641 "Meditations," he proposes the thought experiment of an "evil demon"
who deceives us by simulating the universe, our senses, and perhaps even
mathematics itself. In his meditation, Descartes decides that the one thing
that he knows is that he, himself, exists, and the evil demon cannot deceive
him about his own existence. This is where the famous saying, "*I think,
therefore I am*" (*Cogito ergo sum* in Latin) comes from.
(There are useful Descartes links at:
<http://www.anselm.edu/homepage/dbanach/dcarg.htm>
and
<http://en.wikipedia.org/wiki/Evil_demon>
and
<http://en.wikipedia.org/wiki/Brain_in_a_vat>.)
When discussing thorny security problems, I often avoid security ratholes
by pointing out Descartes by way of Futurama and saying, "I can't prove I'm
not a head in a jar, but it's a useful assumption that I'm not."
Descartes's conundrum even finds its way into modern physics. It is presently
a debatable yet legitimate theory that our entire universe is a software
simulation of a universe. Martin Savage of the University of Washington
<http://www.phys.washington.edu/~savage/>
has an interesting paper from last November on arXiv
<http://arxiv.org/pdf/1210.1847v2.pdf>.
You can find an amusing video at
<http://www.huffingtonpost.com/2012/12/24/universe-computer-simulation_n_2339109.html>
in which Savage even opines that our descendants are simulating us to understand
where they came from. I suppose this means we should be nice to our kids
because they might have root.
Savage tries to devise an experiment to show that you're actually in a
simulation, and as a mathematical logician I think he's ignoring things like
math. The problem is isomorphic to writing code that can detect that it's
running on a virtual machine. If the virtual machine isn't trying to evade
detection, then it's certainly possible (if not probable -- after all, the
simulators might want us to figure out that we're in a simulation). Unless,
of course, they don't, in which case we're back not only to Descartes, but
to Gödel's two incompleteness theorems and their cousin, the Halting Problem.
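
The Halting Problem connection, as a toy in Python (a rendering of the
standard diagonal argument, nothing deeper):

```python
# If a perfect oracle `halts(f)` existed, a program built to do the
# opposite of the oracle's prediction about itself would refute it --
# the same bind a perfect "am I in a VM?" detector is in against a
# simulator that adapts to it.
def make_trouble(halts):
    def trouble():
        if halts(trouble):   # oracle says "trouble halts"...
            while True:      # ...so loop forever, contradicting it
                pass
        # oracle says "trouble loops", so fall through and halt instead
    return trouble

# For any implementation of `halts`, its answer about make_trouble(halts)
# is wrong: True forces a loop, False forces a halt.
```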
While I'm at it, I highly, highly recommend Scott Aaronson's new book, "Quantum
Computing Since Democritus"
<http://www.scottaaronson.com/democritus/>
which I believe is so important a book that I bought the Dead Tree Edition
of it. ([Jenny
Lawson](http://thebloggess.com)
has already autographed my Kindle.)
Popping the stack back to security, the bottom line is that you're asking
for something very, very hard: a solution to an old philosophical problem,
as well as, in effect, that I prove Gödel wrong. I'm flattered by
the confidence in my abilities, but I believe you're asking for the impossible.
Or perhaps I'm programmed to think that.
This limitation doesn't apply to just *my* code. It applies to *your* code,
and it applies to all of us. (Tahoe's architecture makes it amazingly resilient,
but it's not immune.) It isn't just mind-blowing philosophy mixed up with
Ken Thompson's Greatest Hack.
Whenever we run an app, we're trusting it. We're also trusting the operating
system that it runs on, the random number generator, the entropy sources,
and so on. You're trusting the CPU and its microcode. You're trusting the
bootloader, be it EFI or whatever, as well as
[SMM](http://en.wikipedia.org/wiki/System_Management_Mode)
on Intel processors -- which could have completely undetectable code running,
doing things that are scarily like Descartes's evil demon. The platform-level
threats are so broad that I could bore people for another paragraph or two
just enumerating them.
You're perhaps trusting really daft things like [modders who slow down entropy
gathering](http://hackaday.com/2013/01/04/is-entropy-slowing-down-your-android-device/)
and [outright
bugs](http://android-developers.blogspot.com/2013/08/some-securerandom-thoughts.html).
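
To show the shape of that last class of bug in a few lines of Python
(illustrative only; the linked post concerns Android's SecureRandom, not
this code):

```python
# Contrast trusting the OS entropy pool with the classic app-level
# mistake: seeding a userspace PRNG with something guessable.
import os
import random
import time

good_key = os.urandom(32)  # kernel CSPRNG: the right default for keys

rng = random.Random(int(time.time()))  # guessable seed, deterministic PRNG
bad_key = bytes(rng.getrandbits(8) for _ in range(32))
# An attacker who can bound the timestamp replays the seed and recovers
# bad_key offline -- no cryptanalysis of any cipher required.
```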
Ironically, the attack vector you suggest (a hacked application) is one of
the harder ways for an attacker to feed you bad code. On mobile devices,
apps are digitally signed and delivered by app stores. Those app stores have
a vetting process that makes *targeted* code delivery hard. Yes, someone
could hack us, hack Google or Apple, or all of us, but it's very, very hard
to deliver bad code to a *specific* person through this vector, and even
harder if you want to do it undetectably.
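
The property doing the work there can be sketched in Python with the PyNaCl
library (a stand-in; the real app stores use their own signing formats):
every device checks one publisher signature, so a per-victim binary needs a
forged signature or a stolen key.

```python
# Minimal publisher-signing sketch: tampered or substituted binaries
# fail the same verification every other device performs.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

publisher_key = SigningKey.generate()       # held by the vendor/store
signed_app = publisher_key.sign(b"app-binary-v1.0")

verify_key = publisher_key.verify_key       # baked into every device
try:
    verify_key.verify(signed_app)           # same check for every user
    print("signature valid: this is the binary everyone else got")
except BadSignatureError:
    print("tampered binary: a targeted swap is detectable")
```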
In contrast, targeted malware is easy to deploy. Exploits are sold openly
in exploit markets, and can be bundled up in targeted advertising. Moreover,
this *has* happened, and is known to be a mechanism used by the FBI, the
German Federal Police, the Countries Starting With the Letter 'I' (as
a friend puts it), and everyone's favorite, the People's Liberation Army.
During the Arab Spring, a now-defunct government simply procured some
JavaScript malware and dropped it into browsers to send it passwords from
non-SSL sites.
Thus, I think that while your concern does remind me to polish up my source
code deployment, if we assume an attacker like a state actor that targets
people and systems, there are smarter ways for them to act.
I spend a lot of time thinking, "*If I were them, what would I do?*" If you
think about what's possible, you spend too much time on low-probability events.
Give yourself that thought experiment. Ask yourself what you'd do if you
were the PLA, or NSA, or a country starting with an 'I.' Give yourself a
budget at several orders of magnitude: a grand, ten grand, a hundred grand,
a million bucks. What would you do to hack yourself? What would you do to
hack your users without hacking you? That's what I think about.
Over the years, I've become a radical on usability. I believe that usability
is all. It's easy to forget it now, but PGP was a triumph because you didn't
have to be a cryptographer, you only had to be a techie. We pushed PGP forward
so that you could be non-technical and get by, and then we created PGP Universal,
which was designed to allow complete ease of use with a trusted staff. That
trusted staff was the fly in the ointment of Silent Mail and the crux of
why we shut it down -- we created it because of usability concerns and killed
it because of security concerns. Things that were okay ideas in May 2013
were suddenly not good ideas in August. I'm sure you've noted our belief in
usability when using our service. Without usability comparable to the
non-secure equivalent, we are nothing, because the users will just not
be secure.
I also stress that Silent Circle is a *service*, not an app. This is hard to
remember, and even we are not as good at it as we need to be. The service is there
to provide its users with a secure analogue of the phone and texting apps
they're used to. The difference is that instead of having utterly no security,
they have a very high degree of it.
Moreover, our design is meant to minimize the trust you need to place in us.
Our threat model includes ourselves as a threat, which is unusual. You're one
of the very few other people who do something similar. We have technology
and policy that make an attack on *us* unattractive to the adversary.
You will soon see some improvements to the service that improve our resistance
to traffic analysis.
The flip side of that, however, is that it means that the device is the most
attractive attack point. We can't help but trust the OS (from RNG to sandbox),
bootloader, hardware, etc.
Improvements in our transparency (like code releases) compete for tight
resources with improvements in the service and apps. My decisions in deploying
those resources reflect my bias: I'd rather have an A grade in the service
with a B grade in code releases than an A in code releases and a B service.
Yes, it makes things harder for you and others, but I have to look at myself
in the mirror, and my emphasis is on service quality first, reporting just
after that. Over time, we'll get better. We've not yet been running for a
year. Continuous improvement works.
I'm going to sum up with the subtitle of the ACM article version of Ken
Thompson's speech. It's not on his site, but it is in the facsimile article:
> To what extent should one trust a statement that a program is free
> of Trojan horses? Perhaps it is more important to trust the people
> who wrote the software.
Thank you very much for your trust in us, the people. Earning and deserving
your trust is something we do every day.
Regards,
Jon
-----BEGIN PGP SIGNATURE-----
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii
wj8DBQFSDxJ+sTedWZOD3gYRAiDiAJ0bR0EOetfQpPSTDtWX1qyn6wcIcACfbi5Z
M9oM0D1yL77QHaw6RnEEFIU=
=7StS
-----END PGP SIGNATURE-----
Date: Fri, 16 Aug 2013 21:46:10 +0000
From: Zooko Wilcox-O'Hearn <zooko[at]leastauthority.com>
To: cryptography[at]randombit.net
Subject: [cryptography] open letter to Phil Zimmermann and Jon Callas of
Silent Circle, re: Silent Mail shutdown
https://leastauthority.com/blog/open_letter_silent_circle.html