
8 February 2013

Comsec Cliff

More in this thread:

From: Jon Callas <jon[at]>
Date: Fri, 8 Feb 2013 11:26:23 -0800
To: Randombit List <cryptography[at]>
Subject: Re: [cryptography] "Meet the groundbreaking new encryption app set to revolutionize privacy..."


Thanks for your comments, Ian [below]. I think they're spot on.

At the time that the so-called Arab Spring was going on, I was invited to a confab where there were a bunch of activists and it's always interesting to talk to people who are on the ground. One of the things that struck me was their commentary on how we can help them.

A thing that struck me was one person who said, "Don't patronize us. We know what we're doing; we're the ones risking our lives." Actually, I lied. That person said, "don't fucking patronize us," so as to make the point stronger. One example this person gave was of talking to the people providing some social meet-up service: the activists wanted that service to use SSL, and instead they got a lecture on how SSL was flawed and why, therefore, the service wasn't using it. In my opinion, this was just an excuse -- the operators didn't want to do SSL for whatever reason (very likely just the cost and annoyance of the certs), and the imperfection was the excuse. The activists saw it as patronizing and were very, very angry. They had people using this service, and it would be safer with SSL. Period.
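The point about "cost and annoyance of the certs" is worth grounding: mechanically, turning on TLS is a small amount of code, and the cert is the only per-deployment chore. A minimal sketch using Python's standard library (the file names and port are my own placeholders, not anything from the thread):

```python
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server_context(certfile=None, keyfile=None):
    # Sensible protocol/cipher defaults come from the library; the only
    # per-deployment chore is supplying a certificate and key.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

def serve_https(port=8443, certfile="server.crt", keyfile="server.key"):
    # Hypothetical cert paths; any CA-issued (or even self-signed) pair works.
    httpd = HTTPServer(("", port), SimpleHTTPRequestHandler)
    httpd.socket = make_server_context(certfile, keyfile).wrap_socket(
        httpd.socket, server_side=True)
    httpd.serve_forever()
```

Imperfect, certainly -- but it is exactly the kind of "stop in the middle" protection the activists were asking for.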

This resonates with me because of a number of my own peeves. I have called this the "the security cliff" at times. The gist is that it's a long way from no security to the top -- what we'd all agree on as adequate security. The cliff is the attitude that you can't stop in the middle. If you're not going to go all the way to the top, then you might as well not bother. So people don't bother.

This effect is also the same thing as the best being the enemy of the good, and so on. We're all guilty of it. It's one of my major peeves about security, and I sometimes fall into the trap of effectively arguing against security because something isn't perfect. Every one of us has at one time said that some imperfect security is worse than nothing because it might lull people into thinking it's perfect -- or something like that. It's a great rhetorical flourish when one is arguing against some bit of snake oil or cargo-cult security. Those things really exist and we have to argue against them. However, this is precisely being patronizing to the people who really use them to protect themselves.

Note how post-DigiNotar, no one is arguing any more for SSL Everywhere. Nothing helps the surveillance state more than blunting security everywhere.




cryptography mailing list

Date: Thu, 07 Feb 2013 14:52:17 +0300
From: ianG <iang[at]>
To: cryptography[at]
Subject: Re: [cryptography] "Meet the groundbreaking new encryption app set to revolutionize privacy..."

On 7/02/13 02:35 AM, Jeffrey Walton wrote:
> On Wed, Feb 6, 2013 at 7:17 AM, Moti <m at> wrote:
>> Interesting read.
>> Mostly because the people behind this project.

> No offense to folks like Mr. Zimmermann, but I'm very suspect of his
> claims. I still remember the antithesis of the claims reported at

When we [0] were building the original Hushmail applet, we knew the flaw - the company could switch the applet on the customer. The response to that was to publish the applet, and then the customer could check the applet wasn't switched.
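The published-applet countermeasure boils down to comparing what you downloaded against a digest the vendor published. A minimal sketch of that check in Python (the function names are mine; nothing here is Hushmail's actual mechanism):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    # Digest of the applet bytes as actually downloaded.
    return hashlib.sha256(data).hexdigest()

def applet_unchanged(applet_bytes: bytes, published_digest: str) -> bool:
    # Compare against the digest the vendor published out-of-band.
    # compare_digest avoids timing side channels, a good habit even
    # when the digests themselves are public.
    return hmac.compare_digest(sha256_hex(applet_bytes), published_digest)
```

The known weakness, as noted above, is not the mechanism but that almost nobody bothers to run the check.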

Now, you can look at this in two ways. One is that it isn't perfect, as nobody would bother to check their applet. The other is that it isn't perfect, but it was a whole lot better than futzing around with OpenPGP keys and manual decryption. It was the latter 'risk' view that won: Hushmail filled the niche between the hard-core PGP community and the people who did business and needed an easy tool.

This is also the Achilles heel of Skype. It turns out (rumour has it) that the attack kit for Skype that circulated among the TLAs in the late 00s was simply a PC breach kit that captured the Skype externals: keystrokes, voice, screen and so on. Once the TLAs had that, they were happy and they shut up. It was easier for them to breach the PC, slip in the wrapper attack, and listen in than to seriously hack the Skype model. And then the media perception that Skype was unhackable worked again; everyone was happy.

Same will be true of Silent Circle, and they will already know this (note that I have nothing to do with them, I just read the model like anyone else). The security requirement here is not that it be completely unbreakable; they just have to push 99% of the attacks onto the next easiest thing -- the phone itself. Security is lowest common denominator, not highest uncommon numerator. See below.

FWIW, their security model looks pretty damn good, in that it is nicely balanced to their business model (the only metric that matters) and they trialled this through several iterations (ZRTP, I think). They are the right team. Even their business customer looks fantastic (hints abound). If you're looking for an investment tip, this wouldn't be so far off ;)

> I'm also suspect of "... the sender of the file can set it [the
> program?] on a timer so that it will automatically “burn” - deleting
> it [encrypted file] from both devices after a set period of, say,
> seven minutes." Apple does not allow arbitrary background processing -
> it's usually limited to about 20 minutes. So the process probably won't
> run on schedule, or it will likely be prematurely terminated. In
> addition, it is notoriously difficult to wipe an unencrypted secret
> from flash drives and SSDs.

Don't be suspicious, be curious -- this is where security is at.

Remember: The threat is always on the node, it is never on the wire.

Looking back at that Hushmail applet, another anecdote. When I was doing business with a guy who was security paranoid, he used an unpublished nym, encrypted his messages with PGP, and then sent them via Hushmail to me. Life then turned aggressive, and we ended up in court. His side demanded discovery. I took all his untraceable, PGP-encrypted, Hushmail-protected mails and filed them as cleartext discovery, as I was sternly ordered to do by the court. Oops. From there they entered into the transcript as evidence, and from there, others were able to acquire the roadmap via subpoena.

The threat is always on the node. Never the wire.

Your node, your partner's node, your partner's friend's node... It is this that the Mission Impossible deletion feature is aimed at, and it is this real-world node threat that it viably addresses. This is what people want. The fact that it is theoretically imperfect doesn't make it unreasonable.
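The background-processing objection quoted above also has a practical answer: a burn timer need not rely on a scheduled background process at all. If each message carries an absolute expiry time and the store purges on every access, expired messages are gone the moment anyone (or anything) next touches the store. A minimal sketch under that assumption -- this is my own illustration, not Silent Circle's actual design:

```python
import time

class BurnStore:
    """Messages carry an absolute expiry; purging happens at every
    access, so no background scheduler is needed."""

    def __init__(self):
        self._msgs = {}  # msg_id -> (payload, expires_at)

    def put(self, msg_id, payload, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._msgs[msg_id] = (payload, now + ttl_seconds)

    def _purge(self, now):
        # Drop anything whose expiry has passed, however late we run.
        expired = [m for m, (_, exp) in self._msgs.items() if exp <= now]
        for m in expired:
            del self._msgs[m]

    def get(self, msg_id, now=None):
        now = time.time() if now is None else now
        self._purge(now)
        entry = self._msgs.get(msg_id)
        return entry[0] if entry else None
```

A message that expires while the app is suspended is simply never returned once the app wakes, which is the property that matters at the node. (The flash-media wiping point stands; this only addresses the scheduling objection.)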

> Perhaps a properly scoped PenTest with published results would allay my
> suspicions. It would be really bad if people died: "... a handful of
> human rights reporters in Afghanistan, Jordan, and South Sudan have
> tried Silent Text’s data transfer capability out, using it to send
> photos, voice recordings, videos, and PDFs securely."

Nah, this again is the wrong approach. Instead, think of it this way: of 100 human rights reporters, if 99 are protected by this tool and one dies, that is probably a positive. If 100 human rights reporters are scared away by media geeks who say it is unlikely to be perfect, and instead they use gmail and 99 are caught (remember Petraeus), then this is probably a negative.

Human rights reporters already put their lives on the line. Your mission is not to protect their lives absolutely, as if we were analysing the need for a neighbour's swimming pool fence, but to make their reporting more efficient. Which, coincidentally, also means raising the chances that they live to report the next one.

Risks, not absolutes.


[0] I say we - my company had a hand in the original crypto back when Hushmail was Cliff+1. FWIW.