5 November 1999
See Bruce Schneier's original comments on Serpent analysis and Ross Anderson's and Brian Gladman's initial responses: http://cryptome.org/bruce-bite.htm
Date: Tue, 02 Nov 1999 08:00:43 -0600
To: "Brian Gladman" <gladman@seven77.demon.co.uk>,
<ukcrypto@maillist.ox.ac.uk>
From: Bruce Schneier <schneier@counterpane.com>
Subject: Re: Serpent
At 05:58 PM 11/1/99 -0600, you wrote:
> > >[Bruce Schneier online interview comments:]
> > > Twofish, and for Mars, RC6, and E2. I worry about a
> > > cipher like Serpent that does not come with any
> > > analysis. Either the designers didn't do any, which is
> > > bad -- or they did it and are hiding it, which is worse.
>[Brian Gladman:]
>It would be truly amazing if Bruce had said this since the Serpent AES
>paper itself contains several pages of analysis. If Bruce had said
>'insufficient analysis' instead of 'any analysis' he might have had a
>point (although Ross's post answers this) but if he really did say the
>words as given above then I fear that he has let his bias show through in
>a major way.
You pegged the problem exactly. And it was my fault. I wrote the words
above, although you are definitely correct in what I meant to write. I
didn't proofread as carefully as I should have. It's unfortunate, to make
an understatement. Ross was right to be annoyed.
>If these really are Bruce's words they can only mean that he has either
>not bothered to read the Serpent AES paper or, alternatively, that he is
>trying to cast Serpent in a bad light in public. Sadly, the latter seems
>more likely since it is very hard to believe that he is unaware of the
>content of the paper.
I really wasn't. The Twofish team had recently spent a week together
trying to analyze the various algorithms. In the discussions about
Serpent, we were continually frustrated by the lack of detail in the
analysis section of the paper. What is the best differential attack? What
does a differential attack against two rounds look like? What are the
avalanche properties of the linear mixing section? These, and others,
were all questions we would have expected to be in the Serpent submission
documentation. Certainly the designers did the analysis; certainly they
knew the answers. I felt that details of the analysis work they did were
being withheld.
I know that both Eli and Lars like to keep unfinished or inconclusive
results to themselves. They both said as much some years ago when I had
the naive thought that we could somehow "rate" algorithms based on the
number of hours smart cryptographers have spent analyzing them. It's a
perfectly reasonable position, but I think the AES process is a special
case. In the Twofish submission, we tried to put everything in the
cryptanalysis section: attacks, attacks that don't work, observations
that we can't turn into attacks--everything. We felt this was the right
thing to do. In our analysis of Serpent, we probably will end up covering
a lot of ground that the designers covered already. This seems
inefficient, if the goal is to choose a good AES standard.
This is very different from the RC6 and Mars submissions, which contain
dozens of pages of analysis work. Of course this doesn't prove that any
algorithms are better than any others, but at least when you're working
on Mars you can see what the designers were thinking when they included
the various pieces they included. To me, it means that as an analyst you
can start covering new ground more quickly.
In retrospect, the comment was unfair and I should never have made it.
But I do think I have a valid point.
>But I share Ross's hope that this report will prove to be inaccurate.
It's inaccurate. But I have to take the blame for the inaccuracy. I typed
the words and didn't pay enough attention while proofing.
Bruce
**************************************************************************
Bruce Schneier, CTO, Counterpane Internet Security, Inc. Ph: 612-823-1098
3031 Tisch Way, 100 Plaza East, San Jose, CA 95128 Fax: 612-823-1590
Free Internet security newsletter. See:
http://www.counterpane.com
From: "Brian Gladman" <gladman@seven77.demon.co.uk>
To: "UK Crypto List" <ukcrypto@maillist.ox.ac.uk>
Cc: "Jim Foti" <jfoti@nist.gov>, "Ross Anderson"
<Ross.Anderson@cl.cam.ac.uk>,
"Bruce Schneier" <schneier@counterpane.com>
Subject: Confidence in AES (was Serpent)
Date: Fri, 5 Nov 1999 11:03:20 -0000
----
[Snip Bruce Schneier message above]
From this clarification it is clearer that the issue that Bruce was
trying to raise is that of the level of detail provided for the
cryptanalysis of AES candidates by the respective design teams.
I certainly consider this an issue worth considering and, as a first cut,
I have looked at each of the five *** round 1 *** specifications for the
AES finalists to see how much coverage of cryptanalysis was provided. I
know this is not a sensible measure but we have to start somewhere.
The number of pages covering cryptanalysis in each of these specifications
are:
RC6 - 2.5 pages
Serpent - 5 pages
Rijndael - 8 pages
Twofish - 15 pages
MARS - 27 pages
This shows a very large variation but actually suggests that the
criticism of 'insufficient cryptanalysis' could be levelled at RC6 even
more than Serpent. In this light, the suggestion by Bruce above, that the
RC6 submission contains 'dozens of pages of analysis work', must be based
on other documents (round 2 publications?).
My own conclusion here is that Bruce was wrong to single out Serpent for
this criticism but that he was right to raise the issue of the scope of the
published cryptanalysis results and the extent to which details have been
provided for each of the five finalists.
I certainly would not want to see the world's secure data depend on an
algorithm for which the only published cryptanalysis work was described
in 2.5 pages!
In any event, given that MARS only makes 27 pages, it is worth asking the
question "do we want much of the world's secure data to depend on an
algorithm (or algorithms) for which there is so little published
cryptanalysis?".
So, while Bruce was wrong to 'have a go' at Serpent in particular, I do
believe that he has a valid general point.
In my view NIST should now publish a full list of all published
cryptanalysis work for each of the five AES finalists so that we can then
assess whether we know enough about any of the algorithms here.
I doubt that we do and this must make it sensible to ask that NSA
cryptanalysis work in support of the NIST AES effort now be published.
Without this it is hard to see that we will be in good shape to select a
winner in April next year.
Moreover such action on the part of NSA would be a continuation of their
welcome moves to greater openness and a recognition of the vital role
that they can (and need to) play in ensuring that the cyberspace on which
we will all depend critically in the next century is truly safe and
secure.
Brian Gladman