Cryptography-Digest Digest #293, Volume #9 Sun, 28 Mar 99 07:13:03 EST
Contents:
Re: Encryption and the Linux password files (Chem-R-Us)
Re: seeking your opinion: client authentication (Bryan G. Olson; CMSC (G))
Re: Live from the Second AES Conference (Bruce Schneier)
Re: Live from the Second AES Conference (Bruce Schneier)
Re: Live from the Second AES Conference (Bruce Schneier)
----------------------------------------------------------------------------
From: Chem-R-Us <[EMAIL PROTECTED]>
Subject: Re: Encryption and the Linux password files
Date: Sun, 28 Mar 1999 02:00:51 -0800
Sundial Services wrote:
>
> I'm rather startled to find that Linux still uses a login password
> scheme that is based on a publicly-available password file that is
> easily replaced or substituted. Systems have been "taken over" in that
> way.
The shadow password system alleviates this, but wasn't
license-compatible with the GNU GPL (the Shadow Suite has a BSD-style
license) - so it was left out of Linux distributions.
------------------------------
From: [EMAIL PROTECTED] (Bryan G. Olson; CMSC (G))
Subject: Re: seeking your opinion: client authentication
Date: 28 Mar 1999 11:18:15 GMT
denis bider wrote:
[...]
: Because there's an SSL session going
: on, we already have a secured channel and we trust the server's identity;
: we only have to find out if the client really is who he claims to be.
: The way I would perform the client authentication is this:
: 1. The client sends the server its name, C, and its certificate, A<<C>>:
: --> C, A<<C>>
: 2. The server checks the validity of the client's certificate, then
: generates some random data, rS, and sends it to the client:
: <-- rS
: 3. The client generates some random data, rC. The client then
: computes a hash of rS and rC combined and encrypts the hash with
: its private key. The encrypted hash is sent to the server along
: with rC:
: --> rC, Cs[h(rS, rC)]
: 4. The server computes the hash of rS and rC, decrypts the hash it
: received from the client and then compares the two hashes. If they match,
: the client indeed represents who it claims to represent.
: Do you find any obvious security gaps in the above protocol or is it OK
: as it is?
Well, we've certainly seen worse. Two points:
The "encrypt" operation in 3, and the decrypt in 4 are actually
sign and verify respectively. Not all key pairs have the RSA
property that each key inverts the other, and even with RSA, the
form of message padding used for encryption is different from
that used for signatures.
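To make the distinction concrete, here's a minimal sketch using the
modern Python "cryptography" package. The padding choices (PSS for
signatures, OAEP for encryption) are my illustration, not anything
the protocol above specifies:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    challenge = b"rS-from-server" + b"rC-from-client"  # stand-in for (rS, rC)

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Step 3 is a *signature*, not "encryption with the private key":
    sig = client_key.sign(challenge, pss, hashes.SHA256())

    # Step 4 is *verification*; raises InvalidSignature on a mismatch:
    client_key.public_key().verify(sig, challenge, pss, hashes.SHA256())

    # Encryption, by contrast, uses the public key and different padding:
    ct = client_key.public_key().encrypt(
        b"session secret",
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))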
Another issue that may or may not be a problem in practice is that
the protocol makes the client authentication depend upon the
server authentication. If an attacker can spoof the server,
which admittedly is not easy, he can also play man-in-the-middle
and spoof the client to the server. This is a problem if the
client accepts server certs of lower confidence than the server
requires of clients.
I'd suggest the server should encrypt a symmetric key using the
client's public key, and require the client to MAC subsequent
messages with that key. The problem with the current protocol
is that the challenge/response is only bound to the meaningful
messages by the fact that they go over the same channel.
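Here's a sketch of the binding I have in mind, with HMAC-SHA256 as an
illustrative MAC; the step that encrypts the session key under the
client's public key is elided:

    import hmac, hashlib, os

    # Server: pick a fresh session key; send it to the client encrypted
    # under the client's certified public key (elided in this sketch).
    session_key = os.urandom(32)

    def mac(key, message):
        return hmac.new(key, message, hashlib.sha256).digest()

    # Client: after recovering session_key, MAC every subsequent message,
    # so the messages themselves -- not just the channel -- are bound to
    # the authentication.
    msg = b"GET /account/balance"
    tag = mac(session_key, msg)          # client sends (msg, tag)

    # Server: a valid tag proves the sender recovered session_key, i.e.
    # holds the private key matching the certified public key.
    assert hmac.compare_digest(tag, mac(session_key, msg))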
Oh, why not use the optional client authentication in SSL?
--Bryan
------------------------------
From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 11:21:37 GMT
>Subject: Live from the Second AES Conference
>Date: Tue, 23 Mar 1999 01:19:57 GMT
>From: [EMAIL PROTECTED]
>Organization: Deja News - The Leader in Internet Discussion
>Newsgroups: sci.crypt
>
> First some general observations about today: a very full schedule
> with 13 presentations and a rump session with another 15 or so
> short talks, which finally concluded at 9pm. Many surprises
> and, frankly, some confusion right now.
NIST has a hard job ahead of them. Still, I think we will get a
second round with many strong candidates. It's a good field.
> Almost all candidate
> ciphers were stained one way or the other - and a surprisingly
> strong showing by Rijndael (in the negative sense that very little
> negative was said against it).
I didn't think so. I was not surprised at all at Rijndael's showing.
It deserves the attention it is getting. To me the big surprise was
the complete lack of attention that Mars has been receiving. It just
wasn't talked about.
> Speed may be easy to measure, but I think its
> significance is overrated (the advance of hardware technology will
> double AES's effective speed every 18 months or so, and I don't
> believe that worldwide speed requirements will grow that fast
> either).
Don't believe it. Moore's Law has two effects. The first is what you
said: the effective speed of AES will double every 18 months or so on
a specific platform: the computer on your desk, for example. This is
a good thing, but remember that Moore's law will also apply to the
network your computer is attached to, the disk drive in your computer,
etc., etc., etc. What matters is not the raw speed of AES, but the
effective speed with respect to the various devices it is being used in
conjunction with. Sure, if you can encrypt at 50 MBits/sec today you
can encrypt at 100 MBits/sec in 18 months, but that will just keep up
with the demand for encryption speed.
The other effect of Moore's Law is that cost halves every 18
months. So while a 6805 used to be an expensive processor that
powered your desktop computer, today it's in a smart card selling for
less than a dollar. It's in your dishwasher, your bread machine, your
car, and lots of other places. As things get even cheaper, you'll
find existing devices in smaller and weirder places. If you can
encrypt on the 6805 in 5000 clock cycles per byte, you'll be doing it
for decades to come.
What this all means is that performance does matter. It matters a
lot. If you have five secure algorithms to choose from, you'll choose
on the basis of performance. And you'll choose on the basis of
performance across a wide variety of platforms.
> In the afternoon smart card implementations were discussed. First
> efficiency estimates (both speed and space) were shown. After that
> came three very interesting presentations about attacks against
> smart card implementations. These are real world, practical
> attacks. IBM's Pankaj Rohatgi explained how he got all 128 bits of
> a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
> card!
As he pointed out, this attack works against ANY of the AES candidates
as well as any other algorithm we know about. These attacks are very
real, and have very little to do with the specifics of the
cryptography.
> (I wanted to have a
> complex scheduler and on my Pentium it used "only" one hundredth
> of a second, so I thought that was good enough!)
It is good enough for applications like email and other client
protocols. The problem comes when the Pentium is an SSL server that
needs to set up 1000 keys a second to keep up with the connection
demand. Then you need a fast key schedule. Or think about AES being
used as a hash function, where you have to do one key schedule for
each encryption. That's even worse.
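Here's a toy Davies-Meyer sketch in Python (my illustration, using
the modern "cryptography" package, with AES standing in for whichever
cipher wins). Every 16-byte message block becomes a fresh cipher key,
so hashing pays one key schedule per block:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def toy_davies_meyer(data):
        # H_i = E_{M_i}(H_{i-1}) XOR H_{i-1}: each block M_i is a KEY,
        # so the cipher reruns its key schedule for every 16-byte block.
        h = bytes(16)                         # all-zero IV (toy only)
        data += b"\x00" * (-len(data) % 16)   # zero padding (toy only)
        for i in range(0, len(data), 16):
            enc = Cipher(algorithms.AES(data[i:i+16]), modes.ECB()).encryptor()
            e = enc.update(h) + enc.finalize()
            h = bytes(x ^ y for x, y in zip(e, h))
        return h

    print(toy_davies_meyer(b"one key schedule per 16-byte block").hex())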
> Then came Bruce Schneier's presentation about speed. Again Bruce's
> exposition was interesting and clear. He made a good point that
> only assembly code speeds should be compared because otherwise you
> measure a particular compiler's efficiency rather than the
> cipher's. This is true, but in a networked world pure Java is
> going to be used too as a platform independent solution.
Yes, but those will never be time-critical applications. Java is
just too slow, and not just for encryption. When speed really
matters, the code will be in assembly.
> He
> mentioned that the CPU also affects relative speeds, especially
> with ciphers like RC6 and MARS that use data-dependent rotations
> and multiplications. (RC6 in particular took some beating for its
> relative inefficiency both with 8-bit smart cards and later,
> surprisingly, with next generation CPUs.)
It is interesting. Algorithms that could be implemented using
instructions that are part of the RISC subset (XOR, ADD, table lookup,
shifts) were fast and consistent across all CPUs. Algorithms that
used complex operations (multiplies, data-dependent rotates) had
wildly different performance on different platforms. DFC, for
example, is very slow on 32-bit CPUs, very fast on the 64-bit DEC
Alpha, and much less fast on the 64-bit Merced and McKinley (the
next-generation Intel chips). RC6 and Mars, because they use both
data-dependent rotates and 32-bit multiplies, are fast on the Pentium
Pro, Pentium II, and Pentium III, but slow everywhere else: their
performance is mediocre on Pentium, ARM, and most other 32-bit CPUs,
and even worse on Merced, McKinley, and Alpha.
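For reference, the operation in question looks like this (my own
illustration, not code from any submission). On x86 it is a single
ROL instruction; a CPU without a variable rotate needs two shifts,
an OR, and masking:

    def rotl32(x, r):
        # 32-bit rotate left by a data-dependent amount, RC6/Mars style.
        r &= 31
        return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

    # RC6-style: the rotate amount comes from the data being encrypted.
    a, b = 0x01234567, 0x9E3779B9
    a = rotl32(a ^ b, b & 31)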
> Bruce mentioned that 8-bit processors with little RAM will stay
> with us forever, because they will be used more and more for
> embedded systems. Less convincingly, he claimed that 32-bit
> processors will also stay with us for a long time. I think the
> prevalent view is that in the future 64-bit CPUs and beyond will
> dominate high-end personal applications.
Of course, but that does not mean that 32-bit CPUs will disappear.
Think of 32-bit CPUs in cellphones, smart cards, pagers, PDAs,
etc. Think of the ARM. These CPUs will be used for a long time to
come.
> His final ranking: Twofish, Rijndael,
> Crypton, E2, Mars for Pentiums; Rijndael, Crypton, E2, RC6,
> Twofish for hashing.
For overall performance, Rijndael is the best candidate. I did not
have Twofish first.
> Geoffrey Keating presented a similar paper for the Motorola 6805,
> also an 8-bit CPU. Best algorithms: Rijndael, Twofish, Crypton and
> RC6. RC6 does need a lot of RAM (170-200 bytes), and later in the
> evening River [ed: Rivest?] presented RC6a with a different key
> scheduler that requires only 25 bytes of RAM and is also twice as
> fast in assembler and five times as fast in C. (Rivest's middle
> name, of course, is Merlin.)
Beware that the RC6a key schedule is not part of AES, and has not
received any analysis. I strongly caution against using it at this
time. NIST has said that they would allow "tweaks" to the algorithm
for the second round, but a completely redesigned key schedule is
almost certainly not a tweak.
> Next, Joan Daemen, one of the authors of Rijndael, presented
> another comparative study about attacks against smart cards. He
> differentiated between timing attacks (useful mainly against older
> implementations of public key smart cards), power analysis
> attacks, and Differential Power Analysis (DPA) attacks. The latter
> is based on the fact that for some instructions the average power
> consumption correlates with the value of one input bit. As
> possible defense mechanisms he mentioned artificially produced
> noise but this can be cancelled out using a larger sample of
> cases. Another possibility is "balancing", an expensive
> proposition where, for example, you use 4 bit arithmetic on an 8
> bit smart card, taking care that the other 4 bits are always the
> complement of the "true" 4 bits, i.e. maintaining a constant
> Hamming weight. According to his analysis the algorithms which are
> easiest to protect against such attacks are Crypton, DEAL,
> Magenta, Rijndael and Serpent. The worst would be CAST-256, DFC,
> E2, HPC, Mars and RC6. His conclusion was that these kinds of
> implementation attacks are really much more relevant than the
> academic attacks often discussed, because they can be implemented
> in practice and do some real harm.
The one sloppy part of this analysis was that he assumed that XORs
are easier to balance, and hence more secure, than ADDs. This is, of
course, nonsense, since ADD can be built out of XORs.
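A sketch of the reduction (my illustration): 8-bit ADD built from XOR
plus the AND/shift carry logic, via ripple carry:

    def add8(a, b):
        # XOR gives the carry-less sum; AND (shifted left) gives the
        # carries to propagate. ADD is a composition of XORs and carry
        # logic, not something fundamentally apart from them.
        while b & 0xFF:
            a, b = a ^ b, (a & b) << 1
        return a & 0xFF

    assert add8(200, 100) == (200 + 100) & 0xFF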
> The easiest to attack were judged to be Crypton, Deal, Loki-97,
> Magenta, Rijndael, Safer+, Serpent and Twofish, where DPA needs to
> be done only on very few rounds. Slightly harder would be ciphers
> like CAST-256, DFC, E2, Mars and RC6. Hardest to attack would be
> Frog and HPC, but only because they employ large key dependent
> tables - which make them more difficult to implement in smart
> cards in the first place.
We've looked at a lot of these DPA attacks. Basically, all algorithms
are subject to attack. If someone thinks that one is easier than the
other, it's because he hasn't looked hard enough for an attack. This
problem needs to be solved at the protocol layer (best) or at the
smart-card hardware layer. It cannot be solved with algorithm design.
> Doug Whiting described some efficiency estimates on Merced
> (supposedly a secret future Intel CPU). Here are the results:
>
> cipher Pentium II Merced
> RC6 250 620
> Rijndael 283 170
> Serpent 900 800
> Twofish 258 230
>
> RC6 (as mentioned before) actually seems to get worse. The same is
> estimated to happen with Mars too.
Bizarre, isn't it?
> Takeshi Shimoyama claimed that some S-boxes of Serpent were not
> secure against high-order differential cryptanalysis. His
> presentation was laced with a lot of jokes and laughter and I
> cannot judge how serious a problem (if at all) this is for
> Serpent.
It is a serious problem, but it is by no means an attack. It is VERY
unlikely that a higher-order differential attack can be carried out to
anywhere near the total number of rounds that Serpent has; the attacks
only seem to be effective against ciphers with few rounds.
> Rivest presented RC6a with the much more efficient key scheduler
> variant (see above). He claimed that 8-bit chips "will go away",
> but nobody really believed that he really believed it.
Agreed.
> Bruce Schneier made a case against the idea of choosing not one
> but several "approved" AES algorithms. He used the argument that
> cryptanalytic efforts should not be divided between several
> ciphers because then each individual cipher would be less trusted.
> I didn't quite understand this argument: all ciphers that make it
> to the second round will be strongly analyzed anyway no matter
> whether only one or more than one is finally declared the winner.
My argument is that after a single algorithm is chosen, there is still
about a one-year period before it becomes a FIPS. And even after it
becomes a FIPS, we will continue to analyze AES. The question is
whether we are analyzing one or two ciphers.
> He mentioned that if there were two AES ciphers and you used one
> bit of the key to choose one of them, then half the time you would
> use a less powerful cipher. But one could also say that half the
> time one would use the more powerful cipher.
Indeed. But if you lose your privacy, security, money, or whatever 50%
of the time, it doesn't matter about the other 50%. There are few
operational systems where saying "half the time you use the stronger
cipher" is good enough.
> He argued that multiple standard ciphers would be much
> more expensive to implement. This is probably true but I think
> this measurable cost should be balanced against the risk (a
> non-zero probability) that a single AES may fail catastrophically
> some time in the future when it is already broadly used. If that
> should happen the cost would be extremely large. The possibility
> of a catastrophic failure is really what should keep the NIST
> people awake at night.
NIST can point to other algorithms as alternates, but I feel there
should only be one AES.
> One wonders what is so wrong in declaring several good algorithms
> for specialized environments. Actually, after today's presentation
> about attacks against smart cards, somebody mentioned that a smart
> card is a sufficiently different environment to justify a
> specialized cipher. There is a large class of applications where
> ciphers are implemented in software, where speed is not important,
> and where security is important. In these cases one might choose
> to combine several algorithms in series at no particular additional
> cost.
Because there are few specialized environments. Smart cards talk to
desktop PCs and remote servers. Cellphones talk to base stations.
Software talks to hardware. There are some applications where one
smart card encrypts and the other decrypts, but they are very much in
the minority.
> So who will make it to the second round? No idea.
Thanks for writing this, by the way. I'm glad someone is keeping a
public record. It was good seeing you, and I look forward to reading
your summary of the second day.
Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
Free crypto newsletter. See: http://www.counterpane.com
------------------------------
From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 11:25:58 GMT
On 23 Mar 99 09:32:56 GMT, [EMAIL PROTECTED] () wrote:
>This reminds me of something I've noticed. Although Magenta had serious
>problems, the cryptanalysis of it was based on a weakness in its key
>schedule. The only other possible problem with it is that the S-box,
>although nonlinear, seems a bit too simple.
There are a bunch of other problems with the security. I believe
one--the massive number of fixed points--is mentioned in our paper.
Honestly, there are many other ways of attacking Magenta. It's just
not worth spending the time writing it up.
Magenta now replaces McGuffin as my favorite toy cipher to give
budding cryptanalysts.
>: IBM's Pankaj Rohatgi explained how he got all 128 bits of
>: a Twofish key after only 50 (that is 50 not 2^50) uses of a smart
>: card!
>
>I wonder how secure some of the other ciphers would be, if the kind of
>optimizations Bruce suggested for fitting Twofish on a smart card were
>applied to them. That is, if it were possible.
He said in his talk that every cipher is vulnerable. We've done this
sort of work, too, and we have found that you can't defend against
these types of attack with the algorithm. You can do some things with
the implementation and some things with the hardware, but basically
you need to defend in the protocol layer.
>If you compare all the ciphers on the same compiler ... and I remember at
>a computer chess competition, someone suggested that it wouldn't be a bad
>idea to require the entrants all to use C, so that it wouldn't be a
>competition of assembly-language coding skills.
Then you're comparing compiler efficiency. What we did in our
comparison is assume best possible ASM. We didn't implement
everything, but we gave all algorithms the benefit of the doubt.
Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
Free crypto newsletter. See: http://www.counterpane.com
------------------------------
From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: Live from the Second AES Conference
Date: Sun, 28 Mar 1999 11:26:45 GMT
On 23 Mar 1999 20:58:36 GMT, [EMAIL PROTECTED] (STL137) wrote:
>Merced isn't a supposed chip - it is definitely going to come out mid-2000.
The Merced design (and McKinley) is frozen at this point. Both of
them are definitely coming out; they are the Intel 64-bit CPUs.
Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN 55419 Fax: 612-823-1590
Free crypto newsletter. See: http://www.counterpane.com
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************