Cryptography-Digest Digest #839, Volume #8        Mon, 4 Jan 99 17:13:04 EST

Contents:
  Re: Bitslice Implementation of Crypt(3) (Solar Designer)
  Twofish speedup -- 258 clocks per block! (Bruce Schneier)
  bks on cryptology (esha98)
  Re: bks on cryptology (James Pate Williams, Jr.)
  Re: bks on cryptology (Terry Ritter)
  Re: Encryption Basics (Tim Redburn)
  CTS a la Schneier, Rivest (Jon Becker)
  Help: a logical difficulty (Mok-Kong Shen)
  Re: First issue of Journal of Craptology (John Savard)
  Re: Attention:  This is an encoded message????? (Matt Curtin)
  Re: On leaving the 56-bit key length limitation (Terry Ritter)
  test message (Dan)
  Re: Help: a logical difficulty (John Savard)
  REQ: Schneier's Applied Cryptography Source Disk (Dan)
  Sapphire II key length vs. US Export Law (chris)
  Security through obscurity in the smartcard world (Gary Howland)
  Academic Attacks on SAFER+ ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: Solar Designer <[EMAIL PROTECTED]>
Subject: Re: Bitslice Implementation of Crypt(3)
Date: 4 Jan 1999 12:59:58 GMT

Christopher Abad <[EMAIL PROTECTED]> wrote:
> I am looking for a bitslice implementation of the crypt(3) DES function used

http://www.false.com/security/john/

> in LibDES for hashing passwords

libdes doesn't do bitslice as of the last time I looked at it.

Obviously, bitslicing also requires dropping the crypt(3) interface.
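
(To see why, here is a toy illustration in Python, assuming nothing
about any particular implementation: each bit position of a machine
word carries one independent instance, so a single logical instruction
acts as 32 parallel gates.  A bitsliced DES evaluates its gate network
this way on 32 or 64 blocks at once, which is why a
one-password-at-a-time interface like crypt(3) no longer fits.)

    # 32 independent 1-bit inputs, packed one per bit position
    a = 0b10110010101001011100001111010101
    b = 0b01110001100101010101111100001010
    # one AND plus one NOT computes 32 NAND gates at once
    nand = ~(a & b) & 0xFFFFFFFF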

-- 
/sd

------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Twofish speedup -- 258 clocks per block!
Date: Mon, 04 Jan 1999 15:38:42 GMT

We came up with a new register assignment scheme that allowed us to
cut out nearly two clocks per Twofish round. There is no change to any
performance numbers except for the Pentium Pro in ASM compiled mode,
where a block can now be encrypted or decrypted in 258 clocks, and the
key setup is 10,900 clocks.  The number seems to fluctuate between 257
and 258, depending on the time of day, so we feel more comfortable
with the larger number.  It's not worthwhile to go into more detail on
how it works; look at the code (namely, the macro RF_PPro) if you want
to understand it.

Twofish was already the fastest AES submission on general 32-bit CPUs.
Twofish is now within 3% of the fastest algorithm (RC6 at 250 clocks
per block) on the Pentium Pro/II.

There is new ASM code on the Twofish website, at
<http://www.counterpane.com/twofish.html>.  There's also new reference
C and optimized C code.  The new code is cleaned up a little and
includes Watcom C support.  There are also new explanatory comments and test
code in the include file AES.H.

Many thanks to Allan Stokes for the idea.

Bruce

**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: esha98 <[EMAIL PROTECTED]>
Subject: bks on cryptology
Date: Tue, 05 Jan 1999 01:12:51 +0800

hi!!

i'm new to this and am a comp science student. would someone recommend
some good books on cryptology... thanks.


------------------------------

From: [EMAIL PROTECTED] (James Pate Williams, Jr.)
Subject: Re: bks on cryptology
Date: Mon, 04 Jan 1999 17:37:20 GMT
Reply-To: [EMAIL PROTECTED]

esha98 <[EMAIL PROTECTED]> wrote:

>i'm new to this and am a comp science student. would someone recommend
>some good books on cryptology... thanks.

_Handbook of Applied Cryptography_ by Alfred J. Menezes et al.
_Applied Cryptography_ by Bruce Schneier
_Cryptography: Theory and Practice_ by Douglas R. Stinson
_A Course in Number Theory and Cryptography_ by Neal Koblitz

==Pate Williams==
[EMAIL PROTECTED]
http://www.mindspring.com/~pate




------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: bks on cryptology
Date: Mon, 04 Jan 1999 17:36:03 GMT


On Tue, 05 Jan 1999 01:12:51 +0800, in
<[EMAIL PROTECTED]>, in sci.crypt esha98
<[EMAIL PROTECTED]> wrote:

>hi!!
>
>i'm new to this and am a comp science student. would someone recommend
>some good books on cryptology... thanks.


Allow me to suggest my introduction to cryptography:

   http://www.io.com/~ritter/LEARNING.HTM

which does include a range of books, up to the last year or two
anyway.  One other that should be included is:

   Stallings, W.  1998.  Cryptography and Network Security:
   Principles and Practice.  2nd Ed.  Prentice Hall.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM



------------------------------

From: [EMAIL PROTECTED] (Tim Redburn)
Subject: Re: Encryption Basics
Date: Mon, 04 Jan 1999 19:17:27 GMT

>> <message snipped>

The post that this is a response to
has absolutely nothing to do with me.

- Tim

------------------------------

From: [EMAIL PROTECTED] (Jon Becker)
Subject: CTS a la Schneier, Rivest
Date: 4 Jan 1999 19:16:49 GMT


I have a question about Ciphertext Stealing, particularly as it
relates to RC5.  In "Applied Cryptography" (Schneier, p. 195) as well
as in RFC 2040 (section 8, on RC5-CTS), CTS is described as a way of
using a block cipher to encrypt a message of any size and have the
ciphertext length match the plaintext length.

In both documents, however, the descriptions assume the existence of a
second-to-last block of full size.  My question is, how does one use
CTS to generate the ciphertext for a plaintext message which is less
than the cipher's block size (so that the aforementioned assumption
doesn't hold)?
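
(For reference, a minimal Python sketch of the procedure as RFC 2040,
section 8, describes it; encrypt_block is a stand-in for one
application of the 64-bit block cipher, not a real RC5 call.  Note the
blocks[-2] access: that is precisely the full second-to-last block the
descriptions assume.)

    BLOCK = 8  # block size in bytes

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_cts_encrypt(encrypt_block, iv, plaintext):
        # assumes more than one block of plaintext
        blocks = [plaintext[i:i + BLOCK]
                  for i in range(0, len(plaintext), BLOCK)]
        prev, out = iv, []
        for p in blocks[:-2]:                     # ordinary CBC prefix
            prev = encrypt_block(xor(p, prev))
            out.append(prev)
        e = encrypt_block(xor(blocks[-2], prev))  # E(n-1)
        c_last = e[:len(blocks[-1])]              # truncated final block
        padded = blocks[-1] + b"\x00" * (BLOCK - len(blocks[-1]))
        c_prev = encrypt_block(xor(padded, e))    # full next-to-last block
        return b"".join(out) + c_prev + c_last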

For example, if I'm using RC5 with a 64-bit block size and I have a
plaintext message which is 6 bytes long, what will the ciphertext be
using RC5-CTS?

I suppose it would be possible to act as if the IV is the encryption
of the second-to-last plaintext block.  Then it would be XORed with
the plaintext padded with 0s and encrypted to form the second-to-last
ciphertext block.  This would be transmitted as a new IV, while the
first 6 bytes of the original IV would be transmitted as the real
ciphertext.

I'm reasonably sure this method maintains security, but it does have
the disadvantage of changing the IV before and after encryption, which
is pretty much contrary to the idea of an IV.
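
(A sketch of that workaround, reusing BLOCK and xor from the sketch
above; the helper names are hypothetical, not from RFC 2040.  It
round-trips: decrypting the transmitted "new IV" yields the
zero-padded plaintext XORed with the old IV, and the transmitted bytes
supply the needed head of the old IV.)

    def cts_encrypt_short(encrypt_block, iv, p):
        padded = p + b"\x00" * (BLOCK - len(p))
        new_iv = encrypt_block(xor(padded, iv))  # plays the role of C(n-1)
        return new_iv, iv[:len(p)]               # plays the role of C(n)

    def cts_decrypt_short(decrypt_block, new_iv, c):
        x = decrypt_block(new_iv)         # = ZeroPad(P) XOR old IV
        return xor(x[:len(c)], c)         # P = head(X) XOR head(old IV)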

Can anyone point me to some definitive documentation on the subject?

-Jon

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Help: a logical difficulty
Date: Mon, 04 Jan 1999 19:13:37 +0100

Recently there were some discussions in this group on true randomness,
Kolmogorov complexity, etc. I have a little bit of difficulty with
the logic (or rather meta-logic) involved. My thought
experiment is as follows:

Since there does not exist an algorithm to deliver the shortest
string to describe an arbitrarily given random number sequence, 
couldn't one say that the problem of determining the shortest 
description of a sequence is undecidable? If so, the measure of 
complexity is not a well-defined quantity. It follows then that 
arguments based on the use of such a measure are also not 
well-defined.

What is the flaw in the above? Would experts please help? Thanks.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: First issue of Journal of Craptology
Date: Mon, 04 Jan 1999 20:21:08 GMT

A post by Lars Knudsen was replaced by the following as a result of
the Hipcrime attack...

>Attention:  This is an encoded message.


>Suk ime eik be qsq.

>Dqzvay hui lke hoi crefqb mqfr
>ye erlmf rs ekevgl dd
>fzq ceqeqe tod xu

and the place where I saw it (my server has apparently not been fooled
by too many of these messages) was the *moderated* newsgroup
sci.crypt.research.

I was rather puzzled at first as to how that could possibly happen,
but I suppose that, since a moderated newsgroup's posts are still not
on a server controlled by the moderator, some sort of forgery is all
that was needed.

John Savard
http://www.freenet.edmonton.ab.ca/~jsavard/index.html

------------------------------

From: Matt Curtin <[EMAIL PROTECTED]>
Subject: Re: Attention:  This is an encoded message?????
Date: 04 Jan 1999 11:52:20 -0500

[EMAIL PROTECTED] (EvanPic) writes:

> Did I miss something?  Now all of the sci.crypt posts start with
> this header and the following text is encoded/scrambled/?  Would
> appreciate info on what is going on.

A HipCrime Supersedes attack on sci.crypt.  Basically, some
net.terrorist "superseded" a bunch of articles in sci.crypt.
Supersedes are like reposts, except that they actually replace the
article they're superseding.

http://usenet.miningco.com/library/weekly/aa072498.htm
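
(For illustration: a superseding article is just a new post whose
Supersedes: header names the Message-ID of the article it replaces;
servers that honor the header discard the original and keep the
replacement.  The IDs below are made up.)

    Newsgroups: sci.crypt
    Subject: Attention:  This is an encoded message
    Supersedes: <original-post-id@news.example.com>
    Message-ID: <replacement-id@attacker.example.com>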

-- 
Matt Curtin [EMAIL PROTECTED] http://www.interhack.net/people/cmcurtin/

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: On leaving the 56-bit key length limitation
Date: Mon, 04 Jan 1999 18:35:09 GMT


On Sun, 03 Jan 1999 15:17:34 -0600, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (wtshaw) wrote:

>In article <76mj9l$o1p$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>
>> 
>> The current concept of "unicity distance" leads to wrong results and it is not
>> practical....
>
>I'll agree to that.  

On the other hand, I'd like to see the details.


>[...]
>We do know that some ciphers can be solved with less ciphertext, but some
>cannot be conclusively solved with too little, which quantity might be
>different depending on the insights of the solver.   

This is more like the unicity distance I know, which is more about a
*unique* solution than merely a solution:  The whole point of a cipher is to
have a huge multiplicity of possible plaintexts for a particular
ciphertext.  But if the plaintext message is known to have redundancy
or a particular format or any other distinguishable feature, we can
identify possibly correct solutions from random decipherings.  As long
as we still have multiple acceptable solutions (normally, multiple
possible plaintexts), we have not reached the unicity distance.  But
the Opponents might still act on *all* the solutions, even though they
have not *solved* the cipher.  So unicity distance is not a full
definition of "strength."  

Another possibility is for a cipher to have a keyspace larger than the
message, so that the same known-plaintext transformation can be
produced by a multiplicity of keys.  (This only works for short
messages and huge keys.)  Of course, if the keys are truly equivalent,
we only need one.  But if the keys are distinct, we need some sort of
hint or confirmation to select the correct key, completely independent
of redundancy or formatting in the plaintext.  Presumably, "unicity
distance" does not handle this case.  But we could easily produce such
a cipher, for example by always re-keying a stream cipher running-key
generator before it has gone through its state even once.  

Some block cipher constructions have an astronomical internal
keystate, even though only a tiny (but unknown) part of that is
actually selected by keying.  This at least admits the possibility of
using such a cipher repeatedly, producing an amount of ciphertext up
to perhaps the square root of the internal state, then re-keying the
cipher before continuing.  Again, this is not "unicity distance" as I
know it.  


>A clever ACA solver
>like Honey Bee knows more subtle linguistic patterns than almost
>anybody.  She can pick up clues that most of us would miss.  For her,
>perhaps that unicity distance would consistently seem smaller than it
>would for others; and given the same amount of ciphertext others could
>solve, she might solve it faster for the same reasons.  

This seems to be related to the fact that we cannot put a distinct
value on the "entropy" of plaintext.  If we had the real "entropy"
value, we could state the unicity distance.  But we can probably guess
something close to that value anyway, after some experience with a
particular cipher and its plaintext environment.  
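
(For reference, the classical Shannon estimate at issue here: with key
entropy H(K) bits and plaintext redundancy D bits per character, the
unicity distance is

    U = H(K) / D

so a 56-bit key over English in 8-bit ASCII, with D around 6.8 bits
per character, gives U = 56 / 6.8, roughly 8 characters.  Those
figures are the usual textbook ones, not taken from this thread.)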


>From a probability perspective, a good analyst can cut the alternatives
>to a lesser number, so there are fewer alternatives left to choose from. 
>With mechanized solving, you might miss some subtle hints; indeed, you
>must program in all of what you know if you do not simply want to brute
>force the whole thing and list all the possible plaintexts.

I think this whole issue is a good argument for multi-ciphering as
standard practice.  As long as the plaintext is unknown and apparently
random, all the solutions tend to look the same, and it is pretty
tough to go after that sort of cipher.  For ultimate strength I
recommend three ciphering layers, large blocks, homophonic nulls, and
even randomly-frequent re-keying, if one is willing to repeatedly pay
the keying cost.  

Then we can talk about what a cipher of "ultimate strength" really
buys us in the real world.  


>From this, it might be inferred that unicity distance is hardly an easily
>defined constant at all, but more of a Rod Serling type twilight zone
>without sharp edges, in the midst of which reality, plaintext, is
>reasonably indistinguishable from the noise of alternative interpretations.

Well, OK.

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM



------------------------------

Date: Mon, 04 Jan 1999 21:49:27 +0100
From: Dan <[EMAIL PROTECTED]>
Subject: test message

sorry test


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Help: a logical difficulty
Date: Mon, 04 Jan 1999 19:42:52 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote, in part:

>Since there does not exist an algorithm to deliver the shortest
>string to describe an arbitrarily given random number sequence, 
>couldn't one say that the problem of determining the shortest 
>description of a sequence is undecidable? If so, the measure of 
>complexity is not a well-defined quantity. It follows then that 
>arguments based on the use of such a measure are also not 
>well-defined.

>What is the flaw in the above. Would experts please help? Thanks.

There is no flaw in that; it is correct, as far as it goes.

Basically, though, you *can* prove that a string has an obvious
pattern in it. For example, if the string is 0101010101... one can say
that a wide variety of imaginary computers, that take a binary string
as an input program, would be able to generate an immense length of
that string from a much shorter input.

It is the converse that is not well-defined.

It's easy to prove that a stream cipher (or, indeed, any cipher) is
weak, because it fails to properly conceal the shortness of its key.
It is only the converse, proving that one is _strong_, that a sequence
of binary bits *lacks* a pattern and is thus hard to compress, that is
essentially impossible.
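
(A toy demonstration in Python of that asymmetry: exhibiting a short
description is a constructive upper bound on a string's complexity,
while nothing analogous can certify the absence of structure.)

    import zlib

    s = b"01" * 5000
    # the compressed size bounds the description length from above;
    # for this very regular string it is a few dozen bytes
    print(len(s), len(zlib.compress(s)))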

And all this impinges on the practical utility of results that derive
from measuring complexity; none of this undercuts their theoretical
validity. (I guess that counts as another "flaw in the above".)

John Savard
http://www.freenet.edmonton.ab.ca/~jsavard/index.html

------------------------------

Date: Mon, 04 Jan 1999 21:52:43 +0100
From: Dan <[EMAIL PROTECTED]>
Subject: REQ: Schneier's Applied Cryptography Source Disk

Hi,

Could anyone post the source code disk belonging to Bruce Schneier's
_Applied Cryptography_?

Thanx in advance

Dan


------------------------------

From: [EMAIL PROTECTED] (chris)
Subject: Sapphire II key length vs. US Export Law
Date: Mon, 04 Jan 1999 21:19:42 GMT


        if i generate a 16-bit key using a random number generator, then
permute that key, using a fixed combination of XORs, into 64 bits, is
my app subject to the "56-bit encryption" limitation?

        not knowing much about crypto, i am assuming that i am adding
some, but not much, security by permuting the base key to 64 bits. 
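
(A quick sketch, with made-up constants, of why a fixed public
expansion adds essentially nothing: 2^16 seeds can only ever yield
2^16 distinct expanded keys, so a brute-force search of the seed space
is unaffected by the wider key.)

    def expand_key(seed16):
        # hypothetical fixed XOR expansion of a 16-bit seed to 64 bits
        k = seed16
        for shift in (16, 32, 48):
            k ^= (seed16 ^ 0xA5A5) << shift
        return k

    distinct = {expand_key(s) for s in range(2**16)}
    print(len(distinct))  # 65536, however wide the output is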

        Sapphire II can take keys up to 2048 bits, if i read the docs
correctly.

        totally confused, and just trying to get the damn thing out the
door...

        -c

------------------------------

From: Gary Howland <[EMAIL PROTECTED]>
Subject: Security through obscurity in the smartcard world
Date: 29 Dec 1998 17:58:21 GMT
Reply-To: Gary Howland <[EMAIL PROTECTED]>


Security through obscurity in the smartcard world


I've been getting into a lot of arguments recently with regard to
my defense of "security through obscurity" in the smartcard world.
Much of this has come in response to the GSM hack earlier this
year.  My arguments have been that at the end of the day there is
no such thing as hardware that cannot be reverse engineered, but
the aim is to keep the costs high enough to make this one of
the stronger links in the chain (i.e. bribery might be a cheaper
way of breaking the system).  Given this, obscurity (the use of
proprietary protocols and algorithms) adds more time and expense
to an attack, and thus improves the security of the system.

Admittedly, in the case of the GSM hack, early peer review of the
system might have resulted in SIM cards that cannot be cloned at
the present time, since they (the GSM designers) might have used
a better algorithm due to the peer review.  But who's to say that
the pre-launch publication of the algorithms would have spotted
it, and that the earlier publication of the algorithms wouldn't
have allowed more sophisticated attacks by this time?  But, while
the GSM hack is to be applauded, it does not actually harm the GSM
security model, and will in no way be a repeat of the old analogue
phone attacks, since the security relies on much more than just
the SIMs (there is a unique id in the phones themselves for starters).

But I don't want to get into the GSM debate.  I want comment on
the principle of adding security (i.e. cost and expense) through
obscurity.  My reason for wanting debate, first on the question of
whether "security through obscurity" is valid, and second on "how?", is
because I intend to write a report on the benefits of obscurity,
and the techniques one should use when creating proprietary
algorithms.  So in addition to comments on the validity of the
proprietary algorithms, I would also like comments on the best ways
of creating such proprietary algorithms.  I am going to outline a
few of the techniques in my paper, such as the subtle modification
of existing algorithms, adding (better than removing) rounds,
dangers of mixing algorithms that have weak keys, etc.  Any comments
on the design of proprietary algorithms based on tried and trusted
algorithms would be much appreciated.

In addition, I am after some examples of where modified algorithms
have been used in the past (e.g. the unix crypt() algorithm used
a modified DES algorithm to prevent hardware attacks).  Any examples
much appreciated.

I am doing this report in my spare time, since I think it is
important and needs doing, but could do it full time and finish it
faster for everyone if I could find a sponsor :-)


Best regards,

Gary Howland <[EMAIL PROTECTED]>

------------------------------

From: [EMAIL PROTECTED]
Subject: Academic Attacks on SAFER+
Date: Tue, 29 Dec 1998 18:16:27 GMT

-----BEGIN PGP SIGNED MESSAGE-----

[ To: AES SAFER+ Forum, sci.crypt, Perry's Crypto List
  Date: 12/29/98
  Subject: Academic Attacks on SAFER+ ]


I believe I have found two attacks on the SAFER+ version with 256-bit
keys.  While these attacks aren't practical, they demonstrate a
weakness in the SAFER+ key schedule design, and may lead to more
practical attacks in the future.

The first attack is a modified meet-in-the-middle attack, requiring 2
known plaintexts and their corresponding ciphertexts, 2^{37} bytes of
memory, and work equivalent to about 2^{241} SAFER+ encryptions.  I
have discussed this attack with Massey, and am fairly confident it
really works.

The second attack is a related-key attack, requiring 256 chosen
plaintexts and their corresponding ciphertexts, encrypted under two
related keys, and work equivalent to about 2^{216} SAFER+ encryptions.
This is a much newer attack (about two days old), and I am less
certain of it, but I believe it works.

I will be writing these attacks up for publication fairly soon, but I
wanted to announce them here to get the word out, and to see if anyone
can either improve on them or find problems with them.

Both attacks exploit two useful properties of SAFER+.  First, I will
describe the two useful properties, then I will describe the two
attacks very briefly.  The rest of this note assumes familiarity with
SAFER+.

1.0.  The Two Useful Properties of SAFER+

Consider SAFER+ with a 256-bit key.  The key schedule is quite
simple: We extend the key by a parity byte, getting a 33-byte extended
key.  We then use the 33-byte extended key to generate the entire
sequence of subkeys.  Each subkey byte is determined by only one key
byte.  If you know a given key byte, you know all the subkey bytes it
determines; if you know a subkey byte, you know the key byte from
which it was derived.

SAFER+ rounds use 32 bytes of subkey each, in two blocks of 16 bytes.
If the key bytes are k[0..32], then we have:

   First round:   k[0..15] and k[1..16]
   Second round:  k[2..17] and k[3..18]
   Third round:   k[4..19] and k[5..20]
   etc.

This means that it takes quite a while in the encryption process for
the last couple key bytes to affect the encryption.  SAFER+ with a
256-bit key has 16 rounds and an output transformation.  The output
transformation uses only sixteen key bytes.

Now, consider how many key bytes have affected the encryption at all
after each round: After the first round (round 0), 17 key bytes have
been used.  After the second round, 19 key bytes have been used.
After the third, 21.  Continuing, we can see that it takes 9 (out of
16) rounds before SAFER+ has used all 32 key bytes.  This allows both
of my attacks.

   Round   Key bytes used
     0          17
     1          19
     2          21
     3          23
     4          25
     5          27
     6          29
     7          31
     8          all
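
(A small Python sketch of the count above, assuming only what is
stated in this note: a 33-byte extended key and two 16-byte subkey
blocks per round, each window sliding one byte further.)

    def key_bytes_used_through(r):
        # distinct extended-key bytes k[0..32] touched by rounds 0..r
        used = set()
        for i in range(r + 1):
            used.update(range(2 * i, 2 * i + 16))      # k[2i .. 2i+15]
            used.update(range(2 * i + 1, 2 * i + 17))  # k[2i+1 .. 2i+16]
        return len(used)

    for r in range(9):
        print(r, key_bytes_used_through(r))  # 17, 19, ..., 31, 33 (all)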

The other useful property is that we don't have to know all the key
bytes to be able to recognize an output from a round of SAFER+ as
being correct.

Consider the situation where we know all but the last two bytes of
the key of some round, and we know its input block.  Each SAFER+ round
consists of the keyed byte substitution layer and the mixing layer.
If we know k[0..14], but not k[15..16], then we end up knowing all but
two bytes of the input into the mixing layer.  The result is that we
end up knowing *none* of the bytes of the output.  However, we still
know relationships between the bytes of the output.  The matrix that
describes the mixing layer makes it easy to see how to combine the
output bytes into values that are not dependent on the unknown bytes
of input.
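
(A toy two-byte analogue of that property, using the 2-point PHT
y0 = 2a + b, y1 = a + b (mod 256) in place of the full 16-byte SAFER+
mixing matrix; only the mechanism is the point.)

    a = 0x3C                  # the byte we can derive
    for b in range(256):      # b plays the unknown, key-dependent byte
        y0 = (2 * a + b) % 256
        y1 = (a + b) % 256
        assert (y0 - y1) % 256 == a  # combination independent of b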

Now, consider a situation where we know all but the last two key bytes
for round 0, and all but the first two key bytes for round 1.  We can
go backward through the mixing layer of round 1 (it's unkeyed), and
then go back through the keyed byte substitution layer, learning all
but two bytes of the output from round 0.  We then make use of the
trick described above.  We will know relationships between the
remaining known bytes of output from round 0, regardless of the
unknown key bytes.

2.0.  The Meet-in-the-Middle Attack

The real insight that makes the low-memory attack work is that we can
do meet-in-the-middle with two rounds whose key material we don't know
all of.  That is, we guess the keys for rounds 1-7 (numbering from
one), which gives us all but two of the bytes for round 8.  That's 29
key bytes guessed.  We then compute some expressions in the output
bytes of round 8 that don't rely on knowing the two unknown bytes of
key, and that don't rely on knowing the two bytes of round 8's output
that will depend on the two unknown key bytes when doing the guess on
the other side.  We guess the keys for the output transformation (16
bytes), plus the keys for rounds 10-16.  That means 30 bytes of key
guessed, and it also gives us enough information to get knowledge of
all but two bytes of the output of round 8.  We then compute those
same expressions in those bytes.

The result is that we can do the meet-in-the-middle at the output from
round 8, despite not knowing all the key bytes used in round 8 or 9.

   Round | Key bytes guessed
     1         17
     2         19
     3         21
     4         23
     5         25
     6         27
     7         29
     8         29  (two unknown key bytes)
   - ------------  (this is where the meet-in-the-middle happens)
     9         30  (two unknown key bytes)
    10         30
    11         28
    12         26
    13         24
    14         22
    15         20
    16         18
    OX         16

That is, we can compute a set of bytes that are dependent only on the
29 bytes of key guessed from the top, or the 30 guessed from the
bottom.  Now, it would seem to take a lot of memory to do this
meet-in-the-middle attack.  Fortunately, however, most of the key
bytes guessed from the top are also guessed from the bottom.  We thus
mount the attack by first guessing all key values in common from the
top and the bottom.  We then do the meet-in-the-middle by trying all
possible 2^{24} values from one side, and all possible 2^{32} values
from the other.  We compute this intermediate value, which doesn't
need any other key material, from both sides, store our results in a
sorted list, and look for duplicates from the top and bottom.  With
two plaintexts, it's easy to get more than 32 bytes of intermediate
values, meaning that we don't expect to see any matches from incorrect
guesses.
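
(A generic toy matcher in Python for this step, not SAFER+-specific;
f_top and g_bottom stand in for the two sides' computations of the
shared intermediate value and are hypothetical.)

    from collections import defaultdict

    def meet_in_the_middle(top_guesses, bottom_guesses, f_top, g_bottom):
        table = defaultdict(list)
        for t in top_guesses:             # e.g. the 2^{24} values above
            table[f_top(t)].append(t)
        matches = []
        for b in bottom_guesses:          # e.g. the 2^{32} values above
            for t in table.get(g_bottom(b), ()):
                matches.append((t, b))    # surviving candidate guesses
        return matches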

3.0.  The Related-Key Attack

The related-key attack works on a similar principle.


My current attack is very simple:  We choose a pair of keys, K, K^*,
such that the keys differ only in the first two bytes; in these two
bytes, we have a difference of 0x80.  Under K we encrypt 256
plaintexts, P[i], each identical except with a different leftmost
byte.  Under K^* we encrypt 256 plaintexts, P[i]^*, with some fixed
difference D<>0x80 from P[i] in the leftmost byte, and with fixed
difference 0x80 in the next byte over.  With probability about 1/2, we
get at least one pair of plaintexts P[i], P[i]^*, whose values after
two rounds are identical under the different keys.  When this happens,
their values are also identical after nine rounds.  Now, we do our
trick from the meet-in-the-middle attack, again, and guess the last 26
bytes of relevant key material.  This lets us peel off the output
transformation and the last five rounds, leaving us with access to the
output from the eleventh round.  We can peel off the PHT layer, since
we know it and its inverse.  We can then learn all but two of the
bytes input to the eleventh round.

We now know all but two bytes of the output from the tenth round, and
we know a pair of texts for which only the last two bytes of the input
to the tenth round had changed.  We can check to see if the values
we've computed as being the outputs from the tenth round are
consistent with this situation, and thus are consistent with this
being a right pair.

We expect to have to check 256 different pairs of texts in this way,
each with work equivalent to 2^{208} SAFER+ encryptions.  This leaves
us with work equivalent to about 2^{216} encryptions, total, to learn
208/256 of the SAFER+ key bits.  The remaining 48 bits can be
brute-forced after these are known.

We thus break SAFER+ with a differential related-key attack, using 2
related keys, 512 plaintexts encrypted under each key, and 2^{216}
work.

Comments?

- --John Kelsey, [EMAIL PROTECTED] / [EMAIL PROTECTED]
NEW PGP print = 5D91 6F57 2646 83F9 6D7F 9C87 886D 88AF

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2
iQCVAwUBNok1riZv+/Ry/LrBAQHq/QP/bNhFMT1ZWxDUdybsamXK9hiwf01SIQWK
NQ7BznqPZZaDzyGC8I0EOZ4Pw1LNvkE34KUnE26/r95A0noLLbf9X2qcGgvK9AVp
4wRa3MXXbq59hQe11P2mBrGcoppV1ZA+HQmAysZ6dYM4lNXrxPaDohXJ5pSMflVS
s3z/iOgfZKc=
=ILee
-----END PGP SIGNATURE-----

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
