Cryptography-Digest Digest #462, Volume #14      Mon, 28 May 01 12:13:01 EDT

Contents:
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (Roger Fleming)
  Re: DES Crypto Myth?? (Roger Fleming)
  Re: The HDCP Semi Public-Key Algorithm (ammendment) (John Savard)
  Re: Good crypto or just good enough? (Eric Lee Green)
  A new technology for internet security? (Mok-Kong Shen)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (SCOTT19U.ZIP_GUY)
  Re: Medical data confidentiality on network comms ("Roger Schlafly")
  Re: Good crypto or just good enough? (SCOTT19U.ZIP_GUY)
  Re: A generic feistel cipher with hash and gf(257) mixers (Simon Johnson)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Roger Fleming)
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: Mon, 28 May 2001 13:57:18 GMT

 [EMAIL PROTECTED] wrote:
[...]
>What are the problems from "the conventional viewpoint" that are pointed
>out with the use of big keys?

1. Ensuring good key diffusion becomes difficult. If good key diffusion does 
not occur, the cipher may become vulnerable to 'meet in the middle' style 
attacks or 'correlation attacks' that attack parts of the key piecemeal, making 
the cipher strength partially linear in the key instead of exponential. This 
is BAD.
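
The piecemeal risk can be seen in a toy meet-in-the-middle sketch (a 
hypothetical two-stage "cipher", not any real design): two independent 8-bit 
subkeys with no diffusion between them fall in about 2 x 2^8 trials instead 
of 2^16.

```python
# Hypothetical two-stage toy cipher: stage 1 uses only k1, stage 2
# uses only k2, with no key diffusion between the stages.
def enc(p, k1, k2):
    return ((p + k1) % 256) ^ k2

def mitm(p, c):
    # forward table: middle value -> k1 (bijective in k1)
    fwd = {(p + k1) % 256: k1 for k1 in range(256)}
    # meet in the middle: undo stage 2 for each k2, look up the match
    return [(fwd[c ^ k2], k2) for k2 in range(256) if (c ^ k2) in fwd]

p, k1, k2 = 0x41, 0x5A, 0xC3
c = enc(p, k1, k2)
candidates = mitm(p, c)
assert (k1, k2) in candidates   # found in ~2*256 steps, not 256*256
# a second known (p, c) pair would filter the ~256 surviving candidates
```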

2. The sort of ciphers he was talking about often have HUGE keys, e.g. we all 
know of one that has keys of over 1 MB. Generating such keys becomes a serious 
problem. True random number generators take ages to produce new keys of this 
size, so invariably the key will be puffed up to size by some sort of PRNG. 
This means the cipher strength is actually based on the PRNG and its small 
seed, and the huge key is illusory. Further, since generating keys is such a 
pain, it is likely to be done less often, so the users have permanent keys 
instead of session keys, so key compromises are complete disasters.
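
A minimal sketch of the "puffed up" key problem (Python's non-cryptographic 
random module stands in here for whatever PRNG is used; the seed value is 
made up):

```python
import random

def expand_key(seed, nbytes=1_000_000):
    # deterministic PRNG expansion: the whole 1 MB key is a pure
    # function of the small seed
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(nbytes))

big_key = expand_key(seed=0xDEADBEEF)
assert len(big_key) == 1_000_000
# an attacker who guesses the 32-bit seed reproduces the entire "huge"
# key, so the effective keyspace is the seed's, not 2^(8 * 1000000)
assert expand_key(0xDEADBEEF) == big_key
```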

>Are you talking about key distribution problems?  Or about an assertion

3. Transmission and storage of such keys is a serious problem. The whole point 
of ciphers is that a large secret (the message) can be protected by a small 
secret (the key). When the 'small' secret is 1 MB, how do you protect _it_? 
The answer usually offered seems to be, put it on a floppy and lock it in your 
safe (if you have one) - in which case, why not leave your files in the 
safe... You end up with a cipher which offers only-as-good-as-your-locks 
security, is nearly useless for network applications, and isn't even portable 
enough to lug around on your laptop.

>that keys much bigger than 128 bits are useless overkill?
>Key distribution is indeed a problem - but I don't think anyone could 
>convincingly argue that 128 bits is enough and more is pointless.

OK, I will attempt to convincingly argue that 128 bits is (nearly always) 
enough and more is (usually) pointless - for a few decades, anyway. 
First, a benchmark: the massively parallel Deep Crack machine, which averages 
about 3 days to find a 56 bit DES key, cost US$250,000 to build but could 
possibly be built in "bulk" for US$70,000 for each additional unit. If it 
works on new keys constantly, and the bank charges its owners 10% interest, it 
has to earn $57 per key to pay for itself. Messages worth less than that are 
uneconomical; messages worth more are juicy targets.

Now, a 128 bit key will be 2^72 (or 4.7e21) times harder to crack, all else 
being equal. That's obviously totally and utterly out of the question at the 
moment (each message would cost trillions of times more than the total wealth 
of the entire world), but computers are getting more powerful all the time. 
How can we possibly estimate what will be secure in the future? We can really 
do little better than extrapolate long-standing current trends (Moore's 
'Law'), and add a highly conservative "fudge factor". I'll take my fudge 
factor as 20 bits (a factor of a million times, _extremely_ conservative 
engineering), but I also want my messages to be safe up to, say, $1 million. 
$1 million vs. $57 takes another 14 bits (= log_2 (1000000/57)) from my 
margin, leaving me 72 -14 -20 = 38 bits to play with. By Moore's Law, that 
represents about 38 x 1.5 = 57 years of hardware development. I'll probably be 
dead by then. So my million dollar secrets will be safe until after I'm gone.
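
The bit budget above can be checked directly (all figures come from the Deep 
Crack benchmark earlier in this post):

```python
import math

cost_per_key = 57                     # US$ per 56-bit key, from above
margin_bits = 128 - 56                # 72 bits of headroom over DES
value_bits = round(math.log2(1_000_000 / cost_per_key))   # ~14
fudge_bits = 20                       # the "factor of a million" margin
spare_bits = margin_bits - value_bits - fudge_bits
years = spare_bits * 1.5              # Moore's Law: ~1 bit per 18 months
assert (spare_bits, years) == (38, 57.0)
```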

What if I want to run for high political office in 25 years' time, and my 
opponent tries to dig up muck by decrypting my old emails? In 2026 he'd better 
be prepared to pay US $2.5 TRILLION (of today's money), _per_email_, to hope 
to succeed. Remember, these numbers are only guesstimates, but they include a 
fudge factor of a million times; I'm really estimating that in 25 years' time, 
it will cost $2.5 quintillion per message. (A quintillion is a 1 followed by 18 
zeroes. It's about 6 orders of magnitude more than the US national budget.) 
Almost certainly, there will be much cheaper ways to get my secrets than 
attacking such a large key as 128 bits.

So yeah, unless you're protecting very long-lived, very high-value secrets 
like nuclear weapon designs, I think even 128 bits is overkill. But it's 
affordable overkill, so why not use it? 1000-bit keys are not affordable 
overkill, so don't use them.

As an aside, there is considerable circumstantial evidence that a lot of 
classified military systems use keys much shorter than 128 bits: Hughes 
XPD/KPD = 71 bits; the Data Authenticator for Seismic Verification (intended 
for use in a nuclear test ban treaty) = 79 bits; Skipjack (approved for SECRET 
data) = 80 bits, etc.

>AFAICS, allowing variable size keys is not terribly painful in terms
>of fattening up the key schedule.  It seems like a good idea to me.

It can be a good idea if it is done carefully, but most algorithms that get it 
right seem to use very slow, complex key schedules that limit their utility 
for some purposes. For example, Blowfish is great for file encryption, email, 
etc., but would be considerably less useful for a multiplexed network 
encryptor, and is useless as the basis for a hash function.

>....or are you talking about keys that are much bigger than the messages? ;-)

------------------------------

From: [EMAIL PROTECTED] (Roger Fleming)
Subject: Re: DES Crypto Myth??
Date: Mon, 28 May 2001 14:04:57 GMT

 [EMAIL PROTECTED] wrote:
[...]
>It's funny how many Schneier quotes I've seen since I've been following
>this newsgroup. 
[...]
>I think my reading time was better spent with HAC. Why is Schneier such a
>popular cryptographer? Is it charisma or is he a real innovator
>(compared to other professional cryptographers)?[...]

I think he is quoted so often because, as well as being a competent 
cryptographer, he is a good writer; there are many crypto "truths" that are 
succinctly summarised by his pithy one-liners and thus very amenable to use 
on usenet.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: The HDCP Semi Public-Key Algorithm (ammendment)
Date: Mon, 28 May 2001 15:23:05 GMT

On 28 May 2001 06:59:31 -0700, [EMAIL PROTECTED] (Munro
Saunders) wrote, in part:

>I have used implicitly above the observation (guess) that the shuffle
>network does not change the cycle length of the LFSR network.

That's correct; after about 12 clocks, the probability that the
shuffle network has no initial state bits in it gets reasonably high.

Since the XOR of four LFSRs is as easy to solve as the output of one
longer LFSR, a correlation attack would be based on the 11% chance of
a certain value of the delay.

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

------------------------------

From: [EMAIL PROTECTED] (Eric Lee Green)
Subject: Re: Good crypto or just good enough?
Reply-To: [EMAIL PROTECTED]
Date: Mon, 28 May 2001 15:35:12 GMT

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

On Mon, 28 May 2001 15:19:08 +0200, Stefan Lucks <[EMAIL PROTECTED]
mannheim.de> wrote:
>On Sun, 27 May 2001, Tom St Denis wrote:
>> > I have recently seen an application where people encrypted their
>> > plaintexts using triple DES in OFB mode. As it turned out, what they
>> > really wanted was authentication, and they actually did not care much
>> > about confidentiality. So using a single DES based CBC-MAC would have been
>> > much better than encrypting under triple DES.
>> 
>> For authentication I would have used a hash first ...
>
>What is the purpose of a hash here?

A hash by itself simply verifies that the message came through intact. If you
then encrypt the hash, or use a cryptographic hash such as SHA1 and
add your key in as the last thing hashed, then you have an authenticated message.
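
A minimal sketch of the idea (the key and message are made up; the 
secret-suffix construction shown first is the one described above, and HMAC 
is the standard hardened form of the same idea):

```python
import hashlib
import hmac

key = b"shared secret"                   # hypothetical shared key
msg = b"wire transfer: $100 to Alice"

# secret-suffix MAC as described: the key is the last thing hashed
tag = hashlib.sha1(msg + key).hexdigest()

# the standard hardened construction of the same idea
tag_hmac = hmac.new(key, msg, hashlib.sha1).hexdigest()

# the receiver recomputes the tag with the shared key and compares
assert hmac.compare_digest(tag, hashlib.sha1(msg + key).hexdigest())
```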

>However, picking between  triple DES and Rijndael is a different game ...

Unless performance is an issue in your particular application. 3DES is bog-
slow. Rijndael is blazingly fast. 


=====BEGIN PGP SIGNATURE=====
Version: GnuPG v1.0.5 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE7Em7/3DrrK1kMA04RAj98AJ9iPJ9gqVySytao7ilTwF6LPWJw0gCgtleN
aySLhuaS9dLQ5IvCAAGfb10=
=mlug
=====END PGP SIGNATURE=====

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: A new technology for internet security?
Date: Mon, 28 May 2001 17:32:30 +0200


A US firm claims to have developed a new technology for
internet security through varying IP addresses:

http://dailynews.yahoo.com/h/nm/20010521/wr/tech_security_dc_1.html

I don't yet clearly see how this actually functions, since 
one could have only a limited number of IP addresses at
one's disposal, if I don't err.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: 28 May 2001 15:27:04 GMT

[EMAIL PROTECTED] (Roger Fleming) wrote in
<[EMAIL PROTECTED]>: 

> [EMAIL PROTECTED] wrote:
>[...]
>>What are the problems from "the conventional viewpoint" that are
>>pointed out with the use of big keys?
>
>1. Ensuring good key diffusion becomes difficult. If good key diffusion
>does not occur, the cipher may become vulnerable to 'meet in the middle'
>style attacks or 'correlation attacks' that attack parts of the key
>piecemeal, making the cipher strength partially linear in the key
>instead of exponential. This is BAD.
>
>2. The sort of ciphers he was talking about often have HUGE keys, eg we
>all know of one that has keys of over 1 MB. Generating such keys becomes
>a serious problem. True random number generators take ages to produce
>new keys of this size, so invariably the key will be puffed up to size
>by some sort of PRNG. This means the cipher strength is actually based
>on the PRNG and its small seed, and the huge key is illusory. Further,
>since generating keys is such a pain, it is likely to be done less
>often, so the users have permanent keys instead of session keys, so key
>compromises are complete disasters. 

    I assume you're talking about my cipher, which allows keys of
over one million bytes.  This does not mean that one has to generate
a key of a million bytes. I suppose it depends on how secure one wants
to be.  But the key you're talking about should be viewed as just
an extra piece of code. You don't have to change it very often.

   Just as when you generate a big PGP key, you protect it with a
password. In reality you use a small key on your computer, which is
your password phrase.  The million-byte keyenc.key file is just
there when you need a major change.

>
>>Are you talking about key distribution problems?  Or about an assertion
>
>3. Transmission and storage of such keys is a serious problem. The whole
>point of ciphers is that a large secret (the message) can be protected
>by a small secret (the key). When the 'small' secret is 1 MB, how do you
>protect _it_? The answer usually offered seems to be, put it on a floppy
>and lock it in your safe (if you have one) - in which case, why not
>leave your files in the safe... You end up with a cipher which offers
>only-as-good-as-your-locks security, is nearly useless for network
>applications, and isn't even portable enough to lug around on your
>laptop.

   And ciphers with very, very large keys can be used exactly like a
small-key cipher. The advantage is that you can change the big key much
less frequently.
 
>
>>that keys much bigger than 128 bits are useless overkill?
>>Key distribution is indeed a problem - but I don't think anyone could 
>>convincingly argue that 128 bits is enough and more is pointless.
>
>OK, I will attempt to convincingly argue that 128 bits is (nearly
>always) enough and more is (usually) pointless - for a few decades,
>anyway. First, a benchmark: the massively parallel Deep Crack machine,
>which averages about 3 days to find a 56 bit DES key, cost US$250,000
>to build but could possibly be built in "bulk" for US$70,000 for each
>additional unit. If it works on new keys constantly, and the bank
>charges its owners 10% interest, it has to earn $57 per key to pay for
>itself. Messages worth less than that are uneconomical; messages worth
>more are juicy targets.


  The advantage is this: if you're limited to a method with 128-bit keys,
then if that method is broken you need another method. With a large-key
method designed correctly, the attacker has less to work with. It's as
if your method were an unknown 128-bit system. By the time the attacker
has found a possible break, you change the BIG key and it's like
an entirely new encryption system.
 
>
>Now, a 128 bit key will be 2^72 (or 4.7e21) times harder to crack, all
>else being equal. That's obviously totally and utterly out of the
>question at the moment (each message would cost trillions of times more
>than the total wealth of the entire world), but computers are getting
>more powerful all the time. How can we possibly estimate what will be
>secure in the future? We can really do little better than extrapolate
>long-standing current trends (Moore's 'Law'), and add a highly
>conservative "fudge factor". I'll take my fudge factor as 20 bits (a
>factor of a million times, _extremely_ conservative engineering), but I
>also want my messages to be safe up to, say, $1 million. $1 million vs.
>$57 takes another 14 bits (= log_2 (1000000/57)) from my margin, leaving
>me 72 -14 -20 = 38 bits to play with. By Moore's Law, that represents
>about 38 x 1.5 = 57 years of hardware development. I'll probably be dead
>by then. So my million dollar secrets will be safe until after I'm gone. 
>

  But the agency that broke it might yet take it out on your kids,
if you bothered to reproduce any.


>As an aside, there is considerable circumstantial evidence that a lot of
>classified military systems use keys much shorter than 128 bits: Hughes 
>XPD/KPD = 71 bits; the Data Authenticator for Seismic Verification
>(intended for use in a nuclear test ban treaty) = 79 bits; Skipjack
>(approved for SECRET data) = 80 bits, etc.

   Last I heard, SKIPJACK is approved for SECRET data. I think you're
dreaming. Part of encrypting secret data is to keep the method secret,
which is what using a very large key system can do, by having this
million-byte encrypted key file so that you can use a short password.

 Also, since the big key has entropy to spare, it could be derived
from some big files that you and the sender have in common. The files
themselves don't have to be secret; the key you use as your password
does, and it can be 128 bits or whatever level you really want.


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: "Roger Schlafly" <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: Medical data confidentiality on network comms
Date: Mon, 28 May 2001 14:45:09 GMT

"Larry Kilgallen" <[EMAIL PROTECTED]> wrote
> > Personally, I'd be quite happy with 56-bit ciphers and 512-bit PK keys
> > if the various other privacy leaks were plugged. The real medical
privacy
> > threats have almost nothing to do with crypto.
> But some of them are susceptible to cryptographic controls.
> Consider the issue of delegation.  My doctor can see my
> medical records.  My doctor should be able to delegate
> the ability to see those records to a specialist for a
> limited amount of time, but without delegating unlimited
> rights to further delegation.  Some number of emergency
> room doctors should be able to unseal my records in the
> absence of my doctor if they all agree and the access is
> strongly audited (alarmed) with guaranteed notification
> to my doctor and me.  These are all issues where there
> might be some cryptographic assistance as part of the
> total solution.

There would have to be a culture change among physicians if anything
like that were to take effect. US physicians just don't believe
in those sorts of limits. And patients just don't have enough
power to ask for limits like that.





------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Good crypto or just good enough?
Date: 28 May 2001 15:52:55 GMT

[EMAIL PROTECTED] (Eric Lee Green) wrote in
<[EMAIL PROTECTED]>: 

>
>>However, picking between  triple DES and Rijndael is a different game
>>... 
>
>Unless performance is an issue in your particular application. 3DES is
>bog- slow. Rijndael is blazingly fast. 
>

   If one is paranoid about picking between triple DES and RIJNDAEL,
why not pick both?

  You can use BICOM for the RIJNDAEL part, then a bijective transform
to match the block lengths for DES, then use your DES, and then
another bijective transform to get back to ordinary files.

  Or you can roll your own, if you have the time and worry that one
may be safe and the other not.

 But the main point is: if you combine more than one encryption
method in series, keep the interfaces to each one bijective, so
that an attacker can't get a good toehold on either method, which
would make it easier to break.
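
 A toy sketch of that series rule (hypothetical byte-wise bijections,
nothing like BICOM or DES; it only shows that bijective stage interfaces
compose into an invertible whole with no impossible ciphertexts):

```python
# two hypothetical byte-wise bijections standing in for two ciphers
S1 = [(7 * i + 3) % 256 for i in range(256)]   # affine map, gcd(7,256)=1
S2 = [i ^ 0x5C for i in range(256)]            # xor map

inv_S1 = [0] * 256
for i, v in enumerate(S1):
    inv_S1[v] = i

def cascade(data):
    # stage interfaces are bijections, so the composition is one too
    return bytes(S2[S1[b]] for b in data)

def uncascade(data):
    return bytes(inv_S1[b ^ 0x5C] for b in data)

msg = b"attack at dawn"
assert uncascade(cascade(msg)) == msg
# every ciphertext byte value is reachable: no "toehold" redundancy
assert sorted(cascade(bytes(range(256)))) == list(range(256))
```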


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: [EMAIL PROTECTED] (Simon Johnson)
Subject: Re: A generic feistel cipher with hash and gf(257) mixers
Date: 28 May 2001 09:01:23 -0700

Tom St Denis <[EMAIL PROTECTED]> wrote in message 
news:<[EMAIL PROTECTED]>...
> Tom St Denis wrote:
> > 
> > Jim Steuert wrote:
> > >
> > >  Does anyone have an opinion on the security of
> > >  generic feistel ciphers like this?  The key is mixed
> > >  in the middle rounds in a fairly simple manner.
> > >  Does this create any weakness?
> > >
> > >  This generic feistel cipher is based on the efficient
> > >   multipermutation hash mixer routine of Bob Jenkins.
> > >   I modified his  mixer algorithm to use 32-bit rotates
> > >   instead of shifts, and then I tested the statistics.
> > >   I also added a gf(257) 32-bit byte-wise multipermutation
> > >   mixer. (1 is represented by 8-bit 0x00,...,256 by 0xff)
> > >
> > >    A multipermutation feistel mixer operation: c = a op b
> > >    is invertible, in that by fixing any input a, varying the
> > >    second input b will cause all possible values of
> > >    the output c. This preserves the equal-likelihood
> > >    of all output values, in that any single output
> > >    value is caused by exactly (2^n) different input (a,b)
> > >    pairs, out of (2^n)*(2^n) possible input pairs.
> > >    This makes each output value have prob = 1/(2^n).
> > >    Of course, the other important (avalanche,etc) qualities are
> > >    due to the properties of the gf and Bob Jenkin's mixers,
> > >    in particular his use of combined 32-bit add/sub and xor.
> > >    This was compiled with -O5 with the mingw version of gcc.
> > 
> > This is wrong.  Nice immunity to GF(2^n) differentials comes from
> > GF(2^n) decorrelated functions (it's simple to prove it too).  In
> > GF(257) you will see GF(2^8) differentials with probs upto about 12/256
> > if I am not mistaken.
> 
> In GF(257) inversion ...
> 
> To be more precise Pr[255 => 255] is 256/256, there are some 16/256,
> 12/256 and alot of 2,4,8/256.
> 
> So this is not a good "fixed" sbox.   Note that using mults in GF(257)
> by random values is good to a point as the average DP value for any
> xor-pair (over all unique multiplicands (all 127*255 of them)) is fairly
> low (this is a wild guess I should really check sometime).
> 
> I would bet for all mults though diffs by 255 would be a source of
> weakness... (again wild speculation)
> 
> Tom

Tom's point is correct: if XOR is used as the difference function, D255 goes
to D255 with probability one. However, if the difference is measured
with subtraction in GF(257), the function has a 'random' distribution
of differences, like its GF(2^w)/p(x) counterpart.
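
Tom's probability-one claim is easy to check for the GF(257) inversion sbox
(a quick verification sketch using the byte convention from the quoted post,
where 0x00 represents the field element 1 and 0xff represents 256):

```python
from collections import Counter

# GF(257) inversion sbox: byte x stands for field element x + 1
S = [pow(x + 1, 255, 257) - 1 for x in range(256)]   # a^255 = a^-1 mod 257

# XOR difference 0xFF maps to 0xFF with probability 256/256, as claimed
assert all(S[x] ^ S[x ^ 0xFF] == 0xFF for x in range(256))

# measured with subtraction mod 257 instead, the differences spread out:
# each output difference occurs at most twice (a quadratic in the input)
inv = lambda a: pow(a, 255, 257)
diffs = Counter((inv(a) - inv(a + 1)) % 257 for a in range(1, 256))
assert max(diffs.values()) <= 2
```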

However, since your cipher appears to be a standard Feistel, where the
left and right halves are combined using XOR, it is unlikely that using
subtraction in GF(257) is a clever thing to do... so you're knackered.

Simon.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
