Cryptography-Digest Digest #698, Volume #9       Fri, 11 Jun 99 17:13:02 EDT

Contents:
  Re: Cracking DES (Terry Ritter)
  Re: cant have your cake and eat it too (John Savard)
  Re: Cracking DES (Patrick Juola)
  MD5 test data (Tony Lezard)
  Re: Fw: The Mathematics of Public-Key Cryptography, June 12 - June 17, 1999 ("Mike Murray")
  Re: Cracking DES (David Wagner)
  Re: cant have your cake and eat it too (David Wagner)
  Re: Slide Attack on Scott19u.zip (David Wagner)
  differential cryptanalysis ([EMAIL PROTECTED])
  Re: Cracking DES (John Savard)
  Re: huffman code length (Andras Erdei)
  Re: One Time Pad (Steve Rush)
  Re: cant have your cake and eat it too (David Wagner)
  Re: Cracking DES (John Savard)
  Re: Cracking DES (Patrick Juola)
  Re: Cracking DES (Eric Norman)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Cracking DES
Date: Fri, 11 Jun 1999 18:33:12 GMT


On Fri, 11 Jun 1999 11:07:47 GMT, in <7jqqm0$cut$[EMAIL PROTECTED]>, in
sci.crypt [EMAIL PROTECTED] wrote:

>Terry Ritter wrote:
>[...]
>> So for n ciphers, we have n times the attack effort, and 1/n reward
>> for any success, which makes the reward/effort ratio 1/n**2.  This is
>> not linear, and that should make the attack business somewhat more
>> discouraging.
>
>No.  First the simple error: "n times the effort" is the
>effort to attack all n ciphers;  1/n reward is for breaking
>one of them (more on that below).  We expect the number of
>ciphers broken to increase with the number of ciphers
>attacked, and thus the ratio stated is not the reward/effort
>ratio.

A reward/effort ratio of 1/n**2 is the ideal which we can only
approach, assuming that cryptanalysts can identify our weak ciphers so
we will not use those.  The better our cryptanalysts are, the more we
can approach that value.  And if we cannot approach that value, our
cryptanalysts are not keeping up with the opponents.  


>Two other points indicate the assumptions are false.  First,
>the reward is proportional to the intelligence value of the
>recovered plaintext and not the volume of plaintext.  Five
>percent of the traffic carries much more than five percent
>of the value, since real traffic is redundant.  

Where does this come from?  Cite please.  Under what assumptions?

The next time you want to read a 200-page book, rip it apart and
reconstruct the writing from any random 10 pages.  So now "reading a
book" takes only 5% of the time it did.  That should be a boon for
busy executives everywhere!

If we consider the essence of a book to be the main plot, you might
get lucky, although even that seems dicey.  But if the essence of the
book is the experience of immersion in a different situation, you
cannot get that from 10 pages.  End of story.

If cryptography protects customer orders, we might get one of those,
and thus know the customer name and the contents of one order.  If all
orders are alike, we have the universe.  But if all orders were alike,
the message would be "I'll have my usual" and if we expose that, we
may not know much more than we did.  

If cryptography protects love letters, breaking a cipher may mean that
we get a love letter.  If the opponents can and do pay the n**2 rate,
they eventually will expose one.  But any enterprise which cannot
tolerate even one exposure must question the use of cryptography
itself, since cryptography simply does not have provable strength.  


>Second, an
>intelligent attacker would first try to determine what ciphers
>his methods have a good chance of breaking, and concentrate
>on those.  Developing differential cryptanalysis took years;
>determining whether it applies to a new cipher takes hours or
>sometimes minutes.

But even if an analysis technique "applies," it still must be applied.
Knowing that something "applies" does not mean that success is
available; it just means that it *might* be, with a lot of effort,
expertise, and luck.  But effort and expertise are limited resources.
That is what the n**2 term means.  If one is willing to pay the
freight, one gets the result.  

Then we have the issue of what an "attack" really means.  Differential
Cryptanalysis was developed for DES.  In that sense, DES is "broken."
And yet, in practice, we are very unlikely to ever have the data
needed to perform such an attack.  So we have an attack which does
apply and is fully known and it still does not help us expose messages
under that cipher.  How has that helped the opponents?


>I know Terry Ritter disagrees, but I find that as the number
>of ciphers an enterprise uses increases, the attackers
>reward/effort ratio goes _up_.  He picks off the low hanging
>fruit.  I grant that the ratio goes down if the attacker needs
>all the encrypted information, but gaining a significant
>portion of the intelligence value gets easier.

Cite it.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: cant have your cake and eat it too
Date: Fri, 11 Jun 1999 18:23:49 GMT

[EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote, in part:

>  Because you can't prove that 128 bits is secure. Oh yes, maybe it is
>secure against a blind, unintelligent brute-force attack. But a blind, dumb
>brute-force attack is not everything.

Yes, but all that means is that you can't prove that *a particular
cipher* with a 128-bit key is secure.

If a 128-bit key is enough to be secure against a brute-force attack,
then it is *possible* to construct a secure cipher with a 128-bit key,
since only a brute-force attack is applicable to _all_ ciphers with a
128-bit key or any other particular key length.

John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Cracking DES
Date: 11 Jun 1999 14:43:27 -0400

In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
>
>On Fri, 11 Jun 1999 11:07:47 GMT, in <7jqqm0$cut$[EMAIL PROTECTED]>, in
>sci.crypt [EMAIL PROTECTED] wrote:
>
>>Terry Ritter wrote:
>>[...]
>>> So for n ciphers, we have n times the attack effort, and 1/n reward
>>> for any success, which makes the reward/effort ratio 1/n**2.  This is
>>> not linear, and that should make the attack business somewhat more
>>> discouraging.
>>
>>No.  First the simple error: "n times the effort" is the
>>effort to attack all n ciphers;  1/n reward is for breaking
>>one of them (more on that below).  We expect the number of
>>ciphers broken to increase with the number of ciphers
>>attacked, and thus the ratio stated is not the reward/effort
>>ratio.
>
>A reward/effort ratio of 1/n**2 is the ideal which we can only
>approach,

... or it would be if the mathematics were at all correct.

"And if my grandmother had balls, she'd be my grandfather."

        -kitten

------------------------------

From: Tony Lezard <[EMAIL PROTECTED] SPAM PLEASE>
Crossposted-To: comp.security.misc
Subject: MD5 test data
Date: 11 Jun 1999 19:28:15 +0100

I need to test the output of an MD5 implementation. Could someone with
a known correct version of MD5 hash some strings and post/email them?
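
In the meantime, the test suite in RFC 1321 (appendix A.5) gives
known-good digests to check against.  A minimal sketch, assuming a
modern Python with the standard hashlib module:

```python
# RFC 1321 test vectors: message -> expected MD5 digest (hex).
import hashlib

VECTORS = {
    b"": "d41d8cd98f00b204e9800998ecf8427e",
    b"a": "0cc175b9c0f1b6a831c399e269772661",
    b"abc": "900150983cd24fb0d6963f7d28e17f72",
    b"message digest": "f96b697d7cb7938d525a2f31aaf161d0",
}

for message, expected in VECTORS.items():
    assert hashlib.md5(message).hexdigest() == expected
print("all RFC 1321 vectors match")
```

Any implementation under test should reproduce these digests byte for byte.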

Thanks,

-- 
Tony Lezard <[EMAIL PROTECTED] NO SPAM PLEASE>
-- 
== Tony Lezard ==  |  PGP public key 0xBF172045 available from keyservers
[EMAIL PROTECTED]  |  or from my home page, http://www.mantis.co.uk/~tony/

------------------------------

From: "Mike Murray" <[EMAIL PROTECTED]>
Subject: Re: Fw: The Mathematics of Public-Key Cryptography, June 12 - June 17, 1999
Date: Fri, 11 Jun 1999 14:50:53 -0700

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

Will post reports as warranted... I'm looking forward to it... think
I'll probably learn a whole pile... and I'll probably be the only
undergrad there... *grin*

            Mike
=====BEGIN PGP SIGNATURE=====
Version: PGP 5.5.5

iQA/AwUBN2GEvP5WqcMdbVvFEQKlggCfR7rvD9FLzRM8/0y4N+5tO6boCmwAoI2P
joWTaVp9wJlX0wuxXyyHzlHz
=/K2V
=====END PGP SIGNATURE=====




------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Cracking DES
Date: 11 Jun 1999 12:46:28 -0700

In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
> >Two other points indicate the assumptions are false.  First,
> >the reward is proportional to the intelligence value of the
> >recovered plaintext and not the volume of plaintext.  Five
> >percent of the traffic carries much more than five percent
> >of the value, since real traffic is redundant.  
> 
> Where does this come from?  Cite please.  Under what assumptions?
> 
> The next time you want to read a 200-page book, rip it apart and
> reconstruct the writing from any random 10 pages.  So now "reading a
> book" takes only 5% of the time it did.  That should be a boon for
> busy executives everywhere!

Here's an example.  Consider differential cryptanalysis, before it was
widely known to the public.  A book on differential cryptanalysis is
presumably something the intelligence agencies would want to encipher
well, to prevent the techniques falling into enemy hands.

I claim that any five consecutive pages from Biham and Shamir's classic
book on differential cryptanalysis would probably be enough to reproduce
most of the ideas, with a bit of work.  (Any one of their umpteen pictures
of differential characteristics would likely be enough to give away the
whole game.)

And if a page or two from the index falls into your hands, too, that's
just icing on the cake, because it tells you which ciphers to try breaking
with the new techniques.

I don't think this is a particularly contrived example.  Is this what
you were asking for?

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: cant have your cake and eat it too
Date: 11 Jun 1999 13:00:28 -0700

In article <[EMAIL PROTECTED]>, John Myre  <[EMAIL PROTECTED]> wrote:
> Has anyone every done an "existence proof" of a secure cipher
> design?

Not that I know of.

Note that such an existence proof seems likely to imply that P != NP.
Thus, as a heuristic, such an existence proof is probably hard to find.

(Rough argument: Any cipher can be broken by an efficient non-deterministic
algorithm, which just non-deterministically guesses the key and then tests
the guess with a few trial decryptions.  Note that if P = NP, every efficient
non-deterministic algorithm can be converted to an efficient deterministic
algorithm, which suggests that no cipher can be secure.  This is just a rough
informal argument, and there are some holes you can drive a truck through,
but they don't contradict my point that such an existence proof is probably
hard to find.)
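
To make the guess-and-test step concrete, here is a toy sketch (my
illustration, not Wagner's; the one-byte XOR "cipher" is hopelessly
weak and chosen only so the exhaustive loop stays small):

```python
# On a deterministic machine, the "non-deterministic guess" becomes an
# exhaustive loop over the keyspace, checking each candidate key by a
# trial encryption against known plaintext.
def encrypt(key, plaintext):
    # Stand-in cipher: byte-wise XOR with a one-byte key.
    return bytes(b ^ key for b in plaintext)

def break_by_guessing(known_plaintext, ciphertext):
    for key in range(256):                              # enumerate every key
        if encrypt(key, known_plaintext) == ciphertext:  # trial-encryption test
            return key
    return None

ct = encrypt(0x5A, b"attack at dawn")
assert break_by_guessing(b"attack at dawn", ct) == 0x5A
```

For a real cipher the loop runs over 2^128 keys instead of 256, which is
exactly why the deterministic version is infeasible while the
non-deterministic one is "efficient".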

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Slide Attack on Scott19u.zip
Date: 11 Jun 1999 12:40:02 -0700

Nice observations.

It seems to me your attack can be substantially improved.
The slide attack _does_ work, and it is very efficient.

I conclude that Scott's cipher seems to be thoroughly broken.

1. A too-pessimistic estimate.  You state that, for a 38-bit block, you
   need 2^37 known texts to get 2^18 slid pairs.  I think this is too high.
   The correct number should be sqrt(2 * 2^18 * 2^38) = 2^{28.5} known texts.
   (Check: with 2^{28.5} texts, you get 2^{28.5}*(2^{28.5} - 1)/2 = 2^56
   pairs, so 2^56/2^38 = 2^18 of them should be slid pairs.)

2. A missed opportunity for optimization.  If you can make chosen-plaintext
   queries, the complexity can be substantially reduced.  You can get 2^18
   slid pairs with about 2^{19.5} chosen plaintexts, if I am not mistaken.
   See the slide attack paper for the technique (an adaptation of a neat
   trick due to Eli Biham).

3. Application to other block sizes.  The chosen-plaintext attack applies to
   any and all block sizes, with no increase in complexity.  (In contrast,
   the complexity of the known plaintext attack goes up with the square root
   of the size of the blockspace, since it requires a birthday collision.)

4. A mistake.  You claim that the slide attack doesn't work because usually
   different S-box entries are used in the first and last round (``no key
   bits shared'', in your terminology).  This is wrong.
   In particular, you can detect a slid pair by the fact that the plaintexts
   agree in all but 19 bits (except for a rotation), and similarly for the
   ciphertexts.  It doesn't matter that different S-box entries are used.
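
The arithmetic in points 1 and 3 can be checked mechanically (a sketch;
the 2^18 slid pairs and the 38-bit blockspace are the figures quoted
above):

```python
# Birthday-bound check for the known-plaintext slide attack figures.
import math

pairs_wanted = 2**18      # slid pairs needed
blockspace = 2**38        # 38-bit block

# t known texts give ~t^2/2 candidate pairs, each slid with probability
# 2^-38, so we need t^2 = 2 * pairs_wanted * blockspace.
texts_squared = 2 * pairs_wanted * blockspace
assert texts_squared == 2**57              # i.e. t = 2^{28.5} known texts

# Sanity check matching the post: t^2/2 = 2^56 pairs, 2^56/2^38 = 2^18 slid.
assert texts_squared // 2 == 2**56
assert texts_squared // 2 // blockspace == pairs_wanted

print("t ~= 2^28.5 ~=", math.isqrt(texts_squared), "known texts")
```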

I could be mistaken.  You certainly understand the scott* algorithms
better than I do.  But based on your comments to sci.crypt, it appears as
though the slide attack does actually work, and work quite well.


By the way, for the history buffs, you might be interested to read
  http://www.deja.com/[ST_rn=ps]/getdoc.xp?AN=232236643&fmt=text
where I posted an early warning (> 2 years ago) that Scott's cipher looks
potentially vulnerable to slide-style attacks.
However, I didn't try to work out all the details; I'm glad you've decided to.

Of course, this was back when I thought David Scott might actually listen
to reason, rather than just reply with gratuitous personal insults.  Given
that David Scott doesn't seem interested in learning, I don't think I will
devote any more time to this.  I'll let you work out all the details.

------------------------------

From: [EMAIL PROTECTED]
Subject: differential cryptanalysis
Date: Fri, 11 Jun 1999 20:14:11 GMT

In an earlier thread (DES) Tomstdenis said that in differential
analysis you take A (=P-P') and B (=C-C') and find a difference which
is more prominent than it should be.  Cool, but I'm something of a
simpleton -- how exactly is such a thing done?  I assume you take lots
and lots of known P + C, and look for a trend in A and B .... but how
precisely?
        If you have P_1 through P_20 and C_1 through C_20, all known,
how do you go from that to a trendable list of A and B?
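
[One way to see the counting step is a difference distribution table
for a single small S-box.  A toy sketch of my own; the 4-bit S-box
below is the first row of DES's S1, used purely as a sample:]

```python
# Tabulate how often each input difference A maps to each output
# difference B through the S-box; unusually large entries are the
# "more prominent than they should be" differences an attacker exploits.
SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def difference_table(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for a in range(n):                # A = x ^ x'
            b = sbox[x] ^ sbox[x ^ a]     # B = S(x) ^ S(x')
            table[a][b] += 1
    return table

t = difference_table(SBOX)
assert all(sum(row) == 16 for row in t)   # each row covers all 16 inputs
assert t[0][0] == 16                      # A = 0 always gives B = 0

# The "trend": the biggest count for a nonzero input difference.
count, a, b = max((t[a][b], a, b) for a in range(1, 16) for b in range(16))
print(f"A={a:#x} -> B={b:#x} occurs {count}/16 times (average is 1)")
```

With real known plaintext/ciphertext pairs one would count observed
(A, B) pairs the same way and compare the trend against this table.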

Gracias;
[EMAIL PROTECTED]

[yes, i am Jim_101_202; i ditched deja and am now USENETed.]




------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Cracking DES
Date: Fri, 11 Jun 1999 20:02:44 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:

>So for n ciphers, we have n times the attack effort, and 1/n reward
>for any success, which makes the reward/effort ratio 1/n**2.

There were two posts claiming you were guilty of an arithmetic error.
That does appear to be the case, but instead of admitting a small
slip, you have defended this ratio. Thus, I'll allow that you may have
meant more than you appeared to mean to those who noted the apparent
error, and I'll give this a more detailed examination.

If there is 1 cipher, the *anticipated* reward for breaking that
cipher is X.

If there are n ciphers, the *anticipated* reward for breaking one of
them is X/n (assuming we don't have to worry about diminishing returns
as the adversary reads more of our messages, so that a small glimpse
reveals more than a proportionate amount of our secrets, as someone
noted).

To earn that reward, though, we don't have to make n times the effort;
we can concentrate on just one cipher.

If we examine all n ciphers, so as to find the weakest one to attack,
we do so in order to _decrease_ the total amount of effort we expend
in order to break the first of the n ciphers. (If we chose one of the
n ciphers to attack at random, we would not be following an optimal
strategy, but we would be no worse off, except for the 1/n reduced
reward, than we were in the case of one cipher.)

Thus, rather than our effort being n, except for the overhead of a
preliminary study of all n ciphers, the effort required to break one
cipher is at most 1; it can be less if the ciphers are considerably
different in strength, and if several of the ciphers are closely
related, the effort required to break those can be close to 1.

However, the preliminary study, even if it is a term with a small
coefficient, is still of order n, and so as n approaches infinity, you
could be correct by saying 1/(n^2).

But for small n, it looks as though the reward to effort ratio indeed
has only been diminished by 1/n, or less, since reward is increased by
the clues some messages give to others, and effort is decreased by the
range of strengths found in an ensemble of ciphers.

However, while I think that the charge of an arithmetic error has been
substantiated, I think you may have been led into that error by a
sound intuition.

If the ratio of reward to effort has been diminished by 1/n, the
likelihood of an adversary _pursuing_ that diminished reward with his
efforts may well have been reduced by something larger. Not just
1/(n^2), but more like 1/(e^n).

John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Andras Erdei)
Crossposted-To: comp.compression,alt.comp.compression,sci.math
Subject: Re: huffman code length
Date: Fri, 11 Jun 1999 18:18:36 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

>The Huffman encoding is such that a uniform frequency distribution
>gives a balanced tree while extremely non-uniform distribution
>gives a tree very remote from being balanced. 

p0 = epsilon , p1 = 1.0 - epsilon seems to be non-uniform enough,
and still the tree ((0)(1)) seems to be "balanced" enough :)

>Question: Given an arbitrary Huffman tree, how can we say something
>useful in quantitative terms concerning the possilbe frequency
>distributions that correspond to it?

(1) You can say something about the frequency distribution
without using *any* information other than that it is a
frequency distribution; if you have reordered your alphabet
so that the symbol probabilities are in non-descending order

  0 < P0 <= P1 <= ... <= Pn-1 <= 1

then

  Pi <= 1 / ( n - i ) 

<<hint:  sum Pi = 1 thus  (n-i)*Pi <= 1>>
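
<<a quick numeric check of bound (1), my addition, not part of
the derivation:>>

```python
# For probabilities sorted in non-descending order, Pi <= 1/(n-i):
# the n-i probabilities from Pi upward are each >= Pi and sum to <= 1.
def check_bound(probs):
    probs = sorted(probs)
    n = len(probs)
    return all(p <= 1.0 / (n - i) for i, p in enumerate(probs))

assert check_bound([0.20, 0.20, 0.21, 0.39])
assert check_bound([0.05, 0.10, 0.15, 0.30, 0.40])
```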

(2) We don't know the probabilities, just the Huffman
code-lengths.  AFAIK all you can say is that a longer code
belongs to a *less or equally* probable symbol:

  Hi > Hj   -->  pi <= pj

<<Hi is the length, pi is the probability of the i-th Huffman
code>>
<<hint: indirect proof>>

Let's reorder the alphabet (creating subsets):

  H0 = H1 = ... = Hi(1) > Hi(1)+1 = .... > Hi(k) + 1 = ... = Hn-1

<<the j-th set is from Hi(j-1)+1 to Hi(j), 
let p-1 = 0 and i(0) = -1; i(k+1) = n, pn = 1>>
<<warning: it does *not* follow that pi(j) < pi(j)+1,
just that pi(j) <= pi(j)+1, and we know nothing
about the order of the symbols inside a set>>

Now we can prove that for all q: i(j-1) < q <= i(j) there
exist at least q - i(j-1) symbols i(j-1) < m <= i(j) such that

  (i.a) pm <= Pq <= 1 / ( n - q )

<<hint: from (1)>>

Thus in the j-th set all elements have
probability p such that

  (i.b)  p <= Pi(j) <= 1 / ( n - i(j) ) 

<<also Pi(j-1) <= p, but we have only an upper bound
on Pi(j-1), so it is not of much use>>

and

  (ii) max{ pm : m <= i(j-1) } <= p

<<hint: same proof as at the beginning of (2)>>

<<spec: also pn-1 = 1 - (p0 + ... + pn-2)>>

(i.a) and (i.b) are a restricted version of (1)
for the case when you don't know the probs (only
the Huffman code-lengths), and want to give a bound
for a specific symbol.  (ii) seems to be pretty
useless to me -- but I don't know the intent
behind your question, so maybe you can find
some use for it.

(3) I don't have the time, but I guess you
can derive a lower bound for the frequency of the
(most probable) symbol(s) based on the Fibonacci
series; my guess would be something like

  pn-1 > F(k-1) / N

where N is the length of the message.
(So e.g. if you have four different code-lengths
then the most frequent symbol has to occur at least
F(3)+1=4 times.)
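
<<property (2) can also be exercised directly; a sketch of mine using
the standard heap-merge construction -- ties are broken by insertion
order, so the exact lengths depend on that arbitrary choice:>>

```python
# Build Huffman code lengths by repeatedly merging the two least
# probable subtrees; every merge adds one bit to each member symbol.
# Then check property (2): Hi > Hj  -->  pi <= pj.
import heapq

def huffman_lengths(probs):
    # Heap entries: (probability, tie-break counter, symbols in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.05, 0.10, 0.15, 0.30, 0.40]
lengths = huffman_lengths(probs)          # gives [4, 4, 3, 2, 1]
for pi, hi in zip(probs, lengths):
    for pj, hj in zip(probs, lengths):
        if hi > hj:
            assert pi <= pj               # longer code, less probable
```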

EXAMPLES:

>For a balanced tree
>of 4 symbols, for example, a frequency distribution of 0.20, 0.20,
>0.21, 0.39 is possible but much more extreme distributions are not
>possible.

In the example above:  H0 = H1 = H2 = H3
k=1, i(1) = 3, thus (i.b) for all h=0..3:  Ph < 1
(or (i.a) one symbol is in (0,1/4], two is in
(0,1/3], three is in (0,1/2] and all four is in (0,1])
which does not say much. :(

"Mr. X" <[EMAIL PROTECTED]> wrote:
> Some possible symbol sets for code set [0,10,11]
> [.33,.33,.33] [.34,.33,.32] [.8,.1,.1]

H0 = H1 > H2
k=2, i(1)=1, i(2)=2

j=1: at least one of P0 or P1 is in (0..1/3] , (i.a)
     both P0 and P1 are in (0..1/2] ; (i.b)
j=2: P2 = 1 - (P0+P1) is in [max(P0,P1)..1)  (ii)

  Br,

  Andras Erdei
  [EMAIL PROTECTED]


------------------------------

From: [EMAIL PROTECTED] (Steve Rush)
Subject: Re: One Time Pad
Date: 11 Jun 1999 20:13:00 GMT

>A One Time Pad should not be able to be cracked, I think.
>The weak part is the key, right?
>
>So, what is the point of starting a useful attack on a One Time Pad?
>
>What is needed?
>One or more ciphertexts, one or more parts of plaintext, maybe a key?

The only attack I can think of for a *true* OTP (the key is *really* random,
never re-used, and delivered by secure means) uses burglary, not cryptography:
steal the key, copy it, and replace it without being detected.  That makes the
main problem one of physical security, which is in a whole different universe.


**********************************************************************
If it's spam, it's a scam.  Don't do business with Net abusers.


------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: cant have your cake and eat it too
Date: 11 Jun 1999 13:11:02 -0700

In article <7jrp18$j1t$[EMAIL PROTECTED]>,
Patrick Juola <[EMAIL PROTECTED]> wrote:
> I don't know whether or not it's been done before, but it doesn't
> seem that hard a proof.  Under the theory (and mathematics) of
> Kolmogorov-Chaitin complexity, ``most'' strings are random, in the
> sense that there is no algorithmic method to reduce them.  Handwaving
> rapidly, I assert that a string can be (algorithmically) converted
> into a permutation over an appropriate block size, which is necessary 
> and sufficient to define a block cypher.

The problem is that you have to be able to efficiently compute the
permutation, much more efficiently than the adversary can break it.

``Most'' strings are indeed unbreakable, but also ``most'' strings
cannot be computed efficiently, so this argument breaks down.

For some interesting results on this subject, you might want to see
Jim Massey's invited lecture at Eurocrypt'96.  Slides online at
  http://www.iacr.org/publications/dl/massey96/massey96.html
There's some heavy math in there, but the implications of the math are
also summarized nicely.  See especially his written explanation that
goes with the slides.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Cracking DES
Date: Fri, 11 Jun 1999 20:22:21 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:

>>I know Terry Ritter disagrees, but I find that as the number
>>of ciphers an enterprise uses increases, the attackers
>>reward/effort ratio goes _up_.  He picks off the low hanging
>>fruit.  I grant that the ratio goes down if the attacker needs
>>all the encrypted information, but gaining a significant
>>portion of the intelligence value gets easier.

>Cite it.  

Subject to certain assumptions - which it has already been established
are not true for your proposal - this is a sufficiently obvious result
that it can be demonstrated again.

If each message is enciphered in only one of the n ciphers, and each
message clearly identifies, without encryption, which one of the n
ciphers was used with it, then one could choose one cipher at random,
and reproduce the one-cipher case, except for the fact that you
wouldn't be able to read all the messages, just a fraction. The
opportunity to choose which cipher to attack is a bonus.

And note that if the ciphers are all very similar, research to find an
attack on one will help in attacking them all: if they're all very
different, it is likely that they will also be different in strength.

As for the fact that a law of diminishing returns applies when an
attacker goes from communications intelligence to reading a few
messages to reading them all, this is borne out by a great deal of the
historical literature on cryptanalysis, including David Kahn's The
Codebreakers. However, I will admit that this is not a law of nature;
it depends on the kind of communications being attacked, and for some
types of transaction-based systems, the value of the intelligence can
indeed come close to being proportional to the number of messages
read.

An oversimplified model can be constructed as follows:

Let there be N messages, of which M messages contain a specific item
of information that an attacker is interested in.

If one can read all N messages, the probability is 1 that one will
find that information.  However, if the number of messages one can
read is small, the probability of learning what one seeks is M/N times
the number of messages read, rather than 1/N times.

So the proportionality law is still linear, except for a factor
depending on how widely mentioned the data you seek is.
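
The model can be put in numbers (a sketch of my own; the exact
probability is hypergeometric, and the linear law is its small-sample
limit):

```python
# M of N messages mention the item; reading k random messages finds it
# with probability 1 - C(N-M, k) / C(N, k), roughly k*M/N for small k.
from math import comb

def p_found(N, M, k):
    return 1 - comb(N - M, k) / comb(N, k)

N, M = 1000, 10
assert p_found(N, M, N) == 1.0                  # read everything: certainty
assert abs(p_found(N, M, 1) - M / N) < 1e-12    # one message: exactly M/N

# For small k the exact value tracks the linear law k*M/N:
for k in (1, 5, 10):
    assert abs(p_found(N, M, k) - k * M / N) < 0.01
```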

John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Cracking DES
Date: 11 Jun 1999 14:41:51 -0400

In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
>
>On 11 Jun 1999 09:49:35 -0400, in <7jr45f$hf5$[EMAIL PROTECTED]>,
>in sci.crypt [EMAIL PROTECTED] (Patrick Juola) wrote:
>
>>>[...]
>>>There are several different effects to having multiple ciphers:  One
>>>effect is that each cipher must be detected, acquired, attacked, and
>>>broken for future use (if possible).  So the more ciphers there are,
>>>the more effort which must be expended, and this is linear.
>>>
>>>Another effect of multiple ciphers is that the universe of information
>>>is partitioned into multiple channels, which means that the reward for
>>>breaking any cipher is proportionately less.  The more ciphers there
>>>are, the less information any one of them can expose, and this is also
>>>linear.  
>>
>>That isn't 'another effect', that's the same effect in disguise. 
>>
>>Think of it this way.  I hire one group of cryptanalysts to analyze
>>one algorithm and break everything.  That costs me $X -- and I have
>>access to everything in the universe of a single cypher.
>>
>>I now hire N groups to analyze one algorithm and break everything.
>>This costs me <= N*$X, and I still have access to
>>everything in the universe.
>
>The problem with that logic, of course, is the assumption that the
>cryptanalysts can break everything.  That is unwarranted and
>unrealistic.  Under more reasonable assumptions, effort must be spent
>on every cipher, but only some ciphers will return a reward.  
>
>If we have one cipher, we attack one cipher; if we break it, we get
>the universe.
>
>If we have n ciphers, we attack n ciphers; if we break one, we get 1/n
>of the universe.
>
>If we have n ciphers, we attack n ciphers; if we break m, we get m/n of
>the universe.

Still doesn't work, Mr. Ritter.  Let's formalize this -- assume that each
cypher has a probability p of being broken.

A single-cypher universe yields p*1 of the universe when you attempt
to break the (unique) cypher.

An N-cypher universe yields p * 1/N of the universe when you attempt
to break one of the N cyphers (at the same cost).

An N cypher universe yields N * p * 1/N = p*1 of the universe when you
attempt to break them all.  Cost is N times as much, yield is identical.

No n^2 terms at all.

And, of course, the next obvious question is whether p is constant.
Assuming that it's not, then you'd be better off encrypting everything
with the single cypher for which p is smallest than using *any*
mixture -- but I'll omit the mathematical proof for brevity.
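
The accounting above can be spelled out as a one-line expected-value
check (my sketch; p is the per-cipher break probability, assumed
constant as in the post):

```python
# Expected fraction of the universe exposed: each broken cipher yields
# 1/n_ciphers of the traffic, and each attack succeeds with probability p.
def expected_yield(p, n_ciphers, n_attacked):
    return n_attacked * p * (1.0 / n_ciphers)

p = 0.3
assert expected_yield(p, 1, 1) == p                # single-cypher universe
assert abs(expected_yield(p, 10, 10) - p) < 1e-9   # attack all ten: same yield
# Cost scales with n_attacked, yield does not improve: no n^2 term anywhere.
assert abs(expected_yield(p, 10, 10) - expected_yield(p, 1, 1)) < 1e-9
```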

        -kitten

------------------------------

From: Eric Norman <[EMAIL PROTECTED]>
Subject: Re: Cracking DES
Date: Fri, 11 Jun 1999 15:34:05 -0500

Terry Ritter wrote:
> 
> On Fri, 11 Jun 1999 11:07:47 GMT, in <7jqqm0$cut$[EMAIL PROTECTED]>, in
> sci.crypt [EMAIL PROTECTED] wrote:

> >Two other points indicate the assumptions are false.  First,
> >the reward is proportional to the intelligence value of the
> >recovered plaintext and not the volume of plaintext.  Five
> >percent of the traffic carries much more than five percent
> >of the value, since real traffic is redundant.
> 
> The next time you want to read a 200-page book, rip it apart and
> reconstruct the writing from any random 10 pages.  So now "reading a
> book" takes only 5% of the time it did.  That should be a boon for
> busy executives everywhere!

The argument I hear from Bryan would be closer to this:  assume
we have 1000 different 200 page textbooks about American history.
If we ripped them apart and only looked at a random sample of
10000 pages, we would probably know more about American history
than if we only did this with one.

The actual magnitude of the numbers is speculative, methinks.

-- 
Eric Norman

        "Congress shall make no law restricting the size of integers
        that may be multiplied together, or the number of times that
        an integer may be multiplied by itself, or the modulus by
        which an integer may be reduced".

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
