Cryptography-Digest Digest #503, Volume #14       Sun, 3 Jun 01 13:13:01 EDT

Contents:
  Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)
  Re: Luby-Rackoff Theorems? (Nicol So)
  Re: Dynamic Transposition Revisited Again (long) (Mok-Kong Shen)
  Re: Unicity distance and compression for AES (Tim Tyler)
  Re: UK legislation regarding cryptography (Mok-Kong Shen)
  Re: Unicity distance and compression for AES (Tim Tyler)
  Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)
  Re: Unicity distance and compression for AES ("Tom St Denis")
  Re: Unicity distance and compression for AES ("Tom St Denis")
  Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 3 Jun 2001 14:58:26 GMT

[EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote in 
<[EMAIL PROTECTED]>:

>[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:
>
>>That's not /quite/ what Shannon's perfect secrecy implies:
>>
>>Perfect secrecy means that the cyphertext contains no information about
>>the plaintext.  This can happen if one cyphertext maps to one plaintext -
>>but can also happen if exactly ten cyphertexts map to every plaintext.
>   I have to look it up, but there is a difference between ideal and
>perfect. I thought that in a perfect system every key tested has to
>lead to a unique input message, so the bijection is among the set of
>three: the key, the plaintext, and the message. But of course many
>ciphertexts map to the same input; just change the key.
> 
>>
>>I don't think perfect secrecy logically implies 1-1 compression.
>
>  I think you're right. The compressor would not always have to be
>bijective to work. A simple example: take a bijective compressor and
>add the letters "PK" to the front of the text, then encrypt the
>message to get a ciphertext. Now examine what happens if we test
>every key. If every key leads to a file with "PK" in front of it,
>then the overall system could still be a bijective compression-
>encryption system even though the compressor was not bijective. But
>you realize strange capabilities would have to be added to the
>encryptor to get a system like this. For one, the encryption part of
>the system itself would not be bijective, since each ciphertext would
>have to decrypt to something with "PK" as the front two characters.
>
>  But it's clear that for a given possible ciphertext one can form
>the set of possible inputs to the encryptor portion: the set created
>by encrypting every message, and then, for each ciphertext, testing
>every key to recover the input form of the data to the encryptor.
>Many such messages will be the result of different key/ciphertext
>combinations, and many will be repeated. But once you have this
>set, it's clear that any member of it, when decompressed and then
>recompressed, has to come back to itself. If it does not, then
>information about the plaintext is gained. This feature is lacking
>in most compressors. But you're right, it does not require true
>bijection in the sense I use it. If not bijection, though, it
>requires a very strange relationship between the compressor and the
>encryptor portion of the system.
>
>  However, if the encryptor portion is bijective from some input
>form to binary output files, the compressor would have to be
>bijective from its input set to the form the encryptor expects as
>input.
>
>
>David A. Scott
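
A quick way to see the decompress-then-recompress test above in code: a
minimal Python sketch, with zlib standing in for "an ordinary,
non-bijective compressor" (purely illustrative):

    import zlib

    def survives_roundtrip(candidate: bytes) -> bool:
        # The test from the text: decompress the candidate output of a
        # trial decryption, recompress it, and see whether the identical
        # bytes come back. A bijective compressor passes for *every*
        # input; zlib fails for most inputs, either because they are not
        # legal zlib streams at all, or because the recompressed form
        # differs from the candidate.
        try:
            plain = zlib.decompress(candidate)
        except zlib.error:
            return False
        return zlib.compress(plain) == candidate

Any trial key whose decrypt fails this test can be thrown away at once,
which is exactly the information leak being described.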


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: Nicol So <[EMAIL PROTECTED]>
Subject: Re: Luby-Rackoff Theorems?
Date: Sun, 03 Jun 2001 11:27:54 -0400
Reply-To: see.signature

David Wagner wrote:
> 
> I think you might have gotten caught by two subtle pitfalls.
> If I'm not mistaken, they will account for the difference between
> our two results, and correcting for them will give what I claimed.
> But tell me whether you agree or not---this is tricky stuff,
> and who knows, maybe I went wrong somewhere.

I agree that this is tricky stuff. I think I spotted some problems in
your argument. Please check my reasoning.

> 1. The Luby-Rackoff paper uses 2n for the block size, and proves
> security so long as m^2/2^n is small, where m is the number of
> texts.  Thus, this provides security up to m ~ 2^{n/2}.  If we
> let n' = 2n be the block size (n' is "my n" in my earlier post),
> then this provides security for up to 2^{n'/4} texts.  With a
> 128-bit block cipher, this is security up to 2^32 texts.

I don't think that's the correct interpretation of the "main lemma" (the
lemma that I quoted in my last message). In the kind of complexity
theory-based approach used by Luby and Rackoff (and many others),
security is defined in terms of the non-existence of efficient computers
(e.g. PPTM, family of polynomial-size circuits) that can distinguish
between two distributions (in this case, uniformly distributed functions
in F^2n and permutations implemented by a 3-round Feistel structure).
The construction is secure iff *asymptotically* the distinguishing
probability

    |Pr[C_2n(F^2n)] - Pr[C_2n(H(F^n,F^n,F^n))]|

converges to 0 faster than n^-c for any c > 0.

The intuition behind it is that if you keep increasing the security
parameter, *eventually* the distinguishing probability will become
small, and more importantly, become small so fast that you can't use
polynomial-time tricks to magnify it to any constant level. 

So... what matters is the order of growth of the upper bound, but not
the exact value of the upper bound. This is a subtle but *very*
important point. We cannot take the expression of an upper bound, and
then argue that if you choose m to violate it, the construction suddenly
becomes insecure. The upper bound in the main lemma represents *a*
sufficient condition for security, but there are infinitely many
others that could be used in its place. Each one of these would yield a
different safety threshold for m, if we used your line of argument.

I think there's another issue in the way the main lemma is interpreted
in your argument. It seems to assume that it is OK so long as the *upper
bound* on the distinguishing probability is <= 1. But the distinguishing
probability, as the absolute difference between two probabilities, can
never exceed 1, so worrying about an upper bound getting beyond 1 is not
meaningful.

On the other hand, allowing the distinguishing probability to reach
*any* constant threshold is bad. Actually, allowing the distinguishing
probability to reach *any* threshold of the form n^-c (c > 0) is bad
because you can magnify it to a constant level in polynomial time.
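
To make the "magnify it in polynomial time" step concrete, here is a toy
Python simulation (the 1/2 + eps acceptance model is an assumption for
illustration, not the Luby-Rackoff setting itself): run a distinguisher
with advantage eps about 1/eps^2 times on fresh samples and take a
majority vote.

    import random

    def weak(is_real: bool, eps: float = 0.05) -> bool:
        # Toy distinguisher: accepts with probability 1/2 + eps on the
        # real distribution and exactly 1/2 on the ideal one.
        return random.random() < (0.5 + eps if is_real else 0.5)

    def majority(is_real: bool, k: int = 1001) -> bool:
        # k independent runs on fresh samples, majority vote.
        return sum(weak(is_real) for _ in range(k)) > k // 2

    def advantage(dist, trials: int = 5000) -> float:
        p_real = sum(dist(True) for _ in range(trials)) / trials
        p_ideal = sum(dist(False) for _ in range(trials)) / trials
        return p_real - p_ideal

    print(advantage(weak))      # about 0.05
    print(advantage(majority))  # about 0.5: a constant, not negligible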

I think the interpretation in your argument is at once too pessimistic
in one regard, and not conservative enough in another.

> 2. The Luby-Rackoff paper is actually not quite enough if you want
> to use their construction recursively.  You need to look to modern
> generalizations.  In particular, LR prove
>   If the Feistel function is truly random, then
>   the 4-round cipher is pseudorandom.
> Note that they require a truly random (unconditionally secure)
> Feistel function, so if you try to instantiate the Feistel function
> using a recursive LR construction, the original LR theorem doesn't
> apply.  Since then, others have made the slight generalization to
> the case where the Feistel function is assumed only to be pseudorandom
> (e.g., Maurer, Lucks, Naor/Reingold, etc.).  Let q-pseudorandom
> refer to a keyed function that is indistinguishable from its idealized
> version if the adversary has access to at most q texts.  Then the
> modern form of the Luby-Rackoff theorem is
>   If the Feistel function is q'-pseudorandom, then
>   the 4-round cipher is q-pseudorandom.
> In particular, q is related to q' by q ~ min(q',2^{n/2}) where
> n is as above, i.e., q ~ min(q',2^{n'/4}).  This is why the q'
> appears in my statement of the theorem but does not appear in
> the original Luby-Rackoff paper.

I'm not familiar with the newer results so I can't discuss them
intelligently, but I'd appreciate some pointers so that I can get
updated on the developments.

You don't need to use newer and more precise results to argue about the
"security" of recursive Luby-Rackoff constructions. It's true that the
main lemma was proved assuming the F functions are truly random. But the
security of a recursive construction comes from the definition of
pseudorandomness. Basically, if you can find a distinguisher for the
recursive construction, you can turn it into a distinguisher for the
inner construction, contradicting the main lemma.

The preceding is akin to the argument that if you can securely extend a
random bit string by 1 bit, you can turn that into a pseudorandom
generator by repeatedly using the primitive to extend the string by a
polynomial amount.
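
As a sketch of that argument in Python (extend_one_bit is a stand-in for
the assumed primitive mapping n bits securely to n+1 bits; nothing here
is itself secure):

    def stretch(seed, extend_one_bit, out_bits):
        # Iterate the 1-bit extender: each round, keep n bits as the
        # new internal state and emit the one extra bit. If any
        # efficient test could tell the emitted bits from random, a
        # hybrid argument would turn it into a distinguisher for
        # extend_one_bit itself.
        state, output = seed, []
        for _ in range(out_bits):
            state, bit = extend_one_bit(state)
            output.append(bit)
        return output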

At this level of precision, we are still talking about asymptotic
behavior; many practically relevant considerations are swept under the
rug. I wouldn't be surprised if somebody has refined the model to expose
practical differences between the Luby-Rackoff construction and its
recursive application.

-- 
Nicol So, CISSP // paranoid 'at' engineer 'dot' com
Disclaimer: Views expressed here are casual comments and should
not be relied upon as the basis for decisions of consequence.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited Again (long)
Date: Sun, 03 Jun 2001 17:44:34 +0200



[EMAIL PROTECTED] wrote:
> 
> This is the result of running the sci.crypt FAQ (all 10
> sections) through a byte/bit counter.  (NB DOS format ie CRLFs at
> end of lines)
> 
> Maximum bits in file = 1060616 (132577 bytes)
>      Actual bits set =  467377 (44.067%)
>                Bit 7 =       0 ( 0.000%)
>                Bit 6 =   94724 (71.448%)
>                Bit 5 =  120046 (90.548%)
>                Bit 4 =   38573 (29.095%)
>                Bit 3 =   46584 (35.137%)
>                Bit 2 =   59031 (44.526%)
>                Bit 1 =   46759 (35.269%)
>                Bit 0 =   61660 (46.509%)
> 
> So, as the bias is not too bad for ASCII text, why not (pseudorandomly)
> invert half the bits?  Assuming large blocks, this should correct
> the bias.
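
(For reference, a minimal Python version of such a byte/bit counter;
the per-bit percentages are relative to the byte count, matching the
quoted figures:)

    import sys

    def bit_counts(path: str) -> None:
        data = open(path, "rb").read()
        per_bit = [sum((b >> i) & 1 for b in data) for i in range(8)]
        total_set = sum(per_bit)
        print(f"Maximum bits in file = {len(data) * 8} ({len(data)} bytes)")
        print(f"     Actual bits set = {total_set:7d} "
              f"({100 * total_set / (len(data) * 8):.3f}%)")
        for i in reversed(range(8)):
            # Per-bit percentages relative to the number of bytes.
            print(f"               Bit {i} = {per_bit[i]:7d} "
                  f"({100 * per_bit[i] / len(data):.3f}%)")

    if __name__ == "__main__":
        bit_counts(sys.argv[1])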

My article 'On encryption through bit permutations' posted
on 6th March may interest you.

M. K. Shen

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Reply-To: [EMAIL PROTECTED]
Date: Sun, 3 Jun 2001 15:42:29 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> Tom St Denis <[EMAIL PROTECTED]> wrote:

: The original reply does make sense.  You are not switching languages,
: just the representation.  I.e. I will swap A with B, B with C, C with
: D, etc.  The words look different but it's basically the same
: language.
:>
:> Uh - compression before encryption increases the unicity distance.
:> Surely you are not claiming otherwise...?

: Not really.  Think about it.  You take a 100 byte message and pack it into
: 16 bytes (just an example).  Now I try all the keys to decrypt the 16 bytes.
: For some of the keys the material will actually decompress, for those I can
: still check for the biases in English.

: Think of compression in this case as just transposing the alphabet to an
: isomorphic alphabet (i.e. equivalent).  You're not making the original
: language less biased, you're just changing its representation and adding a
: layer.

This appears to be a misconception on your part.

Let's pretend the messages in question consist of strings of bytes,
each of which lies between 0 and 40.

The bytes in the messages are chosen at random from the set [0,1,...,40].

The back of my envelope makes the unicity distance for the original
messages under a 128-bit key about 48 bytes.  It is a small number,
only a few times the size of the key.
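
The envelope is Shannon's U = H(K)/D, with D the redundancy per
ciphertext byte.  A quick check in Python:

    import math

    redundancy = 8 - math.log2(41)   # 8 bits carried, log2(41) bits used
    print(128 / redundancy)          # about 48.4 bytes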

Now, consider a compressor that (essentially) represents the messages in
base 41, rather than base 256, and outputs the result as a byte stream.

The unicity distance of these messages under a 128-bit key is infinite.
No matter how much cyphertext you're given, it helps not-in-the-slightest
in recovering the plaintext.

This example should illustrate how compression can raise the unicity
distance.
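
A rough sketch of such a coder in Python (illustration only: a genuinely
bijective coder must also handle lengths and leading zeros, which this
big-integer version glosses over):

    def compress(symbols):
        # Read the symbol sequence as one big base-41 number and write
        # it out in base 256.
        n = 0
        for s in symbols:
            assert 0 <= s <= 40
            n = n * 41 + s
        return n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")

    def decompress(data):
        # Every byte string decodes to some legal symbol sequence; that
        # is the whole point: a trial decryption never "fails to
        # decompress".
        n = int.from_bytes(data, "big")
        out = []
        while n:
            n, r = divmod(n, 41)
            out.append(r)
        return out[::-1]

With such a coder every trial decrypt decodes to some legal symbol
string, and since the plaintext symbols were uniform, every decode looks
as plausible as any other.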
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: UK legislation regarding cryptography
Date: Sun, 03 Jun 2001 17:51:58 +0200



demon news wrote:
> 
>     Could anybody let me know what the current legislation is for
> cryptographic products in the UK?
> 

Maybe http://www.epic.org/ is of some interest to you.

M. K. Shen

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Reply-To: [EMAIL PROTECTED]
Date: Sun, 3 Jun 2001 15:58:27 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> [EMAIL PROTECTED] wrote:

:> : Then again, compression would seem to reduce the number of ciphertext
:> : characters required for a unique, meaningful decompressed decipherment
:> : so maybe it *reduces* the unicity distance, which is a benefit to the
:> : cryptanalyst.
:>
:> : Is this right?
:>
:> Not at all.  Lots of correct looking decrypts *hinders* the cryptanalyst -
:> since he has no idea which one is the real message.
:>
:> Similarly, the best place to hide a tree is in a forest.
:>
:> The cryptanalyst's task is easiest when there's only one correct looking
:> decrypt, and all the others look like random garbage.
:>
:> Compression increases the unicity distance, by decreasing the redundancy
:> in the inputs to the cypher.

: Not really.  Again think of this from an attacker's POV.

: I'm going to guess a key, I'm going to decrypt, I'm going to try and
: decompress, I'm going to check for English

: vs

: I'm going to guess a key, I'm going to decrypt, I'm going to check for
: English

: You've just added a step in there.  I'm still quite capable of doing the
: rest of the attack.  Remember that most compression codecs are
: deterministic, so the transformation doesn't change the meaning of the
: message, just its format.

So what are you going to do if the compression is so good that *all* the
messages appear to be in English?

You really don't show any signs of having thought this through.

I'm happy that now our long-standing disagreement in this area has boiled
down to something simple, concrete, where you appear to be demonstrably in
the wrong ;-)

Compression increases the unicity distance - and can even do so
unboundedly.
-- 
__________
 |im |yler  Try my latest game - it rockz - http://rockz.co.uk/

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 3 Jun 2001 16:03:09 GMT

[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:

>Tom St Denis <[EMAIL PROTECTED]> wrote:
>: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
>: news:[EMAIL PROTECTED]... 
>:> Tom St Denis <[EMAIL PROTECTED]> wrote:
>
>:> : The original reply does make sense.  You are not switching
>:> : languages, just the representation.  I.e. I will swap A with B, B
>:> : with C, C with D, etc.  The words look different but it's
>:> : basically the same language.
>:>
>:> Uh - compression before encryption increases the unicity distance.
>:> Surely you are not claiming otherwise...?
>
>: Not really.  Think about it.  You take a 100 byte message and pack it
>: into 16 bytes (just an example).  Now I try all the keys to decrypt
>: the 16 bytes. For some of the keys the material will actually
>: decompress, for those I can still check for the biases in English.
>
>: Think of compression in this case as just transposing the alphabet to
>: an isomorphic alphabet (i.e. equivalent).  You're not making the
>: original language less biased, you're just changing its
>: representation and adding a layer.
>
>This appears to be a misconception on your part.
>
>Let's pretend the messages in question consist of strings of bytes,
>each of which lies between 0 and 40.
>
>The bytes in the messages are chosen at random from the set
>[0,1,...,40]. 
>
>The back of my envelope makes the unicity distance for the original
>messages under a 128-bit key about 48 bytes.  It is a small number,
>only a few times the size of the key.
>
>Now, consider a compressor that (essentially) represents the messages in
>base 41, rather than base 256, and outputs the result as a byte stream.
>
>The unicity distance of these messages under a 128-bit key is infinite.
>No matter how much cyphertext you're given, it helps
>not-in-the-slightest in recovering the plaintext.
>
>This example should illustrate how compression can raise the unicity
>distance.

   I don't think this will help TOM; he will view it as though some
of the 41 values, when strung together, don't make sense. I have
argued with him repeatedly and he does not want to learn, so I doubt
he can understand your example. So in case he is reading: the 41
symbols in this case don't represent CHARACTERS, but something more
pure, in that any combination of them makes a message. I hope this
helps, but I don't hold out much hope. I do think that, since TOMMY
kind of idolizes him, Wagner could step in and explain how unicity
distance works, if he wished. And it's really such a basic thing
that he should. TOMMY would believe him, and I don't think Wagner
would twist this topic too much, since many here have a good
understanding of unicity distance. It's just that more modern crypto
people ignore it in favor of saying safety is better based on hoping
things take too long to reverse. So real security based on
information content is of little value to their current world view
of crypto.


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Date: Sun, 03 Jun 2001 16:14:09 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
>
> :> : The original reply does make sense.  You are not switching languages
> :> : just the representation.  I.e I will swap A with B, B with C, C with
> :> : D, etc... The words look different but it's basically the same
> :> : language.
> :>
> :> Uh - compression before encryption increases the unicity distance.
> :> Surely you are not claiming otherwise...?
>
> : Not really.  Think about it.  You take a 100 byte message and pack it
into
> : 16 bytes (just an example).  Now I try all the keys to decrypt the 16
bytes.
> : For some of the keys the material will actually decompress, for those I
can
> : still check for the biases in english.
>
> : Think of compression in this case as just transposing the alphabet to an
> : isomorphic alphabet (i.e equivalent).  You're not making the original
> : language less biased, you're just changing it's representation and
adding a
> : layer.
>
> This appears to be a misconception on your part.
>
> Let's pretend the messages in question consist of strings of bytes,
> each of which lies between 0 and 40.
>
> The bytes in the messages are chosen at random from the set [0,1,...,40].
>
> The back of my envelope makes the unicity distance for the original
> messages under a 128-bit key about 48 bytes.  It is a small number,
> only a few times the size of the key.
>
> Now, consider a compressor that (essentially) represents the messages in
> base 41, rather than base 256, and outputs the result as a byte stream.
>
> The unicity distance of these messages under a 128-bit key is infinite.
> No matter how much cyphertext you're given, it helps not-in-the-slightest
> in recovering the plaintext.
>
> This example should illustrate how compression can raise the unicity
> distance.

Your argument is only valid if we were trying to find random compressed
data streams.  We aren't.  We're looking for decompressed strings that
resemble the language (i.e. English).

Thus, even if your codec's ratio approaches 0 bpb, as long as you can
decompress and check, you haven't changed the problem.  To prove it, gimme
an RC5-32 CBC encoded message with a small key (say 32 bits) where the
plaintext is BZIP2 (or whatever) compressed text.  I will bet you 100
dollars that given at least 50 blocks of ciphertext (adjacent blocks to
make this simple) I can find the key within a week.
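
For what it's worth, the attack loop described here looks like this in
Python (decrypt_rc5_cbc is a stand-in, since the standard library has no
RC5, and looks_like_english is whatever statistical test you prefer):

    import bz2

    def find_key(ciphertext: bytes):
        for key in range(2 ** 32):                    # whole 32-bit keyspace
            trial = decrypt_rc5_cbc(key, ciphertext)  # stand-in decrypt
            try:
                text = bz2.decompress(trial)          # most wrong keys die here
            except (OSError, ValueError):
                continue
            if looks_like_english(text):              # stand-in final check
                return key
        return None

Note that the try/except does nearly all the work: BZIP2 output carries
headers and checksums, so almost every wrong key is rejected there.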

Tom



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Date: Sun, 03 Jun 2001 16:15:45 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> :> [EMAIL PROTECTED] wrote:
>
> :> : Then again, compression would seem to reduce the number of ciphertext
> :> : characters required for a unique, meaningful decompressed
decipherment
> :> : so maybe it *reduces* the unicity distance, which is a benefit to the
> :> : cryptanalyst.
> :>
> :> : Is this right?
> :>
> :> Not at all.  Lots of correct looking decrypts *hinders* the
cryptanalyst -
> :> since he has no idea which one is the real message.
> :>
> :> Similarly, the best place to hide a tree is in a forest.
> :>
> :> The cryptanalyst's task is easiest when there's only one correct
looking
> :> decrypt, and all the others look like random garbage.
> :>
> :> Compression increases the unicity distance, by decreasing the
redundancy
> :> in the inputs to the cypher.
>
> : Not really.  Again think of this from an attacker's POV.
>
> : I'm going to guess a key, I'm going to decrypt, I'm going to try and
> : decompress, I'm going to check for English
>
> : vs
>
> : I'm going to guess a key, I'm going to decrypt, I'm going to check for
> : English
>
> : You've just added a step in there.  I'm still quite capable of doing the
> : rest of the attack.  Remember that most compression codecs are
> : deterministic, so the transformation doesn't change the meaning of the
> : message, just its format.
>
> So what are you going to do if the compression is so good that *all* the
> messages appear to be in English?

How is that possible?  That form of compression would not be practical.
Not all decompressed strings from even a "bijective" super codec that
outputs only A-Z will form English-looking strings.

I'm sorry, but ASFDKFSFWEKRXCVSVWERWOIYGDN is not going to rate too
highly as English.
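
A crude way to put a number on that: score a candidate by
letter-frequency fit (a Python sketch; the frequency table is
approximate):

    # Approximate English letter frequencies, in percent.
    FREQ = {'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5, 'I': 7.0, 'N': 6.7,
            'S': 6.3, 'H': 6.1, 'R': 6.0, 'D': 4.3, 'L': 4.0, 'C': 2.8,
            'U': 2.8, 'M': 2.4, 'W': 2.4, 'F': 2.2, 'G': 2.0, 'Y': 2.0,
            'P': 1.9, 'B': 1.5, 'V': 1.0, 'K': 0.8, 'J': 0.15, 'X': 0.15,
            'Q': 0.10, 'Z': 0.07}

    def chi_squared(s: str) -> float:
        # Lower scores are more English-like.
        letters = [c for c in s.upper() if c in FREQ]
        n = len(letters) or 1
        return sum((letters.count(c) - n * p / 100) ** 2 / (n * p / 100)
                   for c, p in FREQ.items())

    print(chi_squared("ASFDKFSFWEKRXCVSVWERWOIYGDN"))     # scores badly
    print(chi_squared("The unicity distance of English")) # far better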

> You really don't show any signs of having thought this through.

Um, I could say the same for you.

> I'm happy that now our long-standing disagreement in this area has boiled
> down to something simple, concrete, where you appear to be demonstrably in
> the wrong ;-)
>
> Compression increases the unicity distance - and can even do so
> unboundedly.

No it doesn't.

Tom



------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 3 Jun 2001 16:11:40 GMT

[EMAIL PROTECTED] (Tim Tyler) wrote in <[EMAIL PROTECTED]>:

>Tom St Denis <[EMAIL PROTECTED]> wrote:
>: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
>: news:[EMAIL PROTECTED]... 
>:> [EMAIL PROTECTED] wrote:
>
>:> : Then again, compression would seem to reduce the number of
>:> : ciphertext characters required for a unique, meaningful
>:> : decompressed decipherment so maybe it *reduces* the unicity
>:> : distance, which is a benefit to the cryptanalyst.
>:>
>:> : Is this right?
>:>
>:> Not at all.  Lots of correct looking decrypts *hinders* the
>:> cryptanalyst - since he has no idea which one is the real message.
>:>
>:> Similarly, the best place to hide a tree is in a forest.
>:>
>:> The cryptanalyst's task is easiest when there's only one correct
>:> looking decrypt, and all the others look like random garbage.
>:>
>:> Compression increases the unicity distance, by decreasing the
>:> redundancy in the inputs to the cypher.
>
>: Not really.  Again think of this from an attacker's POV.
>
>: I'm going to guess a key, I'm going to decrypt, I'm going to try and
>: decompress, I'm going to check for English
>
>: vs
>
>: I'm going to guess a key, I'm going to decrypt, I'm going to check for
>: English
>
>: You've just added a step in there.  I'm still quite capable of doing
>: the rest of the attack.  Remember that most compression codecs are
>: deterministic, so the transformation doesn't change the meaning of
>: the message, just its format.
>
>So what are you going to do if the compression is so good that *all* the
>messages appear to be in English?
>
>You really don't show any signs of having thought this through.
>
>I'm happy that now our long-standing disagreement in this area has
>boiled down to something simple, concrete, where you appear to be
>demonstrably in the wrong ;-)

   This means he will change the topic. Several times I got to
the point where there is really no maneuvering room. It is at that
point that he goes off on another subject, and then spouts that
nothing was ever proved. I fear that his mind is incapable of doing
certain things. I think it's a flaw in his character. As he gets
older it will become a permanent flaw in his ability to do logic. He
is young, so there is hope, but it is dwindling quite fast.

>
>Compression increases the unicity distance - and can even do so
>unboundedly.


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
