Cryptography-Digest Digest #517, Volume #14       Mon, 4 Jun 01 18:13:00 EDT

Contents:
  Re: Welcoming another Anti-Evidence Eliminator stooge to USENET  (P.  Dulles / AKA 
Loki) (Dave Howe)
  Re: Def'n of bijection (Mok-Kong Shen)
  Re: benefits of compression for security (Mok-Kong Shen)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler)
  Re: Def'n of bijection (SCOTT19U.ZIP_GUY)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler)
  Re: Dynamic Transposition Revisited Again (long) (Mok-Kong Shen)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler)

----------------------------------------------------------------------------

From: Dave Howe <[EMAIL PROTECTED]>
Crossposted-To: 
alt.privacy,alt.security,alt.security.pgp,alt.security.scramdisk,alt.privacy.anon-server
Subject: Re: Welcoming another Anti-Evidence Eliminator stooge to USENET  (P.  Dulles 
/ AKA Loki)
Date: Mon, 04 Jun 2001 22:12:38 +0100

In our last episode (<alt.security.pgp>[Mon, 04 Jun 2001 20:08:16
GMT]), "Tom St Denis" <[EMAIL PROTECTED]> said :
>"Dave Howe" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> In our last episode (<alt.security.pgp>[Mon, 04 Jun 2001 01:59:03
>> GMT]), "Tom St Denis" <[EMAIL PROTECTED]> said :
>> >Take all primes and form a composite N.  Add one to N.  Now N is not
>> >divisible by any of the "known" primes.  Thus N+1 is a new prime not in
>> >the list.  Proof by contradiction.  We proved that "there are finite 
>> >number of primes" is false.
>> <pedant>
>> or is divisible by a prime not in the original list
>> </pedant>
>That's not possible.  Since we already have all consecutive primes...
>3*5*7+1 = 106
which does not prove that 106 (or N+1; omitting 2 has had some
strange results in your example :) is prime; it just proves that a
prime exists which is not in your original list.
However, it is (as I admitted at the time) pedantry; you have
successfully proved the proposition "there are no primes above x" is
false, given the list of primes below x.
Example:
2*3*5*7*11*13 = 30030
30030+1=30031 = 59 x 509
(interestingly, 30029 IS prime, so....:)
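The arithmetic above is easy to check mechanically; a minimal sketch
(plain Python with trial division, nothing assumed beyond the numbers
quoted):

```python
from functools import reduce

def is_prime(n):
    """Trial division; fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [2, 3, 5, 7, 11, 13]
n = reduce(lambda a, b: a * b, primes) + 1   # 30030 + 1 = 30031

# n is coprime to every prime on the list...
assert all(n % p != 0 for p in primes)
# ...but it is not itself prime: the "new" prime is a factor of n.
assert not is_prime(n) and n == 59 * 509
assert is_prime(30029)   # the "interestingly" aside checks out too
```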

and of course, this is another example of proving a statement is false
;)

--== DaveHowe ( is at) Bigfoot dot com ==--

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Def'n of bijection
Date: Mon, 04 Jun 2001 23:25:35 +0200



Phil Carmody wrote:
> 
> David (Scott) 's term "Bijection" is a mapping such that the domain
> (compressed files) is the totality of the set of finite files
> representable on the file system
> or
> Every representable file (in the file system) is the image of a (unique)
> source file under the compression mapping.
[snip]

I am not sure that the above sufficiently helps people 
capture Scott's compression idea. Since the topic has 
over time led to quite a number of threads in the group, 
each time with a few posts apparently based in part on 
certain misunderstandings (in my humble view), I venture 
to offer an explanation in some detail (in order to get 
corrections from the readers, in case I am wrong). 

Scott's compression assumes that the compressed file ends 
on a byte boundary (a commonly reasonable requirement). 
So any compressed file has n (whole) bytes, the original 
file having m (whole) bytes. For any fixed n, there are 
2^n different (possible) files of n-byte length. On 
decompression with a given compression algorithm, these 
will in general lead to bit sequences of different 
lengths. Let's denote the length of a typical such 
sequence by g bits. If the condition g mod 8 = 0 always 
holds, i.e. the decompression of all 2^n (possible) 
different files always ends on a (in general different) 
byte boundary, and this is true for arbitrary values of 
n, then Scott's 'bijective property' is fulfilled, 
otherwise not. His argument for the desirability of that 
property in a compression algorithm is the following: 
suppose the plaintext P of m bytes is first compressed 
by a compression algorithm to give a result H of n bytes 
(somehow arranged to achieve a byte boundary for the 
next encryption step) and then encrypted to n bytes by 
an encryption algorithm with a key K. The opponent who 
guesses a (wrong) key K' will get an H' of n bytes, with 
H' not identical to H. If the compressor does not have 
his 'bijective property', then H' will on decompression 
give a P' that in general does not end on a byte 
boundary, i.e. P' will not be of m' (whole) bytes in 
length, but will have some bits more (or fewer). That 
would, however, give an indication to the opponent that 
his guess K' is wrong.

I suppose there are two small points worth mentioning. 
First, quite trivially, the requirement of the 
'bijective property' presumes that one chooses to use 
compression at all; if one doesn't, the requirement of 
course never arises. Second, it is assumed that the 
compression algorithm doesn't have a secret 'parameter' 
(a 'key' or equivalent), so that the opponent is able 
to do correct decompression just like the legitimate 
receiver. If this assumption doesn't hold (e.g. in the 
case of an adaptive Huffman that is primed with a secret 
sequence before the proper compression begins), then 
the opponent cannot know that his guess K' is wrong in 
the manner described previously, since now the failure 
of the decompressed P' to end on a byte boundary 
(assuming a 'non-bijective' compressor is used) could 
well be the result of his not having employed the 
correct 'parameter' of the compression algorithm.
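
To make the byte-boundary point concrete, here is a toy sketch (my 
own illustration, not Scott's actual algorithm): a 3-symbol prefix 
code plays the role of the decompressor, and a bit string that 
leaves dangling bits plays the role of a decompressed file that 
fails to end on a byte boundary.

```python
# Toy decompressor: prefix code {0 -> a, 10 -> b, 11 -> c}.
CODE = {'0': 'a', '10': 'b', '11': 'c'}

def decode(bits):
    """Decode a bit string; return None if bits are left dangling
    (the analogue of not ending on a byte boundary)."""
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in CODE:
            out.append(CODE[buf])
            buf = ''
    return ''.join(out) if buf == '' else None

assert decode('0110') == 'aca'   # decodes cleanly
assert decode('01') is None      # dangling '1': this H' could never
                                 # have come from the compressor, so
                                 # a trial key yielding it is wrong
```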

M. K. Shen
===========================
http://home.t-online.de/home/mok-kong.shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: benefits of compression for security
Date: Mon, 04 Jun 2001 23:42:31 +0200



John Savard wrote:
> 
> "Tom St Denis" <[EMAIL PROTECTED]> wrote:
> 
> >So yes, I think it's possible if the dictionary is pre-built to increase the
> ># of possible messages per block as compared to plain ASCII plaintext.
> 
> You are quite correct; in the case of short messages, a pre-built
> dictionary in some sense is important.
> 
> One can combine a pre-built dictionary with an adaptive compression
> scheme, however.
> 
> For example, if one is compressing English text, one can use a
> multi-state Huffman compression scheme built around the
> characteristics of the English language. One begins in State 1, where
> symbols represent word lengths; in state 2, symbols represent letters.
> (Punctuation at the ends of words is coded in state 1 symbols; a third
> state is used to account for things like numbers.)
> 
> One can use a rule that the additional overhead in a State 1 symbol to
> indicate a dictionary word is not introduced until after two words of
> the same length and starting with the same letter appear in the text.
> (Or one can do it properly and wait until the second appearance of an
> identical word.) However, one starts building the dictionary from the
> beginning.

I find it difficult to understand your 'multi-state' 
scheme. Could you please illustrate it with a toy example? 

BTW, I think it may be useful to code all the words of 
a large lexicon with a fixed number of bits (say 20, 
including bits for grammatical information, e.g. 
plurals), since that could give quite a substantial 
compression ratio (better, I guess, than normal Huffman 
operating on characters). One can afterwards attempt to 
apply compression to the coded output, if that turns 
out to be useful. Of course, this dictionary encoding 
may also be made dependent on a 'key'.

M. K. Shen

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Reply-To: [EMAIL PROTECTED]
Date: Mon, 4 Jun 2001 21:44:29 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> Tom St Denis <[EMAIL PROTECTED]> wrote:
:> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
:> :> Tom St Denis <[EMAIL PROTECTED]> wrote:

:> :> : But an OTP is provably secure so your question is moot.
:> :>
:> :> Actually, if you use an OTP in the same way that you use CTR mode -
:> :> i.e. you chop off any cyphertext beyond the end of the message -
:> :> the cyphertext in an OTP system leaks length information about the
:> :> plaintext - and consequently fails to have Shannon's "perfect
:> :> secrecy" property. [...]

:> : I hate to burst your bubble but an OTP is provably
:> : secure so you might as well not argue this with me.
:>
:> It /doesn't/ have the perfect secrecy property if you use it in the way
:> that counter mode is commonly used when encrypting files. [...]

:> If used in such a way it leaks information about the length of the
:> plaintext.

: So what? [...]

So a 1-byte cyphertext can only represent 256 possible plaintexts.
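
The claim is easy to verify by brute force; a minimal sketch, 
assuming a plain XOR pad for the single keystream byte:

```python
c = 0x5A                                   # any observed 1-byte cyphertext
candidates = {c ^ k for k in range(256)}   # one plaintext per key byte

# Every byte value is reachable and nothing else is: the cyphertext
# narrows the message space to exactly 256 possibilities.
assert candidates == set(range(256))
```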

Do you think it is acceptable to have a 128 bit key and then
throw away 120 bits of entropy from it when dealing with
particular plaintexts?

What sort of "provable security" do you think that is?
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Def'n of bijection
Date: 4 Jun 2001 21:54:15 GMT

[EMAIL PROTECTED] (Mok-Kong Shen) wrote in <3B1BFCCF.5061A337@t-
online.de>:

>
>
>Phil Carmody wrote:
>> 
>> David (Scott) 's term "Bijection" is a mapping such that the domain
>> (compressed files) is the totality of the set of finite files
>> representable on the file system
>> or
>> Every representable file (in the file system) is the image of a (unique)
>> source file under the compression mapping.
>[snip]
>
>I am not very sure that the above sufficiently well help 
>people capture Scott's compression idea. Since the topic 
>has in the course of time led to quite a number of threads 
>in the group, each time with a few posts apparently based 
>in part on certain misunderstandings (in my humble view), 
>I venture to take the liberty to herewith offer an 
>explanation in some detail (in order to get corrections 
>from the readers, in case I am wrong.) 
>
>Scott's compression assumes that the compressed file be 
>on a byte boundary (a commonly reasonable requirement). 
>So any compressed file has n (whole) bytes, with the 
>original file having m (whole) bytes. For any fixed n, 
>there are 2^n different (possible) files of n byte 
>length. On decompression with a given compression 
>algorithm, these will in general lead to bit sequences 
>of different lengths. Let's denote the length of a 
>typical such sequence by g bits. If the condition 
>g = 0 mod 8 always holds, i.e. the decompression of all 
>(possible) 2^n different files always ends on (in 
>general different) byte boundary and this is true for 
>arbitrary values of n, then Scott's 'bijective property' 
>is fulfilled, otherwise not. His argument of the 
>desirability of that property for a compression algorithm
>is the following: Suppose the plaintext P of m bytes is 
>first compressed by a compression algorithm to give a 
>result H of n bytes (somehow done to achieve byte 
>boundary for the next encryption step) and then encrypted 
>to n bytes by an encryption algorithm with a key K. The 
>opponent, who guesses a (wrong) key K' will get a H' of 
>n bytes, but with H' not identical to H. If the compressor 
>does not have his 'bijective property', then H' will on 
>decompression give a P' that in general does not end 

   I agree up to here. But note that if the compressor is not
bijective, then many wrong keys K' will lead to an H' that either
does not decompress at all, or decompresses to a P' which, when
recompressed, does not give H' back. In either case you can reject
K' as a valid key. Bijective compression doesn't have this problem.
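
   The rejection test just described can be sketched as follows (my
own illustration: the XOR 'cipher' and the trailing-marker
'compressor' are deliberately trivial stand-ins for a real
non-bijective scheme):

```python
def xor(data, key):
    """Toy 'cipher': XOR every byte with a 1-byte key."""
    return bytes(b ^ key for b in data)

def compress(p):                 # non-bijective: output always ends 0x00
    return p + b'\x00'

def decompress(h):
    if not h.endswith(b'\x00'):
        raise ValueError("not a possible compressor output")
    return h[:-1]

def key_survives(ct, k):
    """Can the trial key k be rejected without knowing the plaintext?"""
    try:
        p = decompress(xor(ct, k))       # H' -> P'
    except ValueError:
        return False                     # H' doesn't decompress at all
    return compress(p) == xor(ct, k)     # round trip must give H' back

ct = xor(compress(b'hi'), 42)
assert key_survives(ct, 42)              # the right key passes
assert not key_survives(ct, 43)          # this wrong key is rejected
```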

>on byte boundary, i.e. P' will not be of m' (whole) byte 
>length, but having some bits more (equivalently less). 
>That would however give indication to the opponent that 
>his guess K' is wrong.
>
>I suppose that there are two tiny points that may be
>mentioned. First, which is very trivial, is that the 
>requirement of 'bijective property' presumes that one 
>chooses to use compression. If one doesn't use 
>compression, then of course the requirement doesn't

  Not really. Even the approved padding schemes as used
with DES are not bijective. I wrote several comments to the
AES people, but I suppose Wagner and company chose to have
them ignored. Most systems, even DES-based ones, use such
poor padding methods that a false key, when tested, leads to
a file that could not have been padded by the method used
for the system. I think these are left-over things so the
NSA can break systems more easily, and I guess Wagner and
the boys want to keep those weaknesses in place.

>arise from the outset. Second, it is assumed that the 
>compression algorithm doesn't have a secret 'parameter' 
>(a 'key' or equivalent), so that the opponent is able 
>to do correct decompression just like the legitimate
>receiver. If this assumption doesn't hold (e.g. in the
>case of an adaptive Huffman that is primed by a secret
>sequence before starting doing the proper compression
>operation), then the opponent cannot know that his
>guess K' is wrong in the manner described previously,
>since now the failure of the decompressed P' from 
>ending at byte boundary (assuming a 'non-bijective' 
>compressor is used) could well be the result of his 
>not having employed the correct 'parameter' of the 
>compression algorithm.

   Yes, I would recommend priming the compression with
a secret sequence for extra protection. But that again
is like using a second key. With the old adaptive
Huffman methods, many such key sequences could still be
rejected if the endings don't match. With bijective
Huffman, any sequence decodes smoothly, so no information
is given away to the attacker.


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Reply-To: [EMAIL PROTECTED]
Date: Mon, 4 Jun 2001 21:49:11 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:

: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> Tom St Denis <[EMAIL PROTECTED]> wrote:
:> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
: news:[EMAIL PROTECTED]...
:> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
:> :> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message

[BICOM vs Rijndael in CTR mode]

:> :> :> He explained it - you just didn't understand the explanation.
:> :>
:> :> : What explanation?  All he does is flame me.
:> :>
:> :> This sort of thing, repeated several times now:
:> :>
:> :> DS> And you never anwsered the FACT that a one byte ouput file
:> :> DS> from CTR mode (though you have no working program) would imediately
:> :> DS> lead an attacker to realize that the input file could only have
:> :> DS> come from 1 of 256 possible messages. With BICOM you have many
:> :> DS> many more messages. That alone makes it more secure.
:>
:> [snip]
:>
:> : His logic is flawed.  He states a feature of BICOM then assumes its a
:> : security bonus.
:>
:> Knowledge that a message comes from a set of billions of possible key
:> selected messages, rather than a set of 256 possible key selected messages
:> *is* a feature that has an immediate impact on security.
:>
:> If you can narrow the plaintext down to one of 256 possibilities, then
:> that is already a significant leak of information about the message
:> contents.

: OTP encrypted message.

: C=1101111010001

: What is P?

: (How long must this go on?)

I don't know:

Maybe until you realise that an OTP doesn't have perfect secrecy if it's
dealing with finite files, and converting them to cyphertexts of the same
length as the plaintexts?
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited Again (long)
Date: Mon, 04 Jun 2001 23:53:18 +0200



[EMAIL PROTECTED] wrote:
> 
>  [EMAIL PROTECTED](Mok-Kong Shen) wrote:
> 
> > > Yes - so that the biased bit patterns within those inverted
> > > bytes approximately negate the patterns in the unmodified
> > > ones. The intention is to force the attacker to try all
> > > 2**n permutations (n=block size in bits). Having said that,
> > > I think that a substitution cipher should be applied as well.
> >
> > Consider your inverted bytes (their number is half of
> > the total in the message). These bytes are transformed
> > by one and the same substitution isn't it? So you are
> > applying a monoalphabetical substitution to one half
> > of the number of bytes of the message, right? If
> > that is the case, wouldn't it be better to have a
> > polyalphabetical substitution table and have a PRNG
> > to generate a key sequence to determine which byte
> > of the message is to be substituted according to which
> > column of the substitution table?
> 
> The last sentence was a throw away comment as to further
> strengthening of the cipher.
> 
> The transposition of ASCII bytes and bit balancing were the main
> theme. After transposition, substitution is obviously not on the
> same ASCII bytes, but bytes that should look fairly random,
> without any bias in their bit values.
> 
> In a known plaintext attack, a 1-byte or 1-bit change can be used
> to analyse the transposition. Substitution causes a degree of
> diffusion and makes analysis more difficult.

I have some difficulty with the context of the above. 
Transposition and substitution are two basic operations 
in crypto. While the title of this thread contains 
'transposition', I don't see that your previous posts 
mention transposition, only 'byte inversion', which is a 
substitution. (Does your web page treat transposition? 
I don't see that either.)

Thanks.

M. K. Shen

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Reply-To: [EMAIL PROTECTED]
Date: Mon, 4 Jun 2001 21:54:32 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:

:> I read an rather eloquent defense of counter mode not very long ago:
:>   http://www.cs.berkeley.edu/~daw/papers/ctr-aes00.ps
:>
:> ``Comments to NIST Concerning AES-modes of Operations: CTR-mode Encryption''
:>   - Helger Lipmaa, Phillip Rogaway, and David Wagner
:>
:> It formalises the 1-byte message -> 1-byte cyphertext idea by saying:
:>
:> ``Messages of arbitrary bit-length. Unlike other common modes of
:>   operation, handling messages of arbitrary bit-length is made
:>   trivial. No bits are wasted in doing this---the ciphertext C is
:>   of the same length as the plaintext M''.
:>
:> It also claims security:
:>
:> ``In fact the standard cryptographic assumption about a block cypher's
:>   security [...] is enough to prove the security of CTR mode encryption''.
:>
:> I can only guess that they think that a message only having 256 possible
:> decrypts is considered acceptable :-(

: Why do you think that's a problem?  If you can't tell a byte from random
: it's provably secure.  I.e if the prob of the output is exactly 1/256 what
: advantage do you have?

This is provable *insecurity* - not provable security.

Ideally for security, the cyphertext should contain no information
that suggests any plaintext is more likely than any other (Shannon's
perfect secrecy).

Here we have a case where the cyphertext eliminates every possible
plaintext *except* for 256.  This is a *tiny* figure - and
may well represent a massive gain in information on the part
of the attacker upon observation of the cyphertext.
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
