Cryptography-Digest Digest #518, Volume #14       Mon, 4 Jun 01 19:13:00 EDT

Contents:
  Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler)
  Re: Def'n of bijection (Mok-Kong Shen)
  Re: Dynamic Transposition Revisited Again (long) ([EMAIL PROTECTED])
  Re: Best, Strongest Algorithm (gone from any reasonable topic) ("Tom St Denis")
  Re: Best, Strongest Algorithm (gone from any reasonable topic) ("Tom St Denis")
  Re: Quantum Computers with relation to factoring and BBS (Bill Unruh)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) ("Tom St Denis")
  Re: Best, Strongest Algorithm (gone from any reasonable topic) ("Tom St Denis")
  Re: Is this a weakness in RSA key generation? (Bill Unruh)
  Re: Def'n of bijection (SCOTT19U.ZIP_GUY)
  Re: Def'n of bijection ("Joseph Ashwood")
  Re: Dynamic Transposition Revisited Again (long) (Mok-Kong Shen)

----------------------------------------------------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Reply-To: [EMAIL PROTECTED]
Date: Mon, 4 Jun 2001 22:04:06 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> I, Tim Tyler <[EMAIL PROTECTED]> wrote:

:> : I read a rather eloquent defense of counter mode not very long ago:
:> :   http://www.cs.berkeley.edu/~daw/papers/ctr-aes00.ps
:>
:> : ``Comments to NIST Concerning AES-modes of Operations: CTR-mode
:> : Encryption''
:> :   - Helger Lipmaa, Phillip Rogaway, and David Wagner''
:>
:> They say something else which seems open to debate as well - though it
:> strengthens their overall case, rather than weakening it.
:>
:> Under a section entitled:
:>
:> "Perceived disadvantages of CTR mode" they say:
:>
:> ``Successive blocks ctr and ctr + 1 usually have small Hamming
:>   difference. This has lead to the concern that an attacker can
:>   obtain many plaintext pairs with a known small plaintext difference,
:>   which would facilitate the differential cryptanalysis. However, this
:>   concern is only valid if the underlying cipher is differentially
:>   weak. It is not the responsibility of a mode of operation to try to
:>   compensate (likely without success) for weaknesses in the underlying
:>   block cipher; this concern should be addressed when designing the block
:>   cipher.''
:>
:> Fair enough.  However, ISTM that here a case is being made for relying on
:> the underlying block cypher where it is not necessary to do so.
:>
:> A maximal period counter does not need to work by repeatedly adding 1 -
:> there are other types of counter that work just as well.
:>
:> LFSRs are one candidate - probably a far superior candidate if you know
:> you're going to be in hardware.
:>
:> Also, all the relevant properties of a "+1" arithmetic counter appear to
:> be present in a "+n" counter where n is relatively prime to the size of
:> the counter.
:>
:> A "+n" counter with a reasonable spread of set bits in n avoids the low
:> Hamming distance issue.
:>
:> Since you can use a counter which avoids the possible stigmata of all the
:> high bits remaining fixed 99% of the time - and this has practically no
:> cost - I see no good reason to shy away from doing so.

: Because if the cipher is any good a change in any bit should have drastic
: confusing consequences.

"Should" - but of course nobody knows whether it /actually/ does.

This concern has been seriously raised.  Again (for example) Terry
Ritter has expressed reservations about counter mode *because* of the
powerful driving signal that the cryptographic primitive is being fed,
that might allow an attacker to use differential analysis on it.

We all know that *if* the cypher is behaving ideally, it will make no
difference - but if there's some attack on the cypher, it might.

It seems to me that since the attack is not known to be impossible, while
a plausible-looking defense against it appears to be incredibly cheap,
there's no good reason not to apply the defense.
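The "+n" counter idea above can be sketched in a few lines - a minimal illustration, assuming a 64-bit counter word; `STEP` is an arbitrary odd constant chosen here for illustration, not anything standardised. Any odd step is coprime to 2^64, so the full period is preserved:

```python
# Sketch of a "+n" counter for CTR mode. Any odd step is coprime to 2^64,
# so the counter still visits all 2^64 values before repeating, while
# successive values differ in many bit positions, not just the low bits.
MASK = (1 << 64) - 1
STEP = 0x9E3779B97F4A7C15  # arbitrary odd constant with well-spread bits

def counter_sequence(start, count, step=STEP):
    """Yield `count` successive counter values, stepping by `step` mod 2^64."""
    ctr = start & MASK
    for _ in range(count):
        yield ctr
        ctr = (ctr + step) & MASK
```

Each value would then be fed to the block cypher exactly as in ordinary CTR mode; only the increment rule changes.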
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Def'n of bijection
Date: Tue, 05 Jun 2001 00:17:30 +0200



"SCOTT19U.ZIP_GUY" wrote:
> 
> [EMAIL PROTECTED] (Mok-Kong Shen) wrote:

> >Scott's compression assumes that the compressed file be
> >on a byte boundary (a commonly reasonable requirement).
> >So any compressed file has n (whole) bytes, with the
> >original file having m (whole) bytes. For any fixed n,
> >there are 2^n different (possible) files of n byte
> >length. On decompression with a given compression
> >algorithm, these will in general lead to bit sequences
> >of different lengths. Let's denote the length of a
> >typical such sequence by g bits. If the condition
> >g = 0 mod 8 always holds, i.e. the decompression of all
> >(possible) 2^n different files always ends on (in
> >general different) byte boundary and this is true for
> >arbitrary values of n, then Scott's 'bijective property'
> >is fulfilled, otherwise not. His argument of the
> >desirability of that property for a compression algorithm
> >is the following: Suppose the plaintext P of m bytes is
> >first compressed by a compression algorithm to give a
> >result H of n bytes (somehow done to achieve byte
> >boundary for the next encryption step) and then encrypted
> >to n bytes by an encryption algorithm with a key K. The
> >opponent, who guesses a (wrong) key K' will get a H' of
> >n bytes, but with H' not identical to H. If the compressor
> >does not have his 'bijective property', then H' will on
> >decompression give a P' that in general does not end
> 
>    I agree to here. But if the compressor is not bijective,
> then many wrong keys K' will lead to an H' which either
> does not decompress at all or, if it does decompress to P',
> then compressing P' does not give H' back; in either case
> you can reject K' as a valid key. Bijective compression
> doesn't have this problem.

If the decompression doesn't get stuck and it leads to a bit
sequence P' (though not on a byte boundary), then since each
step in the algorithm is invertible, compression of
P' should give back H'. What is wrong with this, could you
please tell?
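For what it's worth, the round-trip criterion being argued about can be stated as a small test - a toy sketch in which a byte-wise rotation stands in for a real (bijective) compressor; the function names are illustrative only:

```python
def compress(data: bytes) -> bytes:
    """Toy stand-in for a bijective transform: rotate each byte left by 3.
    (A real compressor would go here; rotation just makes the test run.)"""
    return bytes(((b << 3) | (b >> 5)) & 0xFF for b in data)

def decompress(data: bytes) -> bytes:
    """Inverse transform: rotate each byte right by 3."""
    return bytes(((b >> 3) | (b << 5)) & 0xFF for b in data)

def is_round_trip_safe(samples) -> bool:
    # Both compositions must be the identity on every sample. If
    # compress(decompress(h)) != h for some h, then an H' produced by a
    # wrong key K' can be recognised and K' rejected.
    return all(decompress(compress(x)) == x and
               compress(decompress(x)) == x for x in samples)
```

A non-bijective compressor fails the second composition for some inputs, which is exactly the distinguisher being discussed.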

M. K. Shen

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Dynamic Transposition Revisited Again (long)
Date: 4 Jun 2001 22:28:15 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(Mok-Kong Shen) wrote:

> > The last sentence was a throw away comment as to further
> > strengthening of the cipher.
> > 
> > The transposition of ASCII bytes and bit balancing were the main
> > theme. After transposition, substitution is obviously not on the
> > same ASCII bytes, but bytes that should look fairly random,
> > without any bias in their bit values.
> > 
> > In a known plaintext attack, a 1-byte or 1-bit change can be used
> > to analyse the transposition. Substitution causes a degree of
> > diffusion and makes analysis more difficult.
> 
> I have some context problem with the above. Transposition 
> and substitution are two basic operations in crypto. 
> While the title of this thread contains 'transposition', 
> I don't see that your previous posts mention transposition, 
> only 'byte inversion', which is a substitution. (Does your 
> web page treat transposition? I don't see that either.)

The original comment was made in the context of Terry Ritter's 
need to do 'bit balancing' before transposition at the bit level. 
Not only does it balance the overall number of bits in the block, 
but it also improves the distributions within the average byte.
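The balancing idea can be illustrated with a toy sketch - an assumption-laden simplification (pad with whichever bit value is scarcer until the counts match), not Ritter's actual per-block scheme:

```python
def bit_balance(bits):
    """Pad a list of bits with whichever value is scarcer until the counts
    of 0s and 1s are equal. Toy version only: Ritter's scheme interleaves
    balancing bits per fixed-size block rather than appending at the end."""
    ones = sum(bits)
    zeros = len(bits) - ones
    pad_bit = 0 if ones > zeros else 1
    return bits + [pad_bit] * abs(ones - zeros)
```

After balancing, a bit-level transposition of the block cannot be distinguished by bit-count statistics, which is the point of the step.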

Yes, the revised codebook suggests firstly substitution at the placode 
(16-bit codegroup) level followed by transposition at the nybble (4-bit) 
level. In both cases this is done treating the whole message as a single 
block.

Keith.

------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: Mon, 04 Jun 2001 22:31:32 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
> :> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
> :> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
>
> :> :> : But an OTP is provably secure so your question is moot.
> :> :>
> :> :> Actually, if you use an OTP in the same way that you use CTR mode -
> :> :> i.e. you chop off any cyphertext beyond the end of the message -
> :> :> the cyphertext in an OTP system leaks length information about the
> :> :> plaintext - and consequently fails to have Shannon's "perfect
> :> :> secrecy" property. [...]
>
> :> : I hate to burst your bubble but an OTP is provably
> :> : secure so you might as well not argue this with me.
> :>
> :> It /doesn't/ have the perfect secrecy property if you use it in the way
> :> that counter mode is commonly used when encrypting files. [...]
>
> :> If used in such a way it leaks information about the length of the
> :> plaintext.
>
> : So what? [...]
>
> So a 1-byte cyphertext can only represent 256 possible plaintexts.
>
> Do you think it is acceptable to have a 128 bit key and then
> throw away 120 bits of entropy from it when dealing with
> particular plaintexts?
>
> What sort of "provable security" do you think that is?

This is meaningless and you should know it.  First off, there are 256! ≈
2^1684 ways to permute a byte, so technically you are not throwing away
entropy.  Even if it *were* a 1-byte block cipher, you're not reducing the
key space.

Second, in CTR mode the counter is not fixed in size.  You still encode
full blocks; you just don't have to use all of the output.  For example,
with a 64-bit block cipher you can encode 96 bits by encrypting two
different counters and using 64 bits of the first block and 32 bits of the
second.
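A sketch of what this looks like in code - hedged: `fake_block_cipher` below is a made-up keyed placeholder for illustration only, not a real primitive:

```python
import hashlib

BLOCK_BYTES = 8  # 64-bit blocks, as in the example above

def fake_block_cipher(key: bytes, block: bytes) -> bytes:
    # Hypothetical placeholder: any keyed pseudorandom function will do to
    # illustrate the mode. Not a real block cipher; not for real use.
    return hashlib.sha256(key + block).digest()[:BLOCK_BYTES]

def ctr_keystream(key: bytes, nbytes: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += fake_block_cipher(key, ctr.to_bytes(BLOCK_BYTES, "big"))
        ctr += 1
    return out[:nbytes]  # unused keystream from the last block is discarded

def ctr_encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR with the keystream; decryption is the same operation.
    ks = ctr_keystream(key, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))
```

A 12-byte (96-bit) message consumes all 8 bytes of the first block and 4 of the second, and the ciphertext is exactly as long as the plaintext.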

Tom



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: Mon, 04 Jun 2001 22:34:12 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
>
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
> :> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> :> :> Tom St Denis <[EMAIL PROTECTED]> wrote:
> :> :> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
>
> [BICOM vs Rijndael in CTR mode]
>
> :> :> :> He explained it - you just didn't understand the explanation.
> :> :>
> :> :> : What explanation?  All he does is flame me.
> :> :>
> :> :> This sort of thing, repeated several times now:
> :> :>
> :> :> DS> And you never answered the FACT that a one-byte output file
> :> :> DS> from CTR mode (though you have no working program) would immediately
> :> :> DS> lead an attacker to realize that the input file could only have
> :> :> DS> come from 1 of 256 possible messages. With BICOM you have many
> :> :> DS> many more messages. That alone makes it more secure.
> :>
> :> [snip]
> :>
> :> : His logic is flawed.  He states a feature of BICOM then assumes its a
> :> : security bonus.
> :>
> :> Knowledge that a message comes from a set of billions of possible key
> :> selected messages, rather than a set of 256 possible key selected messages
> :> *is* a feature that has an immediate impact on security.
> :>
> :> If you can narrow the plaintext down to one of 256 possibilities, then
> :> that is already a significant leak of information about the message
> :> contents.
>
> : OTP encrypted message.
>
> : C=1101111010001
>
> : What is P?
>
> : (How long must this go on?)
>
> I don't know:
>
> Maybe until you realise that an OTP doesn't have perfect secrecy if it's
> dealing with finite files, and converting them to cyphertexts of the same
> length as the plaintexts?

SO WHAT?

What is your problem?

All OTP's are named OTP, ah a flaw!

That's about as valid as your logic.  Tell me what P was in the system above
and we can talk.  Otherwise you're just being a crank.

Tom



------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: Quantum Computers with relation to factoring and BBS
Date: 4 Jun 2001 22:34:42 GMT

In <9f0g0e$o10$[EMAIL PROTECTED]> "Scott Fluhrer" <[EMAIL PROTECTED]> 
writes:
]> simply returns its input would be a "factoring algorithm".
]I did say I elided a lot of the details: here's a few more details: NP is a
]set of decision problems (that is, problems that give a "Yes" or "No"
]answer), and so to turn factorization into a decision problem, one method is
]to make the problem "given integers n and m, does n have any factors p such
]that 1<p<m?".  Now, we can see that this decision problem is obviously in
]NP, because a "Yes" answer can be proved by demonstrating such a p, which
]can be quickly (i.e. in polynomially time) be verified that is is a factor
]of n, and that 1<p<m.  Note that p needed be verifed to be prime, and hence
]the question of whether primality testing is NP or not is irrelevant.

p does not have to be verified to be prime. A factoring algorithm
simply needs to provide any factor, prime or composite, less than the
number itself. Then, since the same algorithm can be reapplied, one can
eventually discover all the prime factors (that just takes polynomial
extra time). (Of course, it is possible that your "p needed be" was
supposed to read "p need not be", in which case I am in perfect agreement
with you.)


]What primality testing gives you is a way of showing that the factorization
]problem is in coNP; that is, if there is no such p, then there's a short
]proof of that as well, by giving the complete factorization (with primality
]proofs for all of the factors).

As above, if you have a factoring algorithm you also have a primality
algorithm: m is prime if the test for factors smaller than itself
returns no factors.
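To make the reapplication argument concrete - a toy sketch in which trial division stands in for the factor oracle (any factoring algorithm could be substituted; the function names are illustrative):

```python
def some_factor(n):
    """Stand-in factor oracle: return a nontrivial factor of n, or None
    if n is prime. (Trial division here; any factoring algorithm works.)"""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

def factorize(n):
    """Reapply the oracle to recover the full prime factorization."""
    f = some_factor(n)
    if f is None:
        return [n]  # the oracle found no factor, so n is prime
    return factorize(f) + factorize(n // f)

def is_prime(n):
    # Primality comes for free: n is prime iff the oracle finds nothing.
    return n > 1 and some_factor(n) is None
```

The recursion depth is bounded by the number of prime factors, hence only polynomially many extra oracle calls are needed, as stated above.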




------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: Mon, 04 Jun 2001 22:35:55 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> :> I, Tim Tyler <[EMAIL PROTECTED]> wrote:
>
> :> : I read a rather eloquent defense of counter mode not very long ago:
> :> :   http://www.cs.berkeley.edu/~daw/papers/ctr-aes00.ps
> :>
> :> : ``Comments to NIST Concerning AES-modes of Operations: CTR-mode
> :> : Encryption''
> :> :   - Helger Lipmaa, Phillip Rogaway, and David Wagner''
> :>
> :> They say something else which seems open to debate as well - though it
> :> strengthens their overall case, rather than weakening it.
> :>
> :> Under a section entitled:
> :>
> :> "Perceived disadvantages of CTR mode" they say:
> :>
> :> ``Successive blocks ctr and ctr + 1 usually have small Hamming
> :>   difference. This has lead to the concern that an attacker can
> :>   obtain many plaintext pairs with a known small plaintext difference,
> :>   which would facilitate the differential cryptanalysis. However, this
> :>   concern is only valid if the underlying cipher is differentially
> :>   weak. It is not the responsibility of a mode of operation to try to
> :>   compensate (likely without success) for weaknesses in the underlying
> :>   block cipher; this concern should be addressed when designing the
block
> :>   cipher.''
> :>
> :> Fair enough.  However, ISTM that here a case is being made for relying on
> :> the underlying block cypher where it is not necessary to do so.
> :>
> :> A maximal period counter does not need to work by repeatedly adding 1 -
> :> there are other types of counter that work just as well.
> :>
> :> LFSRs are one candidate - probably a far superior candidate if you know
> :> you're going to be in hardware.
> :>
> :> Also, all the relevant properties of a "+1" arithmetic counter appear to
> :> be present in a "+n" counter where n is relatively prime to the size of
> :> the counter.
> :>
> :> A "+n" counter with a reasonable spread of set bits in n avoids the low
> :> Hamming distance issue.
> :>
> :> Since you can use a counter which avoids the possible stigmata of all the
> :> high bits remaining fixed 99% of the time - and this has practically no
> :> cost - I see no good reason to shy away from doing so.
>
> : Because if the cipher is any good a change in any bit should have drastic
> : confusing consequences.
>
> "Should" - but of course nobody knows whether it /actually/ does.
>
> This concern has been seriously raised.  Again (for example) Terry
> Ritter has expressed reservations about counter mode *because* of the
> powerful driving signal that the cryptographic primitive is being fed,
> that might allow an attacker to use differential analysis on it.
>
> We all know that *if* the cypher is behaving ideally, it will make no
> difference - but if there's some attack on the cypher, it might.
>
> It seems to me that since the attack is not known to be impossible, while
> a plausible-looking defense against it appears to be incredibly cheap,
> there's no good reason not to apply the defense.

From this standpoint it stands to reason that using an LFSR to clock it may
be just as bad.  You haven't given any evidence either way.  Note that
many good ciphers are analyzed with respect to hash modes and are still
considered OK.  I think that in any mode where you can control the key and
plaintext and still not win, just incrementing by 1 will not hurt.

Tom



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic)
Date: Mon, 04 Jun 2001 22:37:31 GMT


"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> : "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> :> SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote:
>
> :> I read a rather eloquent defense of counter mode not very long ago:
> :>   http://www.cs.berkeley.edu/~daw/papers/ctr-aes00.ps
> :>
> :> ``Comments to NIST Concerning AES-modes of Operations: CTR-mode Encryption''
> :>   - Helger Lipmaa, Phillip Rogaway, and David Wagner''
> :>
> :> It formalises the 1-byte message -> 1-byte cyphertext idea by saying:
> :>
> :> ``Messages of arbitrary bit-length. Unlike other common modes of
> :>   operation, handling messages of arbitrary bit-length is made
> :>   trivial. No bits are wasted in doing this---the ciphertext C is
> :>   of the same length as the plaintext M''.
> :>
> :> It also claims security:
> :>
> :> ``In fact the standard cryptographic assumption about a block cypher's
> :>   security [...] is enough to prove the security of CTR mode encryption''.
> :>
> :> I can only guess that they think that a message only having 256 possible
> :> decrypts is considered acceptable :-(
>
> : Why do you think that's a problem?  If you can't tell a byte from random
> : it's provably secure.  I.e if the prob of the output is exactly 1/256 what
> : advantage do you have?
>
> This is provable *insecurity* - not provable security.
>
> Ideally for security, the cyphertext should contain no information
> that suggests any plaintext is more likely than any other (Shannon's
> perfect secrecy).

Yes, I agree.  If you have 256 states, that's a probability of 1/256 each.
Think about it: the probabilities must add up to 1, so 1/256 over 256
symbols is the lowest possible bound.  (Do the math.)

> Here we a have a case where the cyphertext eliminates every possible
> plaintext *except* for 256.  This is a *tiny* figure - and
> may well represent a massive gain in information on the part
> of the attacker upon observation of the cyphertext.

Not really.  Given an 8-bit message, if you can't tell it from any other
8-bit message with probability higher than 1/256, then you have no advantage.
(Note that if this bound is met, it's equivalent to an OTP.)

Tom



------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: Is this a weakness in RSA key generation?
Date: 4 Jun 2001 22:38:29 GMT

In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (Mark 
Borgerding) writes:

]I found that pgp 2.6.2 may sometimes generate a private exponent d
]that does not entirely match the RSA spec (as I know it)

]An RSA private exponent d 
]1) d*e = 1 , mod (p-1)*(q-1)

]which implies 
]2) d*e = 1 , mod (p-1)
]3) d*e = 1 , mod (q-1)


]pgp seems to occasionally generate a key that satisfies 2&3, but not
]1.
]I know that stmt #1 implies 2&3, but the reverse is not true.

]My question is: is this something to worry about?  What effect would

Yes. It will not work. You will not be able to decrypt anything.


------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Def'n of bijection
Date: 4 Jun 2001 22:28:11 GMT

[EMAIL PROTECTED] (Mok-Kong Shen) wrote in <3B1C08FA.96CBA521@t-
online.de>:

>
>
>"SCOTT19U.ZIP_GUY" wrote:
>> 
>> [EMAIL PROTECTED] (Mok-Kong Shen) wrote:
>
>> >Scott's compression assumes that the compressed file be
>> >on a byte boundary (a commonly reasonable requirement).
>> >So any compressed file has n (whole) bytes, with the
>> >original file having m (whole) bytes. For any fixed n,
>> >there are 2^n different (possible) files of n byte
>> >length. On decompression with a given compression
>> >algorithm, these will in general lead to bit sequences
>> >of different lengths. Let's denote the length of a
>> >typical such sequence by g bits. If the condition
>> >g = 0 mod 8 always holds, i.e. the decompression of all
>> >(possible) 2^n different files always ends on (in
>> >general different) byte boundary and this is true for
>> >arbitrary values of n, then Scott's 'bijective property'
>> >is fulfilled, otherwise not. His argument of the
>> >desirability of that property for a compression algorithm
>> >is the following: Suppose the plaintext P of m bytes is
>> >first compressed by a compression algorithm to give a
>> >result H of n bytes (somehow done to achieve byte
>> >boundary for the next encryption step) and then encrypted
>> >to n bytes by an encryption algorithm with a key K. The
>> >opponent, who guesses a (wrong) key K' will get a H' of
>> >n bytes, but with H' not identical to H. If the compressor
>> >does not have his 'bijective property', then H' will on
>> >decompression give a P' that in general does not end
>> 
>>    I agree to here. But if the compressor is not bijective,
>> then many wrong keys K' will lead to an H' which either
>> does not decompress at all or, if it does decompress to P',
>> then compressing P' does not give H' back; in either case
>> you can reject K' as a valid key. Bijective compression
>> doesn't have this problem.
>
>If the decompression doesn't get stuck and it leads to a bit
>sequence P' (though not on a byte boundary), then since each
>step in the algorithm is invertible, compression of
>P' should give back H'. What is wrong with this, could you
>please tell?
>

 Well, most decompressors would go to byte boundaries.
However, if you have a compressor/decompressor designed
to use plaintext files that are really bit strings, that
would be OK. But if the compressor is designed to only use
byte files, then a decompressor that goes to bit strings
would not be a good match.
 

David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Def'n of bijection
Date: Mon, 4 Jun 2001 15:01:29 -0700

To avoid the fast degradation of conversation that is occurring in the
subthreads, I am beginning my own.

There are several variations on bijectivity that are used around here. The
most fundamental is the bijective term as it applies in generic computer
science; simply stated this is:
Given Set A
Given Set B
For every element a in A f(a) = b where b is an element in B
For every element b in B f'(b) = a where a is an element in A
For every element a in A f'(f(a)) = a
For every element b in B f(f'(b)) = b
There is no requirement that A = B, only that |A| = |B| (where |x| is the
cardinality of set x)
An example was given before of
f()
A = 1
B = 2
C = 3
f'()
1 = A
2 = B
3 = C
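The four conditions above can be checked mechanically - a small sketch using the three-element example mapping from this post; `is_bijection` is an illustrative helper, not established terminology:

```python
def is_bijection(f: dict, A: set, B: set) -> bool:
    """True iff f maps A onto B one-to-one (the four conditions above)."""
    image = [f[a] for a in A]
    return (set(f) == A                         # f is defined on exactly A
            and set(image) == B                 # onto: every b in B is hit
            and len(set(image)) == len(image))  # one-to-one: no collisions

f = {"A": 1, "B": 2, "C": 3}
f_inv = {v: k for k, v in f.items()}  # f' exists precisely because f is bijective
```

Note that A and B need not be equal here, only of the same cardinality, matching the generic computer-science definition rather than the B- or C-variants below.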

Other common variations here include adding certain constraints; for example,
we very commonly add the constraint that the value k must be known to perform
the mapping, otherwise an undefined (simply uncared-about, but with very high
likelihood different from the original) value is generated; call this
A-bijectivity for now. This is generic encryption; I am unaware of any
functions offhand that are of this type that are not also C-bijective. The
size of this class varies depending on the required difficulty of calculating
the inverse.

Scott often places different restrictions, restrictions which make sense in
certain contexts but (like all of these) are inappropriate for some
situations. I will call this B-bijectivity for now. B-bijectivity requires
that Set A = Set B, which immediately implies |A| = |B|. Commonly Scott fixes
Set A to be the set of all n-OCTET length files.

A third type, call it C-bijectivity is a combination of A and B-bijectivity.
This is the common cryptographic bijectivity, Set A = Set B = blocks of some
length. Rijndael is an example.

A fourth type, call it D-bijectivity, is a rather rare instance, and an
exception to some of the assumed (but not stated) rules above. The
calculation of the function in one direction takes 3 inputs, typically Key,
Data, R-value, while the other direction only requires Key and Data. These
are partially bijective because the R-values by definition are not likely to
repeat. Examples of this class include PSS and OAEP. It is probable that,
for the sake of sanity, this class of function should not be considered
bijective at all.

If you can keep straight which subclass of bijectivity is being called simply
bijective, it tends to make things a bit easier. I'm sure I've missed some
equally valid classes of functions that have been referred to as bijective
on this group; please feel free to add or comment.
                        Joe

"Tom St Denis" <[EMAIL PROTECTED]> wrote in message
news:AIuS6.17512$[EMAIL PROTECTED]...
> http://www.dictionary.com/cgi-bin/dict.pl?term=bijection
>
> Aha.  One-to-one and onto.
>
> That means it's invertible and closed right (i.e from set A to set B, A=B)?
> --
> Tom St Denis
> ---
> http://tomstdenis.home.dhs.org
>
>



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited Again (long)
Date: Tue, 05 Jun 2001 00:48:25 +0200



[EMAIL PROTECTED] wrote:
> 

> The original comment was made in the context of Terry Ritter's
> need to do 'bit balancing' before transposition at the bit level.
> Not only does it balance the overall number of bits in the block,
> but it also improves the distributions within the average byte.
> 
> Yes, the revised codebook suggests firstly substitution at the placode
> (16-bit codegroup) level followed by transposition at the nybble (4-bit)
> level. In both cases this is done treating the whole message as a single
> block.

Terry Ritter's dynamic transposition does balancing
and transposition but no substitution. An additional
substitution would certainly help. My point was that
it would be preferable to use a general polyalphabetic
substitution. Whole file processing is in such cases
certainly better than working on a number of separate
blocks.

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
