Cryptography-Digest Digest #561, Volume #13      Fri, 26 Jan 01 17:13:01 EST

Contents:
  Re: Dynamic Transposition Revisited (long) (AllanW)
  Re: Dynamic Transposition Revisited (long) ("Tony T. Warnock")
  Re: Dynamic Transposition Revisited (long) (Terry Ritter)
  Re: Dynamic Transposition Revisited (long) (Terry Ritter)
  Re: Some Enigma Questions (JimD)
  Re: Echelon in Asia. (JimD)
  Re: Dynamic Transposition Revisited (long) (John Savard)
  Re: Fitting Dynamic Transposition into a Binary World (Terry Ritter)

----------------------------------------------------------------------------

From: AllanW <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 20:49:45 GMT


> AllanW <[EMAIL PROTECTED]> wrote:
> >As I understood the filler bits to work, the data would
> >look like this before the transposition begins: if the
> >data had more 0-bits than 1-bits, it would take the form
> >    XX....XX 00..00 11..11
> >where X bits represent the data, and then there are 1-8
> >0-bits followed by 1 or more 1-bits.

I see now that I should have said, 1-7 0-bits and then
1 or more 1-bits.

> >If the data has
> >more 1-bits than 0-bits, simply reverse the filler:
> >    XX....XX 11..11 00..00
> >this time using 1-8 1-bits and then 1 or more 0-bits.

And here I should have said: 1-7 1-bits and then 1 or
more 0-bits.


> >[EMAIL PROTECTED] (Terry Ritter) wrote:
> That does not seem to be the way I did it.

That's what I got out of it. Not word-for-word, of course.

> I don't understand having both 0's and 1's balancing bytes.

Surely you do...? It is so that we can always remove the balancing
bytes without removing any meaningful data.

What if the block has 16 more 0-bits than 1-bits, but the last byte
of plaintext is 0x0F? You could balance the block by adding 2 more
bytes of 0xFF, but then after decryption we could not identify the
first byte of filler (as you say below: the mixed byte must be
mixed to be a flag).

> If we have an excess of 0's, we want any full balancing bytes to
> be all 1's, with the mixed byte being what it needs to be.  Since
> the mixed byte must be mixed to be a flag, it cannot balance a
> full 8 bits, but at most 6 (1000 0000 is an excess of 6 0's).

Yes, exactly. And then following this, we have bytes with all
1-bits.

I suppose that what I wrote above (1-7 0-bits, followed by 1 or more
1-bits) was stronger than absolutely needed. The mixed byte could be
randomly mixed as well, so instead of using 1000-0000 we could just
as easily have used 0001-0000. Is there any reason to do this
randomly? If not, then my description fits the same pattern but is
easier to describe.

> >Another way to say this is to show what the filler
> >would look like for various imbalances. Here I'll
> >assume that the data has more 0-bits than 1-bits; use
> >the complement if the opposite is true.
> >
> >  If there are B more 0-bits than 1-bits, then the
> >  filler looks like (in hex):
> >
> >  B=0   0F
> >  B=2   1F
> >  B=4   3F
> >  B=6   7F
> >  B=8   0FFF
> >  B=10. 1FFF
> >  B=12. 3FFF
> >  B=14. 7FFF
> >  B=16. 0FFFFF
> >  B=18. 1FFFFF
> >  B=20. 3FFFFF
> >  B=22. 7FFFFF
> >  B=24. 0FFFFFFF
> >    ...and so on.
>
> Looks right.

Good, because in every case the first balance byte starts
with 1-4 0-bits followed by 1-7 1-bits.
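For concreteness, the filler pattern in that table can be computed
directly. Here is a minimal Python sketch (the function name and
framing are mine, not part of the proposed scheme):

```python
def filler_for_imbalance(b):
    """Balancing filler for a block with b more 0-bits than 1-bits.

    The first (mixed) byte is 0x0F, 0x1F, 0x3F or 0x7F depending on
    b mod 8, followed by one 0xFF byte per full 8 bits of imbalance.
    Use the complement if 1-bits are in excess.
    """
    assert b >= 0 and b % 2 == 0       # imbalance of an even-sized block is even
    first = (1 << (4 + (b % 8) // 2)) - 1
    return bytes([first] + [0xFF] * (b // 8))

# filler_for_imbalance(6)  -> b'\x7f'        (hex 7F, as in the table)
# filler_for_imbalance(16) -> b'\x0f\xff\xff' (hex 0FFFFF)
```

Each filler carries exactly b more 1-bits than 0-bits, which is what
restores the balance of the block.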

--
[EMAIL PROTECTED] is a "Spam Magnet," never read.
Please reply in newsgroups only, sorry.


Sent via Deja.com
http://www.deja.com/

------------------------------

From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 14:08:19 -0700
Reply-To: [EMAIL PROTECTED]

The efficiency of the balanced pre-substitutions can be improved by
taking more bits. Of course a table-lookup gets too big rather quickly.

Output    Input
 Size     Length    Expansion
    6         4        50.0%
    8         6        33.3%
   10         7        42.9%
   12         9        33.3%
   14        11        27.3%
   16        13        23.1%
   18        15        20.0%
   20        17        17.6%
   22        19        15.8%
   24        21        14.3%
   26        23        13.0%
   28        25        12.0%
   30        27        11.1%
   32        29        10.3%
   64        60         6.7%
  128       124         3.2%
  256       251         2.0%
  512       507         1.0%
 1024      1018         0.6%

The 17-bit input is probably the largest that could be done by table
lookup.  I haven't tried to see whether there is a function to compute
the mappings directly (which would make things more feasible).

If one could live with a 1-bit bias, the odd-sized outputs could be
used; they are more efficient.  A 15-bit input block then maps to a
17-bit output block.  Or one could use both the evenly balanced and
the 1-off blocks to improve efficiency: 15-bit input blocks then map
to 16-bit output blocks with 7, 8, or 9 one-bits.
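The table rows follow directly from counting balanced words: an n-bit
output has C(n, n/2) bit-balanced patterns, so it can carry
floor(log2 C(n, n/2)) whole input bits. A quick Python check (the
function name is mine):

```python
from math import comb, floor, log2

def balanced_code_stats(out_bits):
    """Input length and expansion for a balanced code of out_bits."""
    words = comb(out_bits, out_bits // 2)   # bit-balanced patterns
    in_bits = floor(log2(words))            # whole input bits carried
    return in_bits, (out_bits - in_bits) / in_bits

# e.g. balanced_code_stats(16) -> (13, 0.23...), matching the table row
```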






------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 21:48:40 GMT


On Fri, 26 Jan 2001 20:43:15 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>On Fri, 26 Jan 2001 20:22:00 GMT, AllanW <[EMAIL PROTECTED]> wrote,
>in part:
>
>>To take an extremly small example: a 16-bit block could contain
>>any of 1170 different values, starting with 0000000011111111
>>and ending up with 1111111100000000. Any bit-shuffle would also
>>produce one of these 1170 values. Is one of the 1170 possible
>>values impossible to reach by shuffling?
>
>No. And not one of the 2^64 possible outputs is impossible to reach by
>DES, either.
>
>But Terry Ritter appears to be criticizing DES because some of the
>(2^64)! possible overall substitutions are impossible to reach.

Some?  SOME???!!

(From my factorials page:

   http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM

).

(The number of different values in 64 bits is 2**64.)
2**64 ~= 18446744073709552000 = A
A! is a value about 1.15397859e+21 bits long.
The DES keyspace is 56 bits, so DES can select at most 2**56 of those
permutations.

Let's see:  a keyspace some 10**21 bits long, compared to a keyspace
56 bits long.  Yeah, I'd say "some" were impossible to reach.  

The DES keyspace is such a tiny fragment of the potential keyspace
that there is no assurance at all that it retains the smooth
theoretical properties of distribution for which we might hope.  


>Well, by shuffling, I can't reach a total overall pairing of bit
>balanced inputs to bit balanced outputs such that
>
>0000000011111111 -> 1111111100000000
>
>and
>
>0000001010111111 -> 0000000011111111

Why not?  What does that mean?  Are you criticizing the balancing
algorithm?  Fine, use another.

With respect to the shuffling, every permutation is possible.  The
expected defect is statistical only, in the sense that there is no
guarantee that all permutations have the same probability.  But since
the number of permutations is so vast, we could never hope to
traverse enough of them to establish such a bias, or, presumably, to
use it.

Of course, all of that would assume that we could tell what each
permutation was.  With known-plaintext, we *know* a particular entry
in a substitution table.  But known-plaintext does *not* expose a
particular permutation with Dynamic Transposition.  


>so it's the 1170! possible overall correspondences of input to output
>that can't all be reached.

Here we have 16 things (bits), and so can have 16! (about 2**44)
permuted results.  Each result is not unique, though, since any
arrangement of 0's (8!) and any arrangement of 1's (8!) produces the
same result.  8! * 8! is about 2**30, so there should be about 2**14
unique result values, each of which is produced by 2**30 different
permutations.  So instead of getting every possible 16 bit value, we
can see that balancing has reduced the number of different values to
about 14-bits worth.

What is the 1170! value?
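The counting above is easy to verify directly (a quick Python sketch,
names mine); the exact number of distinct balanced 16-bit values is
16!/(8!*8!) = C(16,8) = 12870:

```python
from math import comb, factorial, log2

perms = factorial(16)                # 16! ~ 2**44.25 bit permutations
same = factorial(8) * factorial(8)   # ~ 2**30.6 give identical results
distinct = perms // same             # distinct balanced 16-bit values

# distinct == comb(16, 8) == 12870, about 2**13.65 (14 bits' worth)
```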

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 21:52:58 GMT


On Fri, 26 Jan 2001 13:56:11 +0100, in
<[EMAIL PROTECTED]>, in sci.crypt Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:

>Mok-Kong Shen wrote:
>> 
>> Terry Ritter wrote:
>> >
>[snip]
>> My knowledge is too meager to enable me to offer more
>> points than I have already done. However, it is my
>> impression that your arguments to date somehow (I have
>> no definite idea of that) need strengthening, if DT is
>> to be accepted as on a par with the theoretical OTP
>> (which seems to be your goal), in particular by the
>> academics, assuming DT has in fact that property.
>> (Apology, if my open words are inappropriate in some
>> way.)
>
>After some re-thinking, I do like to elaborate a little
>a previous point of mine concerning the question of 
>perfectness of DT.
>
>Suppose we have block size of n and we agree not to use
>the non-balanced groups of bits but only the balanced
>ones to transmit information (in other words, we have
>an 'alphabet' of a certain size m that is less than n). This 
>serves to separate out the issue of bit balancing in order to
>simplify the argumentation below.
>
>Now what we have is the following: Given an information block, 
>we do permutations of its bits to produce an enciphered block 
>with the help of a PRNG. A PRNG never provides a perfect bit
>sequence in the sense used in the context of the theoretical 
>OTP. 

First of all, "perfect secrecy" is a technical term from Shannon for a
type of balanced ciphering.  I have also used "perfect" in my articles
to describe a random number generator (RNG) which is a permutation
generator (with a single cycle).  But, in general, I am unaware of any
technical meaning for "perfect bit sequence."  There can be no perfect
sequence.  There is only a universe of sequences, which we can sample,
and with the sampled properties we can establish statistically.  

It is hardly a surprise that something we build is not perfect in an
absolute sense.  At issue is whether the difference between perfect
and what we have is noticeable or exploitable.  

The statement "A PRNG never provides a perfect bit sequence" is
technically false:

Suppose as an RNG we had a simple shift-register (SR), which simply
repeated its contents cyclically:  Surely we can see that any
possible sequence *shorter* than the SR is easily produced.  Of
course, for longer sequences, only some are possible, and those
sequences are internally correlated, which is not good.  Since we
normally use short RNG's, that is the behavior we are taught to
expect.  

The point of using a large-state RNG in Dynamic Transposition is to
support a full permutation computation using independent state.  

The problem occurs when we talk about RNG output beyond the internal
state size.  Here the problem would arise in subsequent permutations,
where perhaps not all possible sequences of permutations are
available.  That issue would be clearer if we just used the raw output
from the RNG, but we don't: we deliberately process values so that
less information is available.  We fold the output value and mask it
(a power-of-2 modulo), and delete values from the sequence (thus
creating completely different shuffling sequences).  These operations
act to hide correlation and improve statistical qualities.  
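As a concrete illustration of the mask-and-delete step (a sketch
under my own naming, not Ritter's actual code; the folding of the
wide RNG word is elided, only masking and deletion are shown): the
raw value is reduced with a power-of-2 modulo and out-of-range
results are discarded, so many raw RNG outputs collapse onto each
shuffling value.

```python
def randbelow_masked(n, nextval):
    """Uniform draw in [0, n): mask the raw RNG word to the next
    power of two (a power-of-2 modulo), and delete values >= n."""
    mask = 1
    while mask < n:
        mask <<= 1
    mask -= 1
    while True:
        v = nextval() & mask   # mask: many raw values -> the same v
        if v < n:              # deletion reshapes the output sequence
            return v

def shuffle(bits, nextval):
    """Fisher-Yates shuffle driven by the reduced RNG values."""
    for i in range(len(bits) - 1, 0, -1):
        j = randbelow_masked(i + 1, nextval)
        bits[i], bits[j] = bits[j], bits[i]
    return bits
```

Here `shuffle` of a small list stands in for permuting the bits of a
block; with a full-range `nextval`, every permutation is reachable.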

Suppose we had a numbered 64-card deck, which had only the values 0
through 62 and 0 again.  We assume we can only sample a small part of
the deck at a time, and can have only so many samples.  Eventually we
may be able to establish that 0 has twice the expected
probability, and somehow exploit that situation.

Now suppose we can only see the lower 2 bits of each card value.  Here
no value is missing, and 0 is only slightly more probable than 1 or 2.
There is still a statistical difference, and we would still have
difficulty finding it.  But we could not say that any particular value
would not occur, or that a related Shuffle move would not occur.  

I think the worst we could expect is that some permutations might be
less probable than others.  But I don't think we could catch the
effect in practice, because there are too many permutations and our
sample space is too small.  Moreover, the actual defect is likely to
be more like Y-permutation is less probable if X-permutation just
occurred, which makes the statistical problem even worse.  


>How can it be that this imperfectness does not manifest 
>itself in some form (possibly in certain very much reduced, 
>practically entirely negligible, intensity) AT ALL in our 
>encryption result? 

There is no perfect sequence.  We describe perfection statistically,
across our sample of results.  In a statistical sense, it may well be
possible to approach statistical perfection -- over a limited use --
to the extent that statistical differences are not apparent. 

This is done first by having a large-state RNG, and then by folding
and masking the RNG value, so that many different RNG values produce
exactly the same shuffling value.  This is information reduction, or
loss, and in this way deviations are hidden.


>Let's order all the distinct balanced bit 
>groups into a sequence S and feed repetitions of S into our 
>algorithm. 

I don't understand "feeding repetitions of S."  An RNG algorithm steps
or produces a sequence in some way, but does not take input.  

>This input is evidently perfectly predictable. Can 
>the output be perfectly unpredictable? It certainly cannot. 

An RNG is "perfectly unpredictable" as it starts and continues so
until the output exceeds the internal state.  Here, that should
support a full double-shuffle.  An RNG just can't be perfectly
unpredictable indefinitely.  

>For the PRNG has a finite period and hence the ciphertext 
>must cycle ultimately. 

If the RNG cannot cycle while our Sun still shines, can we really say
that it does, in practice, cycle?  Knowing that, can we use even the
most gross property of cycling?  I think not.  

>This shows that, while DT could be 
>practically very secure (an issue that certainly merits 
>careful study before its being used in practice), it cannot 
>offer perfect security that pertains to the theoretical OTP.

Again, "perfect security" is a technical term.  It refers to a cipher
where any possible plaintext may be ciphered into exactly the same
ciphertext with equal probability.  That arguably does occur in
Dynamic Transposition on a block basis.  

It is when we consider multiple blocks that the arguments become
weaker.  But since the ciphering does not expose the permutation, this
imperfection may not be detectable.  If it is not detectable, can we
say, in practice, that it even exists?  

That is not a semantic argument, or a philosophical question -- it is
the real situation.  For example, perhaps we could show that a certain
amount of information is necessary to distinguish between our real RNG
and the theoretical ideal.  Then if we could show that the information
is not available, we could have provable "perfection," much like other
cryptographic proofs.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (JimD)
Subject: Re: Some Enigma Questions
Reply-To: Jim
Date: Fri, 26 Jan 2001 21:59:31 GMT

On Thu, 25 Jan 2001 21:13:42 +1100, "Yeah" <[EMAIL PROTECTED]> wrote:

>> Q5:  If properly used (e.g. few messages, good mixing of rotor settings,
>no
>> obvious rotor settings (e.g. QWE), varying messages to avoid obvious
>cribs,
>> having all rotor increment the next rotor at the same position, not
>sending
>> the same message in more than one cipher system, changing of rotors more
>> often than once a war, etc), say along the lines of the German Navy, would
>> an Enigma today be reasonably secure?  Put another way, would it be easily
>> crackable today by a single person with some cracking tools and a
>computer,
>> or would it require a high level team like that assembled during the war?
>
>Didn't the British Government invent the world's first programmable computer
>to crack the enigma code. (Suck that America, IBM more precisely)

Yes: Colossus, at Bletchley Park.  Strictly, though, Colossus was
built to break the FISH (Lorenz) teleprinter traffic; Enigma was
attacked with the electromechanical Bombes.

-- 
________________________________________________

Posted by Jim Dunnett

   Archduke Ferdinand Found Alive!

    World War I a mistake.

dynastic at cwcom.net
nordland at lineone.net

------------------------------

From: [EMAIL PROTECTED] (JimD)
Subject: Re: Echelon in Asia.
Reply-To: Jim
Date: Fri, 26 Jan 2001 21:59:32 GMT

On Thu, 25 Jan 2001 22:55:07 GMT, [EMAIL PROTECTED] (Jim)
wrote:

>On Wed, 24 Jan 2001 23:32:40 +0100, Mok-Kong Shen <[EMAIL PROTECTED]>
>wrote:
>
>>
>>
>>Abe Lin wrote:
>>> 
>>> We've been seeing a bit about echlon, but I haven't heard anything
>>> in Asia yet. Given Chinese Government's nature, they'd really surprise
>>> me if they don't have one.
>>
>>The technique has certainly been within the reach of many
>>countries for quite some time. 
>
>NSA assists the government of the PRC with such things.
>(Two somewhat less than democratic countries!)
>Presumably it's targetted against Russia.
>
>>For example, after the 
>>unification of Germany it was discovered that East Germany 
>>had had in Berlin a (small) station intercepting among others 
>>communications from the office of the Chancellor of West 
>>Germany.
>
>Something they were perfectly entitled to do. Be rather silly
>to.

Should read: Be rather silly not to.

-- 
________________________________________________

Posted by Jim Dunnett

   Archduke Ferdinand Found Alive!

    World War I a mistake.

dynastic at cwcom.net
nordland at lineone.net

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Fri, 26 Jan 2001 21:50:35 GMT

On Fri, 26 Jan 2001 20:43:15 GMT, [EMAIL PROTECTED]
(John Savard) wrote, in part:

>so it's the 1170!

actually, that should be 12870!, I guess you had a typo.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Fri, 26 Jan 2001 22:06:54 GMT


On Thu, 25 Jan 2001 22:50:38 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>On Thu, 25 Jan 2001 22:28:38 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote,
>in part:
>
>>I doubt that substitution could be "just as good."  Resolving that
>>issue would seem to require a comparable alternative design which
>>could be examined and compared.
>
>But the specific point we disagree on, I think, is that "all possible
>transpositions" are not, in my opinion, as good as "all possible
>substitutions" of the whole block, 

Agree.

>so the fact that any substitution
>design won't provide that is not a strike against substitution.

Disagree.  

First, the number of permutations from which to select in a practical
Dynamic Transposition cipher is huge.  I don't expect a substitution
technique to come anywhere close to that.  

In fact, the worst part is that a full substitution keyspace is much,
much larger.  So we can at least talk about having enough state to
select any possible permutation, but we can't even imagine having
enough state to select any possible substitution.  Which means a
practical substitution keyspace is at least potentially biased in a
way which might be exploited.  

Next, the Dynamic Transposition permutations are of equal probability,
if only we assume the generator sequences are random-like with
independent values.  I have discussed that; the approximation to this
goal can be quite good indeed, since reducing functions hide value
imperfections.  Certainly I don't expect to see a stronger argument
for any substitution-based system, unless of course we are talking
keyed Latin square ciphering.


>That's basically because "all possible transpositions" are not the
>same thing as all possible substitutions over the set of balanced
>blocks. 

Yes.

>This is a specific mathematical issue, and I'm puzzled why
>it's not getting across.

It is getting across; I simply reject your conclusions.


>What I have on my page at
>
>http://home.ecn.ab.ca/~jsavard/crypto/co041205.htm
>
>is just 'conceptual', and hence not completely specified, but then
>neither is Dynamic Transposition (the stream cipher component isn't
>fully defined). 

Dynamic Transposition is not a particular cipher, it is a type of
ciphering which can be realized in many ways.  Designers can use
different RNG structures, different balancing, different block size,
different permutation algorithms, and so on.  The result would still
be a recognizable Dynamic Transposition cipher having properties in
common with the various other implementations.  

>I can't remember the details now, but I wouldn't be
>surprised if your original Dynamic Transposition design might have
>even inspired it in part; I think I remember mentioning in an E-mail
>to you, though, that I was thinking of using a block cipher as a
>combiner simply as an alternative to Dynamic Substitution.
>
>I can understand that interest in novel combiners is limited, but I
>would have thought that even in the 'mainstream', combining a byte at
>a time with byte-wide fixed S-boxes, sort of like a rotor machine,
>would be popular to avoid the bit-flipping problem of the plain XOR
>combiner.
>
>My design at the URL given above, though, was more a tour-de-force
>showing how a gigantic key could be used to some effect. (There is a
>second design, which I consider more potentially practical, lower on
>the page, but that isn't germane - even though _it_ uses
>transposition...of key material.) Unlike Dynamic Transposition, which
>is practical (if _unfashionable_), this is big, slow, and cumbersome.

Then I am at a loss to understand the comparison.  

You argue that a design which uses a relatively small fixed set of
small substitutions, and is slow, is somehow superior to a design
which selects from more permutations (indeed, all possible of that
size), and is also usably fast.  Where is the superiority?

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
