Cryptography-Digest Digest #492, Volume #13      Thu, 18 Jan 01 18:13:01 EST

Contents:
  Dynamic Transposition Revisited (long) (Terry Ritter)
  Re: Comparison of ECDLP vs. DLP (Wei Dai)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Dynamic Transposition Revisited (long)
Date: Thu, 18 Jan 2001 23:02:31 GMT



              Dynamic Transposition Revisited


                       Terry Ritter

                        2001 Jan 18


ABSTRACT

A novel approach to block ciphering is remembered for
relevance to modern ciphering issues of strength, analysis
and proof.  A rare example of a serious transposition
cipher, it is also an unusual example of the simultaneous
use of block and stream techniques.


INTRODUCTION

Over a decade ago, before the web, before PGP, and before
"Applied Cryptography," I began to study the different forms
of ciphering.  (Actually I was re-kindling a technical
interest from my Army years: in 1968 I was a cipher machine
repairman at the Nha Trang AUTODIN site.)  A decade ago there
were a few -- but only a few -- crypto books, and virtually
nothing on serious computer crypto.  Research often involved
reading the original academic literature, from Shannon on,
references which I later included in my articles.  One of the
things which caught my interest was an apparent dichotomy
between substitution and transposition.

(Currently, I have a different view:  I came to see
transposition as a limited, algorithmic form of substitution.
More precisely, "transposition" can be seen as a way to
form a permutation using a sequence of exchanges: literally
"transpositions."  But a mere permutation of the existing
symbols is a very limited subset of substitution.  And it
is exactly those limitations which form the characteristics
of transposition we know so well.)

Emulated huge substitution (in DES) had been the dominant
basis for ciphering, but no comparable transposition-based
designs existed (at least in the open literature).  That
raised the question of whether transposition was simply
unsuitable for secure ciphering, and if so, why.

Subsequent investigations and development produced an
article titled "Transposition Cipher with Pseudo-Random
Shuffling: The Dynamic Transposition Combiner," published
in the January 1991 issue of "Cryptologia" (see:

   www.io.com/~ritter/ARTS/DYNTRAN2.HTM

).  With continuing sci.crypt discussions on stream ciphering,
the one-time-pad (OTP) and cipher security proofs, it may
be worthwhile to re-visit Dynamic Transposition to see if
it has something to offer.

As described here, Dynamic Transposition is a block cipher
without S-boxes or rounds, and which neither has nor needs
avalanche or diffusion.  Like a stream cipher, it uses a
keyed random number generator (RNG) to perform ciphering on
a block-by-block basis.  Dynamic Transposition is notable
for design clarity, ease of understanding and analysis, and
scalability for testing.

The general concept of Dynamic Transposition could be
expanded into a stand-alone secret-key cipher, or used as
the data-ciphering component of a public-key cipher.


TRANSPOSITION

Specifically, a "transposition" is the simple exchange of
the positions of two symbols within a message or ordered
array or vector.  A sequence of such exchanges can form
any possible mathematical "permutation" of the message or
array.  This is the simple re-arrangement of the existing
data symbols.
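As a tiny illustration (my own sketch, not from the original
article), a short sequence of pairwise exchanges composes into an
arbitrary permutation:

```python
# Illustration only: a sequence of pairwise exchanges
# ("transpositions") composes into an arbitrary permutation.
def apply_transpositions(seq, swaps):
    """Apply a list of (i, j) position exchanges, in order."""
    seq = list(seq)
    for i, j in swaps:
        seq[i], seq[j] = seq[j], seq[i]
    return seq

# Three exchanges carry ABCD to the permutation DCAB.
print(apply_transpositions("ABCD", [(0, 3), (1, 2), (2, 3)]))
# -> ['D', 'C', 'A', 'B']
```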

Classically, a transposition cipher re-positions plaintext,
letter-by-letter, to produce ciphertext.  As a result, every
letter of the plaintext is also visible in the ciphertext.
And that invites an opponent to re-position or "anagram"
the ciphertext letters in an attempt to find the relatively
few plaintexts which would make sense.

In contrast, modern transposition need not work on whole
character units, but can instead permute each binary-coded
character bit-by-bit.  This is significant because vastly
more messages which make sense -- all but one of them wrong --
can be created from a heap of bits than from a pile of letters.

Classically, one of the problems with transposition has been
that it does not change the values of the data, and that can
leak information.  Consider a plaintext block of all-0's:
No permutation will change that block at all, so it will be
fully-exposed as ciphertext.  Clearly, classic transposition
has a hidden strength requirement that plaintext data
contain a range of different values.

In contrast, modern computer processing can create blocks
with an exact balance of 1's and 0's (by adding balancing
values to the plaintext).  Only as much plaintext as can be
balanced is accepted for a block, which is then filled with
balancing values.  The result is that there will be no weak
blocks.

Classical transposition is often limited to simple processes
consistent with hand application, and so tends to traverse
only a small subset of all possible permutations.  That is
a limited keyspace which may support attack.

But in a modern computer implementation, efficient
permutation algorithms allow us to produce any of the
immense number of different permutations with equal
probability, provided we have a random sequence to drive
the algorithm.  One immediate result is a vastly-expanded
keyspace.

Classically, each block or message of a transposition cipher
is permuted in exactly the same way.  So if an opponent
acquires multiple messages, it may be possible to find a
single permutation which makes all the ciphertexts readable,
and thus identifies the correct permutation.  That attack
is called "multiple anagramming."

But a modern version of transposition may simply permute
each block independently.  That is the "dynamic" part of
Dynamic Transposition, and it completely voids the multiple
anagramming attack.

Overall, the resulting block cipher is a novel combination
of both stream and block ciphering technology.


DYNAMIC TRANSPOSITION

A Dynamic Transposition cipher is conceptually very simple:

   (1) We collect plaintext data in bit-balanced (or almost
       bit-balanced) blocks.

   (2) We shuffle the bits in those blocks under the
       control of a keyed pseudorandom sequence.

(The cipher designer decides exactly how the sequence is
generated and keyed.  A complete design will use message
keys or other techniques to prevent sequence re-use.)  The
resulting scrambled blocks are the final ciphertext.
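The two steps above can be sketched in a few lines of Python (my
own illustration; Python's random.Random merely stands in for the
keyed cryptographic RNG a real design would use):

```python
import random

def encipher_block(bits, key):
    """Shuffle a bit-balanced block under a key-seeded RNG.
    random.Random is only a stand-in: a real design would use a
    cryptographic generator, keyed via message keys."""
    assert bits.count(1) == bits.count(0), "block must be bit-balanced"
    rng = random.Random(key)
    bits = list(bits)
    rng.shuffle(bits)       # Fisher-Yates / Durstenfeld shuffle
    return bits

block = [1, 0, 1, 0, 0, 1, 0, 1]    # exactly balanced: four 1's, four 0's
print(encipher_block(block, key=1234))
```

The ciphertext block is just the same bits in a key-dependent
order; with a fresh key per block, every block gets an
independent permutation.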

When every plaintext block is exactly bit-balanced, any
possible plaintext block is some valid bit-permutation of
any ciphertext block.  So, even if an opponent could
exhaustively un-permute a ciphertext block, the result
would just be every possible plaintext block.  No particular
plaintext block could be distinguished as the source of the
ciphertext.  This is a form of balanced, nonlinear combining
of the confusion sequence and data block: as such, it is
related to XOR, Latin squares, Shannon "perfect secrecy,"
and the one-time-pad (OTP).

The inability to distinguish a particular plaintext, even
when every possibility is tried, is basically the advantage
claimed for the OTP.  It is also an advantage which the OTP
cannot justify in practice unless we can prove that the OTP
keying sequence is unpredictable, which generally cannot be
done.  That makes the practical OTP exceedingly "brittle":
if the opponents ever do gain the ability to predict the
sequence, they may be able to attack many messages, both
future and past.  That would occur in the context of a
system supposedly "proven" secure; as usual, the user would
have no indication of security failure.

Dynamic Transposition does not need the assumption of
sequence unpredictability, because the sequence is hidden
behind a multitude of different sequences and permutations
which all produce the same result.  And if the sequence
itself cannot be exposed, exploiting any predictability in
the sequence will be difficult.  (This of course does not
mean that Dynamic Transposition cannot be attacked:
Brute-force attacks on the keys are still imaginable, which
is a good reason to use large random message keys.)

In particular, shuffling the bits in a bit-balanced block
means that a huge number of different permutations will
produce exactly the same ciphertext result.  Where one
permutation may take a '1' from the start of the plaintext
block, another may take a '1' from near the end; either way,
the resulting ciphertext is identical.
So, even if the opponents have the plaintext and the
associated ciphertext, the ciphering permutation is still
not revealed, and that protects against attacks on the
pseudorandom sequence and the RNG.
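The size of that "plethora" is easy to count (my arithmetic, not
the article's):  Any permutation that maps the plaintext's
1-positions onto the ciphertext's 1-positions, and its
0-positions onto the 0-positions, yields the identical
ciphertext.  So an n-bit balanced block has ((n/2)!)**2
indistinguishable permutations:

```python
from math import factorial

def equivalent_perms(n):
    """Permutations of an n-bit balanced block which all take a given
    plaintext to the same given ciphertext: map 1's to 1's in (n/2)!
    ways, and independently 0's to 0's in (n/2)! ways."""
    return factorial(n // 2) ** 2

for n in (8, 64, 512):
    print(n, equivalent_perms(n))
```

Even a modest 64-bit block gives (32!)**2, roughly 6.9e70
permutations consistent with one known-plaintext pair.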

Bit-permutation does have, of course, a substantial
execution cost.

Dynamic Transposition is conceptually as simple as, say,
"simply" exponentiating a value modulo the product of two
large primes.  Of course, the actual implementation details
in any real cipher can be complex.  But the feeling of
complexity often goes away after running through a couple
of designs where the problems are all fully solved.  And
testing the implementation may be easier with Dynamic
Transposition.

A Dynamic Transposition system is far more transparent
than most alternatives.  The components are all well-known
cryptographic technology, each of which can be independently
understood, evaluated, tested -- and even upgraded, when
necessary.  And the simple 8th-grade combinatorics involved
in Dynamic Transposition analysis (which, alas, I
occasionally get wrong anyway) support more believable
conclusions than graduate-level number-theoretic asymptotic
"proofs."


DYNAMIC TRANSPOSITION CIPHERING

Most of the components used in Dynamic Transposition have
been discussed many times; see, for example:

   http://www.io.com/~ritter/KEYSHUF.HTM

and

   http://www.io.com/~ritter/CLO2DESN.HTM

One of the needed components which has not been described
is the bit-balancing process.


Bit-Balanced Blocks

Some of the analytic and strength advantages of Dynamic
Transposition depend upon having the same number of 1's
and 0's in each block, which is called "bit-balance."
Exact bit-balance can be achieved by accumulating data to
a block byte-by-byte, only as long as the block can be
balanced by adding appropriate bits at the end.  Then the
block is ciphered, and another block filled.

We will always add at least one byte of "balance data,"
at the end of a block, and only the first balance byte,
immediately after the data, will contain both 1's and 0's.
We can thus transparently remove the balance data by
stepping from the end of the block, past the first byte
containing both 1's and 0's.
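One way to realize the balancing scheme just described might look
like this (my own sketch; the article gives no code, and the
mixed-byte layout here is one choice among several):

```python
def popcount(byte):
    return bin(byte).count("1")

def balance_block(data, block_size):
    """Pad `data` out to `block_size` bytes so the block has equal
    1's and 0's: one mixed balance byte right after the data, then
    all-1 (0xFF) and/or all-0 (0x00) filler.  Returns None if this
    data prefix cannot be balanced (the accumulator would then take
    less data into the block)."""
    pad = block_size - len(data)
    ones = sum(popcount(b) for b in data)
    need = 4 * block_size - ones        # 1-bits the balance bytes must add
    k = need % 8                        # 1-bits in the mixed byte (1..7)
    full = (need - k) // 8              # all-1 filler bytes
    if pad < 1 or need < 1 or not 1 <= k <= 7 or full > pad - 1:
        return None
    mixed = (0xFF << (8 - k)) & 0xFF    # e.g. k=2 -> 0b11000000
    return bytes(data) + bytes([mixed]) + b"\xff" * full + b"\x00" * (pad - 1 - full)

def unbalance_block(block):
    """Strip balance data by stepping back from the end of the
    block, past the first byte containing both 1's and 0's."""
    i = len(block) - 1
    while popcount(block[i]) in (0, 8):
        i -= 1
    return block[:i]

print(balance_block(b"Hi", 4))    # -> b'Hi\xc0\xff'
```

Because every filler byte is all-1's or all-0's, the first byte
with mixed bits unambiguously marks where the data ends.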

In general, bit-balancing may expand ASCII text by as much
as 1/3.  We can reduce that if we have random-like data.
A pre-processing data compression step would not only reduce
plaintext size, it would also reduce the expansion factor.

Approximate or statistical bit-balance can be achieved by
creating a sort of Vernam cipher to "randomize" the
plaintext.  The result is almost-balanced blocks with no
expansion at all.


Bit Shuffling

This is ordinary cryptographic shuffling -- algorithmic
permutation -- of bits.  It is of course necessary that
this be done properly, to achieve a balanced probability
for each result, but that is well-known cryptographic
technology.  (Again, see:

   http://www.io.com/~ritter/KEYSHUF.HTM

.)  The usual solution is the well-known algorithm by
Durstenfeld, called "Shuffle," which Knuth II calls
"Algorithm P (Shuffling)," although any valid permutation
generator would be acceptable.

We can treat the accumulated and balanced block as a bit
vector and walk through it, shuffling just like we might
do with an array of bytes.
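A minimal version of that bit-vector shuffle (my illustration,
following Durstenfeld's algorithm; again, random.Random stands in
for a keyed cryptographic RNG):

```python
import random

def shuffle_bits(bits, rng):
    """Durstenfeld's Shuffle (Knuth's "Algorithm P") over a bit
    vector: each of the n! permutations is equally likely when the
    rng draws are uniform."""
    bits = list(bits)
    for i in range(len(bits) - 1, 0, -1):
        j = rng.randrange(i + 1)    # uniform draw from 0..i inclusive
        bits[i], bits[j] = bits[j], bits[i]
    return bits

print(shuffle_bits([1, 1, 1, 0, 0, 0], random.Random(7)))
```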

If we shuffle each block just once, an opponent who somehow
knows the correct resulting permutation can use that
information to reproduce the shuffling RNG sequence, and
thus start to attack the RNG.  And even though we think
such an event impossible (since the correct permutation is
hidden by a plethora of different bit-permutations that
each produce exactly the same ciphertext from exactly the
same plaintext), eliminating that possibility (by shuffling
each block twice) is probably worthwhile.  This does not
produce more permutations, it just hides the shuffling sequence.


Deciphering

We can easily return an encrypted block to plaintext by
applying Shuffle in reverse order.  The keyed pseudorandom
sequence used for encryption is accumulated in a buffer and
used last-value-first.  And if we shuffle twice to encipher,
we must unshuffle twice to decipher.
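The matched pair can be sketched as follows (mine, not the
article's):  Both directions regenerate the same keyed draws;
deciphering simply applies the exchanges last-value-first:

```python
import random

def _draws(n, key):
    """The keyed pseudorandom sequence, buffered so deciphering
    can consume it last-value-first.  random.Random is a stand-in
    for a cryptographic RNG."""
    rng = random.Random(key)
    return [rng.randrange(i + 1) for i in range(n - 1, 0, -1)]

def encipher(bits, key):
    """Durstenfeld Shuffle driven by the keyed sequence."""
    bits, js = list(bits), _draws(len(bits), key)
    for i, j in zip(range(len(bits) - 1, 0, -1), js):
        bits[i], bits[j] = bits[j], bits[i]
    return bits

def decipher(bits, key):
    """Undo the same exchanges in reverse order."""
    bits, js = list(bits), _draws(len(bits), key)
    for i, j in zip(range(1, len(bits)), reversed(js)):
        bits[i], bits[j] = bits[j], bits[i]
    return bits

block = [1, 1, 0, 0, 1, 0, 1, 0]
assert decipher(encipher(block, key=42), key=42) == block
```

Shuffling twice on encipherment would simply mean unshuffling
twice, in the opposite order, on decipherment.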


ANALYSIS

Dynamic Transposition is clearly a block cipher:  Data must
be accumulated into blocks before ciphering can begin.  It
is not, however, a conventional block cipher.  It does not
emulate a large substitution.

Unlike conventional block ciphers, Dynamic Transposition has
no data diffusion, nor is that needed.  It has no S-boxes,
and so has no weak S-boxes.  The only mixing involved is the
single mathematically uniform distribution of permutations,
so there is no weak mixing.  And whereas conventional block
ciphers need "modes of operation" to randomize plaintext and
thus minimize the ability to mount codebook attacks, Dynamic
Transposition does not.

While conventional block ciphers generally do not scale down
to exhaustively testable size, Dynamic Transposition scales
well.  Presumably, it could be made to operate on blocks of
variable size on a block-by-block basis.  Each component
also can be scaled down and tested independently.

It is easy to sink into a morass of arithmetic in the
strength argument.   Rather than do that here, I will just
highlight the major issues:

The main idea is to hide the RNG sequence (actually the
nonlinear sequence of jitterized values), so an opponent
cannot attack the deterministic RNG.  Strength is provided
by the block size and guaranteed bit-balance, since, when
shuffled, a plethora of different permutations will take
the plaintext block to exactly the same ciphertext block.
There simply is no one permutation which produces the given
ciphertext.  Since a plethora of different permutations will
produce the given ciphertext, trying them all is impractical.
So the opponents will not know the permutation -- even with
known plaintext -- and will not have the information to
attack the RNG.

If the first level of strength fails and the ciphering
permutation is discovered, more strength lies in the fact
that another plethora, this time of different RNG sequences,
will create that exact same permutation.
When each block is shuffled twice, this prevents an
opponent from finding the sequence needed to attack the
RNG.

Then, of course, we get to whatever strength is in the RNG
system itself.  That includes having an internal state which
significantly exceeds the amount of information used for any
block permutation.  It also includes the unknown strength of
nonlinear filtering (e.g., "jitterizing") of the raw RNG
data.


Large Numbers

It is common, in cipher analysis, to disdain the use of
large-number arguments.  That is because, in practice,
opponents do not even attempt to attack well-designed
ciphers by straightforward brute force, because such an
approach would be hopeless.  Ciphers are instead attacked
and broken by finding some pattern which will discard huge
numbers of possible keys with every test.  But such patterns
generally occur in the context of complex iterative cipher
systems which *can* have defects.  In contrast, here we use
mathematically-correct permutation algorithms, and no such
defect should be present.

In the current Dynamic Transposition design, even with RNG
nonlinear processing, we do expect correlations to exist
in the RNG shuffling sequence.  But if the sequence is hidden
and cannot be isolated, it would seem to be very difficult
to find and use such relationships.  In a real sense, using
a combiner more complex than XOR provides protection against
the disaster of a predictable sequence -- protection which
is simply unavailable in conventional OTP designs.

The simple structure of the Dynamic Transposition combiner
(a simple selection from among all possible bit-permutations)
would seem less susceptible to statistical attack than
conventional complex block cipher designs.


Key Re-Use and Message Keys

In any real system which uses a RNG confusion sequence, it
is important that any particular encrypting sequence "never"
be re-used.  It is also important to be able to use a master
key to send multiple messages without reducing the security
of the system.  This issue is known as "key re-use."  We can
solve both issues by introducing "message keys."

One way to support the re-use of a master key is to create
some large, random unknowable value which we call a "message
key" and use as the key for data ciphering.  Because this
can be a completely arbitrary random value (e.g., from a
noise generator source), we can repeatedly use the same
master key securely.  We send message key values to the
other end by enciphering each under an unchanging master
key.  At the other end, the message key is first deciphered,
and the exposed random value is used as the key to decipher
the data.  This is standard stream-cipher technology.

The essential advantage of a message key is to support
key re-use.  Another advantage is to prevent attacks which
depend upon having the same key across multiple messages.
But an even greater advantage is to skip past the small
and (probably) structured master or user key to a large,
flat-distributed message key.  For the past decade I have
often used random 992-bit message keys.

The concept of a message key pre-dates the similar but more
widely used term "session key."  But a "session" refers to
the establishment of a logical connection in a network or
system, a concept which is both unnecessary and extraneous
to ciphering per se.  And even when a logical session exists,
we might be wise to use a different message key on every
transmission in a potentially-long session.  Thus, the term
"session key" simply does not capture the cryptographic
essence of the problem.


ATTACKS

Complex and hard-to-refute conventional block-cipher
attacks, such as Differential and Linear Cryptanalysis,
would seem to be completely irrelevant to ciphers of the
Dynamic Transposition type.  Where there are no S-boxes,
there are no linear or differential relations in S-boxes.
And where there is no Feistel structure, there is no ability
to peel off Feistel layers.  All this is a serious advantage.

The classic attack on transposition, "multiple anagramming,"
is avoided by having a different permutation for every
block.  The use of random message keys forces each
message to use a different encryption sequence.  This also
prevents the "bit flipping" sort of defined-plaintext
attacks which attempt to reveal the permutation.

The classic sort of codebook attack is avoided first by
having a huge block size, and next by not attempting to
re-use permutations.  Note that conventional block ciphers
have the first advantage only to a lesser extent, and the
second advantage not at all, which is why they need "modes
of operation."

Known-plaintext attacks would be a first step in an attempt
to attack the RNG as in a stream cipher.  But with Dynamic
Transposition, known-plaintext does not disclose the
enciphering permutation, because a plethora of different
bit-permutations will produce exactly the same ciphertext
from exactly the same plaintext.  A successful attack
would thus require some way to distinguish more probable
permutations from less probable ones.  Such an attack would
seem to be avoided by proper shuffling.

And should the opponents somehow find the correct
permutation, attempts to find the shuffling RNG sequence
would be thwarted by double-shuffling.


SUMMARY

A novel, relatively easy-to-design and easy-to-analyze type
of cipher has been presented.  There would seem to be ample
reason to consider Dynamic Transposition to be yet another
"unbreakable" cipher, when properly implemented.

In marked contrast to most "unbreakable" ciphers, Dynamic
Transposition strength arguments are exactly the same sort
of arguments we use for keys:  A correct key exists, but is
safe when hidden among many others just like it.  These
arguments are simple, fundamental to all keyed ciphers of
every type, and do not depend upon unproven assumptions.

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: Wei Dai <[EMAIL PROTECTED]>
Subject: Re: Comparison of ECDLP vs. DLP
Date: Thu, 18 Jan 2001 15:07:12 -0800

In article <[EMAIL PROTECTED]>, djohn37050@aol.com says...
> DJ: Again you think there is no RSA PKV, but there is, how much to use of it is
> a cost/benefit decision.

Ok, I should have expected that a ZKP would exist for RSA public key 
validity. But certainly it's not in common use, and your point was that 
no signature should be considered valid unless the signing key has been 
validated. You seem to have changed your mind by talking about a 
cost/benefit decision.

Thanks for the reference to the paper, BTW. Here's the URL for everyone 
else: http://grouper.ieee.org/groups/1363/StudyGroup/KeyVal.html.

I question the usefulness of the non-repudiation aspect of PKV (as 
opposed to the aspect of defense against subgroup attacks). Bob's paper 
is motivated by the consideration that someone might be able to 
repudiate an RSA key that's not the product of nearly equal primes. But 
if someone is allowed to repudiate such an RSA key, what prevents him 
from repudiating an RSA key with low entropy? As far as I know there is 
no way to prove that a key has sufficient entropy.

The paper also talks about a way to prove that a DL private key is 
sufficiently large. That also seems to be of limited use because a DL 
private key can be large and still have low entropy.

BTW, I think we need a new term for validating a public key for the 
purpose of non-repudiation. For DL, there are now at least two tests, 
and when you say PKV most people will only think of testing that the 
public key is in the right subgroup. Since the title of the paper is "A 
Statistical Limited-Knowledge Proof for Secure RSA Keys", I suggest PSK 
for "Proof for Secure Keys".

-- 
http://cryptopp.com - free C++ cryptography and compression library

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
