Cryptography-Digest Digest #814, Volume #13 Tue, 6 Mar 01 00:13:01 EST
Contents:
Dynamic Transposition Revisited Again (long) (Terry Ritter)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Dynamic Transposition Revisited Again (long)
Date: Tue, 06 Mar 2001 04:34:02 GMT
Dynamic Transposition Revisited Again
Terry Ritter
2001 Mar 05
ABSTRACT
A novel approach to block ciphering is remembered for
relevance to modern ciphering issues of strength, analysis
and proof. A rare example of a serious transposition
cipher, it also demonstrates the simultaneous use of
block and stream techniques.
INTRODUCTION
Over a decade ago, before the web, before PGP, and before
"Applied Cryptography," I began to study the different
forms of ciphering. One of the things which caught my
interest at the time was an apparent dichotomy between
substitution and transposition.
Emulated huge substitution (e.g., DES) was (and is) the
dominant basis for ciphering, but no comparable
transposition-based designs exist (at least in the open
literature).  That raises the question of whether
transposition is simply unsuitable for secure ciphering,
and if so, why.
Subsequent investigations and development produced an
article titled "Transposition Cipher with Pseudo-Random
Shuffling: The Dynamic Transposition Combiner," published
in the January 1991 issue of "Cryptologia" (see:
www.io.com/~ritter/ARTS/DYNTRAN2.HTM
). With continuing sci.crypt discussions on stream ciphering,
the one-time-pad (OTP) and cipher security proofs, it may
be worthwhile to re-visit Dynamic Transposition to see if
it has something to offer.
As described here, Dynamic Transposition is a class of block
cipher, without S-boxes or rounds, and which neither has nor
needs avalanche or diffusion. Like a stream cipher, it uses
a keyed random number generator (RNG) to perform ciphering
on a block-by-block basis. Dynamic Transposition is notable
for design clarity, ease of understanding and analysis, and
scalability for testing.
The general concept of Dynamic Transposition could be
expanded into a stand-alone secret-key cipher, or used as
the data-ciphering component of a public-key cipher.
TRANSPOSITION
Specifically, a "transposition" is the simple exchange in
position of two symbols within a message or ordered array
or vector. A sequence of such exchanges can form any
possible mathematical "permutation" of the message. This
is the simple re-arrangement of the existing data symbols.
But classical transposition ciphers have a number of known
weaknesses:
The Character Permutation Weakness
Classically, a transposition cipher re-positions plaintext,
letter-by-letter, to produce ciphertext. As a result, every
letter of the plaintext is also visible in the ciphertext.
And that invites an opponent to re-position or "anagram"
the ciphertext letters in an attempt to find the relatively
few plaintexts which would make sense.
In contrast, modern transposition need not work on whole
character units, but can instead permute each binary-coded
character bit-by-bit.  This is significant because vastly
more apparently-sensible messages -- all but one of them
wrong -- can be created from a heap of bits than from a
pile of letters.
The Unchanged Values Weakness
Classically, one of the problems with transposition has been
that it does not change the values of the data, and that can
leak information. Consider a plaintext block of all-0's:
No permutation will change that block at all, so it will be
fully-exposed as ciphertext. Clearly, classic transposition
has a hidden strength requirement that plaintext data
contain a range of different values.
In contrast, modern computer processing can create blocks
with an exact balance of 1's and 0's (by adding balancing
values to the plaintext). Only as much plaintext as can be
balanced is accepted for a block, which is then filled with
balancing values. The result is that there will be no weak
blocks.
The Special Permutation Weakness
Classical transposition is often limited to simple processes
consistent with hand application, and so tends to traverse
only a small subset of all possible permutations. That is
a limited keyspace which may support attack.
But in a modern computer implementation, efficient permutation
algorithms allow us to produce any of the immense number of
different permutations with equal probability, provided we
have a random sequence to drive the algorithm. One immediate
result is a vastly-expanded keyspace.
The Repeated Permutation Weakness
Classically, each block or message of a transposition cipher
is permuted in exactly the same way. So if an opponent
acquires multiple messages, it may be possible to find a
single permutation which makes both ciphertexts readable,
and thus identifies the correct permutation. That attack
is called "multiple anagramming."
But a modern version of transposition may simply permute
each block independently. That is the "dynamic" part of
Dynamic Transposition, and it completely voids the multiple
anagramming attack.
DYNAMIC TRANSPOSITION
A Dynamic Transposition cipher is conceptually simple:
(1) Collect plaintext data in bit-balanced (or almost
bit-balanced) blocks; and
(2) Shuffle the bits in those blocks under the
control of a keyed pseudorandom sequence.
(The cipher designer decides exactly how the sequence is
generated and keyed. A complete design will use message
keys or other techniques to prevent sequence re-use.) The
resulting scrambled blocks are the final ciphertext.
| expand
| Key ----/---> Large RNG --+
| v
| bit-balance Shuffle
| PT -------/-------> ------/------> CT
|
| Fig. 1. Dynamic Transposition General Structure.
| (Operations are labeled above the data paths.)
|
| First, the key is expanded into the large RNG state.
| Then, for each block, plaintext is bit-balanced and
| bit-permuted, using the RNG sequence.
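At toy scale, the two-step recipe above might be sketched as
follows.  This is only an illustrative sketch, not the actual
implementation: the standard-library random.Random stands in
for the keyed large-state RNG, and the balancing routine is a
naive placeholder (with no removal logic) for the byte-balancing
described later.

```python
import random

BLOCK_BITS = 16                 # toy size; the text prefers 2048-bit blocks

def balance(data_bits):
    """Naive balancing sketch: append bits until 1's and 0's match,
    then pad with balanced (0,1) pairs to fill the block."""
    bits = list(data_bits)
    while sum(bits) * 2 < len(bits):    # too few 1's
        bits.append(1)
    while sum(bits) * 2 > len(bits):    # too few 0's
        bits.append(0)
    while len(bits) < BLOCK_BITS:       # balanced padding pairs
        bits += [0, 1]
    return bits

def encipher_block(data_bits, rng):
    """Balance the plaintext bits, then bit-permute the block twice
    under the keyed pseudorandom sequence."""
    block = balance(data_bits)
    rng.shuffle(block)                  # first bit-permutation
    rng.shuffle(block)                  # second (double-shuffling)
    return block

rng = random.Random("expanded key")     # stand-in for the large keyed RNG
ciphertext = encipher_block([1, 0, 0, 0, 1, 0, 1, 0], rng)
```

Whatever the plaintext, the ciphertext block remains exactly
bit-balanced, so the block value itself leaks nothing about
the bit-counts of the data.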
Ciphering Strength
The primary strength in a Dynamic Transposition cipher is
the ability to produce any possible block permutation with
equal probability (thus requiring an RNG with large internal
state), combined with the inability of an opponent to
identify which permutation actually occurred. We can expand
on this:
First, conventional block ciphers are based on the concept of
emulating a huge keyed substitution table. One problem with
this is that only a tiny -- almost infinitesimal -- fraction
of all possible tables can be produced. In one sense, the
common assumption that a conventional block cipher represents
a random selection from among all possible tables is simply
false. And while that may or may not impact strength, it
does prevent a clear strength argument: It certainly is
possible that the tiny implemented subset of ciphering tables
are all related in an exploitable way; to have a proof of
strength, we must prove that false.
In contrast, Dynamic Transposition supports every possible
ciphering permutation. This guarantees a lack of bias in the
fundamental ciphering operation, and supports proof on that
basis.
Next, every form of block ciphering takes a plaintext block
to a ciphertext block. Normally, known-plaintext is
sufficient to define a specific transformation, which then
can be used to attack the cipher.
Dynamic Transposition ciphering uses bit-permutation on
bit-balanced blocks. Half the data bits are 1's and half
are 0's, and many different permutations will produce the
exact same ciphertext from the exact same plaintext. Thus,
even known-plaintext does not expose the exact ciphering
transformation. And since a different permutation is built
for every block, there is no opportunity for an opponent
to refine a guess, or to repeatedly use defined-plaintext
to expose a permutation.
In marked contrast to conventional block ciphers, the
ciphering transformation in Dynamic Transposition is at
least ambiguous and perhaps externally unknowable. The
number of different permutations which produce exactly
the same ciphertext is huge and unsearchable.
Block-Perfect Secrecy
When every plaintext block is exactly bit-balanced, any
possible plaintext block is some valid bit-permutation of
any ciphertext block. So, even if an opponent could
exhaustively un-permute a ciphertext block, the result
would just be every possible plaintext block. The
implication is that no particular plaintext block can be
distinguished as the source of the ciphertext. This is
a form of balanced, nonlinear combining of the confusion
sequence and data: as such, it is related to XOR, Latin
squares, Shannon "perfect secrecy," and the one-time-pad
(OTP).
The inability to distinguish a particular plaintext, even
when every possibility is tried, is basically the advantage
claimed for the OTP. It is also an advantage which the OTP
cannot justify in practice unless we can prove that the OTP
keying sequence is unpredictable, which generally cannot be
done. That makes the practical OTP exceedingly "brittle",
for if the opponents ever do gain the ability to predict the
sequence, they may be able to attack many messages, both
future and past. That would occur in the context of a
system supposedly "proven" secure; as usual, the user would
have no indication of security failure.
Dynamic Transposition does not depend upon the assumption
of sequence unpredictability, because the sequence is hidden
by the multitude of different sequences and permutations
which would all produce the same ciphertext result. And if
the sequence itself cannot be exposed, any predictability in
the sequence also cannot be exploited. (That does not mean
that Dynamic Transposition cannot be attacked: Brute-force
attacks on the keys are still imaginable, which is a good
reason to use large random message keys.)
Other Distinctions
Bit-permutation does have, of course, a substantial
execution cost.
Dynamic Transposition is just as conceptually simple as,
say, "simply" exponentiating a value modulo the product
of two large primes. This makes for a simple, transparent
design. For example, there are no tables. There are no
rounds. Complexity is not used as the basis for a belief
in strength.
Almost all the components in Dynamic Transposition are
well-known cryptographic technology, each of which can
be independently understood, evaluated, tested -- and even
upgraded, when necessary.
As opposed to the graduate-level computational-complexity
proofs popular for conventional block ciphers, main-strength
analysis of Dynamic Transposition is based on simple
8th-grade combinatorics (which, alas, I occasionally get
wrong anyway).
WELL-KNOWN COMPONENTS
Most of the components used in Dynamic Transposition have
been discussed many times. These components include: key
hashing, key expansion, a large-state RNG, "Jitterizer"
nonlinear filtering, bit-balancing, and random permutation.
See, for example:
http://www.io.com/~ritter/KEYSHUF.HTM
and
http://www.io.com/~ritter/CLO2DESN.HTM
These links describe other types of cipher, and are
offered as convenient descriptions of well-known cipher
components. One component which may need some discussion
is key-expansion, and the data bit-balancing process is
not a conventional cryptographic component.
KEY EXPANSION
The goal is to key a large amount of state from a small
key. One approach is to hash the key into the state of a
small Additive RNG; that RNG is then run through nonlinear
filtering (e.g., "Jitterizer") into a buffer. When
sufficient data have been collected, those data are used
as the state of a larger Additive RNG, which is then run
through another nonlinear filter and collected in another
buffer, and so on.
| hash jitterize
| Key --/---> buf1 / RNG1 -----/-----> buf2
|
| jitterize
| buf2 / RNG2 -----/-----> buf3
|
| jitterize
| bufn / RNGn -----/-----> bufm / RNGm -+
| v
| Shuffle
|
| Fig. 2. Typical Key Expansion.
|
| The key is hashed into a buffer, which becomes the
| state needed to operate a small Additive RNG. At
| each expansion step, the RNG output is nonlinearly
| filtered into a buffer, until sufficient state is
| collected to operate the next larger RNG.
The detailed key expansion probably seems more complex
than it actually is, since the RNG / Jitterizer can be
made general subroutines, and then just called each time
with appropriate parameters. In the Cloak2 cipher I
used a sequence of RNG's of degree 31, 127, 607 and 3217
for expansion, and degree 9689 for sequence production.
Using 32-bit elements, that RNG has 38,756 bytes --
310,048 bits -- of internal state.
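One stage of such a cascade might be sketched as below.  The
(31, 13) lag pair is one commonly cited choice for a degree-31
additive (lagged-Fibonacci) generator; the nonlinear filter here
-- dropping words whose low byte is zero -- is only a toy
stand-in for the Jitterizer, which is considerably more
involved, and the SHAKE-256 key hash is likewise our assumption,
not the original CRC-based hashing.

```python
import hashlib

class AdditiveRNG:
    """Lagged-Fibonacci additive generator over 32-bit words,
    x[n] = (x[n-31] + x[n-13]) mod 2**32, in a circular buffer."""
    def __init__(self, state, lags=(31, 13)):
        assert len(state) == lags[0]
        self.state, self.lag, self.i = list(state), lags[1], 0
    def next(self):
        n = len(self.state)
        v = (self.state[self.i] + self.state[(self.i - self.lag) % n]) % 2**32
        self.state[self.i] = v
        self.i = (self.i + 1) % n
        return v

def expand(key: bytes, out_words: int) -> list:
    """Hash the key into the small RNG's state, then collect
    nonlinearly-filtered output words into a buffer."""
    seed = hashlib.shake_256(key).digest(31 * 4)
    state = [int.from_bytes(seed[4*i:4*i+4], "big") for i in range(31)]
    rng, buf = AdditiveRNG(state), []
    while len(buf) < out_words:
        v = rng.next()
        if v & 0xFF:                 # toy nonlinear filter: drop some words
            buf.append(v)
    return buf

# Each buffer becomes the state of the next, larger RNG in the cascade.
stage1 = expand(b"user key", 127)    # would seed a degree-127 RNG, and so on
```

The same subroutines are simply re-invoked at each stage with
larger parameters, which is why the cascade is less complex in
code than it appears on paper.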
BIT-BALANCING
Some of the analytic and strength advantages of Dynamic
Transposition depend upon having the same number of 1's
and 0's in each block, which is called "bit-balance."
Any reasonable way of achieving bit-balance should be
acceptable.
Balancing by Byte
Exact bit-balance can be achieved by accumulating data to
a block byte-by-byte, only as long as the block can be
balanced by adding appropriate bits at the end. Then the
block is ciphered, and another block filled.
We will always add at least one byte of "balance data" at
the end of the data, a byte which will contain both 1's and
0's. Subsequent balance bytes will be either all-1's or
all-0's, except for trailing "padding" bytes of some
particular balanced value.  We can thus transparently remove
the balance data by stepping from the end of the block, past
any padding bytes, past any all-1's or all-0's bytes, and
past the first byte containing both 1's and 0's. Padding
is needed both to allow balance in special cases, and when
the last of the data does not completely fill the last block.
This method has a minimum expansion of one byte per block,
given perfectly balanced binary data. ASCII text may expand
by as much as 1/3, which could be greatly reduced with a
pre-processing data compression step.
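A sketch of the balance-byte construction follows the layout in
the text: one "mixed" byte containing both 1's and 0's, then
all-1 or all-0 filler bytes, then balanced padding bytes.  The
padding value 0x55 is our choice for illustration, and the
removal scan is not shown.

```python
def balance_block(data: bytes, block_len: int) -> bytes:
    """Append balance bytes so a block_len-byte block has exactly
    equal 1's and 0's.  Raises if the data cannot be balanced."""
    k = block_len - len(data)                  # bytes left for balancing
    ones = sum(bin(b).count("1") for b in data)
    need = block_len * 4 - ones                # 1-bits still required
    for c in range(k):                         # c = padding-byte count
        m = (need - 4 * c) % 8                 # 1-bits in the mixed byte
        a = (need - m - 4 * c) // 8            # all-1 (0xFF) byte count
        if 1 <= m <= 7 and 0 <= a <= k - 1 - c:
            break
    else:
        raise ValueError("cannot balance; accept less data per block")
    mixed = (1 << m) - 1                       # e.g. m=3 -> 0b00000111
    fill0 = k - 1 - a - c                      # all-0 (0x00) byte count
    return (data + bytes([mixed]) + b"\xff" * a
            + b"\x00" * fill0 + b"\x55" * c)   # 0x55 = balanced padding
```

For example, balancing the two ASCII bytes "AB" into an 8-byte
block yields a block with exactly 32 one-bits; a block of all-0
data bytes is correctly rejected, since only as much plaintext
as can be balanced is accepted.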
Statistical Bit-Balance
Approximate or statistical bit-balance can be achieved
by creating a sort of Vernam cipher to "randomize" the
plaintext. The result can be almost balanced blocks
with no expansion at all. A similar concept is called
"scrambling" in data-communications; various well-known
modem or SONET or other scrambling operations could be
used, with some fixed amount of expansion. Either
way could be a very reasonable approach; the main
disadvantage appears to be the loss of guaranteed exact
bit-balance, which may reduce theoretical clarity.
Alternate Approaches to Bit-Balancing
Modern data-communications has developed transformations
to produce approximately bit-balanced data and so avoid
the need to transport a slow or DC signal component. The
key words here seem to be "bounded disparity encoding," of
which one example is "8b/10b" or "8b10b." Here, 8-bit
values are converted to bit-balanced 10-bit values,
constructed so that no more than 5 consecutive 0's or 1's
ever occur. This is a fixed 25% expansion, independent of
the bit-balance of the source data.
For Dynamic Transposition, various bit-balancing approaches
might be compared on the basis of balance accuracy, data
expansion, computational effort, and so on. Certainly,
Dynamic Transposition could directly encipher the already
bit-balanced low-level data used in data communications
systems. Alternately, Dynamic Transposition might just
use bit-balancing hardware.
BIT SHUFFLING
This is ordinary cryptographic shuffling -- algorithmic
permutation -- of bits. It is of course necessary that
this be done properly, to achieve a balanced probability
for each result, but that is well-known cryptographic
technology. (Again, see:
http://www.io.com/~ritter/KEYSHUF.HTM
.) The usual solution is the well-known algorithm by
Durstenfeld, called "Shuffle," which Knuth II calls
"Algorithm P (Shuffling)," although any valid permutation
generator would be acceptable.
We seek to create every possible permutation with equal
probability, and thus avoid a subset of cipherings that
could have some structure which favors solution.
We can treat the accumulated and balanced block as a bit
vector and walk through it, shuffling just like we might
do with an array of bytes.
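A direct rendering of that bit-vector Shuffle, with the
random-selection step abstracted so any keyed RNG can drive it
(the stdlib random.Random below is only a stand-in):

```python
import random

def shuffle_bits(bits, rand_below):
    """Durstenfeld Shuffle (Knuth's Algorithm P) in place.
    rand_below(n) must return a uniform integer in [0, n) so that
    every possible permutation is equally probable."""
    for i in range(len(bits) - 1, 0, -1):
        j = rand_below(i + 1)
        bits[i], bits[j] = bits[j], bits[i]
    return bits

rng = random.Random(2001)            # stand-in for the keyed large-state RNG
block = [1, 1, 1, 1, 0, 0, 0, 0]     # a balanced 8-bit toy block
shuffle_bits(block, rng.randrange)
```

The shuffle only re-positions bits, so the block's exact
bit-balance survives enciphering unchanged.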
Shuffling Strength
Unfortunately, Shuffle is approximately reversible. If
we use only one shuffling, and if the opponents get the
permutation, they immediately have a (somewhat imprecise)
idea about what sequence produced that permutation. With
two independent shufflings, there is no such implication.
The sequence generating RNG should have sufficient
internal state to allow every possible *pair* of
permutations.
If there is only enough state in the RNG for a single
shuffling, any second shuffling will be correlated to the
first by way of the limited RNG state, and, thus, not
independent. In this way, knowing the final permutation
might allow the development of the double-length shuffling
sequence which produced the known permutation.
The reason for having enough RNG state for *two*
*independent* shufflings is to isolate the sequence
generator from the permutation. Two shufflings use twice
as much information (sequence) as needed to form a
permutation, so two shufflings use twice as much
information as the resulting permutation can represent.
So even if the opponents do get the final permutation,
the uncertainty in the sequence used to build that
permutation will be as though we had a one-way Shuffle.
A well-known information-theoretic "counting" argument
assures that no "reasonable" hash can be reversed,
provided substantially more information is hashed than
the amount of internal state. This is independent of
whether the hash is "cryptographic" or not, and occurs
when the amount of internal state is insufficient to
distinguish among all possible input data vectors. A
similar "insufficient information" argument assures
that double shuffling also cannot be reversed. Both
hashing and double-shuffling can thus be seen as
"information-reducing" functions and major sources of
strength in Dynamic Transposition.
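The counting argument can be checked exhaustively at toy scale.
For n = 3, a single Shuffle maps its 3! = 6 possible
choice-sequences one-to-one onto the 6 permutations, while
double shuffling maps 36 choice-sequences onto those same 6
permutations, 6 sequences each -- so even a known permutation
leaves the driving sequence ambiguous:

```python
from itertools import product
from collections import Counter

def shuffle_passes(n, choices, passes):
    """Durstenfeld shuffle of [0..n) driven by an explicit sequence
    of j-choices, applied `passes` times in succession."""
    v, it = list(range(n)), iter(choices)
    for _ in range(passes):
        for i in range(n - 1, 0, -1):
            j = next(it)
            v[i], v[j] = v[j], v[i]
    return tuple(v)

def all_choice_seqs(n, passes):
    """Every possible j-choice sequence for `passes` shuffles."""
    ranges = [range(i + 1) for _ in range(passes)
              for i in range(n - 1, 0, -1)]
    return product(*ranges)

n = 3
single = Counter(shuffle_passes(n, c, 1) for c in all_choice_seqs(n, 1))
double = Counter(shuffle_passes(n, c, 2) for c in all_choice_seqs(n, 2))
```

Here `single` shows each permutation produced by exactly one
choice-sequence (the reversibility problem), while `double`
shows each produced by n! distinct sequences.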
DECIPHERING
We can easily return an encrypted block to plaintext by
applying Shuffle exchanges in reverse order. Shuffle might
be operated as usual, with the resulting exchange positions
simply buffered. When Shuffle has finished, the actual
data exchanges are made, last position first. Since we
shuffle twice to encipher, we must unshuffle twice to
decipher.
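The buffered-exchange deciphering described above might be
sketched like this; since each exchange is its own inverse,
applying the recorded exchanges last-first undoes the shuffle:

```python
import random

def exchanges(n, rng):
    """Run Shuffle, but record the exchange positions instead of
    moving data."""
    return [(i, rng.randrange(i + 1)) for i in range(n - 1, 0, -1)]

def encipher(bits, rng):
    out = list(bits)
    for i, j in exchanges(len(out), rng):
        out[i], out[j] = out[j], out[i]
    return out

def decipher(bits, rng):
    """Buffer the exchanges, then apply them in reverse order."""
    out = list(bits)
    for i, j in reversed(exchanges(len(out), rng)):
        out[i], out[j] = out[j], out[i]
    return out
```

For a double-shuffled block, both exchange lists would be
generated in encipherment order first, then undone in reverse:
the second list last-first, then the first list last-first.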
ANALYSIS
Dynamic Transposition is clearly a block cipher: Data must
be accumulated into blocks before ciphering can begin. It
is not, however, a conventional block cipher. It does not
emulate a large substitution.
Unlike conventional block ciphers, Dynamic Transposition has
no data diffusion, nor is that needed. It has no S-boxes,
and so has no weak S-boxes. The only mixing involved is the
single mathematically uniform distribution of permutations,
so there is no weak mixing. And whereas conventional block
ciphers need "modes of operation" to randomize plaintext and
thus minimize the ability to mount codebook attacks, Dynamic
Transposition does not.
While conventional block ciphers generally do not scale down
to exhaustively testable size, Dynamic Transposition scales
well. Presumably, it could be made to operate on blocks of
variable size on a block-by-block basis. Each component
also can be scaled down and tested independently.
Design Assumptions
We do of course assume that "large enough" keys will be
used. I have used 992-bit keys, simply because these
are conveniently hashed as 32 separate degree-31 CRC
operations, and then are easily expanded by a degree-32
Additive RNG working on 32-bit values.
Also, when discussing cipher strength, we have a right to
assume the key will be random and independent of other
key values.
Next, we assume that the RNG will have a large internal
state. In particular, the RNG state needs to be larger
than the information needed for double-shuffling, thus
making any pair of permutations equally possible. Even an
Additive RNG of degree 9689 with 310,048 bits of internal
state is insufficient for the 4096-bit (512-byte) block I
would prefer to use. (Two 4k shufflings need about
8192 * 1.25 = 10,240 values, and 10,240 * 1.3 = 13,312 RNG
steps.) By limiting the maximum block size to 2048 bits
(256 bytes), we only need about 2048 * 2 * 1.25 * 1.3 =
6656 steps per double-shuffle, which the deg-9689 RNG
easily handles.
The main idea is to first make direct brute force attacks
on the key *impractical* by making the key "large enough"
and random. That makes it necessary to attack the vastly
larger internal state of the RNG via leaked
information, and we contrive to make that very difficult:
By enciphering as data permutation, we avoid ciphering bias
by creating any possible permutation with equal probability.
By first balancing the data, and by creating a permutation
for each block, the resulting permutation is not exposed
by simple known-plaintext.
Strength is provided by the block size and guaranteed
bit-balance, since, when shuffled, a plethora of different
permutations will take the plaintext block to exactly the
same ciphertext block. There simply is no one permutation
which produces the given ciphertext. Many different
permutations will produce the given ciphertext, and trying
them all is both futile and impractical. So the opponents
will not know the permutation -- even with known plaintext.
And knowing the permutation would seem to be a necessary
(yet not sufficient) condition to attack the internal state
of the RNG.
But even if the ciphering permutation be somehow discovered,
more strength occurs in the fact that another plethora, this
time many different RNG sequences, will create that exact
same permutation. This prevents an opponent from finding
the sequence needed to attack the RNG, even if the
permutation is somehow known.
Large Numbers
It is common, in cipher analysis, to disdain the use of
large-number arguments. In practice, opponents do not even
attempt to attack well-designed ciphers by straightforward
brute force, because such an approach would be hopeless.
Ciphers are instead attacked and broken by finding some
pattern which will discard huge numbers of possible keys
with every test. But such patterns generally occur in the
context of complex iterative cipher systems which *can* have
defects. In contrast, here we use mathematically-correct
permutation algorithms, and no such defect should be present.
Conventional Enciphering Space
First consider a conventional 64-bit block cipher, such as
DES. Such ciphers emulate a substitution table of huge size,
but few people understand just how few of all possible tables
are actually realized. For Simple Substitution, the number of
possible tables is the number of elements or values, factorial:
1. A 64-bit block has 2**64 elements.
2. 2**64 ~= 18446744073709552000
3. 18446744073709552000! ~= 2**(10**21) tables.
4. The DES keyspace is 2**56 tables.
(For detailed computation, try:
http://www.io.com/~ritter/JAVASCRP/PERMCOMB.HTM
.)
One might say that DES has a potential keyspace of 2**(10**21)
keys, of which 2**56 keys are actually selectable. Thus, DES
emulates *ALMOST* *NONE* of the complete substitution keyspace.
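These magnitudes are easy to reproduce; the log-gamma function
gives the factorial's logarithm without overflow:

```python
from math import lgamma, log

elements = 2.0 ** 64                    # values in a 64-bit block
# log(n!) = lgamma(n + 1), so the base-2 log of the table count is:
log2_tables = lgamma(elements + 1) / log(2)
# about 1.15e21 -- i.e., roughly 2**(10**21) possible tables,
# against a selectable DES keyspace of only 2**56
```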
One effect of this is that the usual assumption that the
cipher produces a random selection among all possible tables
will never be proven. And trying to prove that the reachable
tables have the same statistical characteristics as the
universe of all possible tables sounds very hard indeed.
Dynamic Transposition Enciphering Space
Next, consider a Dynamic Transposition cipher with a tiny
64-bit block: the number of possible enciphering permutations
is "just" 64!, or about 2**295.995. But each and every one
of these is possible, and, when properly shuffled, equally
probable. Dynamic Transposition thus has a flat unbiased
distribution which supports proof argument.
With a 64-bit block, there are 2**64 possible block values,
but only about 2**60 of these are balanced:
[ The number of 1's in a value is binomial:
| B(n,k,p) = (nCk) (p**k) (q**(n-k))
| = (64C32) (.5**32) (.5**32)
| = 1.8326E18 * 2.3283E-10 * 2.3283E-10
| = 9.9345E-2
| E = 2**64 * 9.9345E-2 = 1.83259E18 = 2**60.669
or, for binomial computation, try:
http://www.io.com/~ritter/JAVASCRP/BINOMPOI.HTM#Binomial
]
There are only 2**60 possible results, yet 2**296 different
permutations produce those results. So any particular
known-plaintext result is produced by about 2**235.326
different permutations. (In fact, we know these must be
just the permutations of 1's and the permutations of the
0's, of which there should be (32!)(32!), which is about
2**235.327, a good confirmation.)
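The arithmetic above is quickly confirmed:

```python
from math import comb, lgamma, log, log2

log2_balanced = log2(comb(64, 32))     # balanced 64-bit values: ~2**60.67
log2_perms = lgamma(65) / log(2)       # 64! permutations: ~2**296.0
log2_fixing = 2 * lgamma(33) / log(2)  # (32!)(32!) ~ 2**235.33

# permutations per observed (plaintext, ciphertext) pair:
# 2**296 / 2**60.67 ~ 2**235.33, matching (32!)(32!)
```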
Somewhere in the set of 2**235 different permutations which
each take the given plaintext block to the given ciphertext
block is the one which was actually used, which thus might
provide insight into the shuffling sequence and RNG state.
But without information *beyond* known-plaintext, there
would seem to be no way to distinguish the correct
permutation from the rest. Adjacent blocks do not help to
refine this set, because each block has a new construction.
The opponent then looks at 2**235 different permutations
and tries to relate those to the shuffling sequence, in an
attempt to find the correct sequence and eventually expose
the RNG state. But with double shuffling, about 2**296
different shufflings produce any particular permutation,
so the opponent now has 2**531 states to think about.
This appears to be the consequence of reasoning in the
wrong direction through an information-reducing filter.
Ultimately, the permutations in different blocks are
related, but only in the state of the RNG, which is in the
hundred thousand bit range. It is easy to argue that few
of those states are possible -- just those which can be
selected by the key -- but we already assumed the key to
be "large enough" to prevent enumeration or sequential
inspection. Knowing that many potential RNG states are
impossible does not seem to help unless they can be
related in some way to observable values, or otherwise
discarded in large groups.
Primary strength in Dynamic Transposition is gained by
having an enciphering operation which is not exposed by
known-plaintext. Secondary strength is provided by
double-shuffling. At first glance, it seems that only
information which somehow survives passage back through
the information-reduction operations without being
expanded would be worth using to expose the RNG state.
Larger blocks than 64 bits would of course have vastly
larger numbers of permutations and shufflings.
Sequences of Permutations
Any fixed-state RNG must produce sequences in which the
values have some relationship to each other. But both
"Jitterizer" processing and variable range reduction for
shuffling randomly discard RNG values, thus causing the
entire rest of the shuffling sequence to change position.
Thus, very similar RNG sequences will produce radically
different permutations.
Nevertheless, abstractly, sequence relationships must exist.
But if the sequence is hidden and cannot be isolated, it
would seem to be very difficult to use such relationships.
In discussions of *practical* strength, an inability to
externally detect or measure sequence relationships is as
good as not having those relationships in the first place.
Similarly, the sequence of enciphering permutations must
also have structure. But the permutations are both transient
and ambiguous; simply describing the sequence of permutations
in any way which reflects upon the creating sequence would
seem to be well beyond known capabilities.
Key Re-Use and Message Keys
In any real system which uses a RNG confusion sequence, it
is important that any particular encrypting sequence "never"
be re-used. It is also important to be able to use a master
key to send multiple messages without reducing the security
of the system. This issue is known as "key re-use." We can
solve both issues by introducing "message keys."
One way to support the re-use of a master key is to create
some large, random unknowable value which we call a "message
key" and use as the key for data ciphering. Because this
can be a completely arbitrary random value (e.g., from a
noise generator source), we can repeatedly use the same
master key securely. We send message key values to the
other end by enciphering each under an unchanging master
key. At the other end, the message key is first deciphered,
and the exposed random value is used as the key to decipher
the data. This is standard stream-cipher technology.
The necessary advantage of a message key is to support
key-reuse. Another advantage is to prevent attacks which
depend upon having the same key across multiple messages.
But an even greater advantage is to skip past the small
and (probably) structured master or user key to a large,
flat-distributed message key. For the past decade I have
often used random 992-bit message keys.
The concept of a message key pre-dates the similar but more
widely published term "session key."  But a "session" refers to the
establishment of logical connection in a network or system,
a concept which is both unnecessary and extraneous to
ciphering per se. Even when a logical session exists, we
might want to use a different message key on every transmission
in a potentially-long session. Thus, the term "session key"
simply does not capture the cryptographic essence of the
problem.
ATTACKS
Complex and hard-to-refute conventional block-cipher
attacks, such as Differential and Linear Cryptanalysis,
would seem to be completely irrelevant to ciphers of the
Dynamic Transposition type. Where there are no S-boxes,
there are no linear or differential relations in S-boxes.
Where there is no emulated table, there is no linear or
differential relationship in that table. And where there
is no Feistel structure, there is no ability to peel off
Feistel layers. To have a block cipher where we can make
these statements is really quite remarkable.
The classic attack on transposition, "multiple anagramming,"
is avoided by having a new permutation created for every
block. The use of random message keys forces each
message to use a different encryption sequence. This also
prevents the "bit flipping" sort of defined-plaintext
attacks which attempt to reveal the permutation.
(It is of course possible to "bit flip" data in transit:
Normally we expect cipher systems to include some form of
data validation, such as a CRC over all data, with the CRC
result enciphered along with the plaintext.)
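A minimal sketch of such validation, using zlib's CRC-32 as an
assumed concrete choice; the CRC is appended to the plaintext so
it travels inside the enciphered, balanced block, and any bit
flipped in transit corrupts the deciphered CRC:

```python
import zlib

def add_crc(plaintext: bytes) -> bytes:
    """Append a CRC-32 over the data, to be enciphered with it."""
    return plaintext + zlib.crc32(plaintext).to_bytes(4, "big")

def check_crc(block: bytes) -> bytes:
    """Verify and strip the CRC after deciphering."""
    data, crc = block[:-4], int.from_bytes(block[-4:], "big")
    if zlib.crc32(data) != crc:
        raise ValueError("block corrupted or modified in transit")
    return data
```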
The classic sort of codebook attack is avoided first by
having a huge block size, and next by not attempting to
re-use the data transformation. Note that conventional
block ciphers have the first advantage only to a much
smaller extent, and the second advantage not at all, which
is exactly why they have and need "modes of operation."
Dynamic Transposition does not need modes of operation.
Known-plaintext attacks would be a first step in an attempt
to attack the RNG as in a stream cipher. But with Dynamic
Transposition, known-plaintext does not disclose the
enciphering permutation, because a plethora of different
bit-permutations will produce exactly the same ciphertext
from exactly the same plaintext.
Should the opponents somehow find the correct permutation,
attempts to find the shuffling RNG sequence would be
thwarted by double-shuffling.
PROVABLE STRENGTH IN PRACTICE
The mere mention of "mathematical proof" in cipher design
encounters an apparently widespread misconception that a
"proof" must necessarily imply security against
"computationally unbounded" opponents. That assumption
is not only false, but also seems strange, because over 50
years of mathematical cryptography without result should
have taught us to not expect such a proof.
Similarly, the apparently endless search for "P=NP" in
computational complexity has almost nothing to do with
cipher security in practice. If all we needed was a
result -- either P=NP or P<>NP -- we could just assume
that right now and reap the rewards. What is hoped for
is not a proof per se, but instead a new *construction*,
which essentially amounts to hoping for a new general
attack. But could we really expect any sort of attack to
find the correct key among exponential possibilities in
non-exponential time, without exploiting some other weakness
in the cipher?
In practice, we need to defeat real opponents who do have
computational limitations. In practice, we have very few
security guarantees, so even proofs of security against
particular attacks would be significant advances.
Modern ciphers already do have one universally-accepted
proof of statistical strength in practice. Unfortunately,
the implications of this example with respect to the
possibility for other similar proofs have not been widely
recognized.
Statistical Strength Against a Particular Attack
Whenever a cipher uses keys, a correct key always exists.
In that sense, every keyed cipher is "weak" and can never
be proven strong, but *that* definition of strength is not
particularly useful. With respect to keys, we use a
statistical definition of strength, which may be useful in
general.
In this statistical sense, virtually all modern ciphers
have -- and use -- a mathematical proof of statistical
strength in practice against the particular attack of
brute force on the key. All we need to completely
defeat such an attack is a "large enough" keyspace. The
argument is simple, fundamental to all keyed ciphers of
every type, does not depend upon unproven assumptions,
and does not use either number theory or computational
complexity. The proven strength of a large keyspace is
a wonderful example of an attractive form of strength
proof against a particular attack.
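The keyspace argument is pure counting, and can be made concrete
with a few lines of arithmetic. The trial rate below is an
illustrative assumption only, chosen generously in the
opponents' favor:

```python
# Average brute-force effort on a k-bit key: 2**(k-1) trials.
TRIALS_PER_SEC = 1e15              # assumed rate, purely illustrative
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def avg_years(k):
    """Expected years to brute-force a k-bit key at the assumed rate."""
    return 2 ** (k - 1) / TRIALS_PER_SEC / SECONDS_PER_YEAR

for k in (56, 128, 256):
    print(f"{k}-bit key: about {avg_years(k):.3g} years on average")
```

A 56-bit key falls in seconds at this rate, while a 128-bit key
already requires on the order of 10^15 years; no assumption
beyond counting the keys is involved.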
There is no reason to believe that brute force is the only
attack which could be addressed by proof. And since the
universally-accepted example uses little advanced math,
there would seem to be no reason to believe that advanced
math is necessary for such proofs. Consequently, it seems
unlikely that expanding mathematical knowledge will solve
this sort of proof problem.
It may be that current mainstream cipher architectures
are just not well suited to proven strength against
particular attacks. That possibility would seem to make
new ciphers an academic opportunity, rather than the
pointless exercise they are generally claimed to be.
Information Theoretic Strength
Some cipher components -- hashing and other information-
reducing operations -- have a form of irreversibility
proof, based on "insufficient information" and counting
arguments. Strength occurs when external results simply
do not contain sufficient information to distinguish all
possible input values.
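The counting argument is just the pigeonhole principle. As a
sketch (truncated SHA-256 here is only a stand-in for any
information-reducing function):

```python
from hashlib import sha256

# 4096 distinct 2-byte inputs feed a 1-byte external result, so
# the result cannot possibly distinguish all inputs.
preimages = {}
for x in range(4096):
    h = sha256(x.to_bytes(2, "big")).digest()[0]   # 8-bit result
    preimages.setdefault(h, []).append(x)

# 4096 inputs over at most 256 outputs: some output has at least
# 16 preimages, so observing that output still leaves at least
# 16 equally valid candidate inputs.
print(max(len(v) for v in preimages.values()) >= 4096 // 256)   # True
```

This is the sense in which "insufficient information" yields
irreversibility: the proof needs no assumption about the
opponents' computing power.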
Any cipher is limited by the uncertainty of the main key.
We avoid this in practice by making the key "large enough."
The implication of this is that the key cannot be attacked
from the "front end," but instead must be deduced from the
ciphering transformation and "back end" leakage.
The state for any "linear" RNG can be exposed, provided we
have as much known data as the RNG state, and provided also
that we know the "position" of the known data with respect
to the RNG. But Dynamic Transposition hides both: a
"Jitterizer" obscures value and position, and hashing
obscures value. Shuffling implies range-reduction, which
is itself a form of hashing and so of value hiding.
Range-reduction should be implemented by "rejection," which
also discards values and thus affects position. All of
these operations tend to isolate the RNG from any knowable
"back end" sequence.
The use of a large-state RNG to drive "back end" processing
means there is vastly more state which must be deduced.
If it can be shown that Dynamic Transposition with double-
shuffling "leaks" at most some fraction of the RNG sequence,
we have the basis for an overall proof of strength: We just
arrange to re-key the RNG before sufficient information is
leaked to allow the RNG state to be reconstructed. When
combined with "large enough" keys, the result would be the
long-desired proof of statistical strength in practice.
SUMMARY
A novel, relatively easy-to-design and easy-to-analyze type
of cipher has been presented. There would seem to be ample
reason to consider Dynamic Transposition to be yet another
"in practice unbreakable" cipher, when properly implemented.
Nothing in modern cryptography prevents us from creating
mathematical proofs of statistical security against
particular attacks. Provable cipher security in practice
also might be attained if limits for "information leakage"
through information-reducing functions could be defined.
The unusual clarity and generality of the simple permutation
ciphering operation would seem to make Dynamic Transposition
a good basis for investigating practical cipher strength
proofs.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************