Cryptography-Digest Digest #546, Volume #13 Thu, 25 Jan 01 03:13:01 EST
Contents:
Re: using AES finalists in series? (wtshaw)
Re: Why Microsoft's Product Activation Stinks (wtshaw)
Re: Transposition code (Benjamin Goldberg)
Re: Fitting Dynamic Transposition into a Binary World (Terry Ritter)
Re: Dynamic Transposition Revisited (long) (Terry Ritter)
Re: Fitting Dynamic Transposition into a Binary World (Benjamin Goldberg)
Re: Knots, knots, and more knots (Matthew Montchalin)
Re: Dynamic Transposition Revisited (long) (Terry Ritter)
Re: Dynamic Transposition Revisited (long) (Terry Ritter)
Re: Dynamic Transposition Revisited (long) ("John A. Malley")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: using AES finalists in series?
Date: Wed, 24 Jan 2001 23:08:27 -0600
In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:
> wtshaw wrote:
> > You too?? What will we do with so much expertise? I reflect that Herr
> > Ritter already sees efficiency as important, while so many follow Carson's
> > Rule, to fill all available space as quickly as possible. It stands to
> > put reasonable restraint of endless data waste clearly ahead of some small
> > additional amounts used to greatly increase security. Repeating a flawed
> > maxim that encryption must suffer under impossible limitations to keep it
> > kosher is, respectfully, too picky.
>
> If that was supposed to pertain to the project I'm working on,
> you are all wet. The constraints are not arbitrarily imposed.
Arbitrary means not standardized. What I complain about are de facto
standards that limit security.
--
Some people say what they think will impress you, but ultimately
do as they please. If their past shows this, don't expect a change.
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: talk.politics.crypto,misc.survivalism
Subject: Re: Why Microsoft's Product Activation Stinks
Date: Wed, 24 Jan 2001 23:12:32 -0600
In article <94lk6k$790$[EMAIL PROTECTED]>, zapzing <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (wtshaw) wrote:
> > In article <94i1dd$2nd$[EMAIL PROTECTED]>, zapzing
> <[EMAIL PROTECTED]> wrote:
> >
> > > Void where prohibited by law.
> > >
> > Couldn't that get you in trouble?
>
> I don't think so, what do you think?
> Do you have any info on this?
>
Voiding, like praying, is best done in private.
--
Some people say what they think will impress you, but ultimately
do as they please. If their past shows this, don't expect a change.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Transposition code
Date: Thu, 25 Jan 2001 05:52:06 GMT
From your post, I wrote the following:
nr = strlen( txt ) / keylen;
lr = strlen( txt ) / nr;
for( i = j = 0; i < keylen; ++i )
    for( k = 0, n = key[i]; k < (i<lr?nr:(nr+1)); ++k )
        out[j++] = txt[n * keylen + k];
However, it doesn't seem to work quite right.
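If the intent was the usual columnar transposition (write the text row by
row into a grid of keylen columns, read the columns out in key order),
three things look off in the fragment above: lr should be the remainder
strlen(txt) % keylen, the longer columns are those with index below lr
(they hold nr+1 characters, not nr), and the row/column roles in the
txt[] subscript are swapped. One possible repair, offered only as a
guess at the intent, with `transpose` as an invented name:

```c
#include <string.h>

/* Columnar transposition: txt is written row by row into a grid of
 * keylen columns, then the columns are read out in the order given
 * by key[], a permutation of 0..keylen-1. */
void transpose(const char *txt, const int *key, int keylen, char *out)
{
    int len = strlen(txt);
    int nr = len / keylen;       /* full rows                        */
    int lr = len % keylen;       /* leftover: columns one char longer */
    int i, k, j = 0;

    for (i = 0; i < keylen; ++i) {
        int col = key[i];                     /* column to read next  */
        int rows = (col < lr) ? nr + 1 : nr;  /* this column's height */
        for (k = 0; k < rows; ++k)
            out[j++] = txt[k * keylen + col]; /* row k, column col    */
    }
    out[j] = '\0';
}
```

With txt = "WEAREDISCOVERED", keylen 5 and key {2,0,3,1,4}, this reads
out "ASRWDVRCEEIEEOD".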
--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Thu, 25 Jan 2001 06:23:15 GMT
On Thu, 25 Jan 2001 00:10:44 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:
>On Wed, 24 Jan 2001 20:56:25 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote,
>in part:
>
>>Is there some reason why you could not use the algorithm in my
>>"revisited" article?
>
>I'm sure that I'm the only one who really finds that method inadequate
>for his purposes.
>
>As I understand it, your algorithm is:
>
>Given a block size of N bytes:
>
>take N-1 bytes of data. If that data has 7 or fewer excess 1s or 0s,
>add an appropriate last byte.
>
>If the excess is more than that, use only the first N-2 bytes, and
>rectify the excess in the last two bytes.
>
>I suppose you could use alternating all ones and all zeroes bytes in
>the case where the excess is all in the last byte.
Since the description in my "Revisited" article is not working, and
since -- for some reason -- I am obviously not getting through,
perhaps someone else could help out here.
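Since Ritter says the summary above is not his algorithm, the following
is only a toy sketch of the balancing primitive itself -- append filler
bytes until the 1-bits and 0-bits match -- and not the "Revisited"
method; `bits` and `balance` are invented names:

```c
/* popcount of one byte */
static int bits(unsigned char b)
{
    int n = 0;
    while (b) { n += b & 1; b >>= 1; }
    return n;
}

/* Append bytes to buf (len used, cap total) until 1-bits equal
 * 0-bits.  Whole 0x00/0xFF bytes absorb excess eight bits at a
 * time; one final partial byte absorbs the remainder (the excess
 * of a byte-aligned buffer is always even, so this lands exactly).
 * Returns the new length, or -1 if cap is too small.  A real
 * encoder must also record how much filler was added so a decoder
 * can strip it; that bookkeeping is omitted in this toy. */
int balance(unsigned char *buf, int len, int cap)
{
    int ones = 0, i, excess;
    for (i = 0; i < len; ++i)
        ones += bits(buf[i]);
    excess = 2 * ones - 8 * len;          /* #1s minus #0s */

    while (excess > 8 || excess < -8) {
        if (len >= cap) return -1;
        buf[len++] = (excess > 0) ? 0x00 : 0xFF;
        excess += (excess > 0) ? -8 : 8;
    }
    if (excess != 0) {                    /* |excess| <= 8, even  */
        int k = (8 - excess) / 2;         /* 1-bits still needed  */
        if (len >= cap) return -1;
        buf[len++] = (unsigned char)(0xFFu >> (8 - k));
        /* k low 1-bits: k = 0 gives 0x00, k = 8 gives 0xFF */
    }
    return len;
}
```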
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 06:25:05 GMT
On Thu, 25 Jan 2001 04:10:52 GMT, in
<gHNb6.5111$[EMAIL PROTECTED]>, in sci.crypt "Matt
Timmermans" <[EMAIL PROTECTED]> wrote:
>[...]
>Generate a photon, and polarize it vertically. Then measure its
>polarization at 45 degrees from the vertical. Repeat.
>
>By measuring the transparency of your optics, the sensitivity of your
>photomultipliers, and the orientation of your polarizers, you can place a
>very confident lower bound on the rate of real randomness.
In the language of statistics, a confidence level is some
*probability* of being correct -- which also implies some probability
of error. That is never absolute certainty, and so is never proof.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Thu, 25 Jan 2001 06:38:33 GMT
Terry Ritter wrote:
>
> On 24 Jan 2001 05:41:18 GMT, in <94lptu$[EMAIL PROTECTED]>, in
> sci.crypt [EMAIL PROTECTED] (Kenneth Almquist) wrote:
>
> >[...]
> >These algorithms can be executed moderately efficiently. The
> >combinatorial calculation used to determine the number of balanced
> >strings with a given prefix can be precomputed and stored in a
> >table. At the cost of using larger tables, we could make the
> >decoding algorithm process multiple bits of the balanced string
> >at a time. However, there is no obvious way to generate a balanced
> >string without doing one bit at a time.
>
> I simply do not understand where you guys are going with this.
>
> Is there some reason why you could not use the algorithm in my
> "revisited" article? It does bit-balance arbitrary data into a
> fixed-size block, is not limited in block size (and thus gains
> efficiency with large blocks), and decoding is trivial. Also, it does
> function byte-by-byte, not bit-by-bit.
For various reasons, it is preferable for both encode (raw->balanced)
and decode (balanced->raw) to be bijections -- the simplest reason being
that if one does encode/encrypt/decode, there will be minimal expansion.
(1)
Suppose that the bit-by-bit encoding method I suggested is used -- if
the raw data is unbiased, then there is exactly 1 bit of expansion in
the encode function. After enciphering, there is a minimum of 1 bit of
compaction in the decode function. I'm sure that you can see that, on
average, there will be 0 ciphertext expansion if encode/encrypt/decode
is done. Unfortunately, this is probabilistic, and there is no guarantee
of not expanding.
(2)
Suppose that one of the fixed-length-to-fixed-length methods is used,
specifically, one that maps N-bit-long raw strings onto M-bit-long
balanced strings, with encode being one-to-one but not onto if the
codomain is defined as the set of all M-bit balanced strings.
With something like this, a user would encode a string, then repeatedly
encipher it until the decode function is defined for that balanced
string, and then decode it. With this method, we are guaranteed that
there will be no ciphertext expansion. The drawback is of course
that there is no guarantee that encryption will take place in constant
time. This might open the cipher up to timing attacks.
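Method (2) can be illustrated with a deliberately tiny toy: 2-bit raw
values, 4-bit balanced strings (six exist, only four are in encode's
image), and one fixed permutation of the balanced strings standing in
for the keyed cipher. Every name and size here is invented for the
sketch:

```c
/* The six bit-balanced 4-bit strings, as integers. */
static const int bal[6] = { 0x3, 0x5, 0x6, 0x9, 0xA, 0xC };

/* encode: raw value 0..3 -> one of the first four balanced
 * strings; one-to-one, but not onto all six. */
static int encode(int raw) { return bal[raw]; }

/* decode: defined only on encode's image (first four entries). */
static int decode(int b)
{
    int i;
    for (i = 0; i < 4; ++i)
        if (bal[i] == b) return i;
    return -1;                           /* decode undefined here */
}

/* Toy stand-in for the keyed cipher: one fixed permutation of
 * the six balanced strings (step forward along a 6-cycle). */
static int encipher(int b)
{
    int i;
    for (i = 0; i < 6; ++i)
        if (bal[i] == b) return bal[(i + 1) % 6];
    return -1;
}

static int decipher(int b)               /* inverse permutation */
{
    int i;
    for (i = 0; i < 6; ++i)
        if (bal[i] == b) return bal[(i + 5) % 6];
    return -1;
}

/* Encrypt as described: encipher repeatedly until decode is
 * defined for the result.  No ciphertext expansion, but the
 * round count varies -- the timing leak noted above. */
int encrypt(int raw, int *rounds)
{
    int b = encode(raw);
    *rounds = 0;
    do { b = encipher(b); ++*rounds; } while (decode(b) < 0);
    return b;
}

/* Decrypt: step the inverse permutation back to the first
 * decodable string, which must be the original encoding (the
 * intermediate strings were all undecodable, or encryption
 * would have stopped there). */
int decrypt(int c)
{
    int b = c;
    do { b = decipher(b); } while (decode(b) < 0);
    return decode(b);
}
```

Note that encrypt's round count depends on the plaintext, which is
exactly the non-constant-time behavior the post warns about.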
--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"
------------------------------
From: Matthew Montchalin <[EMAIL PROTECTED]>
Subject: Re: Knots, knots, and more knots
Date: Wed, 24 Jan 2001 22:39:13 -0800
On Wed, 24 Jan 2001, Matthew Montchalin wrote:
|Imagine you have this very long rope, and you've got this machine
|with two holes, where you feed the rope completely through the
|machine, and before it comes out, it will either be knotted, or
|unknotted, depending on the state of a punch-card (ahem) that has
|been inserted into the machine in advance. Let us further assume
|that the machine does not stretch the rope longer than it was
|to start with, and does not shorten it in any way.
|
|Starting with this simple setup, is it reasonable to describe
|complexity by the knots per unit length of rope, multiplied by
|the operations specified on the punch card?
Thus, supposing a rope can be represented by a continuous but
finite (because we have to keep it practical) series of bits,
e.g.,
0000000000000000
and always knot it over in a particular place, e.g.,
0000000000000001
then we have described (in this instance) a kind of knot, here
one that results from adding 1 bit (or maybe performing a logical
OR of 1 bit in the last place). If, as a result of this operation,
another bit is switched somewhere else, then a different kind of
"knotting" is being performed. Does anybody remember when I looked
into the difference of doing BCD mathematical operations on a long
seed, and comparing the iterations with straight binary math on the
same seed? That was back around June, I think. There is bound
to be a simple way of describing and categorizing the kinds of
permutations according to a "knotlike" systematization.
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 07:11:39 GMT
On Tue, 23 Jan 2001 23:57:25 -0800, in
<[EMAIL PROTECTED]>, in sci.crypt "John A. Malley"
<[EMAIL PROTECTED]> wrote:
>Terry Ritter wrote:
>>
>[snip]
>>
>> Surely, there is no reason to imagine that permutations must all occur
>> before repeating. In fact, that would be a weakness.
>
>Yes, and this is where I was going in the examination of the strengths
>of the DTC :
>
>What are the effects on the "strength" of the DTC if the PRNG selecting
>the permutations (via a shuffling algorithm or some equivalent) must
>cycle through every possible permutation once before any particular
>permutation appears again?
Fine, but note that your assumed situation does not represent the
Dynamic Transposition design in the "Revisited" article. Please be
careful not to open "Red Herring" arguments.
If the only thing we look at is the permutations, there is no
"permutation of permutations" cycle. No permutation is prevented from
re-occurring adjacent to itself. And, while the RNG does of course
have some cycle length, that value is far beyond what can occur.
>Can the statistics of permutation types
>(what type follow what type, how many of each type can occur, what types
>can never follow what types) be exploited in concert with known
>plaintext to predict with sufficient probability the likely permutations
>to follow?
Clearly, the design cannot put out every possible sequence of
permutations. But that does not mean that the various permutations
are not equally probable to the extent that one could measure them.
In particular, one must ask how they could be measured at all. The
permutations are not exposed by ciphering, or by known-plaintext, so
how could any imbalance be exploited?
>How much plaintext would be needed to get predictions better
>than 50/50? IMO the answers to these questions gauge the strength of the
>DTC and allow quantitative comparison to other ciphers.
>> The design goal is to allow the very same permutation to occur on the
>> next block, and then rely on the almost infinitesimal probability of
>> any particular permutation occurring to be assured that it will almost
>> never happen. The goal is to make the permutation selection for each
>> and every block independent, with equal probabilities.
>
>This is a very important 'engineering' constraint on the PRNG driving
>the permutation selection mechanism in the DTC. And AFAIK there is no
>PRNG that satisfies this constraint.
You need to read more carefully: A "design goal" is not a constraint;
it is a goal.
The desire to have a design in which each block is an independent
selection among all possible permutations obviously cannot be achieved
to perfection, but it can be approximated. To the extent that the
approximation exceeds what can be measured or exploited, there would
seem to be little point in doing better.
To have a mechanism in which deviations from optimality are hidden is
essentially to have a cipher; that is what we do: we hide imperfection
behind curtains and doors.
Having a structure in which deviations from perfection approach zero
as we increase the size is a conventional proof technique.
One way we approach the production of independent permutations is by
having a far-larger RNG output value than will be used in shuffling.
The result is that a vast number of different RNG output values will
produce exactly the same shuffling value, and a vast number of
different sequences will produce the same permutation.
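That many-to-one reduction can be sketched generically. This is not
Ritter's actual Additive RNG or "jitterizer" -- the fold, the stand-in
LCG, and all names are illustrative only:

```c
#include <stdint.h>

/* Collapse a 32-bit RNG word to 12 bits, so 2**20 distinct RNG
 * outputs land on each 12-bit shuffling value. */
static unsigned fold12(uint32_t r)
{
    return (unsigned)((r ^ (r >> 12) ^ (r >> 24)) & 0xFFFu);
}

/* Stand-in 32-bit generator (a basic LCG) for demonstration. */
static uint32_t demo_state = 12345u;
static uint32_t demo_rng(void)
{
    demo_state = demo_state * 1664525u + 1013904223u;
    return demo_state;
}

/* Shuffle up to 4096 bit elements with Fisher-Yates, drawing one
 * folded value per element; the final % is the range reduction. */
void shuffle_bits(unsigned char *bit, int n, uint32_t (*rng)(void))
{
    int i;
    for (i = n - 1; i > 0; --i) {
        unsigned j = fold12(rng()) % (unsigned)(i + 1);
        unsigned char t = bit[i]; bit[i] = bit[j]; bit[j] = t;
    }
}
```

Since many RNG words fold to each 12-bit value, and many 12-bit values
reduce to each index, observing an exchange constrains the RNG state
only very weakly -- which is the point being made above.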
>AFAIK (and I readily admit what I
>do know about cryptology is less than what I DON'T know about
>cryptology):
That is true for us all.
>A PRNG = ( S, s_0, T, U, G) where S is a finite set of states, s_0 is
>the initial state of the PRNG and an element of S, T is the state
>transition function T: S -> S mapping the elements of S to the elements
>of S, U is a finite set of output symbols and G is the output function
>G: S -> U mapping the elements of the set S to the elements of the set
>U.
First of all, this is a standard state-machine model.
Unfortunately, this model does not completely fit an Additive RNG, at
least in the sense of providing insight as to which RNG values are
going to be correlated. In an Additive RNG, adjacent values are not
likely to be correlated (other than in the most abstract sense), but
values which are separated by the distance between recursion
polynomial feedback terms will be equally influenced by some other
value. Eventually this fact can be used to expose the state of the
linear RNG, provided we have sufficient data which corresponds to the
linear output. In this particular case, we also have: a nonlinear
filter ("jitterizer"), double-shuffling, and range reduction both to
and inside the shuffling; all of these stand in the way of collecting
that information.
S is contrived to be extremely large: 310k bits. So even though in
theory the content of S must repeat sometime, in practice that
sometime cannot be reached. Any issues which require cycle traversal
would seem to be irrelevant.
If we do the best we can with the model, probably the item of most
interest is G. In an Additive RNG, this is a severe reducing
function. It takes S, the full state of the RNG (310k bits) into the
12 bits we need to send to the shuffling of a 4k-element array.
Consequently, about 2**310,036 different RNG states will produce the
exact same 12-bit shuffling value. (Here we ignore the nonlinear
filter.)
>The current state is a function of the previous state. The current
>output of the PRNG is a function of the current state. Now the order of
>U cannot exceed the order of S. If the |S| = |U| then there's a
>one-to-one correspondence between the states and the outputs of the PRNG
>through the function G.
Well, the correspondence is 1:1 to the *input* of G, but G is a
reducing x:1 function. (In these terms G is actually quite severe:
The internal state S is 9689 * 32 = 310,048 bits. Of this we will
eventually use at most 12 in a single shuffling selection. So G is
2**310,048 : 2**12. G is a sort of hash.)
>If the order of U is less than the order of S,
>then multiple states map to the same element in the output set U and the
>function G is a surjection. A subset of S maps to the same element u_i
>in U. We see multiple occurrences of the same output symbol u_i in the
>output sequence from the PRNG.
Right.
>The function T takes the current state and maps it to the next state -
>i.e. state feedback. Now I *think* the longest possible output sequence
>of states ever possible when the previous state determines the next
>state occurs if and only if T : S -> S is bijective (and thus a
>permutation on S) AND that permutation of S (which T is) has a cycle
>decomposition into only one cycle, an |S|-cycle, where |S| is the order
>of the set S.
While abstractly true, that is not the case here. An Additive RNG
does not create a maximal-length sequence in terms of its state.
Instead, an Additive RNG creates a huge number of distinct cycles,
each of which is "long enough." It is not necessary to have "the
longest possible output sequence" when the sequence we do get vastly
exceeds whatever we need.
How long is the cycle? For the Additive RNG, Marsaglia proves period
((2**r)-1)*(2**(n-1)) for degree r and width n. Here we have r as
9689 and n as 32. So we have the cycle length as (2**9689 - 1) *
(2**31) ~= 2**9720, which of course is some binary value of about
9,721 bits. We will not traverse 2**9720 steps in an afternoon.
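As a quick check on that cycle-length arithmetic (`period_log2` is an
invented helper; the fraction of a bit lost to the -1 term is
negligible):

```c
/* Period of Marsaglia's Additive RNG is ((2**r)-1) * 2**(n-1);
 * since log2(2**r - 1) is r to within a negligible fraction,
 * log2 of the period is r + n - 1.  For r = 9689, n = 32 this
 * gives 9720, i.e. a cycle length of about 2**9720. */
int period_log2(int r, int n)
{
    return r + n - 1;
}
```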
>There are |S| start states for the PRNG so there are only |S|
>maximal-length sequences of state the PRNG can produce. Each unique seed
>value s_i makes a unique state sequence. If the mapping from S to U is
>surjective then elements of U (the u_i ) can occur multiple times in
>the output sequence from the PRNG as described here.
>
>(All of this description is from the tutorial by Pierre L'Ecuyer in his
>paper "Uniform Random Number Generation", Annals of Operation Research,
>1994 at his web site.)
It's a classic state-machine description.
>So the current output of the PRNG is a function of the current state of
>the PRNG. And the current state of the PRNG is a function of its past
>state. Let the current output of the PRNG determine the permutation to
>use on the current N-bit block in the DTC.
>
>Now we want the permutation selection on the current N-bit block to be
>independent of the permutation selections on past blocks.
>Independence by definition means the outcome of this trial (selecting a
>permutation to use on this block) does not depend on the outcome of any
>previous trials (selections of permutations used on past blocks.) But,
>the permutation selection for the current N-bit block is a function of
>the current output of the PRNG. The current output of the PRNG is a
>function of the current state of the PRNG. And the current state of the
>PRNG is a function of its previous state. These events are NOT
>independent. What happened in the past does affect the current state.
>And since states map to PRNG outputs, and outputs map to permutations,
>the permutation outcomes in the past will affect the current permutation
>outcome - some outcomes will be more likely than others depending on
>what states in the PRNG already occurred.
Actually, successive Additive RNG output values will tend to be
"fairly" independent, because of the structure of the generator.
Even if that were not so, RNG value independence is *approached* by
the action of G which is 2**310,036 : 1. Such correlation as does
occur between values is thus massively reduced and hidden by the
severe hash action of G.
Similarly, when we shuffle, there is also a substantial reduction
(from about 10k 12-bit values or 120k bits to a 4k-element shuffle
representing about 43k bits). Again, there is no 1:1 relationship,
but instead a roughly 3:1 reduction or hash.
Nor can the issue be about whether the produced permutations are
independent in the abstract. The whole point of a cipher design is to
take the imperfections of a real design and hide them in such a way
that they cannot be exploited.
>> We can see the selected permutation as a "value," in a sequence of
>> values, exactly the same way we get random values from an RNG, or the
>> way we think of sequences as some number of symbols, each one chosen
>> from a set. It is a weakness for a random generator to produce a
>> value which will not then re-occur until the generator recycles.
>
>Yes, I agree, and the function G described above can map a subset K of
>the states of S to a single output value u_i, so the number of times u_i
>appears in the output sequence of the PRNG is |K|. So it is possible
>for the same permutation of the N-bit bit-balanced block to occur
>multiple times in the DTC. Thanks for pointing this out to me.
>
>I disagree with the assertion that we can ever "make the permutation
>selection for each and every block independent, with equal
>probabilities."
As far as I know, I made no such assertion. I think this is it:
>>The goal is to make the permutation selection for each
>>and every block independent, with equal probabilities.
So what I really said was that such was *the goal*. In real
cryptography, we often do not achieve such goals, but may well
approach them sufficiently so the difference cannot be distinguished.
I think this is such a case.
It is, however, disturbing to see an argument made which deliberately
takes a statement out of the original context so that it can be easily
dismissed. Now, I know this is a common academic disease, but I will
not continue a discussion in which that is the mode of argument.
'nuff said.
>AFAIK this cannot be done with a PRNG no matter how
>many states over and beyond the N! required to get multiple occurrences
>of the same permutations for each of the N! permutations of the N-bit
>block. The current permutation is always dependent on the past
>permutations since the current state of the PRNG is always dependent on
>the past state.
>
>I do find this cipher interesting. I've never seen anything like this
>before - the combination of the bit-balanced block with the
>permutation. It's like a new kind of one-way function. It's easy to get
>the output ciphertext block given the input plaintext and the
>permutation, but its "hard" to determine the permutation given the
>plaintext and the ciphertext. I've been mulling that over today...how
>to relate this to other 'one-way functions'. I use the quotes only
>because the existence of one-way functions has never been proved :-)
Indeed.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 07:13:31 GMT
On Wed, 24 Jan 2001 11:10:01 +0100, in
<[EMAIL PROTECTED]>, in sci.crypt Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:
>Terry Ritter wrote:
>>
>> [EMAIL PROTECTED] (John Savard) wrote:
>>
>[snip]
>> >The "weakness" is:
>> >
>> >The set of permutations of n bits
>> >
>> >considered as a subset of the set of one-to-one and onto mappings from
>> >the set of bit-balanced blocks of n bits to itself
>> >
>> >is a subgroup of those mappings, and therefore is not a generator of
>> >the entire set of mappings.
>>
>> All of which is not a problem, because the actual permutation which
>> encrypted the data is hidden in that clump. There is no information
>> to distinguish the correct one.
>>
>> You might as well say there is a problem because one could read the
>> RNG state and thus know the keying of the cipher. We assume that
>> state is kept secure.
>>
>> There is no way to distinguish the correct permutation from the huge
>> group which can generate the same transformation. And it is only the
>> correct permutation which leads back (eventually) to the shuffling
>> sequence.
>>
>> A weakness which cannot be exploited is no weakness at all.
>
>Having read the ongoing discussions quite a bit, I am
>afraid I am yet very confused. As I wrote previously,
>since the individual operations done in performing the
>permutation does not reveal the exact values of the PRNG
>output used to do that, this 'indirectness' helps to
>a large extent to shield the PRNG sequence from being
>determined by the opponent. This is certainly beneficial.
>But the necessity of bit balancing, which seems to be
>considered to be major factor leading to the strength
>of DT, remains fairly unclear to me. Consider we have
>two bit strings 0011 and 0111, the one balanced, the
>other not. I show someone the sequences 0101 and 1011
>and tell him that these are the results of certain
>permutations from certain original sequences with the
>help of a PRNG, how can he obtain more information
>about the PRNG in one case than the other, if he doesn't
>know the original sequences? The situation doesn't
>change, I suppose, even if he knows the original
>sequences. To make my point clear, consider the extreme
>case 1111. If I perform any permutation, then the result
>remains 1111, from which others certainly can't know
>which permutation operations I have done. So the point
>of bit balancing seems to boil down in my conjecture to
>not letting the opponent know the frequency distribution
>of the original sequence, namely the proportion of 0-bits
>to 1-bits in it and is inherently disassociated with the
>issue of predictability of the PRNG. Am I right or am I
>on an entirely wrong track of thought? Many thanks in
>advance.
But suppose the block is not 1111 but instead 1000. In that case, one
complete bit-move would be fully exposed. So if we had only a single
shuffle, that would expose one shuffling value. (Of course, the
shuffling value is only a portion of the value from the RNG, and the
rest is unknown. And if we double-shuffle, a known move doesn't
expose anything.) The goal was to close every possible door to
weakness at every level.
In a sense, I guess the question is asking: "Is bit-balancing worth
doing?" As I recall, the bit-balancing part is not a major overhead.
The major cost is the shuffling -- for each bit-element we have to
step the RNG, do nonlinear processing, fold and mask for the shuffling
routine, which itself has to handle range reduction, and finally do a
bit-exchange. So there would seem to be little advantage in leaving
the bit-balancing out.
The only reason for using this cipher is if we can believe that it is
effectively unbreakable. We don't want to scrimp on strength.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited (long)
Date: Wed, 24 Jan 2001 23:47:34 -0800
Terry Ritter wrote:
>
[snip some good stuff]
> >
> >I disagree with the assertion that we can ever "make the permutation
> >selection for each and every block independent, with equal
> >probabilities."
>
> As far as I know, I made no such assertion. I think this is it:
>
> >>The goal is to make the permutation selection for each
> >>and every block independent, with equal probabilities.
>
> So what I really said was that such was *the goal*. In real
> cryptography, we often do not achieve such goals, but may well
> approach them sufficiently so the difference cannot be distinguished.
> I think this is such a case.
>
> It is, however, disturbing to see an argument made which deliberately
> takes a statement out of the original context so that it can be easily
> dismissed. Now, I know this is a common academic disease, but I will
> not continue a discussion in which that is the mode of argument.
> 'nuff said.
I meant no disrespect and apologize if I offended you.
I did not deliberately take the statement out of the original context in
an attempt to easily dismiss it. I am not attempting to dismiss the DTC.
I am only attempting to explore ways to describe its "strength". Your
replies to my posts have helped me better understand the strengths of
the DTC.
I appreciate the difference between "goal" and actual achievement as you
explained in your response. I thought you were asserting DTC achieved
permutation selection for each and every block independent from other
blocks, and with equal probabilities, due to your previous post in this
thread, where it's stated
> There are (N!)**S possible sequences of permutations, of sequence
> length S.
I understood this statement to mean that each N-bit block gets one of N!
possible permutations applied to it, independent of any preceding or
following block, for S blocks that make up the message.
Thank you for replying to my posts, thanks for the dialogue. I do hope
we will continue to correspond.
John A. Malley
[EMAIL PROTECTED]
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************