Cryptography-Digest Digest #526, Volume #13 Tue, 23 Jan 01 00:13:01 EST
Contents:
Re: Kooks (was: NSA and Linux Security) ([EMAIL PROTECTED])
Re: Kooks (was: NSA and Linux Security) ([EMAIL PROTECTED])
Re: Dynamic Transposition Revisited (long) (John Savard)
Re: Transposition code (Benjamin Goldberg)
Re: Fitting Dynamic Transposition into a Binary World (Benjamin Goldberg)
Re: 32768-bit cryptography (Jerry Coffin)
Re: Fitting Dynamic Transposition into a Binary World (John Savard)
Re: Fitting Dynamic Transposition into a Binary World (Benjamin Goldberg)
Re: Block algorithm on variable length without padding - redux ("Matt Timmermans")
Re: Easy question for you guys... (Benjamin Goldberg)
Re: Fitting Dynamic Transposition into a Binary World (John Savard)
Re: 32768-bit cryptography ("Joseph Ashwood")
Re: Dynamic Transposition Revisited (long) (Terry Ritter)
Re: Dynamic Transposition Revisited (long) (Terry Ritter)
Re: Kooks (was: NSA and Linux Security) ("Scott Fluhrer")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Kooks (was: NSA and Linux Security)
Date: Mon, 22 Jan 2001 23:57:30 GMT
In article <[EMAIL PROTECTED]>,
Darren New <[EMAIL PROTECTED]> wrote:
> > There is no parallel with those who already had intimate knowledge of
> > the 13th amendment in their days.
>
> Last I looked, the Constitution named the
> Supreme Court as the final judge
> of what the constitution means. Hence,
> arguing here that the "true" 13th
> ammendment is being ignored is rather
> silly. If the Supreme Court justices
> don't believe it's there, it is for all
> intents and purposes not there, even
> if it *was* ratified, yes?
Given that it wasn't ratified, the distinction is moot, but the Supreme
Court has at least strongly implied that it wasn't ratified.
In 1847, Supreme Court Associate Justice Levi
Woodbury wrote there were "only twelve amendments
ever made to" the Constitution, and nobody quibbled
with his numbers; Waring v. Clarke (1847) 46 US
(5 How.) 441 at 493, 12 L.Ed. 226 at 251 (dissent).
--http://www.militia-watchdog.org/suss9.htm
Sent via Deja.com
http://www.deja.com/
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Kooks (was: NSA and Linux Security)
Date: Mon, 22 Jan 2001 23:59:31 GMT
In article <94i1io$2rp$[EMAIL PROTECTED]>,
Greggy <[EMAIL PROTECTED]> wrote:
> Jol Silversmith - I wasn't there so I cannot say why no one
> protested within the Virginian legislature that day in 1819
> not to include the 13th amendment in their publications, or to
> require all 21 states to ratify the same. But I am absolutely
> certain I know more than they did back then what was really
> going on all around them. Boy, I'm good!
And the courts just happen to agree that he's right, Greggy.
Mr. Anderson claims that no lawyer or member of Congress
is a citizen of the United States because the penalty for
violation of the 'Original' Thirteenth Amendment ('claiming a title
of nobility') is loss of citizenship. . . . These arguments may be
amusing to some but are meritless and must be rejected.
Anderson v. United States, No. 97 C 2805, 1998 WL 246153, at *3 (N.D.
Ill.
Apr. 27, 1998).
So, Greggy, are you going to assassinate the judge who wrote that opinion?
If the thirteenth amendment is in fact law, then only our guns
will enforce it because EVERYONE in congress, EVERY judge,
and the president and vice president hold titles of honor and
would lose their citizenship and office for life.
--Greggy, 01/05/00
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Mon, 22 Jan 2001 23:51:49 GMT
On Mon, 22 Jan 2001 07:07:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote,
in part:
>In my experience with actually running such a cipher, bit-balancing
>adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was
>given both in the "Revisited" article, as well as the original
>Cryptologia article on my pages. And if the text is compressed first,
>there will be even less expansion. If you see even a 33 percent
>expansion as a show-stopping amount, a "bandwidth problem," I think
>you need to pause for re-calibration.
Well, it is possible to map 6 arbitrary bits (64 possibilities) to a
string of 8 balanced bits (70 possibilities) for an increase of 33
percent.
However, for larger block sizes, one can indeed do better than that.
One can map 37 arbitrary bits (137438953472 possibilities) to 40
balanced bits (137846528820 possibilities) for only an 8.11% increase
in bandwidth cost.
Or one can map 158 arbitrary bits
(365375409332725729550921208179070754913983135744 possibilities) to
162 balanced bits (365907784099042279561985786395502921046971688680
possibilities), which further reduces the bandwidth cost to 2.532%.
So I suppose you could say that I am quite seriously in need of
"recalibration".
Note that the fraction of 8-bit sequences that are balanced is just
over 1/4, while the fraction of balanced 40-bit sequences is just over
1/8, and the fraction of balanced 162-bit sequences is just over 1/16;
the proportion does decline as the number of bits increases, but much
more slowly than the number of bits increases. Of course, one expects
the efficiency to keep increasing, as overall balance over a larger
block is a less restrictive condition than balance in small subblocks.
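These counts and fractions are easy to check with exact binomial arithmetic; a quick sketch (Python, for the big integers):

```python
from math import comb

# Map k arbitrary bits into n balanced bits whenever 2**k <= C(n, n/2),
# the number of n-bit strings with exactly n/2 ones.
for k, n in [(6, 8), (37, 40), (158, 162)]:
    balanced = comb(n, n // 2)
    assert 2 ** k <= balanced
    print(f"{k} -> {n} bits: expansion {(n - k) / k:.2%}, "
          f"balanced fraction {balanced / 2 ** n:.4f}")
```

The printed expansions (33.33%, 8.11%, 2.53%) and balanced fractions (just over 1/4, 1/8, and 1/16) match the figures above.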
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Transposition code
Date: Tue, 23 Jan 2001 00:54:20 GMT
John Savard wrote:
>
> On Mon, 22 Jan 2001 05:04:45 +0000, Richard Heathfield
> <[EMAIL PROTECTED]> wrote, in part:
>
> >Why?
>
> Well, he is using C. So, if he is using malloc, it is more convenient
> to have a one-dimensional array, although one could always use an
> array of pointers to get a two-dimensional notation.
Right. Actually, the real problem is figuring out the indices I need
for the transposition. It's easy enough to do the transposition on
paper, but figuring out the numbers needed isn't quite so easy.
What I want to do is this:
If I have a plaintext of abcdefgh, and a key of acb, the table looks like:
a c b
=====
a b c
d e f
g h
and the resulting text that I want is:
adgcfbeh
This is obtained by reading down the columns.
Assuming I've got the "permutation" array like in my earlier post (or
it's inverse), my text in "txt" and my output going to "out", how do I
calculate the transposition?
for( i = 0, j = strlen(key); txt[i]; ++i )
    out[ permutation[i%j]+(i/j)*j ] = txt[i];
Is wrong, I know... but what is right?
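For what it's worth, the paper method above (read the columns out in alphabetical order of the key letters) can be sketched in a few lines of Python; the function name is mine, not from the thread:

```python
def columnar_transpose(txt, key):
    """Read the txt-filled table down its columns, in key-alphabetical order."""
    ncols = len(key)
    # Stable sort of column indices by key letter; ties keep original order.
    order = sorted(range(ncols), key=lambda i: key[i])
    # txt[col::ncols] is exactly the column under key letter key[col].
    return ''.join(''.join(txt[col::ncols]) for col in order)

print(columnar_transpose("abcdefgh", "acb"))  # -> adgcfbeh
```

This reproduces the worked example: key "acb" gives column order a, b, c, i.e. columns 0, 2, 1, read as "adg" + "cf" + "beh".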
--
Most scientific innovations do not begin with "Eureka!" They begin with
"That's odd. I wonder why that happened?"
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Tue, 23 Jan 2001 00:54:24 GMT
Perhaps if we worked with one bit at a time, rather than 8 bit bytes?
Assume we have a 128 bit block.
R = raw bitstream
B = balanced bitstream
z, o = # zeros and ones
To create balanced stream from raw stream:
while( z < 64 && o < 64 ) {
    x = R.nextbit();
    B.appendbit(x);
    x ? ++o : ++z;
}
if( z < 64 ) B.appendbits(64-z, 0);
if( o < 64 ) B.appendbits(64-o, 1);
To create raw stream from balanced stream:
while( z < 64 && o < 64 ) {
    x = B.nextbit();
    R.appendbit(x);
    x ? ++o : ++z;
}
if( z < 64 ) B.skipbits(64-z);
if( o < 64 ) B.skipbits(64-o);
If input bits are all 0s or all 1s, then for every 64 input bits, there
are 128 output bits (100% expansion).
If input bits are balanced, then for every 127 input bits, there are 128
output bits (0.79% expansion). Larger blocks produce less expansion.
If the data has been compressed beforehand, then it is hopefully nearly
balanced, and this method is optimum.
Since real-world data is not always compressed, and thus not always
nearly balanced, we should XOR the raw stream with something to bring it
closer to being bit-balanced.
Some examples are:
x = R.nextbit() ^ ((z+o) & 1);
or
x = R.nextbit() ^ (z>o ? 1 : 0);
or
x = R.nextbit() ^ lfsr_nextbit();
Which of these will work best depends on the nature of the data.
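A single-block sketch of this scheme in Python (without the XOR pre-whitening; names are mine, and the last-partial-block case is ignored):

```python
BLOCK, HALF = 128, 64

def balance_block(raw):
    """Consume bits (0/1) from iterator `raw` into one balanced block."""
    block, z, o = [], 0, 0
    for x in raw:
        block.append(x)
        if x:
            o += 1
        else:
            z += 1
        if z == HALF or o == HALF:
            break
    # Pad whichever bit value is short; exactly one padding run is non-empty.
    return block + [0] * (HALF - z) + [1] * (HALF - o)

def unbalance_block(block):
    """Recover the raw bits; the trailing padding is discarded."""
    raw, z, o = [], 0, 0
    for x in block:
        raw.append(x)
        if x:
            o += 1
        else:
            z += 1
        if z == HALF or o == HALF:
            break
    return raw

# Worst case: 64 zero bits in, one 128-bit block out (100% expansion).
blk = balance_block(iter([0] * 64))
assert len(blk) == BLOCK and unbalance_block(blk) == [0] * 64
```

The round trip works because the decoder stops copying as soon as either bit count reaches 64, exactly where the encoder stopped consuming input.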
------------------------------
From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: 32768-bit cryptography
Date: Mon, 22 Jan 2001 17:56:58 -0700
In article <94hua9$pq7$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
[ ... ]
> As for 1024 bit keys being forever unsolvable, Quantum Computing is a wild
> card here that may render all previous assumptions obsolete -- perhaps
> within those same 20 years.
I can believe a usable quantum computer could cause serious problems
with the security of RSA using a 1024-bit key. If anybody has a way
to use a quantum computer to attack most forms of symmetric
encryption, this is the first I've heard of it.
--
Later,
Jerry.
The Universe is a figment of its own imagination.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Tue, 23 Jan 2001 01:12:00 GMT
On Tue, 23 Jan 2001 00:54:24 GMT, Benjamin Goldberg
<[EMAIL PROTECTED]> wrote, in part:
>Perhaps if we worked with one bit at a time, rather than 8 bit bytes?
Well, I wasn't thinking of 8-bit bytes necessarily, but yes, I was
thinking of converting a *fixed-length* input string of bits into a
*fixed-length* output balanced string of bits, so that the amount of
expansion was always constant, and therefore predictable.
But as you note, that makes the algorithm more complicated.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Tue, 23 Jan 2001 01:46:36 GMT
John Savard wrote:
>
> On Tue, 23 Jan 2001 00:54:24 GMT, Benjamin Goldberg
> <[EMAIL PROTECTED]> wrote, in part:
>
> >Perhaps if we worked with one bit at a time, rather than 8 bit bytes?
>
> Well, I wasn't thinking of 8-bit bytes necessarily, but yes, I was
> thinking of converting a *fixed-length* input string of bits into a
> *fixed-length* output balanced string of bits, so that the amount of
> expansion was always constant, and therefore predictable.
>
> But as you note, that makes the algorithm more complicated.
Besides that, does conversion of a fixed-length input string to a
fixed-length output string provide any kind of size guarantees?
I know that mine, in the worst-case scenario, produces 100% expansion of
the data (i.e., doubling), but much less than this in most cases.
Also, just as important -- what about speed? Mine is not just simple,
but it's fairly fast too, if bit operations are fast.
How would you go about doing a fixed-length to fixed-length conversion
for large blocks in practice? Certainly not a [ridiculously huge]
lookup table.
PS, I can think of a fixed-length to fixed-length conversion which is as
fast as my method, if not faster.
while( 1 )
    R.nextbit() ? (B.appendbit(1), B.appendbit(0))
                : (B.appendbit(0), B.appendbit(1));
However, I'm certain you don't like the expansion rate this gives :)
------------------------------
From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: Block algorithm on variable length without padding - redux
Date: Tue, 23 Jan 2001 01:54:36 GMT
Not quite. It effectively encodes the last partial block with a stream
cipher, and so it's vulnerable to bit-flipping attacks if you don't
include a message digest.
"N. Weicher" <[EMAIL PROTECTED]> wrote in message
news:3AYa6.28351$[EMAIL PROTECTED]...
> Scott Fluhrer was kind enough to offer the following reply to my question:
>
> > That doesn't work: You don't need to know the key to get the final
partial
> > block, and *neither does the attacker*. Doing:
> > C(N) = P(N) ^ E(C(N-1))
> > would work...
>
> My question is: would the technique shown above be just as secure as the
> full "ciphertext stealing in CBC mode" outlined on pages 195/196 in
"Applied
> Cryptography"?
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Easy question for you guys...
Date: Tue, 23 Jan 2001 01:59:09 GMT
CoyoteRed wrote:
>
> I want to take four 8-bit numbers and create a number that can't be
> converted back by an amateur. Resolution can be 12 - 16 bits.
>
> Here is what I'm trying to do. I want to take an IP number and give it
> a not-so-unique number. The number of IPs to be converted is maybe
> in the 10-50 range, so I think that should be sufficient.
How about the 0-63 range? Take the sum of the 4 numbers, and the lowest
6 bits of that. Or take a 6 bit CRC of the 4 numbers. Or a larger CRC,
and use the lowest 6 bits.
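The sum-and-mask variant, as a sketch (the helper name is mine):

```python
def tag(ip):
    """Low 6 bits of the octet sum: a quick 0-63 identifier, not a secure hash."""
    return sum(int(part) for part in ip.split('.')) & 0x3F

print(tag("192.168.1.10"))  # -> 51
```

Note this is trivially reversible by brute force over candidate IPs, which is why the secure-hash approach below is the better fit for the stated goal.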
> What I'm trying to do is identify a poster on a bulletin board without
> giving out the IP or computer name. (or force user names and
> passwords)
>
> Here's the kicker, I want to use simple math that is available in
> Perl, in the fewest lines possible, and be easy enough to understand
> that almost anyone can follow the math. But be unable to reverse the
> process easily.
What you really want is a secure hash. Get a SHA1 module from CPAN
[assuming you don't already have it], and hash either the IP or the
computer name. Convert this, not to a number, but to a base64 string,
and use a few characters of this [5 characters should be fine]. Since
it's a string, not a number, it will be easier to remember, but since 5
characters of 6 bits each is a lot, there will be few collisions. Since
it's SHA1, it will be infeasible to go from this string back to the IP.
As for being done in few lines, it would only be something like this:
use Digest::SHA1 qw(sha1_base64);
$ip = join('.', @ip);
$output = substr( sha1_base64($ip), 0, 5 );
I'm probably getting this all wrong, since I don't have perl installed
and it's been ages since I worked with it, but this is more or less what
the code would look like.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Fitting Dynamic Transposition into a Binary World
Date: Tue, 23 Jan 2001 02:00:36 GMT
On Tue, 23 Jan 2001 01:46:36 GMT, Benjamin Goldberg
<[EMAIL PROTECTED]> wrote, in part:
>How would you go about doing a fixed-length to fixed-length conversion
>for large blocks in practice? Certainly not a [ridiculously huge]
>lookup table.
Well, an earlier post in this thread shows what I was thinking
of...and for an example in another domain, look at the page titled
'From 47 bits to 10 letters' on my web site.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: 32768-bit cryptography
Date: Mon, 22 Jan 2001 18:43:07 -0800
> If anybody has a way
> to use a quantum computer to attack most forms of symmetric
> encryption, this is the first I've heard of it.
One of the basic (cryptographic) assumptions about quantum computing (I
say assumption because we haven't built anything big enough to test the
theory) is that it can brute-force a keyspace in square-root time; the
result is that 128-bit cryptography immediately becomes 64-bit, 256 ->
128, etc. This is believed to be one of the reasons that AES was
specified to allow 256-bit keys.
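The square-root arithmetic, concretely (a side check, not from the post):

```python
from math import isqrt

# Grover-style search: a 2**k keyspace falls in about sqrt(2**k) = 2**(k/2)
# steps, halving the effective key length.
for k in (128, 256):
    assert isqrt(2 ** k) == 2 ** (k // 2)
    print(f"{k}-bit key -> ~2**{k // 2} quantum search steps")
```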
Joe
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Tue, 23 Jan 2001 04:05:22 GMT
On Mon, 22 Jan 2001 23:51:49 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:
>On Mon, 22 Jan 2001 07:07:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote,
>in part:
>
>>In my experience with actually running such a cipher, bit-balancing
>>adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was
>>given both in the "Revisited" article, as well as the original
>>Cryptologia article on my pages. And if the text is compressed first,
>>there will be even less expansion. If you see even a 33 percent
>>expansion as a show-stopping amount, a "bandwidth problem," I think
>>you need to pause for re-calibration.
>
>Well, it is possible to map 6 arbitrary bits (64 possibilities) to a
>string of 8 balanced bits (70 possibilities) for an increase of 33
>percent.
>
>However, for larger block sizes, one can indeed do better than that.
>
>One can map 37 arbitrary bits (137438953472 possibilities) to 40
>balanced bits (137846528820 possibilities) for only an 8.11% increase
>in bandwidth cost.
What are you going on about?
If we had a flat distribution of all possible values, there would be
little if any bit-imbalance in a Dynamic Transposition bit-balanced
block. Then, using my scheme, we would have an expansion of a little
over one byte per block. One byte in 512, for example.
The point is that ASCII does not have a flat distribution, so if we
encipher ASCII, we accumulate a bit-imbalance of 2 or 3 bits per data
byte. You might figure out a scheme to reduce that, in which case you
will have a data-compression scheme. But -- as I mentioned in the
"Revisited" article -- it is reasonable to apply data-compression to
ASCII before accumulation in the block, in which case we can expect
the random-like distribution to usually be almost perfectly balanced
as it stands.
>Or one can map 158 arbitrary bits
>(365375409332725729550921208179070754913983135744 possibilities) to
>162 balanced bits (365907784099042279561985786395502921046971688680
>possibilities), which further reduces the bandwidth cost to 2.532%.
>
>So I suppose you could say that I am quite seriously in need of
>"recalibration".
Yes, you are. I think you need some sleep.
>Note that the fraction of 8-bit sequences that are balanced is just
>over 1/4,
One can obtain a bit-balanced block even if each byte in the block is
not itself bit-balanced. The issue is the distribution of the various
values. When that distribution is flat, the block is quite likely to
be near balance automatically. The various complex schemes you
mention repeatedly are worse, not better.
>while the fraction of balanced 40-bit sequences is just over
>1/8, and the fraction of balanced 162-bit sequences is just over 1/16;
>the proportion does decline as the number of bits increases, but much
>more slowly than the number of bits increases. Of course, one expects
>the efficiency to keep increasing, as overall balance over a larger
>block is a less restrictive condition than balance in small subblocks.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Dynamic Transposition Revisited (long)
Date: Tue, 23 Jan 2001 04:24:27 GMT
On Mon, 22 Jan 2001 13:05:04 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:
>On Mon, 22 Jan 2001 07:07:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote,
>in part:
>>On Mon, 22 Jan 2001 00:02:04 GMT, in
>><[EMAIL PROTECTED]>, in sci.crypt
>>[EMAIL PROTECTED] (John Savard) wrote:
>>Somewhere there is a reference which continues to corrupt the minds of
>>people coming into cryptography. It deludes them into believing the
>>OTP is mathematically proven to be unbreakable in practice. I would
>>love to find exactly what that reference is. Then maybe we could stop
>>all this nonsense before it starts.
>
>The Codebreakers, David Kahn. Chapter 13: "Secrecy for Sale".
Thanks. I suppose that may be it. Ironically, though, having just
re-read that section, I find it hard to disagree with any particular
statement in it.
Essentially the problem is one of ambiguity: the idea that Kahn is
talking about a real, practical OTP, when in fact he can only be
talking about the theoretical OTP. For example, he starts out saying:
"[The one-time system] provides a new and unpredictable key character
for each plaintext character . . . ."
But since abstract predictability is something which cannot be
measured, it can only be asserted, which takes us back to theory, not
practice.
He says: "the key in a one-time system neither repeats, nor recurs,
nor makes sense, nor erects internal frameworks." Again, the only way
to "know" that is to assume it, which is theoretical, not practical.
And: "a random key has no underlying system -- if it did, it would not
be random," which must be theoretical, because in practice it would
be circular and also useless.
All in all, it would seem to be an interesting lesson about the
ability to write truth which is easily taken out of its limited
correct context.
>>>>The unexpected advantage of Dynamic Transposition is that a plethora
>>>>of different permutations produce exactly the same ciphering result.
>>>>This would seem to hide the exact permutation used, and thus also hide
>>>>any attempt to define the shuffling sequence by moving back from a
>>>>known permutation.
>
>>>But that's not an advantage that can't be obtained with substitution.
>
>>The advantage cannot be obtained by substitution.
>
>>I have seen you say that, but I have absolutely no idea what you could
>>possibly mean by it.
>
>>>Suppose we enciphered a message using DES, except that the subkeys are
>>>generated by some sort of stream cipher. Each 48-bit subkey could be
>>>replaced with any member (including itself) from a set of 2^16 subkeys
>>>that give the same result.
>
>>How is it that these different keys give the same result?
>
>The same result _for a specific given input block_, just as for a
>specific input block in Dynamic Transposition, two bits can both be
>1s.
>
>Essentially, to obtain a given f-function output from a given
>f-function input in DES, it is sufficient to control the middle four
>bits of every six in the 48-bit subkey; the other two bits can have
>any value. 4 bits -> any one of 4 S-boxes -> XOR with an arbitrary
>value -> any 4 bits you like.
I certainly agree that one can obtain any particular 64-bit ciphertext
for any particular 64-bit plaintext. But that is not an arbitrary
large substitution.
If we had an arbitrary keyspace for a 64-bit block cipher, we could
start with plaintext value 00.......00, and then choose 1 from among
2**64 different values. Then, for plaintext 00.......01, we could
choose 1 from among 2**64 - 1 values, and so on. The total keyspace
is (2**64)!
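For scale, the bit-length of that keyspace can be estimated with a log-gamma (a side calculation of mine, not from the post):

```python
from math import lgamma, log

# log2((2**64)!) -- the keyspace, in bits, of an arbitrary 64-bit block
# substitution.  lgamma avoids computing the astronomically large factorial.
bits = lgamma(2.0 ** 64 + 1) / log(2)
print(f"log2((2**64)!) is roughly {bits:.2e} bits")
```

The result is on the order of 10**21 bits, far beyond any practical key.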
Nor is that the only issue: A conventional block cipher is not
re-keyed on a block-by-block basis; Dynamic Transposition is.
In a conventional block cipher, known-plaintext completely reveals one
particular transformation (from among 2**64); Dynamic Transposition
does not.
>>There is only one permutation per block in Dynamic Transposition. I
>>do recommend shuffling twice, only to prevent someone who knows the
>>actual permutation from attacking the RNG sequence. But that idea is
>>really getting in the way of comprehension, because that is not the
>>main source of strength in the system. In the end, there is just some
>>permutation.
>
>Four rounds of DES with subkeys that change per block are exactly
>analogous.
Well, the mapping is 2 to the 64th, so -- assuming that each key bit
is effective and 2**64 block values can be selected with those bits --
then 64 bits per block would suffice.
However, for any one known-plaintext, a particular transformation
would be revealed, on the way to attacking the keying sequence. But
that does not happen in Dynamic Transposition.
>>>It might have the advantage that successive permutations are harder to
>>>unravel than successive XORs, or even additions alternating with XORs.
>
>>"Might?"
>
>I could have said here 'It is likely to have...', but the point is: a)
>we don't know how well The Opponent understands permutation groups,
>and b) some analysis of the mathematical properties involved is needed
>to say much more.
First, I call the set of permutations which produce the same result
"clumps," specifically to distinguish them from mathematical "groups."
These clumps are not simply key-dependent, they are also
data-dependent.
Next, each different (balanced) pattern of bits in the plaintext block
has a different clump structure, but that "structure" is only apparent
when we know the plaintext, or if we were to re-use the same
permutation. But Dynamic Transposition does not re-use permutations.
>>>And there is the theoretical interest of showing that, fundamentally,
>>>a transposition can be, inherently, just as secure as a substitution.
>
>>Dynamic Transposition is vastly more secure than a substitution.
>
>>You will have to define what you mean by "substitution" though, since
>>you appear to be describing DES as "substitution."
>
>>Modern block ciphers do attempt to emulate large simple substitutions.
>>They are given "large enough" keyspaces to prevent brute-force attacks
>>on keys. But nobody should have any delusions about the extent to
>>which they actually produce all N! possible cipherings.
>
>Dynamic transposition may produce all n! possible permutations of the
>bits involved; it DOES NOT produce all
>
>          n!
>  ( ------------- ) !
>   (n/2)!(n/2)!
>
>mappings of the set of balanced n-bit strings onto itself any more
>than DES produces all (2^64)! possible block substitutions.
Well, let's take this apart:
A: n! is the number of different permutations in an n-element system.
B: (n/2)!(n/2)! is the number of different permutations in a balanced
n-bit system which have the same effect (that take a plaintext block
to the exact same ciphertext block).
A / B is the number of distinct clumps; B / A is the probability that a
randomly chosen permutation produces the same effect.
I frankly have no idea what (A / B)! could possibly be intended to
represent. Perhaps the equation should have been (A/B)**S, where S is
the number of blocks, the length of a sequence of permutations.
>This is a mistake that, frankly, I'm surprised at you for making. But
>we all slip up, and it's looking like this false assumption is at the
>root of some of your claims for Dynamic Transposition as against
>substitution.
Every possible permutation can be constructed with approximately equal
probability provided only that we have enough state in the RNG, and in
practice we can easily build efficient RNG's with that property.
What we cannot do is construct any possible sequence of permutations.
But this "sequence of permutations" is just not available for
analysis, quite unlike the sequence of transformations in a
conventional block cipher. It is different, even if we were to key
the conventional block cipher on a block-by-block basis.
In Dynamic Transposition, each permutation itself is not exposed and,
since it is used only once, cannot be traversed for examination. So
if we leap to the conclusion that a particular permutation produced
the transformation, our chances of that leap being right are:
C: 1 / ((n/2)! (n/2)!)
And if we want to know an explicit sequence of permutations of length
S, our probability of doing that is C**S. When something is difficult
to predict as a single unit, it is exponentially difficult to predict
as a sequence.
Once again I note that "the" permutation clump -- of size (n/2)!(n/2)!
-- changes with each block. Additional blocks thus do not refine an
estimate of a previous clump, but instead step into a new clump with a
new structure.
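To put a number on C for one plausible block size (n = 128 is my illustrative choice, not from the post):

```python
from math import lgamma, log

n = 128  # illustrative block size, in bits
# The clump size is (n/2)! * (n/2)!, so the chance of guessing the actual
# permutation from one block is 2**(-clump_bits):
clump_bits = 2 * lgamma(n // 2 + 1) / log(2)
print(f"n = {n}: C is about 2**-{clump_bits:.0f} per block")
```

At n = 128 that is roughly 2**-592 per block, and C**S shrinks exponentially with the sequence length S.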
>And with substitution, unlike Dynamic Transposition, instead of being
>stuck with one set of n! substitutions, one can use steps of different
>kinds so that instead of just having, say, all 2^n possible mappings
>obtained by XORing an n-bit block with an n-bit key, one can explore
>the space of (2^n)! permutations more deeply - depending on how much
>key we use, and how complicated a structure we give the cipher.
>
>>>But because it seems to be stuck with a bandwidth problem
>
>>In my experience with actually running such a cipher, bit-balancing
>>adds 25 percent to 33 percent to simple ASCII text. The 1/3 value was
>>given both in the "Revisited" article, as well as the original
>>Cryptologia article on my pages. And if the text is compressed first,
>>there will be even less expansion. If you see even a 33 percent
>>expansion as a show-stopping amount, a "bandwidth problem," I think
>>you need to pause for re-calibration.
>
>You would be right, unless
>
>>>when taken
>>>'straight', and because its advantages can mostly be matched within
>>>the substitution world,
>
>>Simply false.
>
>I happen to be right _here_.
I'm sorry, but even if you were right "_here_," you would still not be
right about the bandwidth "problem," and you have been trumpeting that
for a sequence of responses.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: Kooks (was: NSA and Linux Security)
Date: Mon, 22 Jan 2001 20:39:44 -0800
Darren New <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Greggy wrote:
> > There is no parallel with those who already had intimate knowledge of
> > the 13th amendment in their days.
>
> Last I looked, the Constitution named the Supreme Court as the final judge
> of what the constitution means.
Sorry to be pedantic here, but where in the Constitution does it say that?
--
poncho
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************