Cryptography-Digest Digest #239, Volume #13 Wed, 29 Nov 00 06:13:01 EST
Contents:
Re: collision probability with file checksums (David Schwartz)
Re: Twofish codesize Q ("kihdip")
Re: Entropy paradox ("John A. Malley")
Re: Isomorphic Elliptic Curves ("John A. Malley")
Re: Isomorphic Elliptic Curves ([EMAIL PROTECTED])
Re: voting through pgp (Dan Oetting)
Re: Effects of successful break - a "what if"-scenario? ("Joseph Ashwood")
Re: Role of linguistics ("Joseph Ashwood")
Re: collision probability with file checksums (Bryan Olson)
Re: collision probability with file checksums (Bryan Olson)
Re: Blowfish key input size ([EMAIL PROTECTED])
Password Protocol: a need to cheat? (John Savard)
Re: Q: Role of linguistics (Mok-Kong Shen)
Re: Q: Role of linguistics (Mok-Kong Shen)
Re: Entropy paradox (Mok-Kong Shen)
Re: On mutation of crypto algorithms (Mok-Kong Shen)
----------------------------------------------------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: collision probability with file checksums
Date: Tue, 28 Nov 2000 22:52:30 -0800
David Hopwood wrote:
> Ed L Cashin wrote:
> > David Schwartz <[EMAIL PROTECTED]> writes:
> > > Ask a few cryptography experts to predict in what year they
> > > think the first SHA1 collision will be found. I predict the first
> > > SHA1 collision might be found by massive distributed search
> > > techniques around 2080.
> It would be a very brave expert who would predict that no weaknesses
> will be found in SHA-1 up to 2080.
I didn't say no weaknesses. Weaknesses have been found in MD5, but no
collisions.
But after thinking things over, I'll revise my estimate: I now predict
that the first SHA-1 collision will be found around 2060.
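For reference, the back-of-the-envelope birthday bound behind such estimates can be sketched in a few lines of Python (an illustrative sketch of mine, not part of the original exchange):

```python
import math

def birthday_collision_prob(n_bits: int, k: int) -> float:
    """Approximate probability of at least one collision among k
    uniformly random n_bits-bit digests: 1 - exp(-k^2 / 2^(n+1))."""
    return 1.0 - math.exp(-(k * k) / (2.0 ** (n_bits + 1)))

# For SHA-1 (160-bit output), ~2^80 hashes give roughly a 39% chance.
p = birthday_collision_prob(160, 2 ** 80)
print(f"collision probability after 2^80 SHA-1 hashes: {p:.3f}")
```

Extrapolating hash rates out to a year like 2060 or 2080 is, of course, pure guesswork; the bound only says how much work a brute-force birthday search needs.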
DS
------------------------------
From: "kihdip" <[EMAIL PROTECTED]>
Subject: Re: Twofish codesize Q
Date: Wed, 29 Nov 2000 08:17:49 +0100
I have been searching through the code as you suggested.
The only difference is the definition of Fe32_(x,R), and the PART_KEY switch
*gives* the largest code.
But as I asked in my previous question: perhaps the switches weren't
introduced to produce smaller code; perhaps they were added to give the
user the ability to choose among different code variants. The FULL_KEY
code has a *very* slow key setup, whereas PART_KEY is quicker.
If the key setup is the reason the switches were added, you can
choose a different code variant with a different traffic pattern.
But again - if someone knows why, I'd like to be sure.
Kim
"Per Claesson" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> kihdip wrote:
>
> > Whe
------------------------------
From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Entropy paradox
Date: Tue, 28 Nov 2000 23:18:58 -0800
Mok-Kong Shen wrote:
>
[snip]
>
> If n of BBS is sufficiently large and the construction is
> sound, one can generate a fairly large amount of output bits
> that are practically unpredictable. That is, one inputs
> into BBS a rather small amount of -- also practically --
> unpredictable bits and obtains as output a very large
> amount of practically unpredictable bits. There could
> perhaps be a certain difference in 'quality' between these
> two groups of bits, but as far as the user is concerned,
> there is none. So, as long as the BBS cannot be brute-forced,
> one has a huge gain in practically unpredictable bits. My
> view is that this gain seems to come from 'nothing' (the
> air), sort of via magic. That's what troubles me.
>
> M. K. Shen
Good news, Mr. Shen! There IS a formal explanation of this apparent
"paradox" in
J. Hastad, R. Impagliazzo, L. Levin, and M. Luby, "Construction of a
pseudo-random generator from any one-way function", SIAM Journal on
Computing,
available on-line at
http://www.icsi.berkeley.edu/~luby/PAPERS/hill.ps
Let g(X) be the output binary string in the set {0,1}^L of a
pseudorandom number generator g initialized with a random seed value X,
where X is a random variable, an n-bit binary string in the set
{0,1}^n, which is uniformly distributed - so any particular value x for
X is equiprobable - and L > n.
The Shannon entropy of X is the expected (average) value of
-log( Probability[ X = x ] ), i.e. the sum of
Probability[ X = x ] * -log( Probability[ X = x ] ) over the range of
x. As you (and others in this thread) point out, the Shannon entropy
of the output of the pseudorandom generator g(X) cannot be any greater
than the Shannon entropy of the seed X.
But the entropy of the output string of the generator, g(X), "seems" to
be greater than the Shannon entropy of the seed X. What's going on?
The answer is found in a concept called "computational entropy" and
defined in this paper.
Let Y be a random L-bit binary string in the set {0,1}^L, which is
uniformly distributed - so any particular value y for Y is
equiprobable. The Shannon entropy of Y is likewise the expected
(average) value of -log( Probability[ Y = y ] ) over the range of y.
The paper goes on to define the use of a probabilistic Turing Machine
adversary to detect differences between the random variable Y and the
random variable Z = g(X). In summary, the random variable Y and the
random variable Z = g(X) are "computationally indistinguishable" if
there is no adversary that can distinguish between the two within some
established probability of error. The "computational entropy" of g(X)
is defined to be at least the Shannon entropy of Y whenever g(X) and Y
are computationally indistinguishable.
So in a sense there is "amplification" of the Shannon entropy of X by
g(X) when the computational entropy of g(X) is greater than the Shannon
entropy of X.
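A toy sketch of the stretching under discussion, in the spirit of BBS (the parameters here are tiny illustrative values of my own choosing, wholly insecure):

```python
# Toy Blum-Blum-Shub: stretches a short seed into a longer bit string.
def bbs_bits(seed: int, n_out: int, p: int = 11, q: int = 23) -> str:
    """p and q are primes congruent to 3 mod 4; n = p*q is a Blum integer.
    Emits n_out bits by repeated squaring modulo n."""
    n = p * q
    x = (seed * seed) % n          # square the seed to land on a residue
    out = []
    for _ in range(n_out):
        x = (x * x) % n            # x_{i+1} = x_i^2 mod n
        out.append(str(x & 1))     # emit the least-significant bit
    return "".join(out)

# A few seed bits yield many output bits. The Shannon entropy of the
# output cannot exceed that of the seed, but to a bounded adversary the
# output can look like n_out uniform bits - its computational entropy.
print(bbs_bits(seed=3, n_out=32))
```

With realistic parameters, n would be a product of two large primes and the seed would be drawn from a genuinely unpredictable source.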
Hope this helps,
John A. Malley
[EMAIL PROTECTED]
------------------------------
From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Isomorphic Elliptic Curves
Date: Tue, 28 Nov 2000 23:34:20 -0800
Roger Schlafly wrote:
>
> David Wagner wrote:
> > Bob Silverman wrote:
> > >Two curves are isomorphic iff they have the same j-invariant.
> >
> > Out of curiosity, can the j-invariant be computed in polynomial time?
>
> Oh yes, it's really simple. For y^2 = x^3 + ax + b, it is something
> like 4 + 27(b^3)/(a^2). Don't quote me, as I probably have a factor
> wrong. The formula is very similar to that for the discriminant.
> Formula might be different in characteristic 2. Also, the isomorphism
> might be in the algebraic closure.
The j-invariant is given by:
j = 1728(4a)^3 / [16(27b^2 + 4a^3)]
for an elliptic curve of the form y^2 = x^3 + ax + b (Weierstrass
normal form) with nonzero discriminant, i.e. 4a^3 + 27b^2 != 0.
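A minimal sketch of computing this over the rationals (my own illustration, using exact fractions):

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of y^2 = x^3 + ax + b over the rationals.
    Requires a nonzero discriminant, 4a^3 + 27b^2 != 0."""
    disc = 4 * a ** 3 + 27 * b ** 2
    if disc == 0:
        raise ValueError("curve is singular")
    return Fraction(1728 * (4 * a) ** 3, 16 * disc)

# y^2 = x^3 + x has j = 1728; y^2 = x^3 + 1 has j = 0.
print(j_invariant(1, 0), j_invariant(0, 1))
```

The arithmetic is a handful of multiplications, so the j-invariant is certainly computable in polynomial time for this curve form.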
John A. Malley
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Isomorphic Elliptic Curves
Date: Wed, 29 Nov 2000 07:28:54 GMT
In article <900q3t$2gg$[EMAIL PROTECTED]>,
J. Rostand <[EMAIL PROTECTED]> wrote:
> Ok. What I understand now is the following.
>
> Let C1 and C2 be 2 cubic curves in P(K) (the projective plane over the
> field K). (C1 and C2 are 2 homogeneous polynomials of degree 3 in
> K[X,Y,Z].) Suppose that C1 and C2 are non-singular in P(K). Then, the
> elliptic curves defined by C1 and C2 are isomorphic if and only if (by
> definition) there exist polynomials P1,P2,P3,Q1,Q2,Q3 over K such
> that the function
>
> R(X,Y,Z) := ( P1(X,Y,Z)/Q1(X,Y,Z) , P2(X,Y,Z)/Q2(X,Y,Z) ,
> P3(X,Y,Z)/Q3(X,Y,Z) )
>
> from P(K) onto P(K) is a bijection, its inverse is a rational function
> and [ B=R(A) is a point of C2 if and only if A=R^-1(B) is a point
> of C1 ].
>
> Am I right?
Hmmm...why are you asking? Are you just making definitions up hoping to
get lucky? What text/paper are you reading?
But yes, you're right.
>
> Proposition:
> If 2 elliptic curves are isomorphic, then they have the same group
> structure. The reciprocal is false.
>
> Is this right? If yes, is it easy to prove? Any counter example to the
> reciprocal?
>
> J. Rostand.
>
The proposition is true. And 'easy' is in the eye of the beholder, as
always. I think you should consult a standard text on elliptic curves,
so you can learn the definitions and standard theorems. Someone in the
thread mentioned Silverman & Tate, Rational Points on Elliptic Curves.
I've looked at it, and while it's a bit too conversational for me (and
they keep putting the rigorous proofs in the appendices), I can see
it's entirely appropriate for a beginner. A somewhat harder
introduction is A. Knapp's Elliptic Curves; the first 5 chapters are
pretty elementary, while the last half of the book requires
graduate-level knowledge.
And I already gave a counterexample to the converse (or reciprocal as
you call it)! But here it is anyway:
y^2 = x^3 + 1
y^2 = x^3 + 2
both defined over F_5, the field of five elements.
As mentioned earlier in the thread, for elliptic curves over finite
fields, you can prove a simplification of the isomorphism definition.
In my example, the two curves are isomorphic iff there exists an
element u in F_5 such that 2u^6 = 1. But there isn't, so they are not
isomorphic. What about their groups? The (abelian) group of each curve
has 6 elements, so both are clearly isomorphic to Z_6.
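The claims in this example are easy to check by brute force (a small sketch of my own, not from the original post):

```python
def curve_points(a: int, b: int, p: int) -> int:
    """Count points on y^2 = x^3 + ax + b over F_p,
    including the point at infinity."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x ** 3 + a * x + b) % p
        count += sum(1 for y in range(p) if (y * y) % p == rhs)
    return count

# Both curves have 6 points over F_5, hence group Z_6 ...
print(curve_points(0, 1, 5), curve_points(0, 2, 5))
# ... yet no nonzero u in F_5 satisfies 2*u^6 = 1 (mod 5),
# so the curves are not isomorphic.
print(any((2 * pow(u, 6, 5)) % 5 == 1 for u in range(1, 5)))
```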
BTW, my memory is a little fuzzy, but I believe your definition of
isomorphism is more general and applies to algebraic curves in general.
For elliptic curves, the definition can be reduced to a very explicit
transformation called 'admissible change of variables.' More on this in
the references I give. One last thing: many of the journal articles on
applications of elliptic curves to cryptography have very nice
introductions to elliptic curves. For example, Koblitz's article
"Elliptic Curve Cryptosystems" in Mathematics of Computation (1987?),
or Lenstra's "Factoring Integers with Elliptic Curves" in the Annals
of Mathematics (date?), a classic. Of course these expositions are
restricted to special cases, but you can gain the intuition and the
working knowledge you need to read the literature.
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: Dan Oetting <[EMAIL PROTECTED]>
Subject: Re: voting through pgp
Date: Wed, 29 Nov 2000 00:46:57 -0700
In article <[EMAIL PROTECTED]>, Benjamin Goldberg
<[EMAIL PROTECTED]> wrote:
> Actually, I believe that something like that exists in some places. I
> live in New York state, and although I haven't participated in the
> recent election, I think that voting machines which do this are used.
>
> Moving the lever for one candidate would cause any other lever for a
> candidate for that position to be reset. (Rather like a radio button)
If you make the world idiot proof the world will make better idiots.
These voting machines are still subject to human error. I recall an
election where the local voting booths were set up with the equivalent
of the "Butterfly Ballot". One of the offices ended up being split
between two of the rows of levers. The election officials hadn't
realized that the lockouts that prevent multiple votes in the same
office couldn't be programmed across the rows. They had to go through
the paper audit log by hand, reading dimples, to cast out the
overvotes.
-- Dan O.
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Effects of successful break - a "what if"-scenario?
Date: Tue, 28 Nov 2000 15:31:17 -0800
The answer really depends on the company. The common reaction is to
see how much influence the person has: if Schneier publishes a break,
people will hear about it; if Brian Bullock (name grabbed from another
thread on this newsgroup) published a break, there's a possibility
that few people would ever find out. They would fix the problem that
Schneier found and ignore Bullock completely. Additionally, some
companies will sue for various reasons; some may even go so far as to
bring charges. OTOH, some companies would accept that it was broken,
tell everyone (allowing people to give them credit), and move on. You
can see these reactions quite often in real life. Take a look at
Microsoft's reaction to any security break, which usually consists of
blaming the discoverer for "creating" a security hole. Now take a look
at the Linux community, whose general reaction is "OK, it's fixed
now." You can see all of this play out on CERT, where some companies
will actually publish breaks that they have fixed to encourage
updating, while Microsoft bugs usually get published through
expiration (and even then go without being fixed on MS's own servers;
reference the Microsoft hacking news stories a short while ago). I
believe that Microsoft has even sued people for publishing breaks, but
I don't have a reference. Of course you asked about crypto algorithms,
but the same rules apply: if you're going to make the giant mad, make
sure you either have a really big stick, or the giant can't figure out
who you are.
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Role of linguistics
Date: Tue, 28 Nov 2000 15:44:30 -0800
It is of less interest now, that much is true. But the ability to
recognise a language (i.e. recognise possible plaintext) is still very
important. So the role is still non-trivial, but it is becoming less
central. In some cases, like RSA, knowledge of the encrypted language
is currently fairly worthless because the best attacks on it
(factoring) do not depend on any internal knowledge. However, in many
cases this is simply not true, and breaking Rijndael will almost
certainly require at least some basic knowledge of the
formatting/language of the underlying data.
In some extreme cases, having the extra knowledge of the language is
even more useful. At a previous employer, they used RC4 to encrypt a
data stream. I proved the folly of doing that with their formatting by
handing my boss his password (it was in Russian, a language in which I
can't even recognise all the letters), something he did not appreciate
being possible. This was a protocol failure which managed to show
through the RC4 stream. A while ago on this group I proved myself
wrong in very much the same way, by creating a protocol where
knowledge of the language would clearly show through even the
strongest encryption. I defined a perfect block cipher on an n-bit
block, took the plaintext (any plaintext, it didn't matter which), and
padded it with 0s so that each block contained one and only one bit of
plaintext, with the plaintext bit in the same location in each block
(e.g. 3DES, padding each plaintext bit with 63 bits of 0). After
encryption, the stream can be compressed back to the original stream
(or its logical inverse), making the encryption completely worthless.
In these cases, knowledge of the language being used allows the
attacker to avoid attacking the underlying cipher.
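A toy sketch of that construction (my own simplification: a random permutation of 8-bit values stands in for the "perfect" block cipher under a fixed key, and one plaintext bit is padded into each block):

```python
import random

# A fixed random permutation of 8-bit values plays the role of an ideal
# block cipher keyed once (toy construction, not from the post).
rng = random.Random(1)
perm = list(range(256))
rng.shuffle(perm)

def leaky_encrypt(bits: str):
    """One plaintext bit per block, padded with seven 0 bits (ECB-style)."""
    return [perm[int(b) << 7] for b in bits]

ct = leaky_encrypt("10110010")
# Every block is one of just two values, so the ciphertext reveals the
# plaintext bit pattern (up to inversion), no matter how strong the cipher.
print(len(set(ct)))  # 2
```

The attacker never touches the cipher itself; knowing the plaintext's rigid formatting is enough.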
Joe
------------------------------
From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: collision probability with file checksums
Date: Wed, 29 Nov 2000 09:57:17 GMT
Ed L Cashin wrote:
> Bryan Olson writes:
[...]
> > Incidentally, a good manipulation detection code must be collision
> > resistant. Otherwise an attacker might be able to construct an
> > innocuous file and a malicious file with the same digest, legally
> > introduce the innocuous file, and then substitute the malicious file
> > without detection of the substitution.
>
> Happily, the idea of a legal introduction is not realistic in file
> integrity verification systems. If the file is being monitored for
> manipulation, say it's /bin/ls, then introducing your own /bin/ls is
> never a legal file introduction. There should be no opportunity for
> such undetected file placement.
You misunderstand the attack. The potential attacker is an
insider - the author of the file. Legal introduction means
the normal and legitimate procedure for installing this
(perfectly good) file on the system. The effect of the
attack is that the author can bypass reviews and audits, and
probably fool post-break forensics.
Files provided by a vendor and installed with the system are
unlikely to be good targets for the attack. Far more
dangerous are the local scripts that configure the host for
its applications.
--Bryan
------------------------------
From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: collision probability with file checksums
Date: Wed, 29 Nov 2000 10:19:35 GMT
David Wagner wrote:
> Bryan Olson wrote:
> >Incidentally, a good manipulation detection code must be
> >collision resistant. Otherwise an attacker might be able to
> >construct an innocuous file and a malicious file with the same
> >digest, legally introduce the innocuous file, and then
> >substitute the malicious file without detection of the
> >substitution.
>
> I'm not sure I'm convinced yet.
>
> I agree that collision-resistance is important if you want to have
> the property of non-repudiation. However, non-repudiation is normally
> associated with public-key schemes, and not with symmetric-key
> schemes.
[...]
> Should I find your reasoning compelling, given the above? Am
> I missing something?
The application here is a Tripwire-like system, which does
not correspond well to either PK signatures or MACs. The
goal of the system is to detect and report changes in a set
of files. Tripwire is useful (perhaps most useful) against
insider attacks, and an author of one of the files is a
potential attacker.
--Bryan
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Blowfish key input size
Date: Wed, 29 Nov 2000 10:35:30 GMT
Hi Custerstoe,
In article <900rla$3q7$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>> I can't sleep thinking about why it is said that the input key of
>> Blowfish (16 rounds) is up to 56 bytes. It seems that the limit is in
>> 72 bytes (16 + 2) * 4...
>> Please let me sleep again, thanks.
> I don't know if this will be your sleeping pill but the document:
> http://www.counterpane.com/bfsverlag.html says:
> "The 448 limit on the key size ensures that every bit of every
> subkey depends on every bit of the key. (Note that every bit of P15,
> P16, P17, and P18 does not affect every bit of the ciphertext, and
> that any S-box entry only has a .06 probability of affecting any
> single ciphertext block.)"
Thanks for your input. I had noticed that, but wouldn't a key greater
than 448 bits still give more security than just 448? It would be less
than its nominal size suggests, but better anyway...
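The structural 72-byte figure comes from the 18-word P-array. Here is a sketch of just the cyclic key-folding step (my own simplification, omitting the Pi-digit initialization and the subsequent encryption passes of the real Blowfish key schedule):

```python
# Blowfish folds the key bytes cyclically into 18 32-bit words
# (18 x 32 = 576 bits = 72 bytes) before XORing them into the P-array.
def fold_key(key: bytes, n_words: int = 18):
    words = []
    i = 0
    for _ in range(n_words):
        w = 0
        for _ in range(4):                       # build one 32-bit word
            w = ((w << 8) | key[i % len(key)]) & 0xFFFFFFFF
            i += 1                               # key bytes reused cyclically
        words.append(w)
    return words

# A 72-byte key fills the words exactly once; anything longer wraps
# around, so e.g. a key and its doubling fold to the same words.
short = bytes(range(72))
assert fold_key(short) == fold_key(short * 2)
```

So beyond 72 bytes, extra key material simply aliases with shorter keys; the 56-byte (448-bit) recommendation is a separate, conservative limit about subkey dependence.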
Why is sleeping so hard?
Regards,
Enlar.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Password Protocol: a need to cheat?
Date: Wed, 29 Nov 2000 10:48:00 GMT
Looking at the goals of protocols like SRP and PAK, it seems to me
that to achieve them in full, it may be necessary to cheat by having a
computer, like the Kerberos server, which is itself secure, and whose
connection to the host computer is secure.
(There might be a clever way, based on generating random keys earlier
in the protocol, and allowing it to 'succeed' with a wrong password in
a way that is only detectable as a failure if one knows the right
password, to achieve this without such a condition, but if so, _I_
haven't thought of it yet.)
Then, I can imagine that a protocol would be possible which achieves
the following goal: dictionary attacks against the users' passwords
are made as difficult, for an attacker who has the files on the
computers at both ends that are persistent between instances of the
protocol and who can also mount active attacks on the communication
line, as EKE makes dictionary attacks for an eavesdropper with no
access to the password file.
So the setup is:
[User ---] User's computer --- Host computer [--- Security server]
and it is symmetric; the items in square brackets are assumed to be
physically secure. Thus, the attacker cannot read the user's mind to
obtain the user's password, or find it out by tapping the keyboard,
but the attacker may be able to read the user's hard disk. It's also
assumed that neither the user's computer nor the host computer is
corrupted by a Trojan program; the only compromise to their security
that is anticipated is the surreptitious reading of the contents of
their hard disks.
Not only is the setup symmetrical, but the protocol is symmetrical.
The user has a user ID of u, and knows a secret password p(u). The
security server stores a large random key q(u).
The user's computer stores E(A^x mod P,p), and the host computer
stores E(x,q) in the password file.
So the protocol begins like this: the user enters his ID and password
on his computer. The computer now figures out what A^x mod P, the
host's public key for the user, is; the user ID is sent to the host,
the host gets q, which unlike p is not brute-force searchable, from
the security server, and obtains x.
The user's computer generates a random number r, and can transmit
E(A^r mod P, H(A^x mod P)) to the host computer if necessary, or just
A^r mod P. (The extra complication follows a trick used in SRP and
PAK, but is it needed, or even relevant, here? And if it is needed,
even if it is used, will I have obtained the security I am aiming at?)
Both computers now know A^(xr) mod P, and thus a communications
session is set up by using that value as the key to exchange a session
key.
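In code, the shared value works out as in ordinary Diffie-Hellman (toy-sized parameters of my own choosing; a real instance needs a large prime P and a suitable generator A):

```python
# Sketch of the shared-value computation only; the E(...) encryption
# layers of the protocol are omitted.
P, A = 2_147_483_647, 5       # 2^31 - 1 is prime; toy-sized, not secure
x = 123_456                    # host-side secret for this user
r = 987_654                    # user's ephemeral random number

host_public = pow(A, x, P)     # A^x mod P, stored (encrypted) client-side
user_public = pow(A, r, P)     # A^r mod P, sent to the host

# Each side raises the other's public value to its own secret,
# and both arrive at A^(xr) mod P.
assert pow(user_public, x, P) == pow(host_public, r, P)
```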
p, which is susceptible to a dictionary attack, is only used to
encrypt a public key.
A man-in-the-middle attack isn't possible, because the key pair x, A^x
is persistent.
The user's possession of the password p is verified, but only very
indirectly, by the user's ability to decipher A^x.
So the attacker is assumed to have E(A^x, p(u)) and E(x, q(u)). Only
the former can be attacked by a dictionary attack, but A^x looks like
a random binary number. (In other words, E(A^x, p(u)) has to be
handled using all the precautions required for EKE. Thus, what might
happen is that if P is 11xxx... then x is chosen so that A^x mod P is
of the form 10yyy... and E(yyy..., p(u)) is what is really stored.)
Even if the user's computer just transmits A^r mod P, an active
attacker can't substitute something for A^x, because that isn't
transmitted, it's on the user's computer.
Thus, it looks like a session key is established in a way that proves
the user's computer received p(u) and the host computer received q(u).
But this protocol could be used as the base on which to add other
steps that might be desired. Dangerous objects in other protocols
could be kept on the host computer's password file as long as they're
encrypted with q, even the user's password p itself, without violating
the specific security goal of the protocol.
Thus, an _elaborate_ protocol might work like this:
User: knows p(u).
User's computer: knows E(A^x, p(u)), E(y, QQ(u))
Host computer: knows E(x, q(u)), E(A^y, q(u)), E(QQ(u), q(u)),
E(E(QQ(u), p(u)), q(u))
Security server: knows q(u)
Step 1:
User gives p(u) to user's computer, and sends u along all the way to
the security server.
Step 2:
User's computer calculates A^x.
Step 3:
Security server gives q(u) to host computer.
Step 4:
Host computer calculates x, A^y, QQ(u), and E(QQ(u), p(u)) by using
q(u) as the key.
Step 5:
User's computer obtains a random number r, and calculates A^r.
Step 6:
User's computer transmits E(A^r, H(A^x)) to host computer.
Step 7:
Host computer, having x, obtains user's first temporary public key
A^r.
Using the now shared first session key, A^xr, E(E(QQ(u), p(u)),
H(A^xr)) is transmitted to the user's computer.
Step 8:
User's computer uses QQ(u) to obtain its private key, y, which, being
a private key, is guarded by QQ(u) rather than the weak p(u).
Step 9:
User's computer obtains a random number s, and calculates A^s, and
transmits E(A^s, H(A^xr)) to the host computer.
Step 10:
Host's computer obtains a random number t, and calculates A^t, and
transmits E(A^t, H(A^xr)) to the user's computer.
Step 11:
Both computers calculate their new shared session key, which is H(A^xs
+ A^yt) where + can be addition modulo P, XOR, or concatenation.
This uses elements of Kerberos (the security server), but mainly it
makes use of EKE to carry out a version of KEA. It relies on EKE as
presented here, although perhaps it can be modified to work with other
techniques.
By transmitting E(QQ(u), p(u)) to the user's computer, a stronger
authentication of p is obtained, because p is now used to operate on
something kept on the host side. But it's still a fixed item; one
could do a more conventional challenge-response by storing, say,
E(H(p(u)), q(u)) on the host computer. The host computer then thinks
of a random challenge z, transmits E(z, H(A^xs + A^yt)), and the user
replies with E(E(z, H(p(u))), H(A^xs + A^yt)). Since the exchange is
encrypted, a dictionary attack to find the password is again
forestalled; but if the encryption is securely authenticated at this
point, further authentication doesn't add to the security.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Role of linguistics
Date: Wed, 29 Nov 2000 11:56:44 +0100
Tom St Denis wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > Tom St Denis wrote:
> > >
> > > I do not get what you are trying to say. Are you saying
> multilingual
> > > cryptographers are more capable?
> >
> > No. But the attacks on historical schemes were apparently
> > facilitated with good knowledge of languages. I surmise
> > that the advances made possible by computers to linguistics
> > could provide some processing (encoding) steps that are
> > useful as components of modern encryption schemes.
>
> Modern encryption schemes have nothing to do with human linguistics.
> Perhaps breaking them in very specific circumstances could be
> facilitated... but it is not a key focus.
No question of that. In my original post I pointed out
that through working down to the bit level (instead of
remaining at the level of the English alphabet) modern
encryption techniques seem to have rendered linguistics
entirely insignificant in analysis. It is my hope, though,
that one might eventually find some good applications of
linguistics in crypto today.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Role of linguistics
Date: Wed, 29 Nov 2000 11:56:58 +0100
Benjamin Goldberg wrote:
>
> Although I don't think that an encryption scheme based on these ideas
> would be feasible, I do think that a steganographic scheme might be.
>
> Consider: Start with a body of text into which you are going to embed
> your message. Feed it to a language parser, which builds some kind of
> data structure representing the meaning of the sentences. Then, use the
> bits which are to be embeded to select from among the various ways of
> writing that structure as english language. To extract the bits, feed
> the steno'd file to the parser, and rebuild the structure which
> represents the meaning of the sentences. Measure how much, and in what
> ways, the sentence in the steno'd file differs from "canonical" form;
> those are your bits.
>
> Plus, this method has the added advantage of providing a great excuse
> for sending the files back and forth... "Hey, Joe, I'm planning on
> sending this love letter to my GF, could you give me some suggestions on
> the phrasing?" The steno program would change the phrasing, while
> keeping the sentences close to their original meanings, and still being
> good English grammar.
I guess that with the advancement of AI today an automation
of what you described is indeed within reach.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Entropy paradox
Date: Wed, 29 Nov 2000 11:57:04 +0100
"John A. Malley" wrote:
>
> Good news, Mr. Shen! There IS a formal explanation of this apparent
> "paradox" in
>
> J. Hastad, R. Impagliazzo, L. Levin, and M. Luby, "Construction of a
> pseudo-random generator from any one-way function", SIAM Journal on
> Computing,
>
> available on-line at
>
> http://www.icsi.berkeley.edu/~luby/PAPERS/hill.ps
[snip]
Thank you very much for the pointer. I would presently venture a
seemingly plausible explanation of why the 'computational entropy'
could be amplified (with suitable generators, of course): there is
'computation' in turning the input into the output, and computation
needs work to be done by the computing device, i.e. 'energy'. So the
increase in 'computational entropy' does come from somewhere and not
out of the vacuum.
M. K. Shen
=============================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On mutation of crypto algorithms
Date: Wed, 29 Nov 2000 11:57:09 +0100
Tom St Denis wrote:
>
[snip]
> In the case of AES (Rijndael) it is quite easy to permute the sbox and
> not change any of its current DP/LP maximums. Just multiply the input
> polynomial in GF(2^8), then add a displacement polynomial...
>
> i.e.
>
> Sbox'(x) = Sbox(a.x + b)
>
> You can now make "(255)(256) - 1" new copies of Rijndael which are all
> equally secure.
Maybe S''(x) = c*S'(x) + d would also work.
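The affine relabeling above can be sketched as follows (my own illustration; the table here is the identity permutation rather than the real Rijndael S-box, and Rijndael's reduction polynomial 0x11B is assumed):

```python
# Sbox'(x) = Sbox(a.x + b), with the multiplication done in GF(2^8)
# using the Rijndael field polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B            # reduce modulo the field polynomial
        b >>= 1
    return r

def relabel(sbox, a: int, b: int):
    """New S-box table Sbox'(x) = Sbox(a*x + b) in GF(2^8); a != 0."""
    return [sbox[gf_mul(a, x) ^ b] for x in range(256)]

identity = list(range(256))
variant = relabel(identity, a=3, b=7)
print(sorted(variant) == identity)  # still a permutation: True
```

Since x -> a.x + b is a bijection on GF(2^8) for any nonzero a, each of the 255*256 choices (minus the identity) yields a new permutation table, matching the "(255)(256) - 1" count quoted above.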
It is my position as expressed in the original post that,
since Rijndael has been extremely successful in providing
a very simple and strong design, we could easily allow
ourselves the luxury of introducing a little complexity
in applications where standard conformity is not an issue.
That opportunity is evidently huge, at least for software
implementations.
An interesting question is how a scaled-down version of Rijndael would
compete with DES. My gut feeling is that 20 rounds would probably
suffice to make it a fairly safe bet.
M. K. Shen
============================
http://home.t-online.de/home/mok-kong.shen
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************