Cryptography-Digest Digest #104, Volume #13 Sun, 5 Nov 00 16:13:01 EST
Contents:
Re: Q: Computations in a Galois Field (Benjamin Goldberg)
Re: Generalized Playfair ("r.e.s.")
A review of Stas Busygin's RHPS has been issued (Stas Busygin)
Client Puzzle Protocol ([EMAIL PROTECTED])
Re: Randomness from key presses and other user interaction (David Schwartz)
Re: hardware RNG's (David Schwartz)
Re: On obtaining randomness (Richard Heathfield)
Rijndael 128/192/256 implemented in GPG 1.0.4 (David Crick)
Re: Brute force against DES (Francois Grieu)
Re: Brute force against DES (Francois Grieu)
Re: PGP 7.0 - Key Reconstruction - White Paper now online (Imad R. Faiad)
Re: sqrt correlations (David Wagner)
Re: [newbie] Is PGP 7.0 hash extension secure? (Benjamin Goldberg)
----------------------------------------------------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Q: Computations in a Galois Field
Date: Sun, 05 Nov 2000 19:18:39 GMT
Tom St Denis wrote:
>
> In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > I'd like to ask some questions: The choice of the mapping of
> > x to 1/x in Rijndael has presumably much contributed to
> > resulting in a very good substitution of that cipher. Is
> > this fortuitous or is that mapping intrinsically good for
> > all GF(2^m)? Are there other mappings of comparable quality?
> > Does the choice of the polynomial affect the diffusion
> > properties of the resulting substitution, if not why?
> > Thanks.
>
> It uses inversion in GF(2)^8 if I am not mistaken. And I have said
> already that no, the choice of the modulus DOES NOT change the
> diffusion properties.
Sorry, Tom, I hate to say this, but polynomials in GF(2)^8 do not have
multiplicative inverses. Polynomials in GF(2^8), on the other hand, do.
Think... what polynomial over GF(2), multiplied by x**5 + x**2 + 1,
equals 1? Answer: NONE, unless we're multiplying modulo some polynomial.
Similarly, no integer (other than 1 and -1) has an integral
multiplicative inverse unless we work modulo some integer.
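To make the distinction concrete, here is a minimal sketch of inversion
in GF(2^8), assuming Rijndael's reduction polynomial x^8 + x^4 + x^3 +
x + 1; the inverse is computed as a^254, since the multiplicative group
of the field has order 255:

```python
# Multiplication in GF(2^8) modulo Rijndael's polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11b).  A sketch, not production code.

def gf_mul(a, b, poly=0x11b):
    result = 0
    while b:
        if b & 1:
            result ^= a          # "add" (XOR) the current multiple
        a <<= 1
        if a & 0x100:            # reduce modulo the degree-8 polynomial
            a ^= poly
        b >>= 1
    return result

def gf_inv(a):
    """Inverse of nonzero a in GF(2^8): a^254, since a^255 = 1."""
    result = 1
    for _ in range(254):
        result = gf_mul(result, a)
    return result

# Every nonzero byte has a multiplicative inverse in GF(2^8):
assert all(gf_mul(a, gf_inv(a)) == 1 for a in range(1, 256))
```

The same code fails if you drop the reduction step: without the modulus,
the product's degree only grows, and nothing but 1 has an inverse, which
is exactly the point above.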
--
"Mulder, do you remember when I was missing -- that time that you
*still* insist I was being held aboard a UFO?"
"How could I forget?"
"Well, I'm beginning to wonder if maybe I wouldn't have been
better off staying abo-- I mean, wherever it was that I was
being held." [from an untitled spamfic by [EMAIL PROTECTED]]
------------------------------
From: "r.e.s." <[EMAIL PROTECTED]>
Subject: Re: Generalized Playfair
Date: Sun, 5 Nov 2000 11:23:41 -0800
"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote ...
| "r.e.s." wrote:
| >
| [snip]
| >
| > (4) Encipher one group at a time, treating it as a circular
| > sequence of overlapping letter-pairs (e.g. for M=3, "abc"
| > is seen as ab-bc-ca). Each letter of a group is enciphered
| > as the first half of an ordinary Playfair digraph of which
| > it is the first letter, while cycling through the M squares
| > in succession.
|
| Question: You are using multiple matrices, i.e. doing
| sort of 'polyalphabetical substitution' (in contrast to
| monoalphabetical substitution'). How does your method
| compare to encrypt in the example above ab with the
| first matrix as conventional Playfair to Ab' then b'c
| with the second matrix again as conventional Playfair
| to Bc', etc.?
What do you mean by "how does [it] compare", and
in which example, specifically?
How would you wish to compare ordinary Playfair
(one square, i.e., d=2, M=1 in the general method),
to the case, say, d=2, M=3 (three squares)?
| [snip]
| > Further generalizing the above method to d-dimensional arrays
| > of an alphabet, with d=1,2,3,..., is straightforward.
|
| I am afraid that you'll get into some awesome difficulty
| when you advance above d=3, since one is leaving the
| three-dimensional space that one can easily think of.
Merely being difficult to "visualize" is not in
itself a problem for using multi-dimensional arrays
in computer algorithms. As to efficiency, I've
no feeling for what might be problematic, but it
would surely depend on the sizes of both d and M.
BTW, I make no claims about the suitability of
these generalizations of such a "classical type"
of cipher. It just seemed interesting to have one
method that produces a family of tractable-by-hand
ciphers for d<3, and includes ordinary Playfair &
the Two-Square cipher.
| The purpose of Playfair is to have a compact (special)
| substitution scheme as compared to a general substitution
| that demands too much storage space. Advancing to d>2 would
| lead to increasingly more compact schemes. But I suppose
| that for practical purposes d=2 is good enough.
|
| [snip]
|
| > Yet another variation of this generalized Playfair would be
| > to *not* cycle back to the first letter of each group, but
| > instead to treat the entire plaintext string as a sequence of
| > overlapping letter-pairs, with the first letter regarded as
| > following the last letter of the message. (And still cycling
| > through the M squares in succession as before.)
|
| I don't yet understand this. Previously you said to
| process like ab bc ca. But this IS 'overlapping and with
| the first letter regarded as following the last letter',
| isn't it?
Suppose the plaintext message is "abc def" and
that M=3.
The generalization that reduces to the classical
Playfair & Two-Square ciphers when d=2 and M=1,2,
respectively, enciphers as follows:
"abc def" -> "ABC DEF"
according to
a -> ab(12)
b -> bc(23)
c -> ca(31) <- (*)
d -> de(12)
e -> ef(23)
f -> fd(31) <- (**)
The other form of generalization would do it
this way:
"abc def" -> "ABC DEF"
according to
a -> ab(12)
b -> bc(23)
c -> cd(31) <- (compare to *)
d -> de(12)
e -> ef(23)
f -> fa(31) <- (compare to **)
In the latter variation, the grouping into
trigraphs only serves to remind of the
need to cycle through the three alphabet-
squares, whereas in the former version,
the grouping reminds also of the need to
cycle back to the beginning of each
*trigraph* in order to encipher its final
letter.
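My reading of the circular-group variant can be sketched in Python as
follows. This is an illustration only: the helper names are mine, I use
M copies of a single unkeyed 5x5 square for brevity (real use would
employ M distinct keyed squares), and repeated adjacent letters would
need the usual Playfair padding, which the sketch omits:

```python
# Sketch of the generalized Playfair described above: groups of M
# letters, each letter enciphered as the first half of the Playfair
# digraph it starts (circular pairing within the group), cycling
# through M squares.  Letters must appear in the square (no J here).

SQUARE = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 5x5 square, I/J merged

def first_of_digraph(square, a, b):
    """First letter of the ordinary Playfair encipherment of pair ab."""
    ra, ca = divmod(square.index(a), 5)
    rb, cb = divmod(square.index(b), 5)
    if ra == rb:                          # same row: take right neighbour
        return square[ra * 5 + (ca + 1) % 5]
    if ca == cb:                          # same column: take letter below
        return square[((ra + 1) % 5) * 5 + ca]
    return square[ra * 5 + cb]            # rectangle rule

def encipher(plaintext, squares):
    m = len(squares)
    out = []
    for g in range(0, len(plaintext), m):
        group = plaintext[g:g + m]
        for i, letter in enumerate(group):
            partner = group[(i + 1) % len(group)]  # circular within group
            out.append(first_of_digraph(squares[i], letter, partner))
    return "".join(out)

print(encipher("ABCDEF", [SQUARE] * 3))  # -> "BCDEAI"
```

With M distinct squares, and the second variant (pairing across group
boundaries), only the `partner` line would change.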
--r.e.s.
------------------------------
From: Stas Busygin <[EMAIL PROTECTED]>
Crossposted-To: comp.theory,sci.op-research,sci.math
Subject: A review of Stas Busygin's RHPS has been issued
Date: Sun, 05 Nov 2000 22:05:16 +0200
Hello All!
A review by Stas Busygin's Repository for Hard Problems Solving
of the paper "An Efficient Algorithm for the Minimum Clique
Partition Problem" by A. Plotnikov has been issued. It is available,
along with software implementing the vertex-saturation procedure (i.e.,
the correct part of the proposal), at the page dedicated to the paper:
http://www.busygin.dp.ua/clipat.html
http://www.geocities.com/st_busygin/clipat.html (mirror)
Let me remind you that, according to the publishing policy of the
repository, a review will be marked as outdated, or even removed,
whenever a correction eliminating its remarks is submitted.
Best wishes,
Stas Busygin
email: [EMAIL PROTECTED]
WWW: http://www.busygin.dp.ua
------------------------------
From: [EMAIL PROTECTED]
Subject: Client Puzzle Protocol
Date: Sun, 05 Nov 2000 19:48:46 GMT
Hi,
It's been quite a while since RSA announced the client puzzle protocol.
In the paper they claim the protocol can be layered over TCP/IP as a
solution to TCP SYN flooding, but is this really possible without
modifying TCP itself?
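For reference, the core idea of a client puzzle is that the server makes
the client pay for a connection with a moderately hard computation that
is cheap to verify. The sketch below is a generic hash-based puzzle in
that spirit, not the exact construction from the RSA paper:

```python
# Generic client-puzzle sketch: the server issues a random challenge;
# the client must find a nonce so that SHA-1(challenge || nonce) is
# below a target, i.e. has k leading zero bits.  Illustrative only.
import hashlib
import os

def solve(challenge, k):
    """Brute-force a nonce; costs about 2^k hash evaluations on average."""
    nonce = 0
    target = 1 << (160 - k)            # SHA-1 output is 160 bits
    while True:
        h = hashlib.sha1(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(challenge, nonce, k):
    """One hash evaluation -- cheap for the server."""
    h = hashlib.sha1(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (160 - k))

challenge = os.urandom(16)
nonce = solve(challenge, 12)           # ~2^12 hashes of work
assert verify(challenge, nonce, 12)
```

Verification is a single hash for the server; the open question in the
post -- where the challenge/response exchange fits without changing
TCP's three-way handshake -- is exactly the hard part.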
Thanks,
bodie
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Randomness from key presses and other user interaction
Date: Sun, 05 Nov 2000 11:55:53 -0800
Tim Tyler wrote:
>
> David Schwartz <[EMAIL PROTECTED]> wrote:
> : Mack wrote:
>
> :> There seems to be some argument as to whether timing
> :> keystrokes is a good source of randomness.
> :>
> :> So lets start a thread on that.
> :>
> :> 1) Keystroke timing is generally quantized to about 20 ms
> :> give or take.
>
> : It's the give or take in the 20 ms that contains the entropy.
>
> Well, *if* this is true, this is not "randomness from key presses and
> other user interaction" - it's more randomness from clock signal drift.
The question is whether timing keystrokes is a good source of
randomness. I'm arguing that it is. Some of the entropy comes from the
human, some of it from the quantization of real values built into the
hardware. I happen to have a pretty good idea of how much entropy comes
from the fact that the two oscillators involved have uncorrelated
frequencies. I'm not as knowledgeable about the entropy in the
keystrokes themselves.
DS
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Date: Sun, 05 Nov 2000 11:54:21 -0800
Tim Tyler wrote:
> This is not my field. However, are you sure that the drift is
> unpredictable - and does not reflect environmental conditions
> (such as temperature) which might be measured or controlled?
They are correlated with temperature, but only through tiny fractions
of a degree over small areas. There is no known way to measure this
variation in a crystal other than by measuring the crystal's oscillation
frequency. In other words, the only way to know the frequency at which a
crystal will oscillate is to make it oscillate and measure its
frequency. Remember, we are dealing with cases where you can measure to
a part per billion.
DS
------------------------------
Date: Sun, 05 Nov 2000 20:08:42 +0000
From: Richard Heathfield <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness
Mok-Kong Shen wrote:
>
> Richard Heathfield wrote:
> >
> > Mok-Kong Shen wrote:
> > >
> > > According to the British Museum model, given sufficient
> > > time, monkeys at keyboard could eventually produce all
> > > the works in the literature collection there.
> >
> > No. The Sun will explode long before they complete even a Shakespearean
> > soliloquy.
> >
> > > So
> > > evidently there is enough entropy present in the numerous
> > > volumes written by humans to be found in the libraries.
> >
> > Maybe there is, but it isn't evident from your first sentence.
> >
> > April 1st isn't for a while yet.
>
> Make an actual attempt at predicting even a stream
> from some moderately complicated PRNG scheme and you
> will find yourself biting the stone, even
> in the month of April. Try something, don't just write
> big words.
>
> M. K. Shen
I don't know what you mean by "biting the stone". NB I'm not arguing
about your conclusion, merely your premise - the monkeys thing.
Take a typical sonnet (shorter than the average soliloquy!) - say,
Shakespeare's Sonnet CXL. The sonnet form has 14 lines, and this
particular example has about 30-35 alphabetic characters per line. Let's
call it 30.
So that's around 420 characters, giving us 26^420 possible combinations
of those letters (if we are prepared to do a little light copy-editing
on behalf of our monkeys - otherwise, the numbers get a lot sillier a
lot quicker). If a million monkeys were bashing away on specially
adapted typewriters at 10 characters per second each, they would get
through about 3e14 'random' characters per year, which gives them
approximately 7e11 attempts at a sonnet. But they would need, on
average, around 1e594 attempts to produce this particular sonnet. I
don't know how many sonnets the British library has, but let's say it's
a million (I suspect it's actually fewer than this). So we're looking at
1e588 attempts, on average, to produce some sonnet or other. But they
can only make 7e11 per year, so it's going to take them, on average,
2.77e576 years (if you increase the number of monkeys, you can bring the
time down, of course, but there's a limit to the number of monkeys you
can fit on the planet) just to produce one sonnet, let alone all the
great works. (For the record, the sun is expected to last around another
7e9 years.)
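The arithmetic above is easy to check in a few lines (working in
logarithms, since 26^420 does not fit in a float; same assumptions as
stated: a million monkeys, 10 chars/sec, a million target sonnets):

```python
# Checking the monkeys-vs-sonnet arithmetic above (order of magnitude).
import math

chars_per_sonnet = 14 * 30                              # 420 letters
log10_combinations = chars_per_sonnet * math.log10(26)  # ~594.3

chars_per_year = 1_000_000 * 10 * 3600 * 24 * 365       # ~3.15e14
attempts_per_year = chars_per_year / chars_per_sonnet   # ~7.5e11

log10_years = (log10_combinations - math.log10(1_000_000)
               - math.log10(attempts_per_year))

print(round(log10_combinations, 1))   # 594.3
print(round(log10_years, 1))          # 576.4, i.e. a few times 1e576 years
```

Against the Sun's remaining ~7e9 years, roughly 10^576 years for a
single sonnet settles the question.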
This argument has been raging for over a hundred years and it's high
time it was shot down in flames for the specious claptrap it really is.
Nothing personal.
And I still have no quarrel with your conclusion. Just your argument.
--
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
------------------------------
From: David Crick <[EMAIL PROTECTED]>
Subject: Rijndael 128/192/256 implemented in GPG 1.0.4
Date: Sun, 05 Nov 2000 20:16:03 +0000
See http://www.gnupg.org/ for source code and binary distributions.
--
+-------------------------------------------------------------------+
| David A Crick <[EMAIL PROTECTED]> PGP: (NOV-2000 KEY) 0x710254FA |
| Damon Hill Tribute Site: http://www.geocities.com/MotorCity/4236/ |
| M. Brundle Quotes: http://members.tripod.com/~vidcad/martin_b.htm |
+-------------------------------------------------------------------+
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Sun, 05 Nov 2000 21:20:04 +0100
"John A. Malley" <[EMAIL PROTECTED]> wrote:
> Given plaintext and corresponding ciphertext, and (assuming) the
> ciphertext is in ECB form (no chaining), consider using redundancy
> in the plaintext (if present) to speed up the search for the
> correct key.
I do not get it at all. Assuming one gets a known plaintext-ciphertext
pair, all there is to do is find a key (probably there is
only one) that maps the one to the other. Redundancy of the plaintext is
perfectly useless.
> Also consider separating out the decrypting activity from the
> checking of deciphered output activity.
The checking activity is a mere comparison, and any organization
between the two activities will be more costly.
Things would be different, and these tips excellent, if the plaintext
were not known exactly, but only some of its general characteristics
(like: it is plain English in ASCII).
Francois Grieu
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Sun, 05 Nov 2000 21:28:36 +0100
"Samir" <[EMAIL PROTECTED]> wrote:
> I can use 20 computers, for a known-plaintext attack.
First compute your odds of success. In rough numbers, a top-gun CPU
can test 2^22 keys/s using the best known optimizations. So in a year
20 of them will have tested about
20*3600*24*365*2^22/2^56 = 3.7% of the key space
and that is your odds of success.
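The same figure in Python (same assumptions: 2^22 keys/s per machine,
20 machines, one year of continuous running):

```python
# Fraction of the 2^56 DES key space covered in one year,
# assuming 2^22 keys/second per machine and 20 machines.
keys_per_second = 2 ** 22
machines = 20
seconds_per_year = 3600 * 24 * 365

tested = machines * seconds_per_year * keys_per_second
fraction = tested / 2 ** 56
print(f"{fraction:.1%}")   # -> 3.7%
```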
> I wish to make it clear that I'm not trying to attack DES
> directly ;-) I fix some bits in the key, in order to reduce
> the time of research...
Missed this in my first reading of your post, sorry.
> Is the best way for the server to search with random selection?
No. If the search algorithm selects keys at random:
- there is statistically no benefit; if you are afraid the key
might be chosen so that you test it last, defeat this strategy
by just picking the starting point at random.
- it will take some time to generate the random numbers, compared to
a simple order, so fewer keys will be tested with given resources
(machines*time); the more randomness, the more time lost.
- if you use true randomness, you will need more memory than you can
afford to remember which keys have already been tested, so you'll
most probably end up testing quite a few keys several times.
- optimizations made possible by orderly scanning of the keys are
lost (see my post on fr.misc.cryptology).
for more pointers see
<http://www.distributed.net/des/index.html>
<http://www.distributed.net/des/index.html.fr>
Francois Grieu
[revised post]
------------------------------
From: Imad R. Faiad <[EMAIL PROTECTED]>
Crossposted-To: comp.security.pgp.tech,comp.security.pgp.discuss,alt.security.pgp
Subject: Re: PGP 7.0 - Key Reconstruction - White Paper now online
Date: Sun, 05 Nov 2000 20:32:20 GMT
=====BEGIN PGP SIGNED MESSAGE=====
Greetings,
But isn't that like the chicken-and-egg problem?
Assuming that it isn't, wouldn't such a scheme weaken
PGP, in that it creates another avenue to pursue for
a would-be attacker?
IMHO it is futile to devise any scheme to recover the
key or the message, because such schemes do not
work in practice. More importantly, they undermine
a user's faith in the product.
My 2c
Best Regards
Imad R. Faiad
On Sun, 05 Nov 2000 18:31:22 GMT, in comp.security.pgp.discuss Robert
Guerra <[EMAIL PROTECTED]> wrote:
>Hi folks:
>
>PGP 7.0 introduces several new features, the most hotly debated seems to
>be the new "key reconstructions feature"
>
>I was able to attend Will Price's talk on it at the August Cypherpunks
>in San Francisco. He explained the inner workings of how it's been
>implemented.
>
>The white paper which he and the folks at NAI authored is now online at
>the URL below.
>
>
>http://download.nai.com/products/media/pgp/pdf/literature/pgpkeyreconwhit
>epaper.pdf
>
>
>From what I heard it's been carefully laid out, however i'll leave it up
>to the folks on here to debate the fine points.
>
>
>Regards
>
>Robert
=====BEGIN PGP SIGNATURE=====
Version: 6.5.8ckt http://irfaiad.virtualave.net/
Comment: KeyID: 0xBCC31718833F1BAD
Comment: Fingerprint: 75CD 96A7 8ABB F87E 9390 5FD7 2A88 4F45
iQEVAwUBOgXDB7zDFxiDPxutAQFP2gf+IfBja59bmUpyd1rV8JSDItexnZSca2b1
ZsEgEAV4PwlfD2oDXj926pDmP7JPc7FsCekh8eW49BpXVX1Lq1Yb+Hzkn0Z9sSNE
D4b9ZrsX2G0RD2NVuhzrOsyihWp1CMu7p7tRsy0rx/Y2kBHH639q7wYkriu8r0Cd
Uw7A/hjfeoKU5QSjtFZFnfxg0ebCwDEM6e2Z9U6edcaxYl7lcvnoPWmdM20wQHhC
7EOxP1H9krc3D191UduXLAIjvSJPkhYWliUJVMegWKE2rG/0a6mU3A0nWsKYwUZo
3FjCpzAys888AqekRAwHA8tFqjuoJOqps3eqT4KfRG9XXWwD56W0/g==
=lM7Z
=====END PGP SIGNATURE=====
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: sqrt correlations
Date: 5 Nov 2000 20:38:20 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Benjamin Goldberg wrote:
>What I was thinking of was taking the square root of a 10-digit decimal
>number, and outputing the digits to the right of the decimal point of
>the sqrt.
I see this suggested from time to time, and here's my response:
You can break this with continued fractions.
I don't have my usual response handy at the moment, but if you
do a Dejanews search for continued fractions and my name you'll
probably find it, as well as some references to more detailed
information.
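For concreteness, the generator under discussion looks something like
the sketch below. This illustrates the *proposal* being critiqued, not
the attack; Wagner's point is that these digits are predictable, since
continued fractions let an observer recover the underlying seed:

```python
# The proposed "PRNG": fractional digits of the square root of a
# 10-digit decimal seed.  Sketch only -- and insecure by design,
# which is the point of the continued-fraction attack.
from decimal import Decimal, getcontext

def sqrt_digits(seed, ndigits=20):
    """Digits right of the decimal point of sqrt(seed).

    Assumes seed is not a perfect square (otherwise there is
    no fractional part to output).
    """
    getcontext().prec = ndigits + 15     # guard digits for rounding
    root = Decimal(seed).sqrt()
    frac = str(root).split(".")[1]
    return frac[:ndigits]

print(sqrt_digits(1234567891))   # the "random" stream an attacker inverts
```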
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: [newbie] Is PGP 7.0 hash extension secure?
Date: Sun, 05 Nov 2000 21:06:55 GMT
Thomas J. Boschloo wrote:
> I had a little e-mail conversation with Michael Young on the hash
> extension used in PGP (described in RFC 2440). I am not as
> knowledgeable as he is, so I can't tell whether his arguments are
> true.
>
> My statement was that in order to use Twofish or Rijndael to its full
> potential of 256 bits of security, you would need a 256 bit hash
> function like the one being drafted at
> <http://csrc.nist.gov/cryptval/shs.html>. Michael Young, however,
> claims that the process used in PGP to extend the SHA-1 algorithm to
> 256 bits is secure enough. In PGP you have a large amount of random
> data in your entropy pool and a 256 bit session key is generated by
> taking the SHA-1 hash of this data. Then the extra missing 96 bits are
> generated by feeding that same pool, with a '\0' character prepended
> to it, to the SHA-1 function. Michael claims that this is secure and
> that a successful attack based on this property would leave the SHA-1
> algorithm useless for digital signatures.
> 1) Is the extension method of the hash in RFC 2440 reason to worry
> about PGP's promised 256 bit security?
>
> During a web search I found this document on the extension of the
> RIPEMD hash:
> <http://www.esat.kuleuven.ac.be/~bosselae/ripemd160.html#extensions>.
> I don't get this! (Sorry) Why would anyone want to get a 320 bit
> result from a hash function, if it only has a security of 160 bits?
The cited document states:
"Remark that the security level of the 320-bit extension of RIPEMD-160
is only guaranteed to be the same as that of RIPEMD-160 itself, and
similarly for the 256-bit extension of RIPEMD-128 with respect to
RIPEMD-128 itself."
Where it says "is only guaranteed," it means we can guarantee that it is
at least as good as, but we can't guarantee that it is any better.
> You could just as well add 160 fixed bits after the hash, just like
> they do in reducing the security from e.g. RC4-128 to 40 bits for
> export?
If you did that, then you would be guaranteeing that the security was NO
MORE THAN 160 bits.
> Why not use the technique used in PGP to increase security to
> the full 320 bits if it is so easy? It would be useful for both
> purposes.
You might be correct, but using PGP's technique takes longer.
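The PGP technique in question, as described in the quoted text above,
can be sketched like this (my reading of the RFC 2440 description, not
PGP's actual source):

```python
# Sketch of the SHA-1 "extension" described above: a 256-bit session
# key is the 160-bit hash of the entropy pool, followed by the first
# 96 bits of the hash of the same pool with a zero byte prepended.
import hashlib

def extended_key(pool):
    part1 = hashlib.sha1(pool).digest()                 # 160 bits
    part2 = hashlib.sha1(b"\x00" + pool).digest()[:12]  # 96 more bits
    return part1 + part2                                # 256 bits total

key = extended_key(b"example entropy pool contents")
assert len(key) == 32
```

Note the cost: two full passes of SHA-1 over the pool instead of one,
which is the "takes longer" above.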
> It is unclear to me if the RIPEMD-320 hash is generated from the same
> input buffer as RIPEMD-160 (allowing 'PGP tricks'), or only from the
> RIPEMD-160 hash output (resulting in a fixed 2^160 on 2^320 mapping).
RIPEMD-160 stirs the input into 10 32-bit integers and, as a final step
before output, adds pairs of those integers to create a 160 bit output.
RIPEMD-320 skips the final combining, and outputs all 10 integers
directly.
> Further questions that arise from this for me are:
> 3) If PGP uses this 'extension' technique to generate large primes for
> new keys, doesn't this reduce the security of those keys?
> 4) Where can I find the FIPS for SHA-256 and others? The crypto++ 4.0
> library already seems to have them, while the NIST homepage claims
> they are still proposing a 'draft' for it which won't be ready until
> 2001?
> 5) Why the 512 bit SHA-2 functions? Wouldn't SHA-256 be enough for 256
> bit session keys? I have heard about the 'birthday attack', but
> wouldn't this require an average database of 2^128 elements to produce
> an 'unchosen' collision? Isn't this a bit silly?
>
> Lots of questions, little answers, TIA!
> Thomas
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************