Re: Fermat's primality test vs. Miller-Rabin

2005-11-14 Thread Travis H.
  Although the Carmichael numbers fool the Fermat test
  (that is, $a^{n-1} \equiv 1 \pmod{n}$) for *all* a,

I thought it would work properly if a shares a factor with n.

 Yes I guess the difference is that with MR you are trying to find a
 number that is *likely* a prime, whereas with Fermat you are testing
 primality. But MR will still fail when given a Carmichael number,
 since elsewhere MR is defined as iterated application of the Fermat
 test [1].

I hate to jump on the bandwagon about this, but these statements fail
a basic consistency test.  Iterating a deterministic test will not
generate a probabilistic one.  And since the Fermat test fails for
Carmichael numbers, I wouldn't say that it's testing primality.  Both
tests are probabilistic: for a chosen base, they return either
"composite" or "answer unclear".

MR does appear to save some exponentiations, but it also appears to
check that the last (temporally) non-1 square root of 1 we used was
-1, which it must be if n is prime, making it a stronger test than
Fermat's.  Wikipedia concurs that MR is preferred over Fermat,
primarily (pun intended) because of Carmichael numbers.
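
To make the difference concrete, here is a small self-contained C
sketch (my own illustration, with toy parameters) that runs the plain
Fermat check and a single Miller-Rabin round against the Carmichael
number 561 = 3*11*17 for the bases 2..10.  Fermat passes for every base
coprime to 561, a base sharing a factor with n fails both tests, and
the strong test already rejects 561 for base 2 and friends:

  /* fermat_vs_mr.c -- contrast the Fermat test with one strong (MR) round
   * on the Carmichael number 561 = 3*11*17.  Toy integers only: n < 2^32,
   * so all products fit in 64 bits. */
  #include <stdio.h>
  #include <stdint.h>

  static uint64_t mod_pow(uint64_t a, uint64_t e, uint64_t n)
  {
      uint64_t r = 1;
      a %= n;
      while (e) {
          if (e & 1)
              r = (r * a) % n;
          a = (a * a) % n;
          e >>= 1;
      }
      return r;
  }

  /* Fermat: "probably prime" iff a^(n-1) == 1 (mod n). */
  static int fermat_passes(uint64_t n, uint64_t a)
  {
      return mod_pow(a, n - 1, n) == 1;
  }

  /* One Miller-Rabin round: write n-1 = 2^s * d with d odd, then require
   * a^d == 1 or a^(2^i * d) == -1 for some i < s -- i.e. the last square
   * root of 1 encountered, if it is not 1 itself, must be -1. */
  static int mr_round_passes(uint64_t n, uint64_t a)
  {
      uint64_t d = n - 1;
      int s = 0;
      while ((d & 1) == 0) { d >>= 1; s++; }

      uint64_t x = mod_pow(a, d, n);
      if (x == 1 || x == n - 1)
          return 1;
      for (int i = 1; i < s; i++) {
          x = (x * x) % n;
          if (x == n - 1)
              return 1;
      }
      return 0;                     /* definitely composite for this base */
  }

  int main(void)
  {
      uint64_t n = 561;             /* Carmichael number */
      for (uint64_t a = 2; a <= 10; a++)
          printf("n=%llu a=%llu  Fermat:%s  MR:%s\n",
                 (unsigned long long)n, (unsigned long long)a,
                 fermat_passes(n, a) ? "pass" : "fail",
                 mr_round_passes(n, a) ? "pass" : "fail");
      return 0;
  }
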
--
http://www.lightconsulting.com/~travis/  --
We already have enough fast, insecure systems. -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Fermat's primality test vs. Miller-Rabin

2005-11-14 Thread Alexander Klimov
On Fri, 11 Nov 2005, Joseph Ashwood wrote:
 From: Charlie Kaufman [EMAIL PROTECTED]

 I've heard but not confirmed a figure of one failure in 20 million. I've
 never heard an estimate of the probability that two runs would fail to
 detect the composite. It couldn't be better than one failure in 20 million
 squared or worse than one in 80 million.

 I can confirm that that number is completely wrong. I just implemented a
 small Java program to test exactly that. Each number was vetted by a single
 pass of Miller-Rabin (iterations = 1). With 512-bit numbers, the first 52
 random guesses that passed the first test resulted in 26 numbers that failed
 to pass 128 iterations. I find it rather odd that this is exactly half, and
 I also notice that, of those that failed, almost all seem to have failed
 at least half of the 128 iterations.

The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:

  number of Miller-Rabin iterations for an error rate of less
  than 2^-80 for random 'b'-bit input, b >= 100 (taken from table
  4.4 in the Handbook of Applied Cryptography [Menezes, van
  Oorschot, Vanstone; CRC Press 1996]; original paper: Damgaard,
  Landrock, Pomerance: Average case error estimates for the
  strong probable prime test. -- Math. Comp. 61 (1993) 177-194)

  #define BN_prime_checks(b) ((b) >= 1300 ?  2 : \
                              (b) >=  850 ?  3 : \
                              (b) >=  650 ?  4 : \
                              (b) >=  550 ?  5 : \
                              (b) >=  450 ?  6 : \
                              (b) >=  400 ?  7 : \
                              (b) >=  350 ?  8 : \
                              (b) >=  300 ?  9 : \
                              (b) >=  250 ? 12 : \
                              (b) >=  200 ? 15 : \
                              (b) >=  150 ? 18 : \
                              /* b >= 100 */ 27)

and thus a single test gives ~2^{-13}.

[1] http://cvs.openssl.org/getfile/openssl/crypto/bn/bn.h?v=1.21
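
For anyone who wants to rerun that sort of experiment, here is a rough
sketch using OpenSSL's bignum routines (BN_rand and BN_is_prime_ex, as
found in recent OpenSSL releases); it draws random odd 512-bit numbers,
keeps the ones that survive a single MR round, and re-tests those with
many more rounds.  Treat it as illustrative rather than a careful
measurement:

  /* mr_experiment.c -- redo the "single pass vs. 128 passes" experiment.
   * Build (roughly): cc mr_experiment.c -lcrypto */
  #include <stdio.h>
  #include <openssl/bn.h>

  int main(void)
  {
      BN_CTX *ctx = BN_CTX_new();
      BIGNUM *n = BN_new();
      int passed_one = 0, passed_many = 0;

      while (passed_one < 52) {             /* 52 candidates, as in the post */
          /* top=1: set the two high bits; bottom=1: make the number odd */
          if (!BN_rand(n, 512, 1, 1))
              return 1;
          if (BN_is_prime_ex(n, 1, ctx, NULL) != 1)
              continue;                      /* rejected by the single round */
          passed_one++;
          if (BN_is_prime_ex(n, 128, ctx, NULL) == 1)
              passed_many++;                 /* survived 128 rounds as well */
      }
      printf("passed 1 round: %d; of those, passed 128 rounds: %d\n",
             passed_one, passed_many);

      BN_free(n);
      BN_CTX_free(ctx);
      return 0;
  }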

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: On Digital Cash-like Payment Systems

2005-11-14 Thread Travis H.
 Don't ever encrypt the same message twice that way, or you're likely to
 fall to a common modulus attack, I believe.

Looks like it (common modulus attack involves same n, different (e,d) pairs).
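
For concreteness, here is a toy C sketch of the common modulus attack
(the modulus, exponents, and message are made up and deliberately tiny
so that everything fits in 64 bits): given c1 = m^e1 mod n and
c2 = m^e2 mod n with gcd(e1, e2) = 1, extended Euclid recovers m with
no private key at all.

  /* common_modulus.c -- toy demonstration of the RSA common modulus attack:
   * the same m encrypted under (n,e1) and (n,e2), gcd(e1,e2)=1, can be
   * recovered from the two ciphertexts alone.  Tiny numbers only. */
  #include <stdio.h>
  #include <stdint.h>

  static int64_t mod_pow(int64_t a, int64_t e, int64_t n)
  {
      int64_t r = 1;
      a %= n;
      while (e > 0) {
          if (e & 1) r = (r * a) % n;
          a = (a * a) % n;
          e >>= 1;
      }
      return r;
  }

  /* extended Euclid: returns g = gcd(a,b) and x,y with a*x + b*y = g */
  static int64_t ext_gcd(int64_t a, int64_t b, int64_t *x, int64_t *y)
  {
      if (b == 0) { *x = 1; *y = 0; return a; }
      int64_t x1, y1;
      int64_t g = ext_gcd(b, a % b, &x1, &y1);
      *x = y1;
      *y = x1 - (a / b) * y1;
      return g;
  }

  static int64_t mod_inverse(int64_t a, int64_t n)   /* assumes gcd(a,n)=1 */
  {
      int64_t x, y;
      ext_gcd(((a % n) + n) % n, n, &x, &y);
      return ((x % n) + n) % n;
  }

  int main(void)
  {
      int64_t n = 3233;            /* 61 * 53 -- toy modulus          */
      int64_t e1 = 17, e2 = 7;     /* two public exponents, gcd = 1   */
      int64_t m = 42;              /* the message encrypted twice     */
      int64_t c1 = mod_pow(m, e1, n);
      int64_t c2 = mod_pow(m, e2, n);

      /* find a,b with a*e1 + b*e2 = 1 */
      int64_t a, b;
      ext_gcd(e1, e2, &a, &b);

      /* m = c1^a * c2^b mod n; negative exponents handled via inverses */
      int64_t t1 = (a >= 0) ? mod_pow(c1, a, n)
                            : mod_pow(mod_inverse(c1, n), -a, n);
      int64_t t2 = (b >= 0) ? mod_pow(c2, b, n)
                            : mod_pow(mod_inverse(c2, n), -b, n);
      int64_t recovered = (t1 * t2) % n;

      printf("m = %lld, recovered = %lld\n", (long long)m, (long long)recovered);
      return 0;
  }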

However, you're likely to be picking a random symmetric key as the
message, and Schneier even suggests picking a random r in Z_n,
encrypting r, and using hash(r) as the symmetric key.
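
A minimal sketch of that construction, assuming OpenSSL's BN and SHA-1
routines and using a toy modulus as a stand-in for a real public key:
pick r uniformly in Z_n, send c = r^e mod n, and derive the symmetric
key from hash(r) rather than from any structured message.

  /* rsa_kem_sketch.c -- the "random r, key = hash(r)" idea in miniature.
   * The modulus below is a toy stand-in; a real n would be 1024+ bits.
   * No error handling.  Build (roughly): cc rsa_kem_sketch.c -lcrypto */
  #include <stdio.h>
  #include <openssl/bn.h>
  #include <openssl/sha.h>

  int main(void)
  {
      BN_CTX *ctx = BN_CTX_new();
      BIGNUM *n = NULL, *e = NULL;
      BIGNUM *r = BN_new(), *c = BN_new();

      BN_dec2bn(&n, "3233");        /* toy modulus (61*53) */
      BN_dec2bn(&e, "17");          /* toy public exponent */

      BN_rand_range(r, n);          /* r uniform in [0, n) */
      BN_mod_exp(c, r, e, n, ctx);  /* c = r^e mod n, sent to the recipient */

      /* derive the symmetric key from r itself, not from the ciphertext */
      unsigned char rbuf[512], key[SHA_DIGEST_LENGTH];
      int rlen = BN_bn2bin(r, rbuf);
      SHA1(rbuf, (size_t)rlen, key);

      printf("derived a %d-byte key from a %d-byte r\n",
             (int)sizeof(key), rlen);

      BN_free(r); BN_free(c); BN_free(n); BN_free(e);
      BN_CTX_free(ctx);
      return 0;
  }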

More generally, I wonder about salting all operations to prevent using
the same value more than once.  As a heuristic, reusing values seems
like a generally bad idea, so it may be worth applying some kind of
uniquification operation to everything, just as it's a good idea to
pad/frame values in such a way that the output of one stage cannot be
used in another stage of the same protocol.

  Since I'm on the topic, does doing exponentiation in a finite field
  make taking discrete logarithms more difficult (I suspect so), and if
  so, by how much?

 This doesn't make sense. The discrete log operation is the inverse of
 exponentiation. Doing exponentiation is a prerequisite for even
 considering discrete log operations. Hence it cannot make them more
 difficult.

What I really meant was, if it wasn't computed in a finite field, how
difficult would it be to compute the logarithm?  I'm just curious
about how much of the work factor comes from reducing modulo n.
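
One toy illustration of my own of why the reduction is doing
essentially all of the work: if the exponentiation is carried out over
the plain integers, with no modulus anywhere, the "discrete logarithm"
collapses to an ordinary logarithm and can be read off the size of the
result.

  /* no_field_dlog.c -- without reduction mod n, y = a^x leaks x directly,
   * since x = log(y)/log(a) (with bignums: roughly bitlen(y)/log2(a)).
   * Build (roughly): cc no_field_dlog.c -lm */
  #include <stdio.h>
  #include <stdint.h>
  #include <math.h>

  int main(void)
  {
      uint64_t a = 3, x = 29, y = 1;
      for (uint64_t i = 0; i < x; i++)      /* integer exponentiation */
          y *= a;

      uint64_t recovered = (uint64_t)llround(log2((double)y) / log2((double)a));
      printf("y = %llu, recovered exponent = %llu\n",
             (unsigned long long)y, (unsigned long long)recovered);
      return 0;
  }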

I also wonder about some of the implications of choosing a message or
exponent such that not enough reductions take place during
exponentiation.

 I'm not sure conventional covert-channel analysis is going to be that
 useful here, because the bandwidths we are looking at in this attack
 model are so much greater (kilobytes to megabytes per second).

Well, it depends on how you define the attack, which wasn't defined. 
If the attack is to smuggle out a key using a covert channel, it may
apply.  If the attack is to download the key on a conventional
network, it wouldn't make much difference.

Unless, of course, you're auditing network flows over a certain size
or lasting a certain amount of time.
--
http://www.lightconsulting.com/~travis/  --
We already have enough fast, insecure systems. -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: On Digital Cash-like Payment Systems

2005-11-14 Thread Whyte, William
  Don't ever encrypt the same message twice that way, or you're likely to
  fall to a common modulus attack, I believe.
 
 Looks like it (common modulus attack involves same n, 
 different (e,d) pairs).
 
 However, you're likely to be picking a random symmetric key as the
 message, and Schneier even suggests picking a random r in Z_n,
 encrypting r, and using hash(r) as the symmetric key.
 
 More generally, I wonder about salting all operations to prevent using
 the same value more than once.  As a heuristic, reusing values seems
 like a generally bad idea, so it may be worth applying some kind of
 uniquification operation to everything, just as it's a good idea to
 pad/frame values in such a way that the output of one stage cannot be
 used in another stage of the same protocol.

I forget the beginning of this conversation... but if you're
salting all public-key encryption operations you may as well
just use a standard RSA encryption scheme, such as OAEP or
RSA-KEM. OAEP is specified in PKCS#1, available from 
http://www.rsasecurity.com/rsalabs/node.asp?id=2125; it's well-
studied and has a proof of security, and should certainly be
used in preference to any home-grown system.
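
For what it's worth, here is a minimal sketch of OAEP via OpenSSL's
low-level RSA routines (throwaway key, no error handling; this is one
way to call it, not a vetted implementation):

  /* oaep_sketch.c -- RSA-OAEP encrypt/decrypt with OpenSSL instead of
   * hand-rolled padding.  Build (roughly): cc oaep_sketch.c -lcrypto */
  #include <stdio.h>
  #include <openssl/rsa.h>
  #include <openssl/bn.h>

  int main(void)
  {
      /* generate a throwaway 2048-bit key pair */
      RSA *rsa = RSA_new();
      BIGNUM *e = BN_new();
      BN_set_word(e, RSA_F4);               /* public exponent 65537 */
      RSA_generate_key_ex(rsa, 2048, e, NULL);

      const unsigned char msg[] = "session key or other short secret";
      unsigned char ct[512], pt[512];

      /* OAEP padding is applied inside the library call */
      int ctlen = RSA_public_encrypt(sizeof(msg) - 1, msg, ct, rsa,
                                     RSA_PKCS1_OAEP_PADDING);
      int ptlen = RSA_private_decrypt(ctlen, ct, pt, rsa,
                                      RSA_PKCS1_OAEP_PADDING);

      printf("ciphertext %d bytes, recovered %d bytes: %.*s\n",
             ctlen, ptlen, ptlen, (const char *)pt);

      RSA_free(rsa);
      BN_free(e);
      return 0;
  }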

If you were talking about salting something other than public
key operations, accept my apologies...

William

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]