----- Original Message -----
From: "Anton Stiglic" <[EMAIL PROTECTED]>
Subject: RE: Fermat's primality test vs. Miller-Rabin
From: [Joseph Ashwood]
Subject: Re: Fermat's primality test vs. Miller-Rabin
I think much of the problem is the way the number is being applied. Given
a stream of random numbers that have passed a single round of MR, you will
find that very close to 50% of them are not prime. This does not mean that
MR passes 50% of all numbers (the 2^-80 probability given above is of this
type).
Do you do an initial sieving to get rid of the more obvious composites?
No, I did not. Since this was specifically to test the effectiveness of MR,
I determined that it would be better to test purely based on MR and not use
any sieving. The actual algorithm was:
    question = random 512-bit number
    // this is not the most efficient, but it should remove bias, making this
    // a uniform choice among numbers that pass a single round
    while (question does not pass a single round of MR)
        question = random 512-bit number
    perform an MR round
    log MR round result
Then I performed analysis based on the log generated. I will gladly disclose
the source code to anyone who asks (it's in Java).
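The actual Java source is not reproduced in this message, but a minimal sketch of a single Miller-Rabin round plus the experiment loop described above might look like the following (class and method names are mine, not the poster's; this is an illustration, not the original program):

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Random;

public class MRExperiment {
    static final BigInteger TWO = BigInteger.valueOf(2);

    // One Miller-Rabin round: write n-1 = 2^s * d with d odd, pick a random
    // base a in [2, n-2], and accept if a^d == 1 or a^(2^r * d) == n-1 for
    // some r < s. Even n > 2 is rejected outright (no sieving otherwise).
    static boolean oneRound(BigInteger n, Random rng) {
        if (n.compareTo(BigInteger.valueOf(3)) <= 0) return n.compareTo(TWO) >= 0;
        if (!n.testBit(0)) return false;              // even numbers fail
        BigInteger nMinus1 = n.subtract(BigInteger.ONE);
        int s = nMinus1.getLowestSetBit();
        BigInteger d = nMinus1.shiftRight(s);
        BigInteger a;                                  // random base in [2, n-2]
        do {
            a = new BigInteger(n.bitLength(), rng);
        } while (a.compareTo(TWO) < 0 || a.compareTo(nMinus1) >= 0);
        BigInteger x = a.modPow(d, n);
        if (x.equals(BigInteger.ONE) || x.equals(nMinus1)) return true;
        for (int r = 1; r < s; r++) {
            x = x.modPow(TWO, n);
            if (x.equals(nMinus1)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Random rng = new SecureRandom();
        // Experiment loop as described: draw candidates until one passes a
        // single round, then subject it to further rounds and log the result.
        BigInteger question;
        do {
            question = new BigInteger(512, rng);
        } while (!oneRound(question, rng));
        int passed = 0;
        for (int i = 0; i < 127; i++) if (oneRound(question, rng)) passed++;
        System.out.println("survivor passed " + passed + "/127 further rounds");
    }
}
```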
I'm guessing you don't, since you seem to have a result contradictory to what
has been proven by Damgard, Landrock and Pomerance. If you look at table 4.3
of HAC (which comes from the Damgard et al. paper), it says that if your
candidates come from a uniform random distribution, then for a 500-bit
candidate, the probability that a candidate n is composite when one round of
Miller-Rabin said it was prime is <= (1/2)^56. You are finding that the
probability is about 1/2, which seems very wrong (unless you are not doing
the sieving, which is very important). Am I misunderstanding something?
No, you're not. The sieving is important from a speed standpoint, in that the
odds improve substantially because of it; however it is not, strictly
speaking, necessary: MR will return a valid result either way.
In fact it appears that integers fall on a continuum of difficulty
for MR, where some numbers will always fail (easy composites), and other
numbers will always pass (primes). The problem comes when trying to denote
which type of probability you are discussing.
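The "continuum" point can be made concrete by exhaustively counting strong liars, i.e. the bases for which a number passes one Miller-Rabin round. This small sketch over word-sized moduli is my own illustration, not part of the original exchange:

```java
public class StrongLiars {
    // Count bases a in [1, n-1] for which odd n > 2 passes one MR round.
    // For a prime, every base passes; composites vary from almost none
    // to the worst-case fraction of about 1/4 of the bases.
    static int strongLiars(int n) {
        int d = n - 1, s = 0;
        while (d % 2 == 0) { d /= 2; s++; }        // n-1 = 2^s * d, d odd
        int liars = 0;
        for (int a = 1; a < n; a++) {
            long x = modPow(a, d, n);
            if (x == 1 || x == n - 1) { liars++; continue; }
            for (int r = 1; r < s; r++) {
                x = x * x % n;
                if (x == n - 1) { liars++; break; }
            }
        }
        return liars;
    }

    static long modPow(long b, long e, long m) {
        long r = 1; b %= m;
        for (; e > 0; e >>= 1, b = b * b % m) if ((e & 1) == 1) r = r * b % m;
        return r;
    }

    public static void main(String[] args) {
        System.out.println(strongLiars(7));   // prime: all 6 bases pass
        System.out.println(strongLiars(15));  // only the trivial liars 1 and 14
        System.out.println(strongLiars(65));  // more liars than 15, still few
    }
}
```

Running this shows that 7 (a prime) passes for every base, while the composites 15 and 65 pass only for a handful of bases, which is the per-number spread the paragraph above is describing.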
Well, I think I explained it pretty clearly, but I can try to reiterate. Let
X represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for
example). But this is not the probability that we are interested in; we are
(at least I am) more interested in P(X | Y_t). In other words, what is the
probability that n is in fact composite when Miller-Rabin(n,t) declared n
to be prime? Do we agree that this is the probability that we are interested
in?
If we are discussing that aspect, then yes we can agree to it. That is the
probability I gave, at exactly a single round (i.e. no sieving involved),
approaching 1/2 (my sample was too small to narrow it beyond about 2
significant digits). I know this result is different from the standard
number, but the experiment was performed, and the results are what I
reported. This is where the additional question below becomes important
(since it gives how quickly the odds of being incorrect will fall).
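As a hedged aside (my own arithmetic, not from either poster): Bayes' rule makes the gap between P(Y_t | X) and P(X | Y_t) explicit. Treating the worst-case per-round bound of 1/4 as if it were the actual pass rate of composites, and assuming primes always pass and that a random odd 512-bit integer is prime with density about 2/ln(2^512), one can compute the resulting worst-case P(X | Y_1):

```java
public class BayesSketch {
    // P(composite | passed one round) via Bayes' rule, under the assumptions
    // that primes always pass and composites pass with probability liarRate.
    static double compositeGivenPass(double pPrime, double liarRate) {
        return liarRate * (1 - pPrime) / (liarRate * (1 - pPrime) + pPrime);
    }

    public static void main(String[] args) {
        // Assumed density of primes among random odd 512-bit integers.
        double pPrime = 2.0 / (512 * Math.log(2));
        System.out.printf("worst-case P(X | Y_1) = %.3f%n",
                compositeGivenPass(pPrime, 0.25));
    }
}
```

Under these deliberately pessimistic assumptions the bound comes out near 0.98, which is exactly why the average-case analysis of Damgard, Landrock and Pomerance (giving <= (1/2)^56 for ~500-bit candidates), rather than the worst-case (1/4)^t bound, is what makes a single round trustworthy for uniformly random candidates.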
What are the odds that a
random 512-bit composite will be detected as composite by MR in one round?
I don't think anyone has dependably answered that question, but the answer
is very different from 1 - (probability that MR says it's a prime)^k. Any
discussion needs to be more accurately phrased.
You are looking for P(Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) declares
n to be composite. Is that what you are looking for?
Actually I'm not; the probability is a subtly different one, and the key
difference is in Y. Instead it is: given a random composite RC, what is
P(MR(RC, k) | X)? This appears to me to be a complex probability based on the
size of the composite. But this is the core probability that governs the
probability of composites remaining in the set of numbers that pass MR-k.
Fortunately, while it is a core probability, it is not necessary for MR's
main usefulness. Performing log_2(N)/4 rounds of MR appears to be a solid
upper bound on the requirements, and as this is the probability given by
Koblitz, and the most common assumption in usage, it is a functional one.
For example, my phrasing is that in the tests that I performed, 50% (+/-
experimental noise) of those numbers that passed a single round of MR also
passed 128 rounds, leading me to conclude that 50% of the numbers that
passed a single round of MR are in fact prime. Since each number that
passed a single round was subjected to 127 additional rounds, a number of
additional statistics can be drawn, in particular that of those that failed
at least one round, none failed fewer than 40 rounds, and that few passed
fewer than 40 rounds. Due to the fact that this was only iterated 65536
times there is still substantial experimental error available. These pieces
of information combined indicate that for 512 bits it is necessary to have
80 rounds of MR to verify a prime.
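The quantity in dispute above, the chance that a random composite survives one MR round, can at least be estimated empirically. The following Monte-Carlo sketch over word-sized odd composites is my own illustration (behaviour at 512 bits may well differ, which is the open question in the thread):

```java
import java.util.Random;

public class CompositePassRate {
    // One Miller-Rabin round on odd n > 3 with a random base in [2, n-2].
    static boolean oneRound(int n, Random rng) {
        int d = n - 1, s = 0;
        while (d % 2 == 0) { d /= 2; s++; }
        long a = 2 + rng.nextInt(n - 3);
        long x = modPow(a, d, n);
        if (x == 1 || x == n - 1) return true;
        for (int r = 1; r < s; r++) {
            x = x * x % n;
            if (x == n - 1) return true;
        }
        return false;
    }

    static long modPow(long b, long e, long m) {
        long r = 1; b %= m;
        for (; e > 0; e >>= 1, b = b * b % m) if ((e & 1) == 1) r = r * b % m;
        return r;
    }

    // Trial division; n is always odd here, so odd divisors suffice.
    static boolean isComposite(int n) {
        for (int p = 3; (long) p * p <= n; p += 2) if (n % p == 0) return true;
        return false;
    }

    // Empirical one-round pass rate over random odd composites below ~10^6.
    static double estimatePassRate(int trials, long seed) {
        Random rng = new Random(seed);
        int done = 0, passed = 0;
        while (done < trials) {
            int n = 9 + 2 * rng.nextInt(500000);   // random odd n in [9, 1000007]
            if (!isComposite(n)) continue;          // discard the primes
            done++;
            if (oneRound(n, rng)) passed++;
        }
        return (double) passed / trials;
    }

    public static void main(String[] args) {
        System.out.println("one-round pass rate for random odd composites: "
                + estimatePassRate(5000, 42));
    }
}
```

For composites in this small range the observed pass rate is far below the worst-case 1/4, illustrating that the per-composite pass probability depends heavily on the number itself.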
I don't understand what you are trying to point out. If you choose your
candidates uniformly at random and do the sieving before applying the
Miller-Rabin tests, then for a 512-bit number it is sufficient to apply 5
rounds to get a probability of error lower than (1/2)^80.
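That recipe (sieve first, then a few MR rounds) can be sketched with the JDK's BigInteger. Note that isProbablePrime takes a certainty parameter (error at most 2^-certainty), not a literal round count, and modern JDKs also run a Lucas test, so this only approximates "5 rounds after sieving"; the class and method names are mine:

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Random;

public class SievedPrimeGen {
    // Small odd primes for the trial-division sieve run before any MR work.
    // Candidates are forced odd below, so 2 need not be checked.
    static final int[] SMALL_PRIMES =
        {3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47};

    static boolean passesSieve(BigInteger n) {
        for (int p : SMALL_PRIMES)
            if (n.mod(BigInteger.valueOf(p)).signum() == 0) return false;
        return true;   // assumes n is large, so n itself is not a small prime
    }

    // Draw odd candidates of the requested size, sieve out obvious
    // composites cheaply, then hand the survivors to the JDK's test.
    static BigInteger generate(int bits, int certainty, Random rng) {
        while (true) {
            BigInteger c = new BigInteger(bits, rng).setBit(bits - 1).setBit(0);
            if (!passesSieve(c)) continue;             // cheap rejection first
            if (c.isProbablePrime(certainty)) return c;
        }
    }

    public static void main(String[] args) {
        BigInteger p = generate(512, 80, new SecureRandom());
        System.out.println(p.bitLength() + "-bit probable prime found");
    }
}
```

The sieve is purely a speed optimization, as noted earlier in the thread: it removes the bulk of candidates before any expensive modular exponentiation happens, without changing what MR itself reports.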
But that would be entirely insufficient to isolate MR, which is what I was
discussing. Including sieving does speed up the process enormously, but it
fails to address what the actual behaviors of MR alone are, a very critical
point.
You should take a look at Damgard et al.'s paper; they did a very good
analysis.
And I'm saying that if you have accurately represented their analysis, their
analysis does not apply to MR itself, but to a combination of MR with other
techniques. As such it is not a valid analysis of MR alone.
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]