Re: Fermat's primality test vs. Miller-Rabin

2005-12-06 Thread Sidney Markowitz
Joseph Ashwood wrote:
 Apparently, they are. I ran a sample, but even with the added second 
 sanity check, every one of them that passes a single round comes up prime.
 
 I then proceeded to move it to 2048-bit numbers. It takes longer and the 
 gaps between primes are averaging around 700 right now, but once again if it 
 passes a single test it passes all 128+128.

Ok, I did misunderstand you. If that "failed 120-130 times" refers to the
number of trials between primes, then you are getting results within the
expected range.

According to the prime number theorem, the probability of selecting a prime
number at random from odd numbers is about 2/ln(n), which for a 512-bit number
is about 1 in 177. That means you have about a 50% chance of finding a prime
within roughly 120 tries.
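
A quick back-of-the-envelope check of those figures (an illustrative Java
snippet, not part of the original message):

public class PrimeDensity {
    public static void main(String[] args) {
        // density of primes among odd 512-bit numbers: about 2/ln(2^512), i.e. ~1 in 177
        double p = 2.0 / (512 * Math.log(2));
        System.out.println("about 1 in " + Math.round(1 / p));
        // number of odd candidates needed for a ~50% chance of hitting a prime: ~123
        double trials = Math.log(0.5) / Math.log(1 - p);
        System.out.println("about " + Math.round(trials) + " candidates for a 50% chance");
    }
}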

According to the results that Anton quoted, there is at most a 2^-56 chance
that a 512-bit odd number that passes one round of Miller-Rabin is not
actually prime.

So all of your results do make sense.

 -- sidney



Re: Fermat's primality test vs. Miller-Rabin

2005-12-06 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin





Ok, after making that change and a few others: selecting only odd numbers
(which acts as a small sieve), I'm not getting much useful information. It
appears to be such that at 512 bits if it passes once it passes 128 times,
and it appears to fail on average about 120-130 times, so the sieve
amplifies the values more than expected. Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).



O.k., so if I read this right, your new results concord with the analysis of
Pomerance et al.  That would make much more sense.

When you say on average about 120-130 times the test fails, out of how
many is that?


I should've said that the quantity of numbers that failed the first test 
between each success was about 120-130. Apparently, even sieving based 
solely on "is it odd" is enough to substantially influence the outcome.
   Joe 






Re: Fermat's primality test vs. Miller-Rabin

2005-12-05 Thread Sidney Markowitz
Joseph Ashwood wrote:
 Granted this is only a test of the 
 generation of 128 numbers, but I got 128 primes (based on 128 MR rounds). 

That doesn't make sense, unless I'm misinterpreting what you are saying. Primes
aren't that common, are they?

I don't have time right now to look for a bug in your code, but you could add a
sanity check that would catch a bug immediately by adding in the appropriate
spot a test like

 if (!curNum.isProbablePrime(128))
   System.out.println("Something wrong, number is not really a prime!");


to check that your primality test gets the same result as the M-R primality
test that comes with BigInteger.
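
A tiny standalone illustration of that kind of sanity check (a hypothetical
example, not Joseph's program; a base-2 Fermat pass stands in here for the
single-round test under discussion):

import java.math.BigInteger;
import java.security.SecureRandom;

public class SanityCheckDemo {
    public static void main(String[] args) {
        SecureRandom rand = new SecureRandom();
        // random odd 512-bit candidate, as elsewhere in this thread
        BigInteger curNum = BigInteger.ONE.or(new BigInteger(512, rand));
        // one Fermat-style pass to base 2
        BigInteger two = BigInteger.valueOf(2);
        boolean passedOnce =
            two.modPow(curNum.subtract(BigInteger.ONE), curNum).equals(BigInteger.ONE);
        // compare against BigInteger's built-in probabilistic primality test
        if (passedOnce && !curNum.isProbablePrime(128))
            System.out.println("Something wrong, number is not really a prime!");
    }
}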

 -- sidney



Re: Fermat's primality test vs. Miller-Rabin

2005-12-05 Thread Joseph Ashwood
- Original Message - 
From: Sidney Markowitz [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



Joseph Ashwood wrote:

Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).


That doesn't make sense, unless I'm misinterpreting what you are saying.
Primes aren't that common, are they?


Apparently, they are. I ran a sample, but even with the added second 
sanity check, every one of them that passes a single round comes up prime.


I then proceeded to move it to 2048-bit numbers. It takes longer and the 
gaps between primes are averaging around 700 right now, but once again if it 
passes a single test it passes all 128+128. This sample is currently 
statistically completely insignificant, but even after the current 8 tries 
I'd expect something different.
   Joe 






RE: Fermat's primality test vs. Miller-Rabin

2005-12-05 Thread Anton Stiglic

Ok, after making that change and a few others: selecting only odd numbers
(which acts as a small sieve), I'm not getting much useful information. It
appears to be such that at 512 bits if it passes once it passes 128 times,
and it appears to fail on average about 120-130 times, so the sieve
amplifies the values more than expected. Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).


O.k., so if I read this right, your new results concord with the analysis of
Pomerance et al.   That would make much more sense.

When you say on average about 120-130 times the test fails, out of how
many is that?


--Anton







Re: Fermat's primality test vs. Miller-Rabin

2005-12-04 Thread Joseph Ashwood
- Original Message - 
From: Sidney Markowitz [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



Joseph Ashwood wrote:

  byte [] rawBytes = new byte[lenNum/8];
  rand.nextBytes(rawBytes);
  curNum = new BigInteger(rawBytes);



curNum = BigInteger.ONE.or(new BigInteger(512, rand));


Ok, after making that change and a few others: selecting only odd numbers
(which acts as a small sieve), I'm not getting much useful information. It
appears to be such that at 512 bits if it passes once it passes 128 times,
and it appears to fail on average about 120-130 times, so the sieve
amplifies the values more than expected. Granted this is only a test of the
generation of 128 numbers, but I got 128 primes (based on 128 MR rounds).
For the sake of full code review and duplication I've appended the entire 64
lines of code. You'll see I made a few optimizations, and removed writing
the data to a csv. I developed, compiled, and ran it only through Eclipse.

   Joe




import java.math.BigInteger;
import java.security.SecureRandom;
import java.io.IOException;

public class millerMain {
static int numTests = 128;
static int lenNum = 512;
static SecureRandom rand = new SecureRandom();
static BigInteger two = BigInteger.valueOf(2);


public static void main(String[] args) throws IOException {
 BigInteger curNum = null;
 int totalPrimes = 0;
 int [] successes = new int[numTests];
 int failuresBetween = 0;
 for(int i = 0; i < numTests; i++)
 {
  failuresBetween = -1;
  //choose starting number
  do
  {
   curNum = BigInteger.ONE.or(new BigInteger(lenNum, rand));
   failuresBetween++;
  }
  while(testOnce(curNum) == false);
  System.out.println("Failed " + failuresBetween + " times");
  //passed once


  //run 127 more tests
   for(int j = 0; j < 127; j++)
  {
   if(testOnce(curNum))
   {
successes[i]++;
   }
  }
  if(successes[i] == 127) totalPrimes++;
  System.out.println("Test Number " + i + " successes " + (successes[i]+1) +
    " Total Primes so far " + totalPrimes);


  //leftover from the removed csv logging; currently unused
  BigInteger temp = BigInteger.valueOf(successes[i]);
  String num = temp.toString();
 }
}


//one test round: checks whether A^(N-1) == 1 (mod N) for a random A
static boolean testOnce(BigInteger N){
 BigInteger A = null;

 // 0 < A < N
 A = new BigInteger(lenNum, rand).mod(N);
 if(A.compareTo(BigInteger.ZERO) == 0) return testOnce(N);

 BigInteger Nminus1 = N.subtract(BigInteger.ONE);

 //shouldBeOne = A^(N-1) mod N
 BigInteger shouldBeOne = A.modPow(Nminus1, N);
 if(shouldBeOne.compareTo(BigInteger.ONE)!= 0) return false;
 return true;

}
}
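
For comparison, testOnce() above only performs the Fermat check
A^(N-1) mod N == 1. A strong-pseudoprime (Miller-Rabin) round, per the HAC
fact 4.20 statement quoted elsewhere in this thread, also walks the chain of
square roots. The following is an illustrative sketch, not part of Joseph's
program, and it assumes an odd N > 3:

import java.math.BigInteger;
import java.security.SecureRandom;

public class MillerRabinRound {
    static SecureRandom rand = new SecureRandom();

    static boolean mrRoundOnce(BigInteger N) {
        BigInteger Nminus1 = N.subtract(BigInteger.ONE);
        int s = Nminus1.getLowestSetBit();      // write N-1 = 2^s * r with r odd
        BigInteger r = Nminus1.shiftRight(s);

        BigInteger A;
        do {                                    // random base A in [2, N-2]
            A = new BigInteger(N.bitLength(), rand);
        } while (A.compareTo(BigInteger.valueOf(2)) < 0 || A.compareTo(Nminus1) >= 0);

        BigInteger y = A.modPow(r, N);
        if (y.equals(BigInteger.ONE) || y.equals(Nminus1)) return true;
        for (int j = 1; j < s; j++) {
            y = y.multiply(y).mod(N);
            if (y.equals(Nminus1)) return true;          // reached -1: N passes this round
            if (y.equals(BigInteger.ONE)) return false;  // nontrivial sqrt of 1: composite
        }
        return false;                                    // never reached -1: composite
    }

    public static void main(String[] args) {
        BigInteger candidate = BigInteger.ONE.or(new BigInteger(512, rand));
        System.out.println(mrRoundOnce(candidate));
    }
}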





Re: Fermat's primality test vs. Miller-Rabin

2005-12-03 Thread Sidney Markowitz
Joseph Ashwood wrote:
   byte [] rawBytes = new byte[lenNum/8];
   rand.nextBytes(rawBytes);
   curNum = new BigInteger(rawBytes);

I haven't thought through why it would produce non-primes, but it
doesn't seem to do what you want. That produces a 512-bit
two's-complement number, which gives you a 511-bit positive integer, not
512 bits. It also is unnecessarily complicated compared to this form of
the BigInteger constructor and the or method (see the javadoc):

curNum = BigInteger.ONE.or(new BigInteger(512, rand));
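
A small demonstration of the difference (an illustrative snippet, not part of
the original message):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ConstructorDemo {
    public static void main(String[] args) {
        SecureRandom rand = new SecureRandom();

        // BigInteger(byte[]) reads the bytes as a signed two's-complement value,
        // so 64 random bytes give at most a 511-bit magnitude and are negative
        // about half the time.
        byte[] rawBytes = new byte[512 / 8];
        rand.nextBytes(rawBytes);
        BigInteger signed = new BigInteger(rawBytes);
        System.out.println(signed.signum() + " " + signed.bitLength());

        // BigInteger(512, rand) is uniform over [0, 2^512); or-ing with ONE makes it odd.
        BigInteger unsigned = BigInteger.ONE.or(new BigInteger(512, rand));
        System.out.println(unsigned.signum() + " " + unsigned.bitLength());
    }
}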

 -- Sidney Markowitz
http://www.sidney.com



Re: Fermat's primality test vs. Miller-Rabin

2005-12-02 Thread Nicolas Rachinsky
* Joseph Ashwood [EMAIL PROTECTED] [2005-11-22 02:50 -0800]:
 - Original Message - 
 From: Anton Stiglic [EMAIL PROTECTED]
 Subject: RE: Fermat's primality test vs. Miller-Rabin
 
 
 -Original Message-
 From: [Joseph Ashwood]
 Subject: Re: Fermat's primality test vs. Miller-Rabin
 I think much of the problem is the way the number is being applied. Given
 a stream of random numbers that have passed a single round of MR you will
 find that very close to 50% of them are not prime; this does not mean that
 it passes 50% of the numbers (the 2^-80 probability given above is of this
 type).
 
 Do you do an initial sieving to get rid of the more obvious composites?
 
 No I did not; since this was specifically to test the effectiveness of MR, I 
 determined that it would be better to test purely based on MR, and not use 
 any sieving. The actual algorithm was:
 
 
 16384 times
 {
question = random 512-bit number
//this is not the most efficient, but it should remove bias making this 
 just MR

If I remember the proof of MR correctly it assumes an odd number. Were
all your questions odd?

If not, please try again with odd numbers only.

Nicolas



Re: Fermat's primality test vs. Miller-Rabin

2005-12-02 Thread Joseph Ashwood
- Original Message - 
From: Nicolas Rachinsky [EMAIL PROTECTED]

Subject: Re: Fermat's primality test vs. Miller-Rabin



* Joseph Ashwood [EMAIL PROTECTED] [2005-11-22 02:50 -0800]:

16384 times
..



If I remember the proof of MR correctly it assumes an odd number. Were
all your questions odd?


The random numbers tested were almost certainly not all odd; no filtering 
was done on the random output.



If not, please try again with odd numbers only.


I'm running an abbreviated test right now, and it's looking less impressive; 
I have to assume I'm hitting a bad streak somehow. Real bad: 30 numbers 
tested, no primes at all so far, and I see one that has passed 79 tests. I have 
to assume I'm missing something really stupid at this point in my new number 
chooser that I don't have the time to find right now. So I'm asking for 
anyone's help in pointing out to me why, after I let it go the full 128 runs 
(that is, 128 numbers that passed a single round of MR), I didn't get a single 
number to pass more than 79. Did I just hit a really, really bad streak?


The exact code for the function and the support variables :

static int lenNum = 512;
static SecureRandom rand = new SecureRandom();
static BigInteger two = BigInteger.valueOf(2);

static BigInteger chooseValue()
{
 //pick a random integer
 BigInteger curNum = null;
 byte [] rawBytes = new byte[lenNum/8];
 rand.nextBytes(rawBytes);
 curNum = new BigInteger(rawBytes);

 //make sure it's odd
 if(curNum.mod(BigInteger.valueOf(2)).compareTo(BigInteger.ZERO) == 0)
 {
  curNum = curNum.add(BigInteger.ONE);
 }

 //if it's 0 or negative, try again
 if(curNum.compareTo(BigInteger.ZERO) <= 0)
 {
  return chooseValue();
 }
 return curNum;

}

This should choose a 512-bit random odd positive number, unless I'm missing 
something horribly, horribly braindead.


Anyway, back to trying to design a cool user interface (anyone who knows 
me knows that that's the cue to begin laughing; I can't design a UI for sh*t).
   Joe 






RE: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Anton Stiglic


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joseph Ashwood
Sent: November 18, 2005 3:18 AM
To: cryptography@metzdowd.com
Subject: Re: Fermat's primality test vs. Miller-Rabin

 Look at table 4.3 of the Handbook of
 applied cryptography: for t = 1 (one iteration) and for a 500-bit candidate,
 we have probability p(X | Y_1) <= 2^-56, which is better than what you
 concluded.  (X representing the event that the candidate n is composite, Y_t
 representing the event that Miller-Rabin(n, t) declares n to be prime).

 The results in table 4.3 and 4.4 of HAC are for randomly (uniform) chosen
 candidates, and I think you need to do a basic sieving (don't remember if
 that is necessary, but I think it is).  The result is due to the fact 
 that under these conditions, the strong pseudoprime test does in fact 
 do much better than a 1/4 probability of error (the value of P(Y_t | X) is
 very low); this result is due to Damgard, Landrock and Pomerance, based on 
 earlier work of Erdos and Pomerance.

I think much of the problem is the way the number is being applied. Given
a stream of random numbers that have passed a single round of MR you will
find that very close to 50% of them are not prime; this does not mean that
it passes 50% of the numbers (the 2^-80 probability given above is of this 
type). 

Do you do an initial sieving to get rid of the more obvious composites?  I'm
guessing you don't, since you seem to have a result contradictory to what has
been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3 of
HAC (which comes from the Damgard et al. paper), it says that if your candidates
come from a uniform random distribution, then for a 500-bit candidate, the
probability that a candidate n is composite when one round of Miller-Rabin
said it was prime is <= (1/2)^56.  You are finding that the probability is
about 1/2, that seems very wrong (unless you are not doing the sieving,
which is very important).  Am I misunderstanding something?


In fact it appears that integers fall on a continuum of difficulty 
for MR, where some numbers will always fail (easy composites), and other 
numbers will always pass (primes). The problem comes when trying to denote 
which type of probability you are discussing. 

Well I think I explained it pretty clearly.  I can try to re-iterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for example).
But this is not the probability that we are interested in, we are (at least
I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability that we are
interested in?


What are the odds that a 
random 512-bit composite will be detected as composite by MR in one round?
I don't think anyone has dependably answered that question, but the answer
is very different from 1-(probability that MR-* says it's a prime)^-k. Any 
discussion needs to be more accurately phrased.

You are looking for P( Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves
n to be composite. Is that what you are looking for?


For example, my phrasing is that in the tests that I performed 50% (+/- 
experimental noise) of those numbers that passed a single round of MR also 
passed 128 rounds, leading me to conclude that 50% of the numbers that 
passed a single round of MR are in fact prime. Since each number that
passed a single round was subjected to 127 additional rounds, a number of 
additional statistics can be drawn, in particular that of those that failed
at least one round none failed less than 40 rounds, and that few passed
less than 40 rounds. Due to the fact that this was only iterated 65536
times there is still substantial experimental error available. These pieces
of information combined indicate that for 512-bits it is necessary to have
80 rounds of MR to verify a prime.
 
I don't understand what you are trying to point out.  If you choose your
candidates uniformly at random and do the sieving before applying the
Miller-Rabin tests, then for a 512-bit number it is sufficient to apply 5
rounds to get a probability of error lower than (1/2)^80.  

You should take a look at Damgard et al.'s paper; they did a very good
analysis.

--Anton
  





Re: Fermat's primality test vs. Miller-Rabin

2005-11-30 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin



-Original Message-
From: [Joseph Ashwood]
Subject: Re: Fermat's primality test vs. Miller-Rabin

I think much of the problem is the way the number is being applied. Given
a stream of random numbers that have passed a single round of MR you will
find that very close to 50% of them are not prime; this does not mean that
it passes 50% of the numbers (the 2^-80 probability given above is of this
type).


Do you do an initial sieving to get rid of the more obvious composites?


No I did not; since this was specifically to test the effectiveness of MR, I 
determined that it would be better to test purely based on MR, and not use 
any sieving. The actual algorithm was:



16384 times
{
   question = random 512-bit number
   //this is not the most efficient, but it should remove bias making this 
just MR
   while(question does not pass a single round of MR) question = random 
512-bit number

   127 times
   {
   perform an MR round
   log MR round result
   }
}

Then I performed analysis based on the log generated. I will gladly disclose 
the source code to anyone who asks (it's in Java).



I'm guessing you don't, since you seem to have a result contradictory to what
has been proven by Damgard, Landrock and Pomerance.  If you look at table 4.3
of HAC (which comes from the Damgard et al. paper), it says that if your
candidates come from a uniform random distribution, then for a 500-bit
candidate, the probability that a candidate n is composite when one round of
Miller-Rabin said it was prime is <= (1/2)^56.  You are finding that the
probability is about 1/2, that seems very wrong (unless you are not doing the
sieving, which is very important).  Am I misunderstanding something?


No you're not. The sieving is important from a speed standpoint, in that the 
odds improve substantially based on it; however it is not, strictly 
speaking, necessary, and MR will return a valid result either way.



In fact it appears that integers fall on a continuum of difficulty
for MR, where some numbers will always fail (easy composites), and other
numbers will always pass (primes). The problem comes when trying to denote
which type of probability you are discussing.


Well I think I explained it pretty clearly.  I can try to re-iterate.  Let X
represent the event that a candidate n is composite, and let Y_t denote the
event that Miller-Rabin(n,t) declares n to be prime, where Miller-Rabin(n,t)
means you apply t iterations of Miller-Rabin on n.
Now the basic theorem that we all know is that P(Y_t | X) <= (1/4)^t (this
is a problem in one of Koblitz's basic textbooks on cryptography, for example).
But this is not the probability that we are interested in, we are (at least
I am) more interested in P(X | Y_t).  In other words, what is the
probability that n is in fact composite when Miller-Rabin(n, t) declared n
to be prime?  Do we agree that this is the probability that we are
interested in?


If we are discussing that aspect, then yes we can agree to it. That is the 
probability I gave, at exactly a single round (i.e. no sieving involved), 
approaching 1/2 (my sample was too small to narrow it beyond about 2 
significant digits). I know this result is different from the standard 
number, but the experiment was performed, and the results are what I 
reported. This is where the additional question below becomes important 
(since it gives how quickly the odds of being incorrect will fall).




What are the odds that a
random 512-bit composite will be detected as composite by MR in one round?
I don't think anyone has dependably answered that question, but the answer
is very different from 1-(probability that MR-* says it's a prime)^-k. Any
discussion needs to be more accurately phrased.


You are looking for P( Comp Y_t | X), where Comp Z is the complementary
event of Z. In our case, Comp Y_t is the event that Miller-Rabin(n,t) proves
n to be composite. Is that what you are looking for?


Actually I'm not; the probability is a subtly different one, and the key 
difference is in Y. Instead it is: given a random composite RC, what is 
P(MR(RC, k) | Comp X)? This appears to me to be a complex probability based on 
the size of the composite. But this is the core probability that governs the 
probability of composites remaining in the set of numbers that pass MR-k. 
Fortunately, while it is a core probability, it is not necessary for MR's 
main usefulness. Performing log_2(N)/4 rounds of MR appears to be a solid 
upper bound on the requirements, and as this is the probability given by 
Koblitz, and the most common assumption on usage, it is a functional 
substitute.



For example, my phrasing is that in the tests that I performed 50% (+/-
experimental noise) of those numbers that passed a single round of MR also
passed 128 rounds, leading me to conclude that 50% of the numbers that
passed a single round of MR

Re: Fermat's primality test vs. Miller-Rabin

2005-11-18 Thread Joseph Ashwood
- Original Message - 
From: Anton Stiglic [EMAIL PROTECTED]

Subject: RE: Fermat's primality test vs. Miller-Rabin



The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:


My own tests disagreed with this: 512 bits seemed to have a threshold 
around 70 passes for random candidates. I'm thinking you forgot a sieving 
step there (which would change the number substantially).



and thus a single test gives ~2^{-13}.


If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at table 4.3 of the Handbook of
applied cryptography: for t = 1 (one iteration) and for a 500-bit candidate,
we have probability p(X | Y_1) <= 2^-56, which is better than what you
concluded.  (X representing the event that the candidate n is composite, Y_t
representing the event that Miller-Rabin(n, t) declares n to be prime).

The results in table 4.3 and 4.4 of HAC are for randomly (uniform) chosen
candidates, and I think you need to do a basic sieving (don't remember if
that is necessary, but I think it is).  The result is due to the fact that
under these conditions, the strong pseudoprime test does in fact do much better
than a 1/4 probability of error (the value of P(Y_t | X) is very low); this
result is due to Damgard, Landrock and Pomerance, based on earlier work of
Erdos and Pomerance.


I think much of the problem is the way the number is being applied. Given a 
stream of random numbers that have passed a single round of MR you will find 
that very close to 50% of them are not prime; this does not mean that it 
passes 50% of the numbers (the 2^-80 probability given above is of this 
type). In fact it appears that integers fall on a continuum of difficulty 
for MR, where some numbers will always fail (easy composites), and other 
numbers will always pass (primes). The problem comes when trying to denote 
which type of probability you are discussing. What are the odds that a 
random 512-bit composite will be detected as composite by MR in one round? I 
don't think anyone has dependably answered that question, but the answer is 
very different from 1-(probability that MR-* says it's a prime)^-k. Any 
discussion needs to be more accurately phrased.


For example, my phrasing is that in the tests that I performed 50% (+/- 
experimental noise) of those numbers that passed a single round of MR also 
passed 128 rounds, leading me to conclude that 50% of the numbers that 
passed a single round of MR are in fact prime. Since each number that passed 
a single round was subjected to 127 additional rounds, a number of 
additional statistics can be drawn, in particular that of those that failed 
at least one round none failed less than 40 rounds, and that few passed less 
than 40 rounds. Due to the fact that this was only iterated 65536 times 
there is still substantial experimental error available. These pieces of 
information combined indicate that for 512-bits it is necessary to have 80 
rounds of MR to verify a prime.
   Joe 






RE: Fermat's primality test vs. Miller-Rabin

2005-11-16 Thread Anton Stiglic

The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:

... 

 and thus a single test gives ~2^{-13}.

If you just took the exponent 80 and divided it by 6 to get ~13, I don't
think that is the right reasoning.  Look at table 4.3 of the Handbook of
applied cryptography: for t = 1 (one iteration) and for a 500-bit candidate,
we have probability p(X | Y_1) <= 2^-56, which is better than what you
concluded.  (X representing the event that the candidate n is composite, Y_t
representing the event that Miller-Rabin(n, t) declares n to be prime).

The results in table 4.3 and 4.4 of HAC are for randomly (uniform) chosen
candidates, and I think you need to do a basic sieving (don't remember if
that is necessary, but I think it is).  The result is due to the fact that
under these conditions, the strong pseudoprime test does in fact do much better
than a 1/4 probability of error (the value of P(Y_t | X) is very low); this
result is due to Damgard, Landrock and Pomerance, based on earlier work of
Erdos and Pomerance.

--Anton




Re: Fermat's primality test vs. Miller-Rabin

2005-11-15 Thread Hal Finney
Ron Rivest reported on some theoretical and practical experimental
work in Crypto 90, "Finding Four Million Large Random Primes",
http://theory.lcs.mit.edu/~rivest/Rivest-FindingFourMillionLargeRandomPrimes.ps

"A number n is a (base two) pseudoprime if it is composite and
satisfies the identity 2^(n-1) = 1 mod n. How rare are pseudoprimes?
We performed an experiment that attempts to provide an answer..."

They tested 718 million random 256-bit values.  First they performed a
small divisor test using divisors up to 10^4.  43,741,404 numbers passed.
These were then tested using the equation above (the Fermat test with
base 2).  4,058,000 passed that further test.  These were then subjected
to 8 iterations of Miller-Rabin to see which were pseudoprimes.

None of them were pseudoprimes.  All of the numbers in this sample which
passed the small divisor and base-2 Fermat test passed all subsequent
MR tests and were presumably all prime.

Rivest goes on and uses a conjecture of Pomerance to argue that the
number of pseudoprimes < 2^256 is at most 4 x 10^52, while the number
of true primes is 6.5 x 10^74.  Hence your chance of running into a
pseudoprime is less than 1 in 10^22.  I'm not sure of the role of the
small-divisor tests, but I think they may make the odds even better.
The larger primes in use today should also have improved odds.

Based on this kind of result, RSAREF, the free library available from
RSA back in the 90s, did not use MR and did not do multiple tests.
It performed a small divisor test (only testing 3, 5, 7 and 11 as
divisors!) and a single base 2 Fermat test, for its RSA keygen.
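
A sketch of that style of check (illustrative only, and not RSAREF's actual
code):

import java.math.BigInteger;
import java.security.SecureRandom;

public class Rsaref90sStyleCheck {
    // small divisor test with 3, 5, 7 and 11, then a single base-2 Fermat test
    static boolean looksPrime(BigInteger n) {
        for (int d : new int[] {3, 5, 7, 11}) {
            if (n.mod(BigInteger.valueOf(d)).signum() == 0) return false;
        }
        BigInteger two = BigInteger.valueOf(2);
        return two.modPow(n.subtract(BigInteger.ONE), n).equals(BigInteger.ONE);
    }

    public static void main(String[] args) {
        BigInteger candidate = BigInteger.ONE.or(new BigInteger(512, new SecureRandom()));
        System.out.println(looksPrime(candidate));
    }
}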

Hal Finney



Re: Fermat's primality test vs. Miller-Rabin

2005-11-14 Thread Travis H.
  Although the Carmichael numbers fool the Fermat test
  (that is, $a^{n-1} = 1 (n)$) for *all* a,

I thought it would work properly if a shares a factor with n.

 Yes I guess the difference is that with MR you are trying to find a
 number that is *likely* a prime, whereas with Fermat you are testing
 primality. But MR will still fail when given a Carmichael number,
 since elsewhere MR is defined as iterated application of the Fermat
 test [1].

I hate to jump on the bandwagon about this, but these statements fail
a basic consistency test.  Iterating a deterministic test will not
generate a probabilistic one.  And since the Fermat test fails for
Carmichael numbers, I wouldn't say that it's testing primality.   Both
tests are probabilistic, and return answers of "composite" or "answer
unclear" for a chosen basis.

MR does appear to save some exponentiations, but it also appears to
check that the last (temporally) non-1 square root of 1 we used was
-1, which it must be if n is prime, making it a stronger test than
Fermat's.  Wikipedia concurs that MR is preferred over Fermat,
primarily (pun intended) because of Carmichael numbers.
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: Fermat's primality test vs. Miller-Rabin

2005-11-14 Thread Alexander Klimov
On Fri, 11 Nov 2005, Joseph Ashwood wrote:
 From: Charlie Kaufman [EMAIL PROTECTED]

 I've heard but not confirmed a figure of one failure in 20 million. I've
 never heard an estimate of the probability that two runs would fail to
 detect the composite. It couldn't be better than one failure in 20 million
 squared or worse than one in 80 million.

 I can confirm that that number is completely wrong. I just implemented a
 small Java program to test exactly that. Each number was vetted by a single
 pass of Miller-Rabin (iterations = 1). With 512-bit numbers the first 52
 random guesses that pass the first test resulted in 26 numbers that failed
 to pass 128 iterations. I find it rather odd that this is exactly half, and
 I also notice that of those that failed they almost all seem to have failed
 at least half of them.

The general consensus is that for 500-bit numbers one needs only 6 MR
tests for 2^{-80} error probability [1]:

  number of Miller-Rabin iterations for an error rate of less
  than 2^-80 for random 'b'-bit input, b >= 100 (taken from table
  4.4 in the Handbook of Applied Cryptography [Menezes, van
  Oorschot, Vanstone; CRC Press 1996]; original paper: Damgaard,
  Landrock, Pomerance: Average case error estimates for the
  strong probable prime test. -- Math. Comp. 61 (1993) 177-194)

  #define BN_prime_checks(b) ((b) >= 1300 ?  2 : \
  (b) >=  850 ?  3 : \
  (b) >=  650 ?  4 : \
  (b) >=  550 ?  5 : \
  (b) >=  450 ?  6 : \
  (b) >=  400 ?  7 : \
  (b) >=  350 ?  8 : \
  (b) >=  300 ?  9 : \
  (b) >=  250 ? 12 : \
  (b) >=  200 ? 15 : \
  (b) >=  150 ? 18 : \
  /* b >= 100 */ 27)

and thus a single test gives ~2^{-13}.

[1] http://cvs.openssl.org/getfile/openssl/crypto/bn/bn.h?v=1.21
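
For readers who prefer plain Java to the C preprocessor, the table maps to
roughly the following (an illustrative transcription; the OpenSSL header
cited in [1] is the authoritative source):

public class BnPrimeChecks {
    // Miller-Rabin iteration counts for < 2^-80 error on random b-bit candidates
    static int iterations(int b) {
        if (b >= 1300) return 2;
        if (b >= 850) return 3;
        if (b >= 650) return 4;
        if (b >= 550) return 5;
        if (b >= 450) return 6;
        if (b >= 400) return 7;
        if (b >= 350) return 8;
        if (b >= 300) return 9;
        if (b >= 250) return 12;
        if (b >= 200) return 15;
        if (b >= 150) return 18;
        return 27; // b >= 100
    }

    public static void main(String[] args) {
        System.out.println(iterations(512)); // prints 6, the figure quoted for ~500-bit numbers
    }
}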

-- 
Regards,
ASK



Re: Fermat's primality test vs. Miller-Rabin

2005-11-13 Thread Joseph Ashwood
- Original Message - 
From: Charlie Kaufman [EMAIL PROTECTED]

Subject: FW: Fermat's primality test vs. Miller-Rabin
In practice, the probability of randomly choosing a Carmichael number of 
size 250 bits is vanishingly small.


I would say that the probability of finding any Carmichael number without 
deliberately looking for it is vanishingly small.


The probability of a single run of Miller-Rabin or Fermat not detecting 
that a randomly chosen number is composite is almost vanishingly small.


I've heard but not confirmed a figure of one failure in 20 million. I've 
never heard an estimate of the probability that two runs would fail to 
detect the composite. It couldn't be better than one failure in 20 million 
squared or worse than one in 80 million.


I can confirm that that number is completely wrong. I just implemented a 
small Java program to test exactly that. Each number was vetted by a single 
pass of Miller-Rabin (iterations = 1). With 512-bit numbers the first 52 
random guesses that pass the first test resulted in 26 numbers that failed 
to pass 128 iterations. I find it rather odd that this is exactly half, and 
I also notice that of those that failed they almost all seem to have failed 
at least half of them.


It appears that the minimum estimate of 1/2 probability is necessary, but 
that 1/4 is more likely.
   Joe 






Re: FW: Fermat's primality test vs. Miller-Rabin

2005-11-13 Thread Florian Weimer
* Charlie Kaufman:

 The probability of a single run of Miller-Rabin or Fermat not
 detecting that a randomly chosen number is composite is almost
 vanishingly small.

How do you choose a random integer, that is, based on which
probability distribution? 8-)

Anyway, one can show that for some fixed number, the probability that
one run of the Miller-Rabin algorithm fails (i.e. reports "potentially
prime" for a composite) does not exceed 1/4.  Knuth gives a proof in
an exercise in Volume 2 of The Art of Computer Programming, including
an example that the 1/4 bound is pretty good.  However, this answers a
slightly different question.



Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Jeremiah Rogers
 I guess the small increase in efficiency would not be worth additional
 program code.

That depends on the size of the numbers you're working with...
Considering the research that goes into fast implementations of
PowerMod I don't think the required computation is trivial.

 Although the Carmichael numbers fool the Fermat test
 (that is, $a^{n-1} = 1 (n)$) for *all* a, there are no such things for
 the Miller-Rabin test:  for any odd composite n at least 3/4 of a's
 fail the test, that is if you made m MR tests with random a's then you
 are mistaken with probability at most (1/4)^m.

Yes I guess the difference is that with MR you are trying to find a
number that is *likely* a prime, whereas with Fermat you are testing
primality. But MR will still fail when given a Carmichael number,
since elsewhere MR is defined as iterated application of the Fermat
test [1].

[1] http://www.nist.gov/dads/HTML/millerRabin.html
--
Jeremiah Rogers



Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Alexander Klimov
On Wed, 9 Nov 2005, Jeremiah Rogers wrote:

  I guess the small increase in efficiency would not be worth additional
  program code.

 That depends on the size of the numbers you're working with...
 Considering the research that goes into fast implementations of
 PowerMod I don't think the required computation is trivial.

For large n the ratio of s to log n is small (where n-1 = d 2^s) and s is
exactly the number of multiplications you may hope to avoid.

 But MR will still fail when given a Carmichael number,
 since elsewhere MR is defined as iterated application of the Fermat
 test.

It is very easy to check.

561 is a Carmichael number (the smallest one); for example, for a = 2,
2^560 = 1 (561), and the same for all a's coprime to 561.
Btw, 561 = 3*11*17, so don't try with a = 3 :-)

Let us now go to the MR test:  560 = 35 * 2^4 (d = 35 and s = 4), so,
e.g., for a = 2, 2^35 = 263 (that is a^d \ne 1) and
263^2 = 166, 166^2 = 67, 67^2 = 1 (that is a^{2^r d} \ne -1 \forall r
\in [0, s-1]), so the MR test declares that 561 is composite and 2 is a
strong witness of this.
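
A quick way to reproduce that arithmetic (an illustrative snippet, not part
of the original message):

import java.math.BigInteger;

public class Carmichael561 {
    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(561);
        BigInteger two = BigInteger.valueOf(2);
        // 2^560 mod 561 == 1, so the plain Fermat test is fooled
        System.out.println(two.modPow(BigInteger.valueOf(560), n));
        // the MR chain: 2^35 mod 561, then repeated squaring;
        // prints 263, 166, 67, 1 and never 560 (i.e. -1)
        BigInteger y = two.modPow(BigInteger.valueOf(35), n);
        for (int i = 0; i < 4; i++) {
            System.out.println(y);
            y = y.multiply(y).mod(n);
        }
    }
}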

-- 
Regards,
ASK



Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Anton Stiglic
 I guess the small increase in efficiency would not be worth additional
 program code.

 That depends on the size of the numbers you're working with...
 Considering the research that goes into fast implementations of
 PowerMod I don't think the required computation is trivial.

 Although the Carmichael numbers fool the Fermat test
 (that is, $a^{n-1} = 1 (n)$) for *all* a, there are no such things for
 the Miller-Rabin test:  for any odd composite n at least 3/4 of a's
 fail the test, that is if you made m MR tests with random a's then you
 are mistaken with probability at most (1/4)^m.

That is true, but it is not the result of a direct conclusion.  Let X
represent the event that n is composite, and Y_t the event that
MILLER-RABIN(n,t) declares n to be prime.  Because for a composite n at
least 3/4 of the a's fail the test, we can conclude that Pr(Y_t |
X) <= (1/4)^t.
But the probability I think you are referring to (the one that is usually
considered the most interesting) is P(X | Y_t).  It happens to be the case
that P(X | Y_t) is in fact <= (1/4)^t when using uniform random
candidates, but to come to that conclusion you need to consider the fact
that the error probability of Miller-Rabin is usually far smaller than
(1/4)^t (and apply Bayes' theorem and a theorem on the distribution of
prime numbers).  See Note 4.47 in the Handbook of applied cryptography, or
the following paper:
http://www.cs.mcgill.ca/~crepeau/PS/BBC+87.ps
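
To spell out the Bayes step with the prime-density estimate used elsewhere
in this thread (a sketch added for illustration, not part of the original
message):

  P(X | Y_t) = P(Y_t | X)P(X) / [ P(Y_t | X)P(X) + P(Y_t | Comp X)P(Comp X) ]
            <= P(Y_t | X) * P(X)/P(Comp X)    (since P(Y_t | Comp X) = 1)
            ~= (ln n / 2) * P(Y_t | X)        (since P(Comp X) ~= 2/ln n for random odd n)

For 512-bit candidates ln(n)/2 is about 177, so the worst-case bound
P(Y_t | X) <= (1/4)^t on its own only yields about 177 * (1/4)^t; the much
smaller figures in HAC table 4.3 come from the Damgard, Landrock and
Pomerance average-case bound on P(Y_t | X).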

--Anton



Re: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Anton Stiglic

 Although the Carmichael numbers fool the Fermat test
 (that is, $a^{n-1} = 1 (n)$) for *all* a, there are no such things for
 the Miller-Rabin test:  for any odd composite n at least 3/4 of a's
 fail the test, that is if you made m MR tests with random a's then you
 are mistaken with probability at most (1/4)^m.

 Yes I guess the difference is that with MR you are trying to find a
 number that is *likely* a prime, whereas with Fermat you are testing
 primality. But MR will still fail when given a Carmichael number,
 since elsewhere MR is defined as iterated application of the Fermat
 test [1].

That is not true, on several counts.
Firstly, the Miller-Rabin probabilistic primality test doesn't generate a
number; it verifies a number for primality.
Secondly, the Miller-Rabin probabilistic primality test is not based on
Fermat's Little theorem, or the so-called pseudoprime test, but rather on the
strong pseudoprime test, which derives from a theorem that says that if n
is an odd prime, n-1 = 2^s * r with r odd, then for any a such that
gcd(a,n) = 1 either a^r == 1 (mod n)  or  a^(r*2^j) == -1 (mod n) for some
j, 0 <= j <= s-1.   See Handbook of Applied Cryptography fact 4.20.

I'm afraid the reference you gave is incorrect.

--Anton



FW: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Charlie Kaufman
(resending after bounce)

-Original Message-
From: Charlie Kaufman 
Sent: Tuesday, November 08, 2005 3:11 PM
To: 'Travis H.'; 'cryptography@metzdowd.com'
Subject: RE: Fermat's primality test vs. Miller-Rabin

Is that the distinction that makes
Miller-Rabin a stronger primality test?

Yes. The Miller-Rabin test will fail to detect that a Carmichael number is 
composite at most 25% of the time. Thus, repeating the Miller-Rabin test many times 
gives ever greater confidence. Fermat's test will fail to detect that a 
Carmichael number is composite 100% of the time, so repeating it doesn't help 
(in the fringe case of a Carmichael number).

In practice, the probability of randomly choosing a Carmichael number of size 
250 bits is vanishingly small. The probability of a single run of Miller-Rabin 
or Fermat not detecting that a randomly chosen number is composite is almost 
vanishingly small. I've heard but not confirmed a figure of one failure in 20 
million. I've never heard an estimate of the probability that two runs would 
fail to detect the composite. It couldn't be better than one failure in 20 
million squared or worse than one in 80 million.

--Charlie Kaufman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Travis H.
Sent: Tuesday, November 08, 2005 3:05 AM
To: cryptography@metzdowd.com
Subject: Fermat's primality test vs. Miller-Rabin

In Practical Cryptography, Schneier states that you can prove
that when n is not a prime, a certain property of a mod n holds for at
most 25% of possible values 1 < a < n.  He later states that Fermat's
test can be fooled by Carmichael numbers, and finally he basically
says that Miller-Rabin is based on Fermat's test.

It appears that Fermat's test can be fooled by Carmichael numbers,
whereas Miller-Rabin is immune, but I'm not sure why.  It appears that
M-R tests whether, just before the squaring of a that produces a residue
of 1, the residue is n-1.  Apparently that's not true for most bases
of Carmichael numbers.  Is that the distinction that makes
Miller-Rabin a stronger primality test?

It's amazing how many words that took to state, and I didn't even
specify the squaring process.
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Fermat's primality test vs. Miller-Rabin

2005-11-08 Thread Travis H.
In Practical Cryptography, Schneier states that you can prove
that when n is not a prime, a certain property of a mod n holds for at
most 25% of possible values 1 < a < n.  He later states that Fermat's
test can be fooled by Carmichael numbers, and finally he basically
says that Miller-Rabin is based on Fermat's test.

It appears that Fermat's test can be fooled by Carmichael numbers,
whereas Miller-Rabin is immune, but I'm not sure why.  It appears that
M-R tests whether, just before the squaring of a that produces a residue
of 1, the residue is n-1.  Apparently that's not true for most bases
of Carmichael numbers.  Is that the distinction that makes
Miller-Rabin a stronger primality test?

It's amazing how many words that took to state, and I didn't even
specify the squaring process.
--
http://www.lightconsulting.com/~travis/  --
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: Fermat's primality test vs. Miller-Rabin

2005-11-08 Thread Jeremiah Rogers
 It appears that Fermat's test can be fooled by Carmichael numbers,
 whereas Miller-Rabin is immune, but I'm not sure why.

Where does it say Miller-Rabin is immune to Carmichael numbers? It
seems confusingly worded and says that Fermat's Test is not immune to
Carmichaels, but this does not imply M-R is immune. Additionally, the
book says "We limit the probability of a false result [with M-R] to
2^(-128) to achieve our required security level."

 It appears that
 M-R tests that just before the squaring of a that produces a residue
 of 1, is the residue n-1.  Apparently that's not true for most bases
 of Carmichael numbers.  Is that the distinction that makes
 Miller-Rabin a stronger primality test?

To me it looks like M-R just eliminates some needless computation that
a naive application of the Fermat test would require. Computing a^n -
a (mod n) is harder than computing smaller powers of a and checking
each of those. This is why s the largest s such that 2 does not divide
s is found. If one power q is such that a^(sq) - a != 0 (mod n)
then continued squaring isn't going to generate a power of a that is
congruent to 1.

The n vs n-1 distinction appears only when discussing if x^2 - 1
= 0 (mod n). This is why M-R fails for n=2.

--
Jeremiah Rogers
