Re: Primality test function doesn't work on large numbers?

2017-01-12 Thread Timon Gehr via Digitalmars-d-learn

On 10.01.2017 04:02, Elronnd wrote:

Thank you!  Would you mind telling me what you changed aside from pow()
and powm()?


1. This code:

// make 2^a = integer-1
while ((integer-1)%(pow(bigint(2), a))!=0)
    a--;
m = (integer-1) / pow(bigint(2), a);

a starts out as integer-1, so this computes many big powers of two when 
integer-1 has only a few factors of 2 (which is very likely when integer 
is a random number). I changed it to:


for(m=integer-1;m%2==0;m/=2,++a){}

where a starts from 0. Now it just counts the factors of two and m is 
computed on the fly.


2. There was a bug in the Miller-Rabin implementation. (There are two 
cases where the function should return false. Only one was present.)
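
For the record, a round of Miller-Rabin has two distinct composite 
verdicts, and both have to be present. A minimal sketch of one round in 
D (mrRound and its parameter names are mine; integer-1 == 2^a * m with 
m odd, and b is the random witness):

import std.bigint;

// Modular exponentiation, as in the fixed version quoted further down.
BigInt powm(BigInt base, BigInt exponent, BigInt modulus)
{
    BigInt r = 1;
    for (auto x = base % modulus, y = exponent; y; x = x * x % modulus, y /= 2)
        if (y % 2) r = r * x % modulus;
    return r;
}

// One Miller-Rabin round; returns false only on a definite composite verdict.
bool mrRound(BigInt integer, BigInt b, BigInt m, int a)
{
    auto z = powm(b, m, integer);
    if (z == 1 || z == integer - 1)
        return true;            // inconclusive: this witness proves nothing
    foreach (k; 1 .. a)
    {
        z = z * z % integer;
        if (z == integer - 1)
            return true;        // inconclusive
        if (z == 1)
            return false;       // composite: nontrivial square root of 1
    }
    return false;               // composite: z never reached integer - 1
}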




diff isn't giving me readable results, since there was some
other stuff I trimmed out of the original file.  Also, while this is a
*lot* better, I still get some lag generating 1024-bit primes and I
can't generate larger primes in a reasonable amount of time.  Maybe my
genbigint() function is to blame?  It isn't efficient:

bigint genbigint(int numbits) {
    bigint tmp;
    while (numbits --> 0) {
        tmp <<= 1;
        tmp += uniform(0, 2);
    }
    return tmp;
}


This should be quite fine, as its running time is linear in numbits 
(with rejection sampling to get into the right range, the expected 
running time is still linear in numbits). I would expect that the
problem is pow and powm, as for the typical bases, exponents and moduli 
you use, they have running time 
Ω((log(modulus)+log(exponent))*log(exponent)). The actual asymptotic 
running time is significantly more than this bound, as multiplication is 
done using Karatsuba.
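
(For concreteness: with a 1024-bit modulus and a 1024-bit exponent, 
that lower bound already comes out to (1024+1024)*1024, i.e. about two 
million elementary operations per powm call, before the superlinear 
cost of each individual BigInt multiplication is counted. The numbers 
here are mine, just plugging into the bound above.)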



Maybe you can get a relevant speedup by using a different bigint 
implementation. The std.bigint documentation says that it is optimized 
for numbers up to ~1000 decimal digits and that you should probably use 
GMP for larger numbers. https://dlang.org/phobos/std_bigint.html

GMP also has implementations of pow and powm built-in.
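
To illustrate, a rough sketch of calling GMP directly from D (link with 
-L-lgmp). The prototypes and struct layout below are my own 
transcription of gmp.h (the mpz_* macros expand to __gmpz_* symbols), 
so double-check them against your installed header, or use a ready-made 
binding package instead:

struct __mpz_struct { int _mp_alloc; int _mp_size; void* _mp_d; }
alias mpz_t = __mpz_struct;

extern (C)
{
    int   __gmpz_init_set_str(mpz_t*, const(char)*, int);
    void  __gmpz_init(mpz_t*);
    void  __gmpz_powm(mpz_t*, const(mpz_t)*, const(mpz_t)*, const(mpz_t)*);
    int   __gmpz_probab_prime_p(const(mpz_t)*, int);
    char* __gmpz_get_str(char*, int, const(mpz_t)*);
}

void main()
{
    import std.stdio : writeln;
    import std.string : fromStringz;

    mpz_t b, e, n, r;
    __gmpz_init_set_str(&b, "2", 10);
    __gmpz_init_set_str(&e, "1000003", 10);
    // 2^127 - 1, a known Mersenne prime:
    __gmpz_init_set_str(&n, "170141183460469231731687303715884105727", 10);
    __gmpz_init(&r);

    __gmpz_powm(&r, &b, &e, &n);            // r = b^e mod n
    writeln(fromStringz(__gmpz_get_str(null, 10, &r))); // leaks buffer; demo only
    writeln(__gmpz_probab_prime_p(&n, 30)); // 2 = prime, 1 = probably, 0 = composite
}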


Other than that:

1. You can save a constant factor of time in genbigint by generating 
more than one bit at a time. (But I don't think this is the bottleneck.)


2. Generating only odd candidates should give you a factor of two.

3. In mrprime, you should also do trial division by small primes other 
than 2, as a random number is likely to have a small prime factor. (A 
sketch combining all three suggestions follows below.)
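
A sketch combining all three (the 32-bit chunking, the forced top and 
bottom bits, and the particular small-prime list are just one way to do 
it):

import std.bigint, std.random;

// 1. Pull 32 random bits per call to uniform instead of one.
BigInt genbigint(int numbits)
{
    BigInt tmp;
    for (; numbits >= 32; numbits -= 32) {
        tmp <<= 32;
        tmp += uniform!uint();
    }
    for (; numbits > 0; --numbits) {
        tmp <<= 1;
        tmp += uniform(0, 2);
    }
    return tmp;
}

// 2. + 3. Make candidates odd and full-length by construction, and do
// cheap trial division before the expensive Miller-Rabin rounds.
BigInt gencandidate(int bits)
{
    static immutable int[] smallPrimes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31];
    outer: while (true) {
        auto c = genbigint(bits - 2);
        c = (BigInt(1) << (bits - 1)) + c * 2 + 1; // top and bottom bits set
        foreach (p; smallPrimes)
            if (c % p == 0)
                continue outer;
        return c; // survivors go on to mrprime
    }
}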



If all you need is a fast black box that generates large primes, you 
can probably use an existing library.


Re: Primality test function doesn't work on large numbers?

2017-01-10 Thread Eugene Wissner via Digitalmars-d-learn

On Tuesday, 10 January 2017 at 03:02:40 UTC, Elronnd wrote:
Thank you!  Would you mind telling me what you changed aside 
from pow() and powm()?  diff isn't giving me readable results, 
since there was some other stuff I trimmed out of the original 
file.  Also, while this is a *lot* better, I still get some lag 
generating 1024-bit primes and I can't generate larger primes 
in a reasonable amount of time.  Maybe my genbigint() function 
is to blame?  It isn't efficient:


bigint genbigint(int numbits) {
    bigint tmp;
    while (numbits --> 0) {
        tmp <<= 1;
        tmp += uniform(0, 2);
    }
    return tmp;
}


Yes. You would normally get some random string/value only once, then 
apply md5 or, better, sha512 to it (several times if you want a more 
secure hash) to get the right length, then take the hex digest and 
load it into a bigint.
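
If I read you right, something like this (a sketch; unpredictableSeed 
is only a stand-in entropy source, and one SHA-512 digest gives you 512 
bits, so you would repeat or re-hash for longer numbers):

import std.bigint : BigInt;
import std.digest.sha : sha512Of, toHexString;
import std.random : unpredictableSeed;

// Hash some entropy and load the hex digest into a BigInt.
BigInt bigintFromHash()
{
    ulong[2] seed = [unpredictableSeed, unpredictableSeed];
    auto digest = sha512Of(cast(ubyte[]) seed[]); // 64 bytes = 512 bits
    auto hex = toHexString(digest)[].idup;        // 128 hex characters
    return BigInt("0x" ~ hex);                    // parse as hexadecimal
}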


Re: Primality test function doesn't work on large numbers?

2017-01-09 Thread Elronnd via Digitalmars-d-learn
Thank you!  Would you mind telling me what you changed aside from 
pow() and powm()?  diff isn't giving me readable results, since 
there was some other stuff I trimmed out of the original file.  
Also, while this is a *lot* better, I still get some lag 
generating 1024-bit primes and I can't generate larger primes in 
a reasonable amount of time.  Maybe my genbigint() function is to 
blame?  It isn't efficient:


bigint genbigint(int numbits) {
    bigint tmp;
    while (numbits --> 0) {
        tmp <<= 1;
        tmp += uniform(0, 2);
    }
    return tmp;
}


Re: Primality test function doesn't work on large numbers?

2017-01-08 Thread Timon Gehr via Digitalmars-d-learn

On 08.01.2017 08:52, Elronnd wrote:

I'm working on writing an RSA implementation, but I've run into a
roadblock generating primes.  With more than 9 bits, my program either
hangs for a long time (utilizing 100% CPU!) or returns a composite
number.  With 9 or fewer bits, I get primes, but I have to run with a
huge number of iterations to actually _get_ a random number.  It runs
fast, though.  Why might this be?  Code:
http://lpaste.net/1034777940820230144


Fixed version:

import std.bigint;
import std.stdio;

private alias bigint = BigInt;

bigint pow(bigint base, bigint exponent) {
    bigint tmp = 1;
    for (auto x = base, y = exponent; y; x *= x, y /= 2)
        if (y % 2) tmp *= x;
    return tmp;
}
bigint powm(bigint base, bigint exponent, bigint modulus) {
    bigint tmp = 1;
    for (auto x = base, y = exponent; y; x = x * x % modulus, y /= 2)
        if (y % 2) tmp = tmp * x % modulus;
    return tmp;
}
private pragma(inline, true) bigint pow(int base, bigint exponent) {
    return pow(bigint(base), exponent);
}
private pragma(inline, true) bigint pow(bigint base, int exponent) {
    return pow(base, bigint(exponent));
}
private pragma(inline, true) bigint pow(int base, int exponent) {
    return pow(bigint(base), bigint(exponent));
}

// Credit to http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf
// appendix C.3

private bigint genprime(int bits, int numtests) {
    import std.random: uniform;

    bool mrprime(bigint integer, int iterations) {
        if (integer % 2 == 0)
            return false;

        bigint m, a = 0, tmp = integer, b, z;
        int length;

        // Write integer-1 as 2^a * m with m odd.
        for (m = integer - 1; m % 2 == 0; m /= 2, ++a) {}
        assert((integer - 1) % pow(bigint(2), a) == 0);

        // Bit length of integer.
        while (tmp != 0) {
            tmp >>= 1;
            length += 1;
        }

        for (int i = 0; i < iterations; i++) {
            // Pick a random witness b with 2 <= b <= integer-2.
            b = 0;
            while ((b <= 1) || (b >= (integer - 1))) {
                b = 0;
                for (int j = 1; j <= length; j++) {
                    b <<= 1;
                    b += uniform(0, 2);
                }
            }

            z = powm(b, m, integer);
            if ((z == 1) || (z == integer - 1))
                goto endofloop;

            for (int k = 1; k <= a - 1; k++) {
                z = z * z % integer;
                if (z == integer - 1)
                    goto endofloop;
                if (z == 1)
                    return false; // composite: nontrivial square root of 1
            }
            return false; // composite: z never reached integer-1
        endofloop:
        }
        return true;
    }

    bigint genbigint(int numbits) {
        bigint tmp;
        while (numbits --> 0) {
            tmp <<= 1;
            tmp += uniform(0, 2);
        }
        return tmp;
    }

    bigint currnum;
    while (true) {
        currnum = genbigint(bits);
        if (mrprime(currnum, numtests)) {
            return currnum;
        }
    }
    assert(0);
}

void main() {
    writeln(genprime(300, 30));
}



Re: Primality test function doesn't work on large numbers?

2017-01-08 Thread Eugene Wissner via Digitalmars-d-learn

On Sunday, 8 January 2017 at 07:52:33 UTC, Elronnd wrote:
I'm working on writing an RSA implementation, but I've run into 
a roadblock generating primes.  With more than 9 bits, my 
program either hangs for a long time (utilizing 100% CPU!) or 
returns a composite number.  With 9 or fewer bits, I get 
primes, but I have to run with a huge number of iterations to 
actually _get_ a random number.  It runs fast, though.  Why 
might this be?  Code: http://lpaste.net/1034777940820230144


I haven't read your code very closely, but I have a hunch; you 
can check whether it helps. :)


I think your actual problem is this line:

z = pow(b, m) % integer;

If it does what I think, it can be horribly slow and memory-consuming. 
You have to implement your own pow function that computes a ^ b mod c. 
Look into the Python source code, or into "tanya" (D): 
https://github.com/caraus-ecms/tanya/blob/master/source/tanya/math/package.d

It is the same algorithm that Phobos uses, but with the modulo 
operation built in and written a bit differently (my code is based 
neither on Python nor on Phobos and uses a different bigint 
implementation). Then you can also rewrite pow(z, 2) % integer; it 
will be faster.
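
Schematically, with the line quoted above (the size estimate is mine): 
pow(b, m) materializes b^m in full before reducing, and for a 1024-bit 
m that intermediate needs on the order of 2^1024 bits, while a modular 
powm keeps every intermediate smaller than integer^2:

// Before: build the full power, reduce once at the end.
z = pow(b, m) % integer;

// After: square-and-multiply with the reduction built in, as in the
// powm of the fixed version earlier in the thread.
z = powm(b, m, integer);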

In any case, try to reduce bigint copying and arithmetic where possible.