Re: Mersenne: How to factor further?

1999-03-19 Thread david campeau


I got the idea to do some factoring with my now slower-than-average
machine (a P133), but I don't want to factor at the current assignments
(in the 9M range); instead I would like to fill up the factor limits of
small exponents to some common value (56 bits or 58 bits or so).

Of course, doing it manually using "Advanced - Factor" is out of the
question, so I thought to create appropriate entries in worktodo.ini
and send the results unsolicited :-) to the PrimeNet server.

However, I seem to hit the automatic factor limit value in Prime95, or
something else:

  Error: Work-to-do-file contained bad factoring assignment: 65537,56

Is it possible to do what I am trying?

Hi,

I have just that, sitting in one of my folders. It's a modification to 
prime95 ver 16.4 which accepts AdvancedFactor entries (but it does not 
generate a valid checksum, i.e. WS1: ). I also have a small program 
that generates a worktodo.ini file based on the file available at 
www.mersenne.org. I've switched to double-checking since then, but if I 
remember correctly you could communicate with PrimeNet and submit those 
results without a problem.

Sample input:
AdvancedFactor=2122363,55,55
AdvancedFactor=2099393,55,55

Sample output:
[Fri Oct 02 01:01:19 1998]
UID: diamonddave/seeker, M1319719 no factor from 2^55 to 2^56, WS1: 

[Fri Oct 02 01:45:30 1998]
UID: diamonddave/seeker, M1319741 no factor from 2^55 to 2^56, WS1: 
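For what it's worth, generating such a file is only a few lines of Python; this is a minimal sketch, with the entry format copied from the sample above (the exponent list and bit level here are purely illustrative, and whether Prime95 accepts them depends on the patch described earlier):

```python
# Sketch: emit AdvancedFactor= lines for worktodo.ini.
# Format mirrors the sample entries above (e.g. AdvancedFactor=2122363,55,55,
# which per the sample output trial-factors from 2^55 to 2^56).
# The exponent list is illustrative, not a real assignment list.

def worktodo_lines(exponents, bits):
    """One AdvancedFactor entry per exponent, at the given bit level."""
    return ["AdvancedFactor=%d,%d,%d" % (p, bits, bits) for p in exponents]

if __name__ == "__main__":
    for line in worktodo_lines([2122363, 2099393], 55):
        print(line)
```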


Regards,

David,

PS: Just mail me if you want it.



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Re: Mersenne Digest V1 #533

1999-03-19 Thread Brian J Beesley

Chris Nash writes:

 I'd be interested to hear from anyone who constructs such a statistical
 deviation vs logarithm base plot. We may expect such a statistical
 approach to suggest a distribution where the overall scaling, and
 artifacts such as Noll's islands, manifest themselves in the plot as
 large deviations from randomness and spikes in the plot. This is one
 for the statisticians, to create a suitable measure of the deviation of
 these fractional parts from a uniform distribution on [0,1). Perhaps
 the sample variance will be a good first measure, but with only 37
 samples and a high degree of non-independence, beware!

Sadly the statistical inferences that can be drawn indicate no 
evidence of any deviation from a theoretical "smooth" exponential 
decay curve. There is a message in the archive on this very point 
(search for "Noll island").

Studies of related large primes e.g. 3*2^n+1, 5*2^n+1 exhibit 
similar distributions, though they do "look less lumpy" to the naked 
eye. (The top limit is around 300,000 rather than 3 million)

The point is that random events *do* tend to occur in clusters. As 
an example, here in Northern Ireland we have already had more 
accidental deaths in house fires this year than we had in the whole 
of 1998, or in the whole of 1997. Politicians may panic, calling for 
compulsory fitting of smoke detectors, etc., but in fact there is no 
evidence that this is anything other than a run of "bad luck".
Similarly I can find no statistically convincing evidence, even at the 
5% level, that the "Noll islands" really do exist.

(The rest of this reply is off-topic. Stop reading now if you object)

 (It may be apocryphal, but apparently some 8-bit machine (perhaps
 Atari?) had a means of generating "random" numbers because some memory
 location was subject to "noise" - effectively some component acted as a
 radio antenna. It may even have been by design... but of course results
 obtained by sampling this location for random bits were awful. Being
 natural they were not only non-uniform and non-independent but also
 subject to their surroundings. Can anyone validate this?).

I've never seen a system with a built-in hardware RNG, but I do 
know that no less an authority than von Neumann suggested that 
this was a worthwhile feature to have built into the architecture. Of 
course it has to be properly designed to be of any value. I believe 
von Neumann suggested shot noise from a thermionic valve as a 
suitable source; nowadays few computers incorporate such 
elements, but thermal noise from a high-value resistor would 
do equally well. Or, for that matter, acoustic noise from a 
microphone... the point is that you want to amplify the signal way 
up (distortion, extra noise, interference etc. introduced by this is 
not a problem!) then take only a few of the least significant bits 
output by the ADC.

You *do* need to check the output of many samples taken from 
such a hardware RNG using a variety of statistical tests before you 
can trust it, and you *do* need to test each completed hardware 
RNG individually.
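As a rough illustration of that checking step (the noise source here is simulated, and a real test battery would include many more tests than this simple frequency check):

```python
# Sketch: keep only the low few bits of each ADC sample, then apply a
# simple frequency ("monobit") test. The "ADC" here is a simulated
# 12-bit source; real validation needs a full statistical test suite.
import random

def low_bits(samples, nbits=2):
    """Extract the nbits least significant bits of each sample."""
    mask = (1 << nbits) - 1
    bits = []
    for s in samples:
        v = s & mask
        for i in range(nbits):
            bits.append((v >> i) & 1)
    return bits

def monobit_ok(bits, tolerance=0.01):
    """Reject if the fraction of ones strays too far from 1/2."""
    return abs(sum(bits) / len(bits) - 0.5) < tolerance

random.seed(1)
samples = [random.randrange(4096) for _ in range(50000)]  # fake 12-bit ADC
bits = low_bits(samples, nbits=2)
print(monobit_ok(bits))
```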

There may also be a need to non-linearly transform values output 
from the RNG if you need to have a smooth flat distribution of 
random values to feed into your application. (Especially if the RNG 
is based on time intervals between shot noise / radioactive decay 
type events)
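One such transform can be sketched directly: inter-event times from a decay-type source are exponentially distributed, and pushing them through the exponential CDF flattens them onto [0,1). (The event times below are simulated, and in practice the mean would have to be estimated from the device itself.)

```python
# Sketch: probability-integral transform, exponential gaps -> uniform.
# u = 1 - exp(-t/mean) maps exponentially distributed intervals t onto
# a flat distribution on [0, 1). Intervals are simulated here.
import math
import random

def to_uniform(intervals, mean):
    """Map exponential inter-event times onto [0, 1) via their CDF."""
    return [1.0 - math.exp(-t / mean) for t in intervals]

random.seed(2)
mean = 3.7  # arbitrary mean time between events
intervals = [random.expovariate(1.0 / mean) for _ in range(100000)]
u = to_uniform(intervals, mean)

# Crude flatness check: each tenth of [0,1) should get ~10% of samples.
counts = [0] * 10
for x in u:
    counts[min(int(x * 10), 9)] += 1
print(counts)
```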

Nevertheless, done properly, such a technique for generating 
random numbers is *far* superior to the pseudo-random number 
generator functions in standard programming languages.

Regards
Brian Beesley




RE: Mersenne: Re: Mersenne Digest V1 #533

1999-03-19 Thread Aaron Blosser

 From: Brian J Beesley
 Sent: Friday, March 19, 1999 2:36 AM

 I've never seen a system with a built-in hardware RNG, however I do
 know that no less an authority than von Neumann suggested that
 this was a worthwhile feature to have built into the architecture.

FWIW, that whole Pentium III privacy fiasco was about the CPU ID
on the chip.  Well, Intel built a hardware RNG onto the chip, which is used
in the process of pseudo-encrypting this value.  I'm not sure about the
details, but I'm pretty sure it's just picking up thermal noise.

 There may also be a need to non-linearly transform values output
 from the RNG if you need to have a smooth flat distribution of
 random values to feed into your application. (Especially if the RNG
 is based on time intervals between shot noise / radioactive decay
 type events)

I don't know...once you start monkeying with the output, it then becomes
pseudo-random again.  Basically, *someone* is telling the numbers "sorry,
you're not random enough for me" so they "adjust" them.  Hmmm...

 Nevertheless, done properly, such a technique for generating
 random numbers is *far* superior to the pseudo-random number
 generator functions in standard programming languages.

What about the way PGP gets its random number seed?  You move the mouse and
hit the keyboard for a good 10-15 seconds to generate a "random" sample.

I doubt anyone could exactly duplicate their keystrokes and mouse movements
from one time to the next, and this is about as random as thermal noise from
a resistor.  Is it pseudo-random because a human is involved?  Couldn't I
affect the results of thermal noise in a non-predictable way by merely
waving my hand over the resistor, or the transistors in the amp stage?

Just food for thought. :-)

Aaron





Mersenne: Re: Mersenne Digest V1 #535

1999-03-19 Thread Rudy Ruiz



Brian J Beesley wrote:


 From: "Brian J Beesley" [EMAIL PROTECTED]
 Date: Fri, 19 Mar 1999 09:36:19 GMT
 Subject: Re: Mersenne: Re: Mersenne Digest V1 #533


 [...snip...]

 The point is that random events *do* tend to occur in clusters. As
 an example, here in Northern Ireland we have already had more
 accidental deaths in house fires this year than we had in the whole
 of 1998, or in the whole of 1997. Politicians may panic, calling for
 compulsory fitting of smoke detectors, etc., but in fact there is no
 evidence that this is anything other than a run of "bad luck".
 Similarly I can find no statistically convincing evidence, even at the
 5% level, that the "Noll islands" really do exist.

 (The rest of this reply is off-topic. Stop reading now if you object)


In that vein, I had just read the following a couple of days ago (while
researching a bit) under the title "Following Benford's Law, or Looking Out
for No. 1":


 Probability predictions are often surprising. In the case of the
 coin-tossing experiment, Dr. Hill wrote in the current issue of the
 magazine American Scientist, a "quite involved calculation" revealed a
 surprising probability. It showed, he said, that the overwhelming odds
 are that at some point in a series of 200 tosses, either heads or tails
 will come up six or more times in a row. Most fakers don't know this
 and avoid guessing long runs of heads or tails, which they mistakenly
 believe to be improbable. At just a glance, Dr. Hill can see whether or
 not a student's 200 coin-toss results contain a run of six heads or
 tails; if they don't, the student is branded a fake.
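The "quite involved calculation" can be side-stepped with a quick simulation, under the obvious assumptions (fair coin, 200 independent tosses, a run of six or more of either face):

```python
# Sketch: estimate the chance that 200 fair coin tosses contain a run
# of at least six identical outcomes. Pure Monte Carlo, no theory.
import random

def has_run(tosses, length=6):
    """True if the sequence contains a run of `length` equal items."""
    run, prev = 0, None
    for t in tosses:
        run = run + 1 if t == prev else 1
        prev = t
        if run >= length:
            return True
    return False

random.seed(3)
trials = 20000
hits = sum(has_run([random.randrange(2) for _ in range(200)])
           for _ in range(trials))
print(hits / trials)
```

The estimate comes out well above 90%, in line with Dr. Hill's "overwhelming odds".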


http://www.256.com/~gray/info/benfords.html

Rodolfo





Re: Mersenne: Re: Mersenne Digest V1 #533

1999-03-19 Thread Chris Nash

Great points, Brian.

Sadly the statistical inferences that can be drawn indicate no
evidence of any deviation from a theoretical "smooth" exponential
decay curve. There is a message in the archive on this very point
(search for "Noll island").

The important word here is "statistical" - as human beings, even "trained"
ones, we are pretty dismal at recognizing what is random and what isn't.
The sample of 37 exponents is statistically too small to deduce very much
at all; it may be enough to suggest that exponential decay is a reasonable
hypothesis, but not enough to deduce anything about the deviation from
such. (Hence my caveat about the number of samples!)

The point is that random events *do* tend to occur in clusters

Behavioral studies on human recognition of randomness are a lot simpler to
get these days - just analyse lottery number picking strategies... Humans
have a dire aversion, for instance, to picking numbers that are
sequential, even those who are "aware" that such a sequence is just as
likely as any other. Similarly, humans avoid any picks that even have a pair
of consecutive numbers, which a bit of combinatorics proves is not that rare
at all. Random events most definitely do cluster, because of the Poisson
"time between" distribution. It's possible to have very large gaps (with
correspondingly low probability), but a small gap is typical. In fact, two
consecutive short gaps are just as likely as a single, very long gap. Hence
some very human observations that "trouble/celebrity deaths/bus arrivals"
often occur "in threes" actually have a probabilistic foundation!
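That claim follows from the memoryless property of the exponential gap distribution, and a short simulation makes it concrete (gap lengths simulated, unit rate assumed):

```python
# Sketch of the memoryless property behind "clustering": for exponential
# gaps, two consecutive gaps each longer than m are exactly as likely as
# one single gap longer than 2m - so runs of short gaps (clusters) are
# statistically unremarkable.
import random

random.seed(4)
m = 1.0
n = 200000
gaps = [random.expovariate(1.0) for _ in range(n)]

# One long gap of at least 2m:
p_long = sum(g > 2 * m for g in gaps) / n
# Two consecutive gaps each of at least m:
p_pair = sum(gaps[i] > m and gaps[i + 1] > m for i in range(n - 1)) / (n - 1)

print(round(p_long, 3), round(p_pair, 3))
```

Both estimates land near exp(-2) ~ 0.135.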

Similarly I can find no statistically convincing evidence, even at the
5% level, that the "Noll islands" really do exist.

I wonder how many more "confirming instances" we'd have to find before we
could even get a result as statistically weak as 5%? My statistical
background is pretty awful but I know that these sort of analyses are order
sqrt(n)... maybe we'll be having the same discussion in a "few years" time
after M(148) is discovered and the sample size is four times larger... of
course M(148) will probably have 10^24 digits...

(The rest of this reply is off-topic. Stop reading now if you object)
[8-bit hardware RNG]


Jean-Charles Meyrignac kindly informed me off-list that the machine may have
been the Commodore 64, which apparently used such a setup for its "white
noise" sound channel. As Brian points out, *provided* it's done properly
(correct statistical normalization of the output of each individual RNG)
such a technique is *far* superior to pseudo-random routines. It's only a
little off-topic; after all, one of the Pollard methods (Pollard-rho?) uses
a "traditional" pseudo-random number generator (x -> x^2+c mod N) and
*expects* the output to eventually correlate to indicate a factor. When
probabilistic methods are in use, remember Knuth's caveat that a good
algorithm can be killed stone-dead by a poor random-number generator - and,
worst of all, a good random-number generator may be proven poor in a
particular application. One worth remembering if you're looking for a
parameter in a factoring algorithm...
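That iteration is simple enough to sketch; a textbook Pollard-rho with Floyd cycle-finding follows (the small composite is just an example, and no optimizations such as batched GCDs or Brent's variant are attempted):

```python
# Sketch: textbook Pollard rho. Iterate x -> x^2 + c (mod n) and wait
# for the pseudo-random sequence to collide modulo a factor, detected
# via gcd. Floyd's tortoise-and-hare cycle detection.
import math

def pollard_rho(n, c=1):
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n        # tortoise: one step
        y = (y * y + c) % n        # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None   # d == n means retry with another c

print(pollard_rho(8051))           # 8051 = 83 * 97
```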

Chris Nash
Lexington KY
UNITED STATES






Mersenne: Off-topic: Random numbers

1999-03-19 Thread STL137

Just wanted to say something quickly.

 Does anyone know if those con men with the video cameras and the three
 lava lamps are still in business?

I wouldn't call them con men - their setup is very, very good, and they work
at SGI (if I recall correctly). Somewhat more on topic, they churn their lava
lamp data through a Blum Blum Shub random number generator - which uses prime
numbers! Anyone know anything about the BBS generator? I've been able to find
little on it.
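The core of it fits in a few lines, as I understand it: pick primes p and q with p = q = 3 (mod 4), iterate x -> x^2 mod pq, and take the low bit of each state. A toy sketch with tiny primes (real use needs large secret primes, and this is my reading of the scheme, not an authoritative implementation):

```python
# Sketch: toy Blum Blum Shub generator. Primes p, q must both be
# congruent to 3 mod 4; the state is repeatedly squared mod M = p*q and
# the low bit of each state is emitted. Tiny primes for illustration.
def bbs_bits(seed, p, q, count):
    assert p % 4 == 3 and q % 4 == 3
    m = p * q
    x = seed * seed % m   # square the seed so x is a quadratic residue
    out = []
    for _ in range(count):
        x = x * x % m
        out.append(x & 1)
    return out

print(bbs_bits(seed=3, p=11, q=23, count=8))
```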
Thanks.
S.T.L.




Mersenne Digest V1 #535

1999-03-19 Thread mersenne-digest-invalid-reply-address


Mersenne Digest        Friday, 19 March 1999   Volume 01 : Number 535


--

From: "Brian J Beesley" [EMAIL PROTECTED]
Date: Thu, 18 Mar 1999 10:12:24 GMT
Subject: Re: Mersenne: LL testing

[... snip ...] (Interesting, but requires no comment)

   M(p) is usually the pth Mersenne number and Mp is 2^p-1 in the
 literature.  Though occasionally M(p) is used as 2^p-1 on the list.  It
 could cause confusion only for small p.  Is M(3) 2^3-1 or 2^5-1?
 
Sorry, I'm guilty of this confusion (though I don't think I'm alone in 
this)... Properly, we should be using subscripts (M sub p for 2^p-1), 
but this is a text-based list.

Also there is a problem in using M(p) for the pth Mersenne prime. 
Do we mean the pth numerically or the pth to be discovered? 
Several of the known Mersenne primes were discovered out of 
numeric sequence. Also, given that double-checking has passed 
M(35) but is still quite incomplete in the gap up to 2^2976221-1, I 
would consider it makes sense to use provisional names "Spence's 
Number" and "Clarkson's Number" for the two known Mersenne 
primes bigger than M(35) = 2^1398269-1, until double checking is 
complete.

I'm also guilty of confusion between "Mersenne number" meaning a 
number of the form 2^p-1 for some prime p and a prime number of 
the form 2^p-1. I prefer the former definition, on the basis that (a) 
"Mersenne prime" is an obvious replacement for "prime number of 
the form 2^p-1", (b) some of the numbers 2^p-1 Mersenne was 
interested in (surely "Mersenne numbers" if the term has _any_ 
meaning) turned out to be composite, despite p being prime.

  did Lucas use his test by hand? I know he did it by
  hand, at the very least.
 
   I don't know about Lucas.  Read some of the articles in Luke's
 bibliography, it is a wonderful history.  Lehmer and Uhler used desk
 calculators.  Uhler made many calculations of logarithms and powers,
 see for example http://www.scruznet.com/~luke/lit/lit_019.htm
 Though poor Uhler had been doing LL-tests in the gap between M127 and
 M521.

I deliberately tested 2^31-1 using decimal hand calculation as an 
exercise in arithmetic. I can see that using a board with p columns 
would be a way of taking advantage of the binary nature of the 
problem - in particular, it makes reducing modulo 2^p-1 easy - but 
testing 2^127-1 would still take a long time on account of the sheer 
number of marks to be made (or beans counted, or whatever). If 
you have an aptitude for doing arithmetic in octal or hexadecimal, 
that would probably be most efficient.

The major problem with hand calculation is that mistakes happen. 
(Imagine a computer that makes a random error with a probability 
of 1% each time an instruction is executed...) I avoided wasting 
time by checking each stage by "casting out nines"; this detects 
single-digit errors, but not necessarily multiple errors in a 
calculation, or transposition errors.
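The check itself rests on the fact that a number and its digit sum are congruent mod 9; as a sketch (the helper names here are just illustrative):

```python
# Sketch: "casting out nines". A number and the sum of its digits are
# congruent mod 9, so a hand-computed product can be cross-checked by
# comparing digit roots. Catches single-digit slips, but not
# transpositions (which leave the digit sum unchanged).
def digit_root(n):
    """Repeated digit sum, i.e. the residue of n mod 9 (9 stands for 0)."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def check_product(a, b, claimed):
    """Casting-out-nines consistency check of a*b == claimed."""
    return digit_root(digit_root(a) * digit_root(b)) == digit_root(claimed)

print(check_product(1234, 5678, 1234 * 5678))
print(check_product(1234, 5678, 1234 * 5678 + 1))
```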
 
   STL137 asks in his post if there is a test similar to the LL test for
 numbers of the form 2^N-k where k=3,5,7,  A primality test of the
 Lucasian type depends on the factorization of N+1, so I guess not.
 However, for some k*2^N-1 there is a primality test that uses the familiar
 S{n} = S{n-1}^2-2 differing only in the starting value S{0}.  See
 Riesel, "Prime Numbers and Computer Methods of Factorization" for example.

In Knuth vol 2 (3rd ed) one of the worked examples refers to this. 
(My copy is not to hand at present, so I can't give a detailed 
pointer, but it's obvious enough if you follow the references to 
"Lucas, Edouard" in the index)

I guess that the problem in practical implementation is that you 
don't felicitously get calculation mod k*2^N-1 from the FFT unless k 
happens to be 1.
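For reference, the S{n} = S{n-1}^2 - 2 recurrence quoted above makes a very short program as a plain Lucas-Lehmer test for 2^p-1 (standard starting value S{0} = 4; the Riesel variants for k*2^N-1 change only that starting value):

```python
# Sketch: Lucas-Lehmer test for M(p) = 2^p - 1, p an odd prime.
# S{0} = 4, S{n} = S{n-1}^2 - 2 (mod M); M is prime iff S{p-2} == 0.
def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
```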



Regards
Brian Beesley


--

From: "Cornelius Caesar" [EMAIL PROTECTED]
Date: Thu, 18 Mar 99 11:34:39 CET
Subject: Mersenne: How to factor further?

I got the idea to do some factoring with my now slower-than-average
machine (a P133), but I don't want to factor at the current assignments
(in the 9M range); instead I would like to fill up the factor limits of
small exponents to some common value (56 bits or 58 bits or so).

Of course, doing it manually using "Advanced - Factor" is out of the question,
so I thought to create appropriate entries in worktodo.ini and send the
results unsolicited :-) to the PrimeNet server.

However, I seem to hit the automatic factor limit value in Prime95, or
something else:

  Error: Work-to-do-file contained bad factoring assignment: 65537,56

Is it possible to do what I am trying?

Cornelius Caesar




--

From: "George Strohschein" [EMAIL