Mersenne Digest Friday, November 26 1999 Volume 01 : Number 664
----------------------------------------------------------------------
Date: Wed, 24 Nov 1999 08:56:11 -0800
From: Stefan Struiker <[EMAIL PROTECTED]>
Subject: Mersenne: Desperately Seeking DeepInfo On Factoring
M&Ms:
Am looking for FAQs on factoring, and on the algorithm in particular.
Is there life after 2^64?
Awaiting The Mersennium,
Stefan S
XQRPA
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers
------------------------------
Date: Wed, 24 Nov 1999 14:38:57 EST
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: Mersenne Digest V1 #663; Norwegian Abel contest problem
In a message dated 24/11/99 09:38:55 GMT Standard Time,
[EMAIL PROTECTED] writes:
<<
> First, perhaps I should explain some notation :-) a_11 is the letter `a',
> followed by 11 in subscript. x^2 is the letter `x', followed by the letter
> 2 in superscript (ie. `x^2' would be mathematically the same as `x*x').
> OK, here goes:
>
> If (3x^2 - x - 2)^6 = (a_12)x^12 + (a_11)x^11 + ... + (a_1)x + a_0, what
> is a_0 + a_2 + a_4 + ... + a_12?
>
> The answer is an integer from 0 to 999, inclusive.
>
>>
I thought you would like to know how easy Steinar's problem
is in UBASIC:
4 'asave"polynom1"
10 A=poly(-2,-1,3) ' Set A = polynomial -2 - 1*x + 3*x^2
20 print A
30 B=A^6
40 print B
50 Sc=0
60 for I=0 to 12 step 2 ' sum even coefficients 0 to 12
70 Sc=Sc+coeff(B,I)
80 next I
90 print "Sum of even coefficients =";Sc
This gives the answer 32.
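For readers without UBASIC, the same computation is easy to sketch in Python (a hypothetical translation of the program above, not part of the original post; coefficient lists run from x^0 upward):

```python
# Expand (3x^2 - x - 2)^6 by repeated polynomial multiplication and
# sum the even-index coefficients, mirroring the UBASIC program.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (x^0 first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_pow(p, n):
    """Raise polynomial p to the n-th power."""
    result = [1]
    for _ in range(n):
        result = poly_mul(result, p)
    return result

b = poly_pow([-2, -1, 3], 6)    # (3x^2 - x - 2)^6, like poly(-2,-1,3)^6
even_sum = sum(b[0::2])         # a_0 + a_2 + ... + a_12
print("Sum of even coefficients =", even_sum)
```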
All the best, George.
------------------------------
Date: Wed, 24 Nov 1999 21:00:23 +0100 (MET)
From: [EMAIL PROTECTED]
Subject: Re: Mersenne: Re: Mersenne Digest V1 #663; Norwegian Abel contest problem
[EMAIL PROTECTED] gives a UBASIC solution to Steinar's problem:
<<
> First, perhaps I should explain some notation :-) a_11 is the letter `a',
> followed by 11 in subscript. x^2 is the letter `x', followed by the letter
> 2 in superscript (ie. `x^2' would be mathematically the same as `x*x').
> OK, here goes:
>
> If (3x^2 - x - 2)^6 = (a_12)x^12 + (a_11)x^11 + ... + (a_1)x + a_0, what
> is a_0 + a_2 + a_4 + ... + a_12?
>
> The answer is an integer from 0 to 999, inclusive.
>
>>
A Maple solution needs only one input line (plus the command line):
|\^/| Maple V Release 5 (WMI Campus Wide License)
._|\| |/|_. Copyright (c) 1981-1997 by Waterloo Maple Inc. All rights
\ MAPLE / reserved. Maple and Maple V are registered trademarks of
<____ ____> Waterloo Maple Inc.
| Type ? for help.
> add(coeff((3*x^2 - x - 2)^6, x, 2*j), j = 0..6); quit;
32
------------------------------
Date: Wed, 24 Nov 1999 14:47:33 -0500
From: "Tim Charron" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: Quiet list?
> First, perhaps I should explain some notation :-) a_11 is the letter `a',
> followed by 11 in subscript. x^2 is the letter `x', followed by the letter
> 2 in superscript (ie. `x^2' would be mathematically the same as `x*x').
> OK, here goes:
>
> If (3x^2 - x - 2)^6 = (a_12)x^12 + (a_11)x^11 + ... + (a_1)x + a_0, what
> is a_0 + a_2 + a_4 + ... + a_12?
Maple, UBasic, ugh. Are we that dependent on computers???
Warning, spoiler follows...
*
*
*
*
*
- define f(x)=(3x^2-x-2)^6=a_0+x*a_1+x^2*a_2+...+x^12*a_12
- Note that f( 1)=0=a_0+a_1+a_2+a_3+...+a_12
- Note that f(-1)=64=a_0-a_1+a_2-a_3+...+a_12
- So, f(1)+f(-1)=64=2*(a_0+a_2+a_4+...+a_12)
- Therefore, the required sum is 32.
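Tim's argument is easy to check numerically; a small Python sketch (the helper name f is mine, not from the post):

```python
# Plugging x = +1 and x = -1 isolates the even-degree coefficients:
# f(1) + f(-1) = 2*(a_0 + a_2 + ... + a_12), no expansion needed.

def f(x):
    return (3 * x**2 - x - 2) ** 6

even_coeff_sum = (f(1) + f(-1)) // 2
print(even_coeff_sum)  # f(1) = 0 and f(-1) = 2^6 = 64, so this prints 32
```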
-- Tim
------------------------------
Date: Wed, 24 Nov 1999 20:30:52 +0000
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Mersenne: Re: Mersenne Digest V1 #663; Norwegian Abel contest problem
On Wed, Nov 24, 1999 at 02:38:57PM -0500, [EMAIL PROTECTED] wrote:
>I thought you would like to know how easy Steinar's problem
>is in UBASIC:
But I explicitly disallowed any use of computers! :-)
BTW, the answer of 32 is correct, but there is a very elegant
solution (which I believe I posted to the list -- please correct
me if I'm wrong here).
/* Steinar */
--
Homepage: http://members.xoom.com/sneeze/
------------------------------
Date: Wed, 24 Nov 1999 21:40:14 +0000 (GMT)
From: Chris Jefferson <[EMAIL PROTECTED]>
Subject: Mersenne: Factoring Mersenne
Hello!
I was fiddling around with Mersenne factors the other day and came across
an interesting result. I'm not sure if it is of any use, but I think if
anyone can extend it just a little further, then it would cut down the
numbers we would have to try to factor by a HUGE amount...
Anyway, any factor of a Mersenne number with prime exponent p can be written as 2kp+1.
Since a product of numbers of that form is again of that form, any Mersenne
number 2^p - 1 (from here on written as P) can be written as
(2ap+1)(2bp+1) = P if it is not a prime.
Now define x = p(a+b)+1, y = p(a-b)
Then x+y = 2ap+1, x-y = 2bp+1
so P = (x-y)(x+y)
x^2 - y^2 = P
Now write n = a+b, m = a-b, so x = pn+1, y = pm
Then (pn)^2 + 2pn + 1 - (pm)^2 = P
Taking this mod p^2 and rearranging a little:
2pn = P-1 mod p^2
This means 2pn = (P-1) + Zp^2 for some integer Z
so (P-1) must have a factor of p, which we can cancel (we also know this
directly). Call (P-1)/p = Q
Then 2n = Q mod p
n = Q/2 mod p, which is well defined since p is odd.
Therefore we can find the sum of the two factors mod p.
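The relation above is easy to sanity-check on a small composite Mersenne number; a hypothetical Python sketch (names are mine) using M_11 = 2047 = 23 * 89:

```python
# Check that n = a + b satisfies 2n = Q (mod p), where Q = (P-1)/p,
# for P = 2^11 - 1 = 2047 = 23 * 89.

p = 11
P = 2**p - 1                  # 2047
f1, f2 = 23, 89               # 23 = 2*1*p + 1, 89 = 2*4*p + 1
a = (f1 - 1) // (2 * p)       # a = 1
b = (f2 - 1) // (2 * p)       # b = 4
n = a + b                     # 5

Q = (P - 1) // p              # 2046 / 11 = 186
# Solve 2n = Q (mod p): multiply by the inverse of 2 mod p.
n_mod_p = Q * pow(2, -1, p) % p
print(n_mod_p, n % p)         # both equal 5, as the derivation predicts
```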
I have tried (and failed so far) to get a similar relationship for y (or a
or b).
If we could get such a relationship for y, and we assumed we were looking
for the smallest factor, then we could work out something about that
factor (hopefully its value mod p), which would cut down on factoring
requirements.
Any ideas?
------------------------------------
Chris Jefferson, Girton College, Cambridge, [EMAIL PROTECTED]
------------------------------------
Someone may have beaten me to Fermat's Last Theorem, but I've done
Riemann's general equation. However it won't fit in my signature file...
------------------------------------
------------------------------
Date: Wed, 24 Nov 1999 15:55:09 -0600
From: "Willmore, David" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Re: Quiet list?
No, no, no, you guys have it all wrong, you take a pencil and a big sheet of
paper and write:
(brute force spoiler)
power 1 3 -1 -2
9 -3 -6
-3 1 2
-6 2 4
power 2 9 -6 -11 4 4
81 -54 -99 36 36
-54 36 66 -24 -24
-99 66 121 -44 -44
36 -24 -44 16 16
36 -24 -44 16 16
power 4 81 -108 -162 204 145 -136 -72 32 16
729 -972 -1458 1836 1305 -1224 -648 288 144
-486 648 972 -1224 -870 816 432 -192 -96
-891 1188 1782 -2244 -1595 1496 792 -352 -176
324 -432 -648 816 580 -544 -288 128 64
324 -432 -648 816 580 -544 -288 128 64
power 6 729 -1458 -1701 4320 1755 -5418 -1259 3612 780 -1280 -336 192 64
even exp sum 32
You may need to widen your windows for this one. :)
Cheers,
David
------------------------------
Date: Wed, 24 Nov 1999 19:13:57 EST
From: [EMAIL PROTECTED]
Subject: Mersenne: Mersenne Paper, Part II
Thanks to the number of people who have already read my Mersenne Primes paper
and commented on it, and thanks to the (hopefully larger) number of people
who read it without commenting. Could I make a request from the list? A
couple of people have noted that they live in a Micro$oft-free world and
can't read it. Could anyone make a more Unix-friendly version of the .DOC
file (even, say, HTML)? I'm afraid I don't know the slightest thing about
Unix or HTML coding. :-<
There were some miscellaneous nitpicks, and big thanks to the people who
notified me of them.
1) I was mistaken when I noted that taking numbers modulo 2^P - 1 is easy
because you can just keep the lower P bits. The process is still easy, but a
few more steps are required. D'oh! This is the only "error" found so far...
2) I didn't speak enough about the DWT algorithm used in Prime95. At the very
least I should have defined what a DWT is. D'oh!
3) My use of U_n for Fibonacci numbers is non-standard. (In my defense, I got
that from my source.)
4) I defined V_n in Proof 6, but not in the main paper. I was assuming that
the reader would read Proof 6 when I said "See Proof 6 in Appendix III", but
I should have mentioned it in the main body. D'oh!
The comments seem to be generally good, though. :-D
The URL of the paper is (for those who missed it)
http://members.aol.com/stl137/private/mersenne.zip
Please do not post links to the paper on public websites, but feel free to
send it to acquaintances.
Thanks again to all the people who have read it!
Stephan "Nitpicker" Lavavej
------------------------------
Date: Wed, 24 Nov 1999 19:28:53 EST
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: great books
Luke Welsh writes:
>The author tried to write two books at once. He really wanted
>to explain to the novice that the modern PC is essentially the
>same as the SWAC, built in 1950. By this, Rutland means that
>both are "stored program machines", and I skipped over sections
>where he tried to make this point. But I did learn that the
>credit does not all go to John von Neumann!! That was a huge
>surprise to me.
From what I've heard, von Neumann was one of those towering figures
who made outstanding contributions in many areas, but whose reputation
was such that he also got credit for many things he was only peripherally
involved in - much like Newton and Gauss.
The stored-program idea goes back at least to 1805, when Jacquard used
strung-together sequences of hole-punched wooden cards to control the
weaving patterns of his famous loom. A few years later, Babbage used
the same stored-program idea (also with punched cards) for his analytical
engine. Von Neumann himself usually credited the idea to Turing, who
advocated its use years before.
Thanks for the book review, Luke - another computer-oriented book
folks might want to check out (this one concerned more with algorithms
and models of computation) is "Out of Their Minds: the Lives and Discoveries
of 15 Great Computer Scientists," by Dennis Shasha and Cathy Lazere, 1995.
Now the fact that I like this book doesn't mean I agree with all the
points it (or the people whose work it describes) make. But that's good,
because it gets one thinking things like, "hmm, why is it that such-and-
such an idea never caught on? It seems so elegant in theory...". Example:
an interesting point I noted in rereading the section on John Backus, who
(among other things) led the IBM team that developed the first Fortran
compiler back in the 1950s, is that Backus was unhappy with many aspects
of his own creation, especially with its use of the so-called von Neumann
style of coding, which (for example) might compute an inner product of
two length-n vectors a and b using a scalar variable c to accumulate the
partial sums and an explicit loop structure like (I quote from the book)
" c := 0
for i := 1 step 1 until n do
c := c + a[i]*b[i]
Backus criticizes this formulation on several counts, but especially
the following two. First, c is repeatedly updated, so that the
only way to understand the program is to understand that each update
serves to add one more product to a running total. Thus, one must be
able to mentally execute the code in order to understand it.
Second, the formulation names its vector arguments (a and b) and
their length (n). To work for general vectors, these must be passed
as "reference" parameters - a tricky issue in most languages. In
FP [a language Backus proposed to remedy the perceived drawbacks
of the von Neumann style], by contrast, the inner product operation
would be defined as follows:
Def Innerproduct = (Insert +)(ApplyToAll *)(Transpose)
This must be read right to left. Transpose pairs up the appro-
priate elements of the two vectors. In the above example, a[1] should
be paired up with b[1], a[2] with b[2], and so on. '(ApplyToAll *)'
replaces each pair by its scalar product. '(Insert +)' sums up those
products.
This has three principal advantages, according to Backus. First,
there is no hidden state (the variable c in the program above). Second,
it works for any two vectors of the same size because it does not name
its arguments (thus eliminating the parameter-passing issue). Third,
there is no notion of repeated calculation; each step, or {\it function}
(Transpose, ApplyToAll, and Insert) is applied exactly once to the
result of the previous step in a process known as {\it composition}."
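For what it's worth, the quoted FP definition maps almost word for word onto functional-style Python (a hypothetical rendering of the idea, not FP itself), read right to left just as in the book:

```python
# zip plays the role of Transpose, map of (ApplyToAll *), and a
# fold plays (Insert +). No named state, no explicit loop counter.
from functools import reduce
from operator import add, mul

def inner_product(a, b):
    paired = zip(a, b)                            # Transpose
    products = map(lambda xy: mul(*xy), paired)   # (ApplyToAll *)
    return reduce(add, products, 0)               # (Insert +)

print(inner_product([1, 2, 3], [4, 5, 6]))        # 4 + 10 + 18 = 32
```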
In my opinion, many of the above arguments are spurious. For one thing,
what better way is there of defining "to understand" a code than to be
able to mentally execute it, i.e. to follow it step-by-step in one's head?
The argument about the passing of array dimensions is also spurious - to
set aside storage for the arrays, the computer must know their dimension
at allocation time, and thus n is not a "problematic" parameter to keep around.
The FP inner product definition also has its drawbacks - for instance, why
would a computer need the notion of an array transpose, especially for a
one-dimensional array? A human might need to know about this to do a pencil-
and-paper computation of an array product, but a computer doesn't care,
as long as the algorithmic rules for matrix multiplication are well-defined.
The "hidden state" argument is questionable - any decent compiler (human
or machine) can see that c is just a name for a place to store the partial
sums, and its accumulative nature is obvious from the assignment c := c +...
The naming issue is equally bogus - in practice, we have these things called
"dummy arguments," and writing their names down makes it a whole lot easier
to follow what is going on - can you imagine trying to generalize Backus'
name-less syntax to a problem involving many arrays? It's barely
understandable for two!
The book mentions that functional programming languages like FP have not
caught on for various reasons, one of the main ones being that they would
make it difficult for a compiler to generate efficient code. This seems
a rather spurious argument - in fact I would argue that many (if not most)
high-level languages already possess the functional composition features
Backus likes so much (A more modern term for it is "object orientation.")
Consider the successor to Backus' own baby, Fortran-90 (which is not
generally considered to create slow-running code), with its
whole-array syntax. In f90, one could do the inner product of vectors a and
b without imposing a particular loop structure or explicitly giving the
array dimensions, via
InnerProduct = sum(a*b).
This in fact works for arrays of any allowable number of dimensions,
as long as they have the same shape and size. If one desires a
true matrix product, one has the spiffy matmul intrinsic, which
allows the details of the underlying computation to be done as the
compiler writers see fit, i.e. the user has a reasonable expectation
that the computation will be done in some manner that is well-matched
to the underlying hardware, whether it be an x86, Alpha or SPARC (or
whatever), and a single or multiple-processor machine - the latter
aspect is referred to as "inherent parallelism."
My point with the preceding example is, it's often quite illuminating
to compare the good initial ideas people had with the form they wind
up with in the real world of computation.
Happy Thanksgiving to our American viewers,
-Ernst
------------------------------
Date: Thu, 25 Nov 1999 01:57:44 -0500
From: "Vincent J. Mooney Jr." <[EMAIL PROTECTED]>
Subject: Mersenne: Atanasoff
There seems to be some interest in the first computer. I refer you to the book
"The First Electronic Computer: The Atanasoff Story" by Alice R. Burks and
Arthur W. Burks, still available on amazon.com.
Dr. John Vincent Atanasoff (a Bulgarian name; Dr. Atanasoff was born
in Hamilton, New York, in 1903) is credited by court decision in 1973 with the
invention of the computer. The case in dispute was between Honeywell and
Sperry Rand for claims of the computer invention. If either party had
prevailed, the winner might have had patent rights. IBM was worried and
introduced JV (as he was called), who showed that he had invented the
computer at Iowa State in 1938 when he was in the mathematics department (JV
was a 1930 PhD in physics from the University of Wisconsin). The computer
invented belonged to JV and his assistant, Clifford Berry (hence the name ABC
= Atanasoff Berry Computer). There were several versions built, some in
1939 and in 1940.
The court decision was that as there was a prior invention (the ABC) which
had not been patented by anyone, no one could patent the computer concept.
I am delighted that that was the decision and told JV that several times (I
lived near him; his home was in New Market, Maryland and I was in Frederick,
Maryland) until he died about 10 years ago. He was always grouchy about my view
but did concede (mostly by remaining silent) that the speed of computer
advances was because there was no patent restriction in effect.
ENIAC owed much to Dr. Atanasoff, as Mauchly saw the ABC in visits to Iowa
State. Some visits were for several days ("for the better part of a week"
was JV's court testimony). Programming and program languages were not part
of JV's contribution. Dr. Mauchly's own testimony as reproduced in the book
shows he grudgingly agreed that he owed ideas and examples to others.
The original case was filed in 1968 as Honeywell v. Sperry Rand and Illinois
Scientific Developments. Among the almost 100 issues pushed by Honeywell
against the ENIAC patent, the judge, Earl R. Larson, ruled "Eckert and Mauchly did not
themselves first invent the automatic electronic digital computer, but
instead derived that subject matter from one Dr. John Vincent Atanasoff".
Other equally strong language was used to assert that JV and Berry held
nothing back concerning the machine's theory, design, construction, use or
operation; and that Mauchly went to Ames, Iowa and had correspondence with Atanasoff.
Judge Larson's decision was not appealed by anyone. A blessing to us all.
Dr. Atanasoff did not realize until late in life that he had done something
tremendous. He retired wealthy, but not from his computer invention.
------------------------------
Date: Thu, 25 Nov 1999 08:50:53 -0000
From: "Daniel Grace" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Factoring Mersenne
> Anyway, any mersenne's factor can be written as 2kp+1
>
<snip>
> directly). Call (P-1)/p = Q
>
> Then 2n = Q mod p
> n = Q/2 mod p which is well defined
> Therefore we can find the sum of the two factors mod p.
I think what you are trying to say is
M_p is composite for p a prime iff
1+2kp divides (2^(p-1)-1)/p - k for some k>0.
If I am not mistaken, factoring this using current methods
is harder than factoring 2^p - 1. Remember it is easy to
trial divide 2^p - 1 using bitwise operators, because
2^p - 1 = 1+2+2^2+...+2^(p-1). Let u be a potential divisor
(e.g. u = 2kp+1), and let j be the smallest integer such that 2^j - 1 > u;
then you can try dividing 2^p - 1 using just j bits of storage.
E.g. start with 1+2+2^2+...+2^(j-1), subtract u, shift to the
left, appending 1's at the low end, until you get v > u,
subtract u, and so on. I think that the software stops
if it gets a residue of 0 before all p bits have been
eliminated - in this case u divides some smaller Mersenne number.
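The same divisibility test can be sketched compactly with modular exponentiation (a hypothetical Python sketch, not the bit-shifting loop described above): u divides 2^p - 1 exactly when 2^p = 1 (mod u), which never requires forming the p-bit number.

```python
# Trial-divide 2^p - 1 by candidates of the form u = 2kp + 1.

def divides_mersenne(u, p):
    """Return True if u divides 2^p - 1."""
    return pow(2, p, u) == 1

p = 11
for k in range(1, 10):
    u = 2 * k * p + 1
    if divides_mersenne(u, p):
        print(u, "divides 2^%d - 1" % p)   # finds 23 and 89
```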
Rgds,
----------------------------------------------------------
Daniel
e-mail: [EMAIL PROTECTED]
------------------------------
Date: Thu, 25 Nov 1999 13:21:46 -0500
From: "Vincent J. Mooney Jr." <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Mersenne Primes Paper
At 10:04 PM 11/23/99 EST, you wrote:
>Greetings, everybody!
>
>My real name (which I've kept more or less a secret until now) is Stephan T.
>Lavavej. I'm a senior in high school, and I recently had to write an Extended
>Essay for the International Baccalaureate program. Now that I've just turned
>it in, I don't need to worry about getting "help" from outside sources. So,
>as promised, I have made my Extended Essay available on the Internet.
>
rest snipped.
I downloaded the paper and found it delightful. Thanks for making it
available. Where is Thornton High School ?
------------------------------
Date: Thu, 25 Nov 1999 20:37:08 +0000
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Mersenne: The `one and only' solution to the Abel problem
On Wed, Nov 24, 1999 at 06:21:48PM -0500, Lucas Wiman wrote:
>you didn't, or if you did (Then please repost it).
The `nice' way of doing it was posted (by somebody else) to the list the
other day, but I'll resend the answer I gave to David Willmore:
===
If (3x^2 - x - 2)^6 = (a_12)x^12 + (a_11)x^11 + ... + (a_1)x + a_0, then
obviously setting x = 1 and calculating would give a_12 + a_11 +
a_10 + ... + a_0. Still with me? OK... Now, if you set x = -1, you get
a_12 - a_11 + a_10 - ... + a_0. So, adding these two together gives
2(a_12) + 2(a_10) + 2(a_8) + ... + 2(a_0), which is exactly twice of
what we want. So, to sum it up:
(3x^2 - x - 2) at x = 1 is 3 - 1 - 2 = 0, so the sixth power is 0^6 = 0.
(3x^2 - x - 2) at x = -1 is 3 + 1 - 2 = 2, so the sixth power is 2^6 = 64.
So, the answer is (1/2)*(0 + 64) = 64/2 = 32.
===
I find it quite fascinating that multiple people on the list have been
doing this by brute-force. I tried that too, but when I saw it for the
first time, the other problems left me no time for doing it (so I never
got a correct answer) :-)
I think I got in three or four correct answers for this, but I haven't
really kept count.
/* Steinar */
--
Homepage: http://members.xoom.com/sneeze/
------------------------------
Date: Sat, 27 Nov 1999 01:12:39 -0000
From: "Ian L McLoughlin" <[EMAIL PROTECTED]>
Subject: Mersenne: Maple
Anybody know about this language?
Where can it be obtained, etc.?
Cheers,
Ian McLoughlin, Chematek U.K.
Tel/Fax : +44(0)1904 679906
Mobile : +44(0)7801 823421
Website: www.chematekuk.co.uk
------------------------------
End of Mersenne Digest V1 #664
******************************