Mersenne Digest            Friday, 19 March 1999       Volume 01 : Number 535


----------------------------------------------------------------------

From: "Brian J Beesley" <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 10:12:24 GMT
Subject: Re: Mersenne: LL testing

[... snip ...] (Interesting, but requires no comment)

>   M(p) is usually the pth Mersenne number and Mp is 2^p-1 in the
> literature.  Though occasionally M(p) is used as 2^p-1 on the list.  It
> could cause confusion only for small p.  Is M(3) 2^3-1 or 2^5-1?
> 
Sorry, I'm guilty of this confusion (though I don't think I'm alone in 
this)... Properly, we should be using subscripts (M sub p for 2^p-1), 
but this is a text-based list.

Also there is a problem in using M(p) for the pth Mersenne prime. 
Do we mean the pth numerically or the pth to be discovered? 
Several of the known Mersenne primes were discovered out of 
numeric sequence. Also, given that double-checking has passed 
M(35) but is still quite incomplete in the gap up to 2^2976221-1, I 
think it makes sense to use the provisional names "Spence's 
Number" and "Clarkson's Number" for the two known Mersenne 
primes bigger than M(35) = 2^1398269-1, until double-checking is 
complete.

I'm also guilty of confusion between "Mersenne number" meaning a 
number of the form 2^p-1 for some prime p and a prime number of 
the form 2^p-1. I prefer the former definition, on the basis that (a) 
"Mersenne prime" is an obvious replacement for "prime number of 
the form 2^p-1", (b) some of the numbers 2^p-1 Mersenne was 
interested in (surely "Mersenne numbers" if the term has _any_ 
meaning) turned out to be composite, despite p being prime.

> > did Lucas use his test by hand? I know he did it by
> > hand, at the very least.
> 
>   I don't know about Lucas.  Read some of the articles in Luke's
> bibliography, it is a wonderful history.  Lehmer and Uhler used desk
> calculators.  Uhler made many calculations of logarithms and powers,
> see for example http://www.scruznet.com/~luke/lit/lit_019.htm
> Though poor Uhler had been doing LL-tests in the gap between M127 and
> M521.

I deliberately tested 2^31-1 using decimal hand calculation as an 
exercise in arithmetic. I can see that using a board with p columns 
would be a way of taking advantage of the binary nature of the 
problem - in particular, it makes reducing modulo 2^p-1 easy - but 
testing 2^127-1 would still take a long time on account of the sheer 
number of marks to be made (or beans counted, or whatever). If 
you have an aptitude for doing arithmetic in octal or hexadecimal, 
that would probably be most efficient.
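For anyone who would rather let a machine make the marks, the whole exercise is a few lines in any arbitrary-precision language. A minimal sketch (Python for its built-in bignums; the function name is mine):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for an odd prime p: 2^p - 1 is prime
    iff s(p-2) == 0, where s(0) = 4 and s(k) = s(k-1)^2 - 2 (mod 2^p - 1)."""
    m = (1 << p) - 1          # the Mersenne number under test
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # by hand, the reduction is easy because 2^p = 1 (mod m)
    return s == 0
```

Running it on 2^31-1 reproduces the hand calculation in a blink; 2^127-1 takes only a moment longer.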

The major problem with hand calculation is that mistakes happen. 
(Imagine a computer that makes a random error with a probability 
of 1% each time an instruction is executed...) I avoided wasting 
time by checking each stage by "casting out nines"; this detects 
single-digit errors, but not necessarily multiple errors in a 
calculation, or transposition errors.
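The check mechanizes easily too. A minimal sketch (function names are mine): each Lucas-Lehmer step is verified modulo 9, which catches a single-digit slip but, as noted, lets transpositions through:

```python
def cast_out_nines(n):
    """Digit-sum residue: n is congruent to the sum of its decimal digits mod 9."""
    return sum(int(d) for d in str(n)) % 9

def step_checks(s, claimed):
    """Verify the step s -> s^2 - 2 against a claimed result, modulo 9 only."""
    return (cast_out_nines(s) ** 2 - 2) % 9 == cast_out_nines(claimed)
```

A transposed result such as 149 in place of 14^2 - 2 = 194 has the same digit sum, so it sails through the check, exactly the blind spot described above.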
> 
>   STL137 asks in his post if there is a test similar to the LL test for
> numbers of the form 2^N-k where k=3,5,7,....  A primality test of the
> Lucasian type depends on the factorization of N+1, so I guess not.
> However, for some k*2^N-1 there is a primality test that uses the familiar
> S{n} = S{n-1}^2-2 differing only in the starting value S{0}.  See
> Riesel, "Prime Numbers and Computer Methods of Factorization" for example.

In Knuth vol 2 (3rd ed) one of the worked examples refers to this. 
(My copy is not to hand at present, so I can't give a detailed 
pointer, but it's obvious enough if you follow the references to 
"Lucas, Edouard" in the index)

I guess that the problem in practical implementation is that you 
don't felicitously get calculation mod k*2^N-1 from the FFT unless k 
happens to be 1.



Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm

------------------------------

From: "Cornelius Caesar" <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 99 11:34:39 CET
Subject: Mersenne: How to factor further?

I got the idea to do some factoring with my now slower-than-average
machine (a P133), but I don't want to factor at the current assignments
(in the 9M range); instead I would like to fill up the factor limits of
small exponents to some common value (56 bits or 58 bits or so).

Of course, doing it manually using "Advanced - Factor" is out of the question,
so I thought to create appropriate entries in worktodo.ini and send the
results unsolicited :-) to the PrimeNet server.

However, I seem to hit the automatic factor limit value in Prime95, or
something else:

  Error: Work-to-do-file contained bad factoring assignment: 65537,56

Is it possible to do what I am trying?

Cornelius Caesar



------------------------------

From: "George Strohschein" <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 08:34:17 -0500
Subject: Mersenne: RE: Mersenne Digest V1 #534

> Almost no spam gets sent with the tacit approval of the underlying ISP,
> and most ISPs are anxious to kill spammers. The more people who complain,
> the better. Apologies if I insulted anyone's intelligence; I'm posting
> this on the off-chance someone doesn't know it already.
>
> jasonp
>

Thanks for the post.  This site is the best I have for learning, and I'm not
insulted in the least (yet).
George


------------------------------

From: Paul Leyland <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 06:58:37 -0800
Subject: RE: Mersenne: How to factor further?

> I got the idea to do some factoring with my now slower-than-average
> machine (a P133), but I don't want to factor at the current assignments
> (in the 9M range); instead I would like to fill up the factor limits of
> small exponents to some common value (56 bits or 58 bits or so).

If what you want to do is find factors, rather than just planting a marker
at a rather arbitrary limit, I'd recommend that you turn to ECM factoring.
For example, if you run a couple of thousand curves with a B1 limit of a
million, you will very probably find all factors under 100 bits --- vastly
in excess of what you will be able to do by trial division.  No-one can
guarantee that you won't miss one of, say, 70 bits but the odds are very
much in your favour.  I'll leave it to others to calculate the exact
probability.

I'm running ECM on a P166 laptop; it found a 102-bit factor of M1201 after a
few hundred curves with B1=1 million.
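For those wondering how these methods find such large factors: ECM is, loosely, Pollard's p-1 method transplanted onto random elliptic curves, so that a factor appears as soon as *some* curve happens to have smooth group order. A minimal stage-1 sketch of the simpler p-1 ancestor (the function name and the worked example are mine, not Paul's):

```python
from math import gcd

def pollard_p_minus_1(n, B1):
    """Stage 1 of Pollard's p-1: finds a prime factor p of n whenever
    every prime power dividing p - 1 is at most B1."""
    sieve = [True] * (B1 + 1)   # small prime sieve up to B1
    a = 2
    for q in range(2, B1 + 1):
        if sieve[q]:
            for j in range(q * q, B1 + 1, q):
                sieve[j] = False
            e = 1                       # largest e with q^e <= B1
            while q ** (e + 1) <= B1:
                e += 1
            a = pow(a, q ** e, n)       # fold each prime power into the exponent
    d = gcd(a - 1, n)
    return d if 1 < d < n else None

# 241 - 1 = 2^4 * 3 * 5 is 16-powersmooth, while 2027 - 1 = 2 * 1013 is not,
# so stage 1 with B1 = 16 pulls 241 out of 241 * 2027.
```

ECM replaces the single fixed group (Z/pZ)* of order p-1 with the group of a random curve mod p, whose order varies near p, which is why running more curves keeps buying fresh chances.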


Paul

------------------------------

From: Henrik Olsen <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 18:59:38 +0100 (CET)
Subject: Re: Mersenne: How to factor further?

On Thu, 18 Mar 1999, Cornelius Caesar wrote:
> I got the idea to do some factoring with my now slower-than-average
> machine (a P133), but I don't want to factor at the current assignments
> (in the 9M range); instead I would like to fill up the factor limits of
> small exponents to some common value (56 bits or 58 bits or so).
> 
> Of course, doing it manually using "Advanced - Factor" is out of question,
> so I thought to create appropriate entries in worktodo.ini and send the
> results unsolicited :-) to the PrimeNet server.
> 
> However, I seem to hit the automatic factor limit value in Prime95, or
> something else:
> 
>   Error: Work-to-do-file contained bad factoring assignment: 65537,56
> 
> Is it possible to do what I am trying?
Only by recompiling Prime95/mprime; the limits are hardcoded in the code.

From commonc.h in version 17.7:

/* Factoring limits based on complex formulas given the speed of the */
/* factoring code vs. the speed of the Lucas-Lehmer code */

#define FAC64   9150000L                /* How far to factor */
#define FAC63   7270000L
#define FAC62   5160000L
#define FAC61   LIMIT192                /* This is 3960000L  */
#define FAC60   2950000L
#define FAC59   2360000L
#define FAC58   1930000L
#define FAC57   1480000L
#define FAC56   1000000L


-- 
Henrik Olsen,  Dawn Solutions I/S
URL=http://www.iaeste.dk/~henrik/
Get the rest there.


------------------------------

From: Chris Nash <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 01:14:20 -0500
Subject: Re: Mersenne: Re: Mersenne Digest V1 #533

Hi folks

>There is a technique for assessing the "naturalness" of economic data.
>This technique, known as Benford's Law, demonstrates that the
>first digits of naturally occurring phenomena do not occur with equal
>frequency.  In fact, lower digits occur with greater frequency in
>tabulated natural data than larger digits.

Great approach to this from Rodolfo... Benford's Law is visually familiar to
those of us old enough to remember such anachronisms as tables of logarithms
and slide rules! Statistically, it's because models of natural processes
(say, radioactive decay and general "time between" distributions) yield an
exponentially decaying Poisson distribution. From a computing point of view,
it's the same effect as what happens if you generate floating-point numbers
from "random" bits: the distribution is skewed (the probability a number is
between 1 and 2 is the same as between 2 and 4). Pick an (unbounded)
random number... how can you do it? You can't make all numbers to infinity
equally probable, and any smooth transform of the number picked should be
"equally random" and have the same distribution. The answer turns out to be
logarithmic, hence Benford's Law.
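Concretely, Benford's Law says a leading decimal digit d appears with probability log10(1 + 1/d). A quick sketch:

```python
import math

# Benford's Law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
# digit 1 leads about 30.1% of the time, digit 9 only about 4.6%;
# the nine probabilities telescope to log10(10) = 1
```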

(It may be apocryphal, but apparently some 8-bit machine (perhaps Atari?)
had a means of generating "random" numbers because some memory location was
subject to "noise" - effectively some component acted as a radio antenna. It
may even have been by design... but of course results obtained by sampling
this location for random bits were awful. Being natural they were not only
non-uniform and non-independent but also subject to their surroundings. Can
anyone validate this?).

Anyway, such logarithmic behavior is certainly visible in the Mersenne data.
Heuristically, the probability N=2^n-1 is prime is going to be proportional
to 1/log N, i.e. proportional to 1/n. (We ignore constraints such as n needing
to be prime and the factors being of a specific form, but this is a good
enough start.) Hence theoretically we expect the number of Mersenne primes
of exponent less than L to be a partial sum of this, proportional to log L.
Hence we expect the n'th Mersenne prime to have an exponent increasing
exponentially, and, in reverse, the logarithms of the exponents should be
statistically regularly spaced. (What follows may be *very* sensitive to a
better model of the distribution, but this will do as a first estimate).

In an argument similar to Benford's Law, the fractional part of these
logarithms should, for a "random" phenomenon, be uniformly distributed on
[0,1). If the phenomenon is truly random, then this result should hold no
matter what base of logarithm we choose. However, consider plotting the
statistical deviation of such observations from randomness for different
bases of logarithm. Any marked deviation from statistical "noise" and
sampling error is a good indicator of non-random data for which the
logarithm base is some sort of controlling parameter. (In effect this is a
similar approach to curve-fitting our expected distribution model to the
observed data.)

I'd be interested to hear from anyone who constructs such a statistical
deviation vs logarithm base plot. We may expect such a statistical approach
to suggest a distribution where the overall scaling, and artifacts such as
Noll's islands, manifest themselves in the plot as large deviations from
randomness and spikes in the plot. This is one for the statisticians, to
create a suitable measure of the deviation of these fractional parts from a
uniform distribution on [0,1). Perhaps the sample variance will be a good
first measure, but with only 37 samples and a high degree of
non-independence, beware!
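As a starting point, here is a minimal sketch (the function name and the choice of statistic are mine) of the measure Chris proposes, applied to the 37 exponents known at the time; a uniform distribution on [0,1) has variance 1/12, so the statistic is the excess over that:

```python
import math

# the 37 Mersenne prime exponents known as of early 1999
EXPONENTS = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607,
             1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213,
             19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091,
             756839, 859433, 1257787, 1398269, 2976221, 3021377]

def variance_deviation(base):
    """Sample variance of the fractional parts of log_base(p) over the
    known exponents, minus the 1/12 expected of a uniform [0,1) sample."""
    fracs = [math.log(p, base) % 1.0 for p in EXPONENTS]
    mean = sum(fracs) / len(fracs)
    var = sum((f - mean) ** 2 for f in fracs) / len(fracs)
    return var - 1.0 / 12.0
```

Sweeping `base` and plotting `variance_deviation(base)` gives the deviation-versus-logarithm-base plot asked for; with only 37 correlated samples, any apparent spike needs the caveats Chris gives.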

Chris Nash
Lexington KY
UNITED STATES




------------------------------

From: Bob Margulies <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 14:18:41 -0800
Subject: Mersenne: Continuation file format

Would someone please tell me where in the pxxx, qxxx continuation files
the iteration count is located? Thank you.

------------------------------

From: "david campeau" <[EMAIL PROTECTED]>
Date: Thu, 18 Mar 1999 22:12:46 PST
Subject: Re: Mersenne: How to factor further?

>
>I got the idea to do some factoring with my now slower-than-average
>machine (a P133), but I don't want to factor at the current assignments
>(in the 9M range); instead I would like to fill up the factor limits of
>small exponents to some common value (56 bits or 58 bits or so).
>
>Of course, doing it manually using "Advanced - Factor" is out of question,
>so I thought to create appropriate entries in worktodo.ini and send the
>results unsolicited :-) to the PrimeNet server.
>
>However, I seem to hit the automatic factor limit value in Prime95, or
>something else:
>
>  Error: Work-to-do-file contained bad factoring assignment: 65537,56
>
>Is it possible to do what I am trying?
>
Hi,

I have just that, sitting in one of my folders. It's a modification to 
prime95 ver 16.4 which accepts AdvancedFactor entries (but it does not 
generate a valid checksum, i.e. WS1: 00000000). I also have a small program 
that generates a worktodo.ini file based on the file available at 
www.mersenne.org. I've switched to double-checking since then, but if I 
remember correctly you could communicate with PrimeNet and submit those 
results without a problem.

Sample input:
AdvancedFactor=2122363,55,55
AdvancedFactor=2099393,55,55

Sample output:
[Fri Oct 02 01:01:19 1998]
UID: diamonddave/seeker, M1319719 no factor from 2^55 to 2^56, WS1: 00000000
[Fri Oct 02 01:45:30 1998]
UID: diamonddave/seeker, M1319741 no factor from 2^55 to 2^56, WS1: 00000000

Regards,

David,

PS: Just mail me if you want it.



------------------------------

From: "Brian J Beesley" <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 09:36:19 GMT
Subject: Re: Mersenne: Re: Mersenne Digest V1 #533

Chris Nash writes:

> I'd be interested to hear from anyone who constructs such a statistical
> deviation vs logarithm base plot. We may expect such a statistical approach
> to suggest a distribution where the overall scaling, and artifacts such as
> Noll's islands, manifest themselves in the plot as large deviations from
> randomness and spikes in the plot. This is one for the statisticians, to
> create a suitable measure of the deviation of these fractional parts from a
> uniform distribution on [0,1). Perhaps the sample variance will be a good
> first measure, but with only 37 samples and a high degree of
> non-independence, beware!

Sadly the statistical inferences that can be drawn indicate no 
evidence of any deviation from a theoretical "smooth" exponential 
decay curve. There is a message in the archive on this very point 
(search for "Noll island").

Studies of related large primes, e.g. 3*2^n+1 and 5*2^n+1, exhibit 
similar distributions, though they do "look less lumpy" to the naked 
eye. (The top limit is around 300,000 rather than 3 million.)

The point is that random events *do* tend to occur in clusters. As 
an example, here in Northern Ireland we have already had more 
accidental deaths in house fires this year than we had in the whole 
of 1998, or in the whole of 1997. Politicians may panic, calling for 
compulsory fitting of smoke detectors, etc., but in fact there is no 
evidence that this is anything other than a run of "bad luck".
Similarly I can find no statistically convincing evidence, even at the 
5% level, that the "Noll islands" really do exist.

(The rest of this reply is off-topic. Stop reading now if you object)

> (It may be apocryphal, but apparently some 8-bit machine (perhaps Atari?)
> had a means of generating "random" numbers because some memory location was
> subject to "noise" - effectively some component acted as a radio antenna. It
> may even have been by design... but of course results obtained by sampling
> this location for random bits were awful. Being natural they were not only
> non-uniform and non-independent but also subject to their surroundings. Can
> anyone validate this?).

I've never seen a system with a built-in hardware RNG. However, I do 
know that no less an authority than von Neumann suggested that 
this was a worthwhile feature to have built into the architecture. Of 
course it has to be properly designed to be of any value. I believe 
von Neumann suggested shot noise from a thermionic valve as a 
suitable source; nowadays few computers incorporate such 
elements, but thermal noise from a high-value resistor would 
do equally well. Or, for that matter, acoustic noise from a 
microphone... the point is that you want to amplify the signal way 
up (distortion, extra noise, interference etc. introduced by this is 
not a problem!) then take only a few of the least significant bits 
output by the ADC.

You *do* need to check the output of many samples taken from 
such a hardware RNG using a variety of statistical tests before you 
can trust it, and you *do* need to test each completed hardware 
RNG individually.

There may also be a need to non-linearly transform values output 
from the RNG if you need to have a smooth flat distribution of 
random values to feed into your application. (Especially if the RNG 
is based on time intervals between shot noise / radioactive decay 
type events)
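One classical transform of exactly this flavour is von Neumann's own debiasing trick: take the raw bits in pairs, emit a bit only when the pair differs, and discard the rest. Provided the raw bits are independent, the output is exactly unbiased, at the cost of throwing away at least half the input. A minimal sketch (function name mine):

```python
def von_neumann_extract(raw_bits):
    """Debias a stream of independent but possibly biased bits:
    the pair (0,1) emits 0, (1,0) emits 1, (0,0) and (1,1) emit nothing."""
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

Note that this only removes bias; it does nothing for correlation between successive raw bits, which is why the statistical testing urged above is still needed.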

Nevertheless, done properly, such a technique for generating 
random numbers is *far* superior to the pseudo-random number 
generator functions in standard programming languages.

Regards
Brian Beesley

------------------------------

From: "Brian J Beesley" <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 10:12:40 GMT
Subject: Re: Mersenne: How to factor further?

> On Thu, 18 Mar 1999, Cornelius Caesar wrote:
> > I got the idea to do some factoring with my now slower-than-average
> > machine (a P133), but I don't want to factor at the current assignments
> > (in the 9M range); instead I would like to fill up the factor limits of
> > small exponents to some common value (56 bits or 58 bits or so).

How small an exponent? Bear in mind that the number of candidate 
factors of 2^p-1 in the range (2^k,2^(k+1)) doubles with a unit 
increase in k but is inversely proportional to p; i.e. trial factoring 
from 61 to 62 bits for an exponent around 4 million takes about the 
same time as trial factoring from 59 to 60 bits for an exponent 
around 1 million. (Once you go over 62 bits, hardware starts to 
interfere: testing each candidate > 2^62 takes significantly longer 
than it would if the candidate were < 2^62.)
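For reference, the reason the candidates are so thin on the ground: any prime factor q of 2^p-1, for p an odd prime, has the form q = 2kp + 1 and satisfies q ≡ ±1 (mod 8), so trial factoring steps through only every 2p-th odd number and discards three quarters of those by the mod 8 test. A minimal sketch (function name mine):

```python
def smallest_factor(p, max_bits):
    """Trial-factor 2^p - 1 (p an odd prime) up to 2^max_bits, testing
    only candidates q = 2*k*p + 1 with q = +-1 (mod 8)."""
    q = 2 * p + 1
    while q < (1 << max_bits):
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q          # 2^p = 1 (mod q), so q divides 2^p - 1
        q += 2 * p
    return None
```

Because the step size is 2p, the number of candidates below a fixed bit limit is inversely proportional to p, which is exactly the trade-off described above.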
> > 
> > Of course, doing it manually using "Advanced - Factor" is out of question,
> > so I thought to create appropriate entries in worktodo.ini and send the
> > results unsolicited :-) to the PrimeNet server.

Well, the PrimeNet server will mutter at you, but I think the results 
would nevertheless be accepted. I'm not sure if doing things this 
way would make extra work for Scott and/or George, so I think 
you'd better check before embarking on a stunt like this "big time".
Also, you won't get "credit" for assignments that were not issued 
by PrimeNet as part of the normal, automatic process.
> > 
> > However, I seem to hit the automatic factor limit value in Prime95, or
> > something else:
> > 
> >   Error: Work-to-do-file contained bad factoring assignment: 65537,56
> > 
> > Is it possible to do what I am trying?

What are you trying to do? Factor=65537,56 says that exponent 
65537 has already been trial-factored to 56 bits; please trial-factor 
until the limit coded into Prime95 has been reached.

> Only by recompiling Prime95/mprime, the limits are hardcoded in the code.

A smart hacker would be able to patch the constants directly in the 
executable - but I didn't suggest it, and will strenuously deny any 
liability for any consequential damage...
> 
> From commonc.h in version 17.7:
> 
> /* Factoring limits based on complex formulas given the speed of the */
> /* factoring code vs. the speed of the Lucas-Lehmer code */
> 
> #define FAC64   9150000L                /* How far to factor */
> #define FAC63   7270000L
> #define FAC62   5160000L
> #define FAC61   LIMIT192                /* This is 3960000L  */
> #define FAC60   2950000L
> #define FAC59   2360000L
> #define FAC58   1930000L
> #define FAC57   1480000L
> #define FAC56   1000000L

These limits were in fact changed recently (when v17 was 
introduced). Therefore there are a number of exponents which have 
had one or two LL tests run, by old versions of Prime95 and/or 
other programs, which have not been trial factored to the current 
limit. It would be possible to winkle these out (nofactor.zip from the 
database) & run trial factoring to the current limit.

It would be better to stick to the exponents that have been 
tested only once - the point is that, if you find any new factors, you 
will save a double-check LL test. For exponents which have already 
been double-checked, we are sure the numbers are composite even 
though we don't know any factors.

Nevertheless I feel that it would be better to run ECM instead, if 
you're looking for new factors of small(ish) exponents.

Regards
Brian Beesley

------------------------------

From: Henk Stokhorst <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 11:19:08 +0100
Subject: Re: Mersenne: How to factor further?

david campeau wrote:

> Hi,
>
> I have just that, sitting in one of my folders. It's a modification to
> prime95 ver 16.4 which accepts AdvancedFactor entries (but it does not
> generate a valid checksum, i.e. WS1: 00000000). I also have a small program
> that generates a worktodo.ini file based on the file available at
> www.mersenne.org. I've switched to double-checking since then, but if I
> remember correctly you could communicate with PrimeNet and submit those
> results without a problem.
>
> Sample input:
> AdvancedFactor=2122363,55,55
> AdvancedFactor=2099393,55,55
>
> Sample output:
> [Fri Oct 02 01:01:19 1998]
> UID: diamonddave/seeker, M1319719 no factor from 2^55 to 2^56, WS1: 00000000
> [Fri Oct 02 01:45:30 1998]
> UID: diamonddave/seeker, M1319741 no factor from 2^55 to 2^56, WS1: 00000000

Given this demand, can you include something like this in 17.2, George, in
the advanced menu? Then we would have valid WS1 checksums.

YotN,

Henk


------------------------------

From: "Aaron Blosser" <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 08:33:26 -0700
Subject: RE: Mersenne: Re: Mersenne Digest V1 #533

> From: Brian J Beesley
> Sent: Friday, March 19, 1999 2:36 AM

> I've never seen a system with a built-in hardware RNG, however I do
> know that no less an authority than von Neumann suggested that
> this was a worthwhile feature to have built into the architecture.

FWIW, that whole Pentium III fiasco regarding privacy was about the CPU ID
on the chip.  Well, Intel built a hardware RNG onto the chip, which is used
in the process of pseudo-encrypting this value.  Not sure about the details,
but I'm pretty sure it's just picking up thermal noise.

> There may also be a need to non-linearly transform values output
> from the RNG if you need to have a smooth flat distribution of
> random values to feed into your application. (Especially if the RNG
> is based on time intervals between shot noise / radioactive decay
> type events)

I don't know...once you start monkeying with the output, it then becomes
pseudo-random again.  Basically, *someone* is telling the numbers "sorry,
you're not random enough for me" so they "adjust" them.  Hmmm...

> Nevertheless, done properly, such a technique for generating
> random numbers is *far* superior to the pseudo-random number
> generator functions in standard programming languages.

What about the way PGP gets its random number seed?  You move the mouse and
hit the keyboard for a good 10-15 seconds to generate a "random" sample.

I doubt anyone could exactly duplicate their keystrokes and mouse movements
from one time to the next, and this is about as random as thermal noise from
a resistor.  Is it pseudo-random because a human is involved?  Couldn't I
affect the results of thermal noise in a non-predictable way by merely
waving my hand over the resistor, or the transistors in the amp stage?

Just food for thought. :-)

Aaron


------------------------------

From: "Robert Clark" <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 10:26:00 -0800
Subject: Mersenne: RE: True random numbers

> FWIW, that whole Pentium III fiasco regarding privacy was
> about the CPU ID
> on the chip.  Well, Intel built a hardware RNG onto the chip
> which is used
> in the process of pseudo-encrypting this value.  Not sure
> about the details,
> but I'm pretty sure it's just picking up thermal noise.

For modest quantities of true random numbers - see
http://www.fourmilab.ch/hotbits/

Robert




------------------------------

From: "David L. Nicol" <[EMAIL PROTECTED]>
Date: Fri, 19 Mar 1999 19:20:24 +0000
Subject: Re: Mersenne: Re: Mersenne Digest V1 #533

> Intel built a hardware RNG onto the chip which is used
> in the process of pseudo-encrypting this value.  Not sure about the details,
> but I'm pretty sure it's just picking up thermal noise.

Does anyone know if those con men with the video cameras and the three
lava lamps are still in business? I never cease to wonder at how
brainwashed the computer hardware purchasing public can be when there
are otherwise reasonable people willing to lay out ten large for a
result you can get just as reliably by jamming a dime's worth of
hardware into an otherwise unused serial port.

------------------------------

End of Mersenne Digest V1 #535
******************************
