Cryptography-Digest Digest #514, Volume #9 Fri, 7 May 99 13:13:05 EDT
Contents:
Re: Shamir's Discover: to those in the know (Robert Harley)
Re: Random permutation (Mok-Kong Shen)
Re: Crypto export limits ruled unconstitutional ("hapticz")
Re: Random permutation (Mok-Kong Shen)
Re: The simplest to understand and as secure as it gets. (SCOTT19U.ZIP_GUY)
Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (SCOTT19U.ZIP_GUY)
Re: The simplest to understand and as secure as it gets. ([EMAIL PROTECTED])
Re: Discrete Logarithm Question (Emmanuel BRESSON)
Re: AES ([EMAIL PROTECTED])
Re: Random permutation ([EMAIL PROTECTED])
Re: True Randomness & The Law Of Large Numbers (Herman Rubin)
Re: Fast random number generator (Herman Rubin)
----------------------------------------------------------------------------
From: Robert Harley <[EMAIL PROTECTED]>
Subject: Re: Shamir's Discover: to those in the know
Date: 07 May 1999 17:48:27 +0200
[EMAIL PROTECTED] (DJohn37050) writes:
> ECC remains a very hard problem and the methods used to break it can
> essentially break any method at those keysizes, that is, the ECDLP
> continues to appear to be an ideal problem to base a cryptosystem on.
You have to be careful with statements like that. The elliptic-curve
discrete logarithm problem in a large finite field is assumed to be
difficult when the number of points on the curve is divisible by a
large prime p (it can be roughly as big as the field size). The
amount of work to solve the ECDL problem appears to be O(sqrt(p))
field operations in general.
But for some classes of curves it is less. For instance a while ago
Semaev showed that some could be done in polynomial time!
Last year, I *cough* found how to quickly solve ECDL problems in large
fields when the coefficients defining the curve are in a small
subfield. With people around the Net contributing machine time, we
solved a problem in GF(2^97) with a curve defined over GF(2) in a
fraction of the time a genuinely hard case would take, even though the
problem, set by Certicom, was supposed to be a hard case!
I think the lesson here is that ECDLs are an ideal problem as long as
you stick to randomly chosen ones with a large prime dividing the
number of points. If you go and pick some ultra-special curves you
are likely to get bitten sooner or later. Doesn't matter if it makes
your arithmetic faster or counting points easier or falls under some
patent you hold. Just don't do it, 'cause if you do all bets about
security are off.
Anybody interested in solving the next Certicom problem?
It is in the field GF(2^97) and the curve is not defined over any
small subfield so it ought to take about 4 x 10^14 elliptic curve
operations.
I've got code to solve it, ready to run, that does about 110 K
operations per second on a 450 MHz Pentium II.
It does 85 K on a 333 MHz UltraSparc with Sun's 32-bit compiler but
200 K with their new 64-bit compiler (64-bit registers really help!).
It does about 350 K on a 500 MHz Alpha 21164 and 450 K on the new 21264.
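As a rough sanity check on that 4 x 10^14 figure, and to get a feel
for the wall-clock time a distributed effort might need, here is a
back-of-the-envelope sketch in C. It just takes the square root of the
group order (ignoring constant factors of order one) and divides by an
aggregate rate; the assumption of 300 machines averaging 200 K
operations per second is mine, purely for illustration.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Crude estimate: about sqrt(n) curve operations for a group of
           order n; constant factors of order one are ignored here. */
        double n    = pow(2.0, 97.0);
        double ops  = sqrt(n);                  /* ~4e14, as quoted above */

        /* Illustrative aggregate rate: 300 machines at 200 K ops/sec each. */
        double rate = 300.0 * 200e3;

        printf("expected curve operations: %.2e\n", ops);
        printf("rough running time: %.0f days\n", ops / rate / 86400.0);
        return 0;
    }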
If people out there with fast machines are interested, let me know!
Assuming there's enough interest, i.e. hundreds of machines, we can go
ahead. There's prize money of $5000 to be won, which would go to a
good cause (previously we gave $4000 to the Free Software Foundation
and the rest to the people who found the two halves of the solution).
Bye,
Rob.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Random permutation
Date: Fri, 07 May 1999 16:06:35 +0200
Volker Hetzer wrote:
>
> > Question: Are there more efficient ways of achieving the same goal?
> Like modulo?
The problem is that there is a non-zero chance that one has to
wait very long until one gets the result. (It should be mentioned
that one needs to get only 15 distinct numbers, the 16th then being
uniquely determined.)
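To make the waiting-time point concrete, here is a minimal sketch of
the draw-and-reject approach under discussion: keep drawing positions
0..15 until 15 distinct ones have appeared, after which the 16th is
forced. The use of rand() is only a stand-in for whatever generator
one actually trusts.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 16

    int main(void)
    {
        int perm[N], used[N] = {0};
        int filled = 0, draws = 0, r;

        srand((unsigned)time(NULL));
        while (filled < N - 1) {          /* only 15 distinct values needed */
            r = rand() % N;
            draws++;
            if (!used[r]) { used[r] = 1; perm[filled++] = r; }
        }
        for (r = 0; r < N; r++)           /* the 16th value is then forced  */
            if (!used[r]) perm[filled] = r;

        for (r = 0; r < N; r++) printf("%d ", perm[r]);
        printf("\ndraws needed: %d\n", draws);
        return 0;
    }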
M. K. Shen
------------------------------
From: "hapticz" <[EMAIL PROTECTED]>
Subject: Re: Crypto export limits ruled unconstitutional
Date: Fri, 7 May 1999 12:02:11 -0400
Being as limited in scope as it is, it may have little impact on the
export restrictions as a whole.
"In part or in whole" is what I am concerned with here.
--
best regards
[EMAIL PROTECTED]
remove first "email" from address, sorry i had to do this!
MegaZone wrote in message <7gt70e$e0a$[EMAIL PROTECTED]>...
|I hope this stands up on appeal:
|
|<URL:http://www.news.com/News/Item/0,4,0-36217,00.html?st.ne.lh..n>
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Random permutation
Date: Fri, 07 May 1999 16:12:48 +0200
[EMAIL PROTECTED] wrote:
>
>
> I think the S values can have a cycle length longer than the key (or seed)
> but I will have to look into it..... I wrote a paper on this method, but it
> won't be ready for two more days... (about that...)
If you have a complete mathematical analysis, I should like very
much to read your paper. But so far it has been too difficult for me
to follow your arguments with my humble knowledge.
M. K. Shen
------------------------------
From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.privacy
Subject: Re: The simplest to understand and as secure as it gets.
Date: Fri, 07 May 1999 11:51:25 GMT
In article <7gug7j$tk5$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
>
> > Well Tom, I feel my encryption works well on any file. However,
> > my compression methods are more for those that are stuck with weak
> > runt-key methods that the NSA can most likely break, such as whatever
> > gets selected by the AES process. If the file does not compress very
> > well, and if one uses the method of a forward compression pass and a
> > reverse compression pass, the file would grow in length. From an
> > entropy point of view the average entropy per bit would be worse,
> > since the files are longer. However, if one tended to use files that
> > are very close with only a few changes, I think certain weaknesses
> > would be diffused and "partial plaintext attacks" would be hard to
> > mount. However, if the enemy can mount "chosen whole-file plaintext
> > attacks" then there is no protection whatsoever. And if the encryption
> > could be broken the compression would add little. However, if one is
> > just using low-entropy files that compress, this would not occur unless
> > the enemy can trick you into encrypting very special whole-length
> > binary files. I hope this answers your question.
>
> Sorta. I still don't see the strength on files that don't compress... You
> will have literal encodings... and if it's a file such as an MP3 I could
> pick out the literals even from an encrypted file (if I had the time...).
>
> BTW, what 'encrypt' are you using anyways? A stream or block cipher?
>
> Tom
>
I use scott16u for most of the stuff that I pass to others.
It is kind of a block cipher, but the block size is the size of
the file.
David A. Scott
--
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE: EMAIL address is for SPAMMERS;
to email me use the address on my WEB PAGE
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Fri, 07 May 1999 13:13:34 GMT
In article <[EMAIL PROTECTED]>,
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> SCOTT19U.ZIP_GUY wrote:
> >
>
> > Like I said, these are my comments internal to the program. The
> > flag or flags were for various states internal to the program and
> > not actually an EOF symbol being written out. That being said, there
> > are several basic endings to the file. The comment "case 1 added .."
> > does happen to be in the code path where padding can occur. The
> > Huffman tree is not a typical tree. For example, there is never a
> > token that has more than 8 ones connected in series. The tree
> > is built so that there is never a single token made up of all
> > zeros that is shorter than 8 bits.
> > In other words, when one compresses there is never a token of exactly
> > one zero, or two zeroes, or three zeroes, up to seven. There might be
> > one that has exactly 8, but in most cases when real compression takes
> > place there may not even be one with 8 zeros, though there is at the
> > very beginning and for certain special cases. That being said:
> > for a large number of cases, when compression does not end
> > on a byte boundary, the last byte is padded with zeros. This padding
> > would consist of one (or zero, if you think about it) to seven zeros.
> > Note this is not a token, since the smallest all-zero token can only
> > be 8 or more bits in length. Now for some of the special cases.
> > Suppose the last symbol being written crosses a byte boundary, the
> > part that falls in the last byte would be all 1's, and the
> > token before the spanning one is not all zeros. In this case I don't
> > even write the last byte at all. The flags and such were to keep
> > track of conditions as I did testing. You may want to edit out
> > all comments. But what I hope works is that for any file X, if you
> > decompress it to Y and then recompress it to Z, then files X and Z
> > are exactly the same. But note the EOF or zeros in my code are not
> > always there. And even when using just ASCII, many different cases
> > can pop up. The trick is that since any file can be a valid compression,
> > there is no easy handle for the attacker to use when he guesses a key
> > to an encrypted file that was compressed by my method.
>
> You never know what kind of files are going to be compressed by
> your program. Someone might try to compress ANY binary file with
> that. So you can't ignore some cases because they are rare in your
> opinion. Consider an extremely unbalanced tree with 8 nodes such
> that each right branch (taking 1) leads to a terminating node. Then
> 8 0's is a valid symbol. Well, in this case you could mirror the
It handles the case of 8 0's. The tree is such that there is
never an all-zero symbol shorter than eight bits.
> tree and avoid the situation. But you are handling 256 symbols and
> the tree is constantly being updated. Of course, you could build
> in a barrier while building the tree in order to avoid 8 0's being
> the code of a valid symbol. But is then the tree optimal? Perhaps
Yes, the tree is optimal, since there is always more than one tree
that preserves the same symbol lengths.
> you could find some way in arbitrary situations to achieve
> that similar to what I illustrated above. But that would make your
> program logic unnecessarily complicated. And I am not sure that you
> can easily show that the optimality is always preserved.
Yes, I am sure the logic is somewhat complicated, but that is
what computers are made for.
>
> The second point is that you are (explicitly or implicitly) using
> 8 0's as EOF. If instead of that you simply have another group
> of bits (also not in the Huffman tree) as EOF for termination there
> can be no functional difference I can see, excepting that your trick
> of leaving out the last byte (under certain circumstances) cannot
> be applied. (This EOF will be hanging around during updates of the
> Huffman table and will have different codes assigned to it at
> different time points. It is only the last version that you output.)
No, there is no EOF hanging around in the tree. However, as you note
above, 0 to 7 zeros could be thought of as the effective EOF in many
of the cases, but not all. If you can code an EOF such that any file
could still be treated as the decompressed form of some other file,
then fine.
>
> I am not sure of having time to do extensive tests of your program,
> for I am mainly interested in the principles of doing compression
> and in that direction my doubts concerning your program have not
> yet been eliminated, as argued above. But my argument points out
> at least that one useful way to test your program is to feed it with
> files having very extreme frequency distributions (one should also
> take care to have all 256 symbols, thus forcing the tree to be
> as big as possible).
I have tested all 2-byte files and only a few files that use all 256
symbols. So you may have a point, but the tree is constructed such
that there are "always exactly 256 symbols". I start with a very
small weight for unused symbols. Then, when a symbol occurs, its weight
is greatly increased, so that if one looks at the node sitting on top of
the unused symbols, it plays the role of what the old code uses as a
NOT YET OCCURRED symbol. But when all 256 symbols have occurred you
just have a normal-looking Huffman tree of 256 symbols. My method
may not be as fast as the old method, since the tree always has 256
leaves, but it uses fewer bits for the introduction of unused symbols.
That fact alone makes it compress to a shorter file. Also, dropping
the header information makes it smaller.
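To make the weighting idea concrete, here is a toy sketch (my own
reading of the description above, with made-up constants, and
emphatically not the actual scott16u/scott19u code): all 256 symbols
are present from the start with a tiny weight, and a symbol's weight is
boosted sharply whenever it occurs, so never-seen symbols sink together
toward the bottom of the tree.

    #include <stdio.h>

    #define NSYM   256
    #define TINY     1      /* starting weight for a symbol never yet seen */
    #define BOOST  256      /* added when the symbol occurs (my guess)     */

    static unsigned long weight[NSYM];

    static void init_weights(void)
    {
        int s;
        for (s = 0; s < NSYM; s++)
            weight[s] = TINY;
    }

    static void update(int s)
    {
        weight[s] += BOOST;
        /* ...a Huffman tree over weight[0..255] would be rebalanced here... */
    }

    int main(void)
    {
        int c;
        init_weights();
        while ((c = getchar()) != EOF)
            update(c);
        for (c = 0; c < NSYM; c++)
            if (weight[c] > TINY)
                printf("symbol %3d weight %lu\n", c, weight[c]);
        return 0;
    }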
For short files I think the method is very hard to beat, even by
non-Huffman compression. Take any two-byte file; there are many such
files. Most would compress to only 2 bytes in length, while less
than 1 percent would come out at 3 bytes. Granted, this is no net gain
for files of this size. But the old adaptive Huffman compression
would pass the first byte unchanged, and if the second byte differs from
the first it would pass the second token as 9 bits. That is 17 bits,
i.e. 3 bytes, for the vast majority of such files, before you even decide
to add an EOF symbol or whatever to indicate the end of file.
David A. Scott
--
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE: EMAIL address is for SPAMMERS;
to email me use the address on my WEB PAGE
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: [EMAIL PROTECTED]
Crossposted-To: talk.politics.crypto,alt.privacy
Subject: Re: The simplest to understand and as secure as it gets.
Date: Fri, 07 May 1999 13:43:01 GMT
> No, it is not an OTP, but there is a write-up by a guy at my site.
> There are links to the code at my webpage. I hope you're in the
> US so you can get scott19u; it is better. But the main difference
> is that I use 19-bit fields in it, while the 16u uses 16-bit fields.
19-bit fields? You mean GF(2**19)? Isn't that slower? Just wondering...
I am in Canada (eh?) so I will take a peek at your code this weekend when I
have the time (I am at school right now, hehe, what else is grade 13 comp sci
for?).
I will get back to you.... :)
Tom
--
PGP public keys. SPARE key is for daily work, WORK key is for
published work. The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'. Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'. Try SPARE first!
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
Date: Fri, 07 May 1999 10:44:12 -0400
From: Emmanuel BRESSON <[EMAIL PROTECTED]>
Subject: Re: Discrete Logarithm Question
> > you should have phi(N) > p+q almost every time...
> That was the first impression I had. However, Phi(N) is lcm(p-1, q-1)
What is your definition of Phi(N)???
Mine is "Phi(N) := the number of integers in [1,...,N-1] which are relatively
prime to N".
Then Phi(N) = (p-1)(q-1), since N = p*q with p, q two distinct primes.
(e.g. 35 = 7*5, phi(35) = 24)
> , which
> is (p-1)*(q-1) only when p-1 and q-1 are coprime.
In 99% of cases, both (p-1) and (q-1) are even, hmm?
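A quick numerical check of the two quantities for the example above
(the quantity that equals (p-1)*(q-1) only when p-1 and q-1 are coprime
is lcm(p-1, q-1), i.e. Carmichael's lambda(N), not Euler's Phi(N)).
A minimal sketch for N = 35:

    #include <stdio.h>

    static unsigned gcd(unsigned a, unsigned b)
    {
        while (b) { unsigned t = a % b; a = b; b = t; }
        return a;
    }

    int main(void)
    {
        unsigned p = 7, q = 5, n = p * q, phi = 0, k;

        /* Euler's phi by the definition given: count k in [1, n-1]
           relatively prime to n. */
        for (k = 1; k < n; k++)
            if (gcd(k, n) == 1)
                phi++;

        printf("phi(%u) = %u   ((p-1)*(q-1) = %u)\n", n, phi, (p-1)*(q-1));
        printf("lcm(p-1, q-1) = %u\n", (p-1)*(q-1) / gcd(p-1, q-1));
        return 0;   /* prints 24 for phi(35) and 12 for the lcm */
    }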
Emmanuel
------------------------------
Date: Fri, 07 May 1999 10:42:38 -0400
From: [EMAIL PROTECTED]
Subject: Re: AES
Patrick Juola wrote:
>
> In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
> >
> >On Wed, 05 May 1999 14:13:47 GMT, in <[EMAIL PROTECTED]>,
> >in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:
> >
> >>On Wed, 05 May 1999 04:46:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
> >>
> >>>
> >>>On Tue, 04 May 1999 21:39:14 GMT, in
> >>><[EMAIL PROTECTED]>, in sci.crypt
> >>>[EMAIL PROTECTED] (Bruce Schneier) wrote:
> >>>
> >>>>On Fri, 30 Apr 1999 13:12:52 -0500, "Anthony King Ho"
> >>>><[EMAIL PROTECTED]> wrote:
> >>>
> >>>>[...]
> >>>>>and algorithm such as Triple-DES is proven
> >>>>>to be very secure.
> >>>>
> >>>>Agreed.
> >>>
> >>>Sorry. There is NO proof of strength for Triple-DES or any other
> >>>cipher in cryptography.
> >>
> >>Sorry. I thought he meant "proven" in the vernacular, as in "this
> >>meal has proven to be very tasty." I agree that there is no
> >>mathematical proof of the strength of triple-DES, which has
> >>nonetheless proven to be very secure (and tasty).
> >
> >It turns out to be difficult to justify such a statement even using a
> >non-mathematical definition of "proof." For example:
> >
> >We can say a race car is "proven" in practice: We can see it run for
> >some time at some speed and compare it to other entrants. Basically a
> >car is movement at speed and we can see that.
> >
> >We can say a cipher program is "proven" in practice: We can see it
> >encipher data, produce junk, then recover the original data. We can
> >see whether or not the program crashes. We thus see the program
> >perform its functions.
> >
> >But the function of a cipher is to hide information from others.
> >Simply using a cipher for a long time while not specifically knowing
> >that it exposes secrets hardly means the secrets are secure. We have
> >no way to *see* whether or not hiding occurs; we have no way to see a
> >cipher perform its function. So we simply do not have the same sort
> >of practical experience that we commonly call "proven."
>
> But it's no more difficult than Bruce Schneier's original example :
> "this meal has proven to be very tasty." There's no objective
> standard for "tasty"; the only "proof" I can offer is that I got
> a hundred people together and fed them the meal, and they all
> responded positively about how it tasted.
>
> This example illustrates nicely some of the issues involved:
> at one level of "proof", all it really means is that *I* liked the
> meal, and you have no reason to believe me or to trust my judgement.
> (Are you paying attention, Mr. Scott?) If I tell you I made egg-and-olive
> ice cream and it proved delicious, you may not regard this as sufficient
> evidence. Certainly not if you were TCBY and looking for new flavors.
>
> At another level, I could present my meal to a large collection of people
> and see what they all said. Yes, some of them might lie. Yes, some of
> them might not like the meal -- but if 99% of the people said that they
> liked it, that might be sufficient for you (or TCBY) to believe
> that it's "tasty."
>
> A third level of "proof" might involve a carefully selected group of
> food critics and chefs known for their discriminating palates; when
> they say it's "tasty", that means even more than when the man on the
> street says so.
Does it? I suspect not. The man-on-the-street (MOTS) is expressing a
willingness to buy. The critics and experts are selling something:
their expertise.
The field of subjective judgements is filled with examples of contrarian
reactions from experts vs MOTS. Movies and theatrical plays are
notorious for the inverse relationship between critical success and
box-office success.
Similarly, figure skating competition features critics and experts
(judges) who have vastly different criteria than the average audience.
A Disney skating show does not resemble a competition at all. The
former is aimed at the critics, and the latter at the audience.
Let's consider judging ciphers by the same dual standard. Financial
success can be considered orthogonal to critical success, and neither
has any rigorous relationship to security. They simply pander to
different tastes in information security.
>
> >I would say that simply using a cipher for a long time -- and not
> >specifically knowing it is broken -- does *not* constitute "proven
> >security," by any definition of "proof."
>
> Simply using a cypher? No. But a lot of experts -- read : food
> critics and chefs -- have looked at DES for a long time, and so
> far no one has gone on record as saying they regard it as "not
> tasty." If you look at 20 years worth of restaurant reviews regarding
> a particular restaurant, and without exception every one of them was
> good, wouldn't that constitute effective "proof" that the restaurant
> served good food?
It would depend on who assembled the collection of reviews. Note that a
collection without any negative criticism would convince me that someone
was distorting the truth.
Let's let our paranoia run a little loose and inspect the relationships
between academic funding and the national security interests. Before we
ask how tightly connected these issue are, let's ask how visible the
connections are. It is demonstrably within the capability, interest,
and practice of the national security interests to /i/n/t/e/r/f/e/r/e/
intervene in the academic funding world. Thus the collection of reviews
that I would like to rely upon is tainted. How badly? I wish I knew.
>
> -kitten
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Random permutation
Date: Fri, 07 May 1999 13:48:41 GMT
> Is this your own method or is there literature where the method
> is dealt with in more detail? I have never heard of the name
> bisection method in the present context.
Here is how it works... You take a deck and you split it at card S; each
pile of cards (there are now two piles) has n/2 cards. Then you take
alternating cards from the two piles to form the new deck (i.e. one card
from pile one, then one from pile two, then one from pile one)... If you
don't repeat the S values this has a maximal cycle.
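If I follow the description, one step of the shuffle looks roughly like
the sketch below; how the interleaving should handle the two piles being
of unequal size when S is not exactly n/2 is my own guess.

    #include <stdio.h>

    #define N 16

    /* One "bisection" step: cut the deck after card S, then interleave,
       taking one card from the first pile, then one from the second. */
    static void bisect(const int in[N], int out[N], int s)
    {
        int a = 0, b = s, k = 0;      /* a walks pile one, b walks pile two */
        while (a < s || b < N) {
            if (a < s) out[k++] = in[a++];
            if (b < N) out[k++] = in[b++];
        }
    }

    int main(void)
    {
        int deck[N], next[N], i;
        for (i = 0; i < N; i++) deck[i] = i;
        bisect(deck, next, 7);        /* split at S = 7, chosen arbitrarily */
        for (i = 0; i < N; i++) printf("%d ", next[i]);
        printf("\n");
        return 0;
    }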
The obvious thing is that any non-repeating sequence of S values gives
maximal length, but this makes n outputs which have an entire cycle length
of n! (assuming S doesn't repeat until then). I suggested using the key
length as the length of the S cycle; since the output of the decks can be
defined by the key, your S values need not be any more complex. Of course
you could use a longer S cycle, but that would have little benefit.
In general, with a 128-bit key (seed), you can generate n*(2**128) outputs
before repeating.
I think the S values can have a cycle length longer than the key (or seed),
but I will have to look into it..... I wrote a paper on this method, but it
won't be ready for two more days... (about that...)
Tom
--
PGP public keys. SPARE key is for daily work, WORK key is for
published work. The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'. Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'. Try SPARE first!
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 7 May 1999 08:57:52 -0500
In article <[EMAIL PROTECTED]>,
Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
>"R. Knauer" wrote:
>> So now you appear to be saying that each bit in the sequence is a
>> separate measurement of the random generation process.
>Each bit is certainly supposed to be an *independent sample* of the
>output of the process, when the process meets its randomness specs.
When it comes to the problem of getting good-quality random bits,
this can be taken at best as an approximation, and one subject to
careful testing. This is a major concern.
It is easy to give many reasons why this can happen. One of them
is a gradual shift in some system parameter, but there are other
transient effects, which do not affect the uniformity more than
they affect the independence.
>Perhaps it may clarify something if you consider the testing as an
>attempt at "proof by contradiction", which is of course a standard
>method in mathematics and other rational disciplines. It goes
>something like this:
> 1) We assume the TRNG instance is functioning correctly.
What does this mean? We can only assume that it is a close
approximation.
> 2) We deduce from (1) and the definition of correct
> functioning that the output stream has certain simple
> (UBP) statistical characteristics.
This needs clarification. It has a probability distribution
of statistical properties.
> 3) We observe the actual output and find that it does
> not have those statistical characteristics.
We find only that it has produced results which are unlikely under those
assumptions.
> 4) (2) and (3) are contradictory; therefore the assumption
> (1) cannot have been correct.
The most we can say is that it is not likely to have been close
to correct. Highly improbable outcomes can occur.
> 5) Therefore the TRNG instance is not functioning correctly.
We should act as if it is not functioning close to what is desired.
>In the above, I intentionally neglected the distinction between exact
>knowledge and probabilistic knowledge (with high degree of certainty),
>for pedagogical reasons -- because it is easier to understand the
>argument in a purely Aristotelian (Boolean) setting.
The point is that one cannot act this way. I have indicated the
modifications which need to be made to take into account randomness.
These differences are important. Also, the typical use of testing,
setting p-values, is not what should be done. It takes a lot of
data to distinguish between a physical RNG doing quite a good job and one
which has many weaknesses.
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: Fast random number generator
Date: 7 May 1999 09:43:07 -0500
In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
>On Wed, 05 May 1999 17:56:55 GMT, in
><[EMAIL PROTECTED]>, in sci.crypt
>[EMAIL PROTECTED] (John Savard) wrote:
..............
>>>>The classical shuffling algorithm depends on a good random number
>>>generator to produce small random numbers in the range [0, size-1].
>>No, it needs more than that.
>>It needs one random number in the range {0, 1, 2, ... size-1 }, and
>>then it needs one random number in the range {0, 1, 2, ... size-2 },
>>and then one in the range {0, 1, 2, ... size-3 }.
................
>In particular, an appropriate way to develop a random value of
>arbitrary range is to first mask the random value by the next higher
>power of 2 less one, and then reject any value beyond the desired
>range.
This method is quite inefficient in terms of bit usage, especially
when the value m, for which a number in the range 0 to m-1 is needed,
is just above a power of 2. The masking method is optimal if
m = 2^k or m = 2^k - 1, but not otherwise. For m = 5, it
uses an average of 4.8 bits, rather than the 3.39 possible.
For m = 65, it is 13.75, rather than 7.72.
Here is an algorithm which will use the minimum expected number of bits.
A version of the optimal bit algorithm is as follows, assuming
that m > 1. B denotes a new random bit at each use. It uses only
multiplication by 2, addition, subtraction, and comparison.

    unsigned rand_mod(unsigned m)        /* uniform on 0 .. m-1, m > 1 */
    {
        unsigned i = 1, j = 0;
        for (;;) {
            /* invariant: j is uniform on 0 .. i-1 */
            do { i = 2*i; j = 2*j + B; } while (i < m);
            if (j < m)
                return j;
            i = i - m; j = j - m;        /* j stays uniform on 0 .. i-1 */
        }
    }
The basis of the proof of correctness is that, at the top of each pass
through the loop, j is always uniform on 0 to i-1, and that this remains
true after the subtraction when that is done. No procedure returning a
single uniform random number in the range 0 to m-1, and retaining no
other results from the random process, can be more efficient.
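For anyone who wants to try it, here is a self-contained test; the
macro B built on rand() is merely a stand-in bit source for
demonstration, not part of the algorithm.

    #include <stdio.h>
    #include <stdlib.h>

    #define B (rand() & 1)              /* stand-in random bit, testing only */

    unsigned rand_mod(unsigned m)       /* the algorithm above */
    {
        unsigned i = 1, j = 0;
        for (;;) {
            do { i = 2*i; j = 2*j + B; } while (i < m);
            if (j < m)
                return j;
            i -= m; j -= m;
        }
    }

    int main(void)
    {
        int count[5] = {0}, k;
        for (k = 0; k < 100000; k++)
            count[rand_mod(5)]++;       /* each count should be near 20000 */
        for (k = 0; k < 5; k++)
            printf("%d: %d\n", k, count[k]);
        return 0;
    }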
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************