On 22/11/2011 3:11 PM, Philippe Michel wrote:
> Are there list members who are knowledgeable on rng robustness and could
> confirm that, if one starts with a "good" rng for integers uniformly
> distributed between 0 and n and discards any occurrence of the top p
> values, one gets a good rng for the [0, n-p] interval?
>
> Intuitively, it is tempting to say: "yes, and it is a condition for the
> original generator being any good", but I'm afraid it may be a
> treacherous domain.
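For an ideal uniform source, discarding the top values and drawing again does leave the surviving values exactly uniform; that is the standard rejection approach. A minimal C sketch of the idea, assuming a rand()-style source (random_below is just an illustrative name, not anything in gnubg):

    #include <stdlib.h>

    /* Illustrative only: uniform integer in [0, m-1] from rand(),
       rejecting the top of rand()'s range so no value is favoured. */
    static int random_below(int m)
    {
        /* Largest multiple of m that fits in rand()'s range. */
        unsigned long span  = (unsigned long)RAND_MAX + 1;
        unsigned long limit = span - span % (unsigned long)m;
        unsigned long r;

        do {
            r = (unsigned long)rand();   /* draw */
        } while (r >= limit);            /* discard the top values, retry */

        return (int)(r % (unsigned long)m);
    }

Because limit is an exact multiple of m, every remainder occurs equally often among the draws that are kept.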
I brought this up a good 8 years ago; at the time gnubg was using a straight modulo. My original post was:

http://lists.gnu.org/archive/html/bug-gnubg/2003-08/msg00410.html

Joern Thyssen's response:

http://lists.gnu.org/archive/html/bug-gnubg/2003-08/msg00438.html

The fix he put in was based on the following information, which you will note in his response:

-------
I've modified the code to use:

   anDice[ 0 ] = 1+(int) (6.0*rand()/(RAND_MAX+1.0));

instead. This is suggested on the rand(3) man page:

   In Numerical Recipes in C: The Art of Scientific Computing (William H.
   Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling;
   New York: Cambridge University Press, 1992 (2nd ed., p. 277)), the
   following comments are made:

   "If you want to generate a random integer between 1 and 10, you should
   always do it by using high-order bits, as in

       j=1+(int) (10.0*rand()/(RAND_MAX+1.0));

   and never by anything resembling

       j=1+(rand() % 10);

   (which uses lower-order bits)."

--
Michael Petch
CApp::Sysware Consulting Ltd.
OpenPGP FingerPrint=D81C 6A0D 987E 7DA5 3219 6715 466A 2ACE 5CAE 3304
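For anyone who wants to check the scaled formula empirically, a throwaway tally along these lines (my own test scaffolding, not gnubg code) should show roughly equal counts for each face:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Throwaway check: roll a million dice with the high-order-bits
       scaling from the man page and print how often each face came up. */
    int main(void)
    {
        long counts[6] = { 0 };
        long i;

        srand((unsigned)time(NULL));
        for (i = 0; i < 1000000; i++) {
            int die = 1 + (int)(6.0 * rand() / (RAND_MAX + 1.0));
            counts[die - 1]++;
        }
        for (i = 0; i < 6; i++)
            printf("%ld: %ld\n", i + 1, counts[i]);
        return 0;
    }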
