Hi,
On 2010-10-18, at 11:30, Bogdan Opanchuk wrote:
> Hi all,
>
> Consider the following program, which is supposed to check the
> "randomness" of pycuda random number generator:
[...]
> def test(size, dtype):
>     a = curandom.rand((size,), dtype=dtype).get()
>     return numpy.sum(a) / size
[...]
> I get the following output on CentOS, PyCuda 0.94, GF GTX280:
> <type 'numpy.float32'>
[...]
> 8.0M elements: 0.499930769205
> 16.0M elements: 0.49999204278
> 32.0M elements: 0.5
> 64.0M elements: 0.25
> 128.0M elements: 0.125
[...]
> It looks like for single precision the generator starts repeating itself
> after ~16M numbers. Is this a restriction of the algorithm or a bug
> in the code?
I'm not sure what is happening exactly, but there is no indication that the
random numbers are *repeating themselves*.
However, you seem to be hitting a floating point issue *when computing the
sum*: if you replace numpy.sum(a)/size with
numpy.sum(a.astype(numpy.float64))/size, you get correct results for float32
(always around 0.5).
It is actually not surprising: when you sum 2**25 numbers which are on
average 0.5, you get ~2**24. And in single precision floating point, the
mantissa is indeed stored using 24 bits, so when you try adding the next
numbers, they are simply too small to register in the mantissa and are
silently dropped. E.g. try: float32(2**24) + float32(0.5) == float32(2**24)!
A funny example of the limits of single precision floating point...
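To see it for yourself, here is a small NumPy-only sketch, independent of
PyCUDA (the explicit loop mimics a naive left-to-right accumulation; note
that recent NumPy versions use pairwise summation in numpy.sum, which
partly hides the effect, so the loop makes it explicit):

```python
import numpy as np

# A float32 mantissa holds 24 bits, so 2**24 is the point where
# adding 0.5 no longer changes the value (it rounds back down):
x = np.float32(2**24)
print(x + np.float32(0.5) == x)   # True: the 0.5 is rounded away

# A naive sequential float32 accumulation therefore stalls once the
# running sum reaches 2**24 -- every further 0.5 is lost:
acc = np.float32(2**24)
for _ in range(1000):
    acc += np.float32(0.5)
print(acc == np.float32(2**24))   # True: 1000 additions had no effect

# Accumulating in float64 (cf. the a.astype(numpy.float64) workaround
# above) keeps all the contributions:
acc64 = np.float64(2**24)
for _ in range(1000):
    acc64 += np.float64(0.5)
print(acc64)                      # 2**24 + 500.0 = 16777716.0
```

This also explains why the reported mean collapses to exactly 0.25 at 64M
elements and 0.125 at 128M: the float32 sum saturates near 2**24, while the
divisor keeps doubling.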
Vincent
--
Vincent Favre-Nicolin http://inac.cea.fr
CEA/Grenoble Institut Nanosciences & Cryogénie
Laboratoire SP2M/Nano-structures et Rayonnement Synchrotron
17, rue des Martyrs
38054 Grenoble Cedex 9 - France
Université Joseph Fourier http://www.ujf-grenoble.fr
tél: (+33) 4 38 78 95 40 fax: (+33) 4 38 78 51 38
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda