Hi Frank,

Oh, OK, so only the examples fail, and the error messages are pretty clear. I 
thought you were speaking of the automated tests, which should definitely run 
on every card. The examples serve mainly as a place to copy code from, and the 
errors below should be easy enough to fix.
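For the out-of-memory failures, the likely fix is to cap the examples' array sizes at what the card can actually hold. Here is a rough sketch of the sizing logic, using a `largest_fitting_count` helper I'm making up for illustration; in the examples themselves, `free_bytes` would come from `pycuda.driver.mem_get_info()`:

```python
def largest_fitting_count(free_bytes, itemsize, safety=0.8):
    # Hypothetical helper, not part of PyCUDA: returns the largest
    # power-of-two element count whose buffer stays within
    # safety * free_bytes of device memory.
    n = 1
    while 2 * n * itemsize <= safety * free_bytes:
        n *= 2
    return n

# Your card reports 130752 KB total; with float32 (4-byte) elements,
# an 80% safety margin caps a single buffer at 2**24 elements.
print(largest_fitting_count(130752 * 1024, 4))
```

The examples allocate several arrays at once (source, target, timing copies), so the cap would probably need to be a good bit tighter than this single-buffer bound.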

Thanks for sending this, though!

Andreas

On Tuesday 07 July 2009, you wrote:
> Thank you for creating PyCUDA.  Here is the output from dump_properties.py,
> and some of the demos that had issues:
>
> Device #0: Quadro NVS 135M
>   Compute Capability: 1.1
>   Total Memory: 130752 KB
>   CAN_MAP_HOST_MEMORY: 0
>   CLOCK_RATE: 800000
>   COMPUTE_MODE: DEFAULT
>   GPU_OVERLAP: 1
>   INTEGRATED: 0
>   KERNEL_EXEC_TIMEOUT: 1
>   MAX_BLOCK_DIM_X: 512
>   MAX_BLOCK_DIM_Y: 512
>   MAX_BLOCK_DIM_Z: 64
>   MAX_GRID_DIM_X: 65535
>   MAX_GRID_DIM_Y: 65535
>   MAX_GRID_DIM_Z: 1
>   MAX_PITCH: 262144
>   MAX_REGISTERS_PER_BLOCK: 8192
>   MAX_SHARED_MEMORY_PER_BLOCK: 16384
>   MAX_THREADS_PER_BLOCK: 512
>   MULTIPROCESSOR_COUNT: 1
>   TEXTURE_ALIGNMENT: 256
>   TOTAL_CONSTANT_MEMORY: 65536
>   WARP_SIZE: 32
>
>
>
>   C:\downloads\cuda\pycuda\examples>matrix-transpose.py
>   Traceback (most recent call last):
>     File "C:\downloads\cuda\pycuda\examples\matrix-transpose.py", line 218, in <module>
>       run_benchmark()
>     File "C:\downloads\cuda\pycuda\examples\matrix-transpose.py", line 176, in run_benchmark
>       target = gpuarray.empty((size, size), dtype=source.dtype)
>     File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\gpuarray.py", line 81, in __init__
>       self.gpudata = self.allocator(self.size * self.dtype.itemsize)
>   pycuda._driver.MemoryError: cuMemAlloc failed: out of memory
>
>
>
> C:\downloads\cuda\pycuda\examples>select-to-list.py
> kernel.cu
> tmpxft_000003bc_00000000-3_kernel.cudafe1.gpu
> tmpxft_000003bc_00000000-8_kernel.cudafe2.gpu
> kernel.cu
> tmpxft_0000073c_00000000-3_kernel.cudafe1.gpu
> tmpxft_0000073c_00000000-8_kernel.cudafe2.gpu
> ptxas C:\DOCUME~1\fbuckle\LOCALS~1\Temp/tmpxft_0000073c_00000000-4_kernel.ptx, line 103; error   : Shared-space reduction operations require SM 1.2 or higher
> ptxas fatal   : Ptx assembly aborted due to errors
> Traceback (most recent call last):
>   File "C:\downloads\cuda\pycuda\examples\select-to-list.py", line 100, in <module>
>     """ % {"block_size": block_size, "el_per_thread": el_per_thread})
>   File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\compiler.py", line 180, in __init__
>     arch, code, cache_dir, include_dirs)
>   File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\compiler.py", line 170, in compile
>     return compile_plain(source, options, keep, nvcc, cache_dir)
>   File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\compiler.py", line 79, in compile_plain
>     raise CompileError, "nvcc compilation of %s failed" % cu_file_path
> pycuda.driver.CompileError: nvcc compilation of c:\docume~1\fbuckle\locals~1\temp\tmpfbxzhp\kernel.cu failed
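[The select-to-list failure is a different animal: its kernel uses shared-space reduction operations, which, as ptxas says above, need SM 1.2, while the Quadro NVS 135M is SM 1.1. The example should probably detect that and skip. A minimal sketch of the guard, assuming the capability tuple comes from `pycuda.driver.Device.compute_capability()`:

```python
def supports_shared_reductions(compute_capability):
    # Shared-space reduction/atomic operations require SM 1.2 or
    # higher, per the ptxas error above; tuples compare element-wise,
    # so (1, 1) < (1, 2) < (2, 0).
    return tuple(compute_capability) >= (1, 2)

# e.g. dev = pycuda.driver.Device(0); cc = dev.compute_capability()
for cc in [(1, 1), (1, 2), (2, 0)]:
    print(cc, supports_shared_reductions(cc))
```

On a 1.1 card the example could print a short "requires SM 1.2" notice and exit instead of dying in nvcc. -AK]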
>
> C:\downloads\cuda\pycuda\examples>measure_gpuarray_speed_random.py
> 1024
> 2048
> 4096
> 8192
> 16384
> 32768
> 65536
> 131072
> 262144
> 524288
> 1048576
> 2097152
> 4194304
> 8388608
> 16777216
> Traceback (most recent call last):
>   File "C:\downloads\cuda\pycuda\examples\measure_gpuarray_speed_random.py", line 88, in <module>
>     main()
>   File "C:\downloads\cuda\pycuda\examples\measure_gpuarray_speed_random.py", line 39, in main
>     curandom.rand((size, ))
>   File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\curandom.py", line 182, in rand
>     result = GPUArray(shape, dtype)
>   File "c:\python25\lib\site-packages\pycuda-0.94beta-py2.5-win32.egg\pycuda\gpuarray.py", line 81, in __init__
>     self.gpudata = self.allocator(self.size * self.dtype.itemsize)
> pycuda._driver.MemoryError: cuMemAlloc failed: out of memory
>
>
> On Mon, Jul 6, 2009 at 11:29 AM, Andreas Klöckner
> <[email protected]> wrote:
> > On Saturday 04 July 2009, Frank Buckle wrote:
> > > I am new to CUDA, and just have a little python experience.  I was able
> > > to get the PyCUDA demos to run on Windows using MinGW.  I added to the
> > > Windows installation instructions at
> > > http://wiki.tiker.net/PyCuda/Installation/Windows.
> > >
> > > I am not sure if there are any python or PyCUDA issues that might arise
> > > from instructing distutils to use the mingw32 compiler.  The demos
> > > appear to work OK, except for some out of memory errors that I think
> > > are related to my puny laptop graphics card.  For now, MinGW seems to
> > > be a decent option.
> > >
> > > Let me know if you have any questions, or can foresee any issues with
> > > this option.
> >
> > Sweet! Thank you for taking the time to write this up.
> >
> > Also, I'd be happy if you could copy and paste the output of the
> > tests--I'd like to see which ones I need to fix to run with smaller
> > cards.
> >
> > Andreas
> >
> >
> > _______________________________________________
> > PyCUDA mailing list
> > [email protected]
> > http://tiker.net/mailman/listinfo/pycuda_tiker.net

