Re: [PyCUDA] How to manually free GPUarray to avoid leak?

2010-05-05 Thread Andreas Klöckner
On Sunday 25 April 2010, gerald wrong wrote:
 Can I manually free GPUarray instances? 

In addition to Bogdan's comments (which are more likely to help you with
what you're seeing): If you must free the memory by hand, you can use

ary.gpudata.free()

to do so.
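For instance, in a loop (a minimal sketch, assuming a CUDA-capable device; the try/except guard lets the script run on a machine without PyCUDA or a usable GPU):

```python
# Sketch: freeing a GPUArray's device buffer by hand via gpudata.free().
# Guarded so the snippet still runs on a machine without PyCUDA/CUDA.
try:
    import numpy as np
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
    import pycuda.gpuarray as gpuarray

    ary = gpuarray.to_gpu(np.arange(1024, dtype=np.float32))
    host_copy = ary.get()   # copy results back *before* releasing the buffer
    ary.gpudata.free()      # release the device memory immediately
    HAVE_CUDA = True
except Exception:           # ImportError, or no usable CUDA device
    HAVE_CUDA = False
```

Note that any later kernel call touching the freed buffer is an error, so copy results back first.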

HTH,
Andreas


___
PyCUDA mailing list
pyc...@host304.hostmonster.com
http://host304.hostmonster.com/mailman/listinfo/pycuda_tiker.net


Re: [PyCUDA] How to manually free GPUarray to avoid leak?

2010-05-05 Thread Louis Theran
I have a question about the .gpudata contract that I couldn't figure out by
experimentation.  If I construct gpuarrays with a call like

  GPUArray(...,gpudata=xxx)

is it sufficient that

  xxx.__int__()

and

  xxx.free()

be defined for things to work out correctly?  That's what I gleaned from the
documentation, but it didn't work when I tried it with a Python class that
had those two methods.  (I wanted to allocate memory on the device some
other way.)
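Concretely, the contract I have in mind looks like this pure-Python mock (the class name, pointer value, and comments are made up for illustration; a real allocator would wrap cuMemAlloc/cuMemFree via pycuda.driver):

```python
# A mock of the interface in question: an object exposing the device
# pointer via __int__() and releasing it via free().  Everything here is
# illustrative -- a real allocation would come from the CUDA driver.
class CustomDeviceAllocation:
    def __init__(self, ptr, nbytes):
        self.ptr = ptr
        self.nbytes = nbytes
        self.freed = False

    def __int__(self):
        # The caller would read the raw device pointer this way.
        return self.ptr

    def free(self):
        # A real free() would hand the memory back to the driver.
        self.freed = True

alloc = CustomDeviceAllocation(ptr=0x7F00, nbytes=4096)
print(int(alloc))   # 32512
alloc.free()
print(alloc.freed)  # True
```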



On Wed, May 5, 2010 at 9:25 PM, Andreas Klöckner li...@informa.tiker.netwrote:




[PyCUDA] How to manually free GPUarray to avoid leak?

2010-04-25 Thread gerald wrong
I think I may be running into a memory leak using GPUarray.  I have a
function using GPUarrays that works reliably on single calls.  If I loop
this function within Python from another script like this:

for i in xrange(m):
    do_some_gpuarray_stuff()

I can watch the memory pointers of the gpuarrays increase until I get a
launch error... presumably due to lack of memory.  I.e., I need the GPU
memory to be freed upon exit of do_some_gpuarray_stuff(), so I can repeat
the same GPU calculation many times on new data sets.

Can I manually free GPUarray instances?  If not, can I somehow manually
remove all PyCUDA allocations from memory?  Like...

for i in xrange(m):
    do_some_gpuarray_stuff()
    de_init_pycuda_mem

I could not find this in the docs, and I understand everything is supposed
to be handled automagically by PyCUDA, but manually freeing would be an
easy confirmation/workaround for my problem.  I know this can be done
completely manually with pycuda.driver, but gpu_array is already working
nicely and cleanly except for this leak.  Any input from the experts would
be much appreciated.

Thanks much :)
Garrett Wright


Re: [PyCUDA] How to manually free GPUarray to avoid leak?

2010-04-25 Thread Bogdan Opanchuk
Hi Gerald,

 I can watch the memory pointers of the gpuarrays increase until I get a
 launch error... presumably due to lack of memory.

Are you sure the failure is caused by a lack of memory? I think this
would rather show up as an error during memory allocation, not during
kernel execution.

 Can I manually free GPUarray instances?  If not, can I somehow manually
 remove all PyCUDA stuff from memory?

Python deinitialises objects as soon as their reference count drops to
zero. If you need to do it explicitly, I think just del gpuarray_obj
will be enough. At least, it worked for me.

Best regards,
Bogdan

On Mon, Apr 26, 2010 at 2:53 AM, gerald wrong psillymathh...@gmail.com wrote:


