I think I may be running into a memory leak using GPUArray.  I have a
function using GPUArrays that works reliably on single calls.  If I loop
over this function from another Python script like this:

        for i in xrange(m):
            do_some_gpuarray_stuff()

I can watch the device pointers of the GPUArrays increase until I get a
launch error, presumably due to running out of memory.  That is, I need the
GPU memory to be freed when do_some_gpuarray_stuff() returns, so I can
repeat the same GPU calculation many times on new data sets.

Can I manually free GPUArray instances?  If not, can I somehow manually
release all of PyCUDA's allocations?  Something like:

        for i in xrange(m):
            do_some_gpuarray_stuff()
            de_init_pycuda_mem()

  I could not find this in the docs, and I understand everything is supposed
to be handled automagically by PyCUDA, but manually freeing would be an easy
confirmation of (and workaround for) my problem.  I know this can be done
completely manually with pycuda.driver, but gpuarray is already working
nicely and cleanly... except for this leak.  Any input from the experts
would be much appreciated.
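For concreteness, here is a sketch of the kind of explicit cleanup I have in
mind, assuming the GPUArray's device allocation is exposed as .gpudata with a
free() method (my assumption from reading the source, not something I found
in the docs).  The import guard is just so the sketch is self-contained on a
machine without a GPU:

        import numpy as np

        try:
            import pycuda.autoinit               # creates a context on the default device
            import pycuda.gpuarray as gpuarray
            HAVE_GPU = True
        except ImportError:
            HAVE_GPU = False

        def do_some_gpuarray_stuff(host_data):
            """Round-trip an array through the GPU, freeing device memory on exit."""
            a_gpu = gpuarray.to_gpu(host_data)   # host -> device copy
            result = (2 * a_gpu).get()           # stand-in for the real computation
            a_gpu.gpudata.free()                 # explicitly release the allocation
            return result

        if HAVE_GPU:
            for i in xrange(4):                  # repeat on "new data" each iteration
                out = do_some_gpuarray_stuff(np.arange(10, dtype=np.float32))

If GPUArray does not support this, even a pointer to the right
DeviceAllocation incantation would help.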

Thanks much :)
Garrett Wright
_______________________________________________
PyCUDA mailing list
pyc...@host304.hostmonster.com
http://host304.hostmonster.com/mailman/listinfo/pycuda_tiker.net