Hi,

I ran into a GPU memory leak that appears to happen when the garbage collector
(gc) frees GPUArrays that were created in a thread other than the one gc is
running in.  I did not see a GitHub issue tracking this.  Is this a known issue
that others have run into?  I'm using PyCUDA 2011.2.2 with Python 2.6.7.

This happens when gc frees a DeviceAllocation that had been created in another 
thread.  Since there are no references to it, it is indeed collected, and the 
destructor tries to free the corresponding CUdeviceptr.  However, the lines 
below prevent mem_free from ever being called: scoped_context_activation checks 
that the running thread matches the context's thread; since it doesn't, it 
throws an exception, which is silently swallowed by 
CUDAPP_CATCH_CLEANUP_ON_DEAD_CONTEXT:

class device_allocation ...
      void free()
      {
        if (m_valid)
        {
          try
          {
            scoped_context_activation ca(get_context());
            mem_free(m_devptr);
          }
          CUDAPP_CATCH_CLEANUP_ON_DEAD_CONTEXT(device_allocation);
          m_valid = false;
        }
        ...
      }
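To make the failure mode concrete without needing a GPU, here is a minimal
pure-Python sketch of the same pattern.  FakeAllocation and its members are
hypothetical stand-ins for device_allocation: free() refuses to run on a
foreign thread (mimicking scoped_context_activation) and the exception is
swallowed (mimicking CUDAPP_CATCH_CLEANUP_ON_DEAD_CONTEXT), so the underlying
"memory" is never released:

```python
import threading

freed = []  # stands in for memory actually returned to the driver

class FakeAllocation:
    """Mimics device_allocation: free() only succeeds on the owning thread."""
    def __init__(self):
        self.owner = threading.current_thread()
        self.valid = True

    def free(self):
        try:
            if threading.current_thread() is not self.owner:
                # mimics scoped_context_activation rejecting a foreign thread
                raise RuntimeError("context belongs to another thread")
            freed.append(self)  # mimics mem_free(m_devptr)
        except RuntimeError:
            # mimics CUDAPP_CATCH_CLEANUP_ON_DEAD_CONTEXT: silently swallowed
            pass
        finally:
            self.valid = False

    def __del__(self):
        if self.valid:
            self.free()

# Allocation is created in a worker thread ...
holder = {}
t = threading.Thread(target=lambda: holder.setdefault("a", FakeAllocation()))
t.start(); t.join()

# ... but its last reference is dropped in the main thread, so the
# destructor runs here: free() is skipped and the memory leaks.
alloc = holder.pop("a")
alloc.free()
assert alloc not in freed   # never actually released
```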

I was wondering how to go about fixing or working around this; does anyone 
have any advice?
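One workaround I've been considering (a sketch only, with hypothetical names; 
the actual free call would be DeviceAllocation.free() or whatever your setup 
uses) is to never free from the collecting thread at all: have the destructor 
merely enqueue the pointer, and have the thread that owns the CUDA context 
drain the queue periodically, so the real free always runs on the right thread:

```python
import queue
import threading

# Thread-safe queue of pointers waiting to be freed on the owning thread.
pending_frees = queue.Queue()

def release(devptr):
    """Safe to call from any thread (e.g. from a __del__): just enqueues."""
    pending_frees.put(devptr)

def drain_frees(free_fn):
    """Run periodically on the context-owning thread; performs the real frees."""
    while True:
        try:
            devptr = pending_frees.get_nowait()
        except queue.Empty:
            break
        free_fn(devptr)  # the real free call for your setup goes here

# Demo with a stand-in free function: a worker thread "drops" a pointer,
# and the owning thread later drains the queue.
freed = []
t = threading.Thread(target=lambda: release(0xdeadbeef))
t.start(); t.join()
drain_frees(freed.append)
```

The design point is that only the enqueue crosses threads, and Queue handles 
that safely; all driver calls stay on the thread whose context is current.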

Thanks,
David


_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
