On Thu, 24 Jun 2010 12:22:19 -0400, Louis Theran <the...@temple.edu> wrote:
> On Wed, Jun 23, 2010 at 11:56 PM, Andreas Kloeckner <li...@informa.tiker.net> wrote:
> > There's only *the* current runtime context for the current
> > thread. "Specified" makes no sense--there isn't even a data type for
> > it in the runtime. (also see previous email)
> 
> If these context stack operations are totally blind, which I didn't
> understand, then I don't really understand the comment from the other email:
> 
> To get garbage collection right in the face of multiple contexts, PyCUDA
> > must assume there *is* a context and try to restore it at destruction time
> > of every object created.
> 
> What's the heuristic being used?  I was under the impression that there was
> a CUContext type in the driver API.

I'm not following you here. I'm trying to say three things:

1) Contexts are entirely implicit in the runtime. No switching, no
nothing. One thread, one context, end of story. The driver API on the
other hand has context switching, but the active context is still
thread-global state.

2) In the face of switchable contexts, correct GC is difficult, because
a different context might be active than the one in which your (e.g.)
memory was allocated. Thus PyCUDA's destructors need to worry about
context management.

3) To be interoperable with the runtime, *all* of PyCUDA's context
switching logic must be turned off, which is probably best achieved
through a flag. (A pattern that also seems to work reasonably for now is
to have PyCUDA create the context and have the runtime bits use it.)
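To make points 1-3 concrete, here's a pure-Python sketch of the bookkeeping involved. No CUDA is touched, and every name here (Context, Allocation, ...) is illustrative rather than PyCUDA's actual API; it just mimics the pattern of remembering the context active at construction and re-activating it at destruction time:

```python
import threading

_tls = threading.local()  # the active context is thread-global state


def _stack():
    if not hasattr(_tls, "stack"):
        _tls.stack = []
    return _tls.stack


class Context:
    """Stand-in for a driver-API context that can be pushed/popped."""

    def push(self):
        _stack().append(self)

    def pop(self):
        assert _stack().pop() is self


def current_context():
    s = _stack()
    return s[-1] if s else None


class ContextDependent:
    """Make a note of the context active at construction time (point 2)."""

    def __init__(self):
        self.context = current_context()


class Allocation(ContextDependent):
    def __init__(self):
        super().__init__()
        self.valid = True

    def free(self):
        # Re-activate the context the memory was allocated in, free,
        # then restore whatever was active before -- mirroring the
        # scoped_context_activation pattern in PyCUDA's C++ layer.
        self.context.push()
        try:
            pass  # a real implementation would free device memory here
        finally:
            self.context.pop()
        self.valid = False

    def __del__(self):
        if self.valid:
            self.free()


# Usage: allocate under ctx_a, then free while ctx_b is active.
ctx_a, ctx_b = Context(), Context()
ctx_a.push()
buf = Allocation()
ctx_a.pop()

ctx_b.push()
buf.free()
assert current_context() is ctx_b  # active context restored after free
ctx_b.pop()
```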

> I will try to put together a patch along the lines you describe, although
> the exact design pattern for mixing kernel code and runtime code seems to
> not be fully worked out.

It is in my head, unless I'm missing something. If something's unclear,
ask.

> Our experience with CULA is that it provides its
> own wrapper to CUDA's allocator, which we used to dummy up objects with the
> right interface for GPUArray.  You can see the specific hack at:
> 
>   http://bitbucket.org/louistheran/pycula/src/tip/CULApy/cula.py
> 
> But it has a few obvious problems, such as being likely to leak
> memory.

Why would it leak? That object will never be part of a ref cycle, and
will thus always get its __del__ called.
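A tiny illustration of why: in CPython, a non-cyclic object's __del__ runs as soon as its refcount hits zero (the wrapper class here is just a stand-in for the CULA allocation object; the cycle caveat only matters because, before Python 3.4 / PEP 442, a __del__ on an object caught in a reference cycle prevented collection entirely):

```python
freed = []


class Wrapper:
    """Stand-in for an allocation wrapper with a cleanup __del__."""

    def __del__(self):
        freed.append("freed")


w = Wrapper()
del w  # no cycle: refcount drops to zero, __del__ runs immediately (CPython)
assert freed == ["freed"]
```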

> I would be more confident if PyCUDA just tried to cast to int or long
> instead, since the idiom would be cleaner, 

You should derive from PointerHolderBase [1]--that gets rid of the extra
cast.

[1] http://documen.tician.de/pycuda/driver.html#pycuda.driver.PointerHolderBase
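Per the docs above, the idea is to subclass PointerHolderBase and implement get_pointer(). A minimal sketch of what the CULA-side wrapper could look like -- the class name and attribute are hypothetical, and a stub base class is substituted so the snippet runs even without pycuda installed:

```python
try:
    from pycuda.driver import PointerHolderBase
except ImportError:
    class PointerHolderBase:  # stub so the sketch runs without pycuda
        pass


class CULAAllocation(PointerHolderBase):
    """Hypothetical wrapper handing a foreign allocator's device pointer
    to PyCUDA; 'devptr' is just an integer device address here."""

    def __init__(self, devptr):
        super().__init__()
        self.devptr = devptr

    def get_pointer(self):
        # PyCUDA calls this to obtain the raw device pointer, so the
        # object can be used directly -- no explicit int/long cast needed.
        return self.devptr
```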

> but the context issue is still there.  

(see above)

> (There is a side issue that
> being able to use other allocators has some value for libraries that
> may want to optimize layout, which makes me think this idiom isn't
> totally useless.)

Absolutely.

> The thing I still don't quite get is how PyCUDA is managing the context.  Is
> it done in DeviceAllocation itself or somewhere else?  In other words, what
> is the lifecycle of these various pushes and pops of the context as it
> relates to Python's GC?

DeviceAllocation's destructor is a pretty good case study (see
src/cpp/cuda.hpp). It derives from the class 'context_dependent', which
makes a note of the context active at instance construction time. When
the object is destroyed, it looks up that original context via
'get_context()', activates it using the RAII wrapper
'scoped_context_activation', and frees the memory; the destructor of
scoped_context_activation then restores the previously active context.

void free()
{
  if (m_valid)
  {
    try
    {
      // re-activate the context this memory was allocated in;
      // 'ca' restores the previous context when it goes out of scope
      scoped_context_activation ca(get_context());
      mem_free(m_devptr);
    }
    // if that context has already died, just clean up quietly
    CUDAPP_CATCH_CLEANUP_ON_DEAD_CONTEXT(device_allocation);

    release_context();
    m_valid = false;
  }
  else
    throw cuda::error("device_allocation::free", CUDA_ERROR_INVALID_HANDLE);
}


> Sorry for the long and somewhat confused mail, but this seems to be a subtle
> issue that I haven't thought through.

HTH,
Andreas

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
