On Wed, Jun 23, 2010 at 11:56 PM, Andreas Kloeckner <li...@informa.tiker.net> wrote:

>
> There's only *the* current runtime context for the current
> thread. "Specified" makes no sense--there isn't even a data type for
> it in the runtime. (also see previous email)
>


If these context stack operations are totally blind, which I hadn't
understood, then I don't really follow the comment from the other email:

> To get garbage collection right in the face of multiple contexts, PyCUDA
> must assume there *is* a context and try to restore it at destruction time
> of every object created.


What's the heuristic being used?  I was under the impression that there was
a CUContext type in the driver API.
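
Just so we're talking about the same thing, here is the driver-API context
stack I had in mind, via PyCUDA's own Context wrapper (this is only my
reading of the docs, so correct me if I have the semantics wrong):

  import pycuda.driver as drv

  drv.init()
  dev = drv.Device(0)

  ctx = dev.make_context()  # create a context and make it current on this thread

  # ... allocations and kernel launches go against whatever context is current ...

  drv.Context.pop()         # ctx becomes "floating": current to no thread
  ctx.push()                # and current on this thread again

  ctx.detach()              # drop our reference; the context goes away when it hits zero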

I will try to put together a patch along the lines you describe, although
the exact design pattern for mixing kernel code and runtime code doesn't
seem to be fully worked out yet.  Our experience with CULA is that it
provides its own wrapper around CUDA's allocator, which we used to dummy up
objects with the right interface for GPUArray.  You can see the specific hack at:

  http://bitbucket.org/louistheran/pycula/src/tip/CULApy/cula.py

But it has a few obvious problems, such as being likely to leak memory.  I
would be more confident if PyCUDA just tried to cast the allocation to int
or long instead, since that idiom would be cleaner, but the context issue is
still there.  (As a side note, being able to plug in other allocators has
some value for libraries that may want to optimize memory layout, which
makes me think this idiom isn't totally useless.)
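
Concretely, the kind of dummy object I mean looks something like the sketch
below.  The foreign_malloc/foreign_free names are made up for the example
(stand-ins for CULA's allocator entry points, faked here with the driver API
so the sketch is self-contained), and the last line only works if GPUArray
is content with an allocator whose return value merely casts to int, which
is exactly the loosening I'm arguing for:

  import numpy as np
  import pycuda.driver as drv
  import pycuda.autoinit          # let PyCUDA set up a context for us
  import pycuda.gpuarray as gpuarray

  # Stand-ins for a foreign allocator that deals in raw device pointers.
  _live = {}

  def foreign_malloc(nbytes):
      da = drv.mem_alloc(nbytes)
      _live[int(da)] = da         # keep the DeviceAllocation alive
      return int(da)

  def foreign_free(devptr):
      _live.pop(devptr).free()

  class ForeignAllocation(object):
      """Dummy object with (what I'd like to be) the whole interface GPUArray
      needs: int()/long() returning the device address, plus free()."""

      def __init__(self, devptr):
          self.devptr = devptr
          self.freed = False

      def __int__(self):
          return self.devptr

      __long__ = __int__          # Python 2

      def free(self):
          if not self.freed:
              foreign_free(self.devptr)
              self.freed = True

      __del__ = free

  def allocator(nbytes):
      return ForeignAllocation(foreign_malloc(nbytes))

  a = gpuarray.GPUArray((1024,), np.float32, allocator=allocator)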

The thing I still don't quite get is how PyCUDA is managing the context.  Is
it done in DeviceAllocation itself or somewhere else?  In other words, what
is the lifecycle of all these context pushes and pops, and how does it
interact with Python's GC?
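
To make the question concrete, here is my guess at the pattern (certainly
not PyCUDA's actual DeviceAllocation code): remember the context an object
was allocated in, and push it back temporarily around the free when Python's
GC finally gets to the object.

  import pycuda.driver as drv

  drv.init()
  ctx = drv.Device(0).make_context()   # ctx is now current on this thread

  class ContextAwareAllocation(object):
      """My guess at the push/free/pop lifecycle, not PyCUDA's actual code."""

      def __init__(self, context, nbytes):
          self.context = context                 # the context the memory belongs to
          self.devptr = drv.mem_alloc(nbytes)    # allocated in the current context

      def __del__(self):
          # Python's GC may fire while some other context (or none at all)
          # is current, so restore ours just for the duration of the free.
          self.context.push()
          try:
              self.devptr.free()
          finally:
              drv.Context.pop()

  buf = ContextAwareAllocation(ctx, 1 << 20)

  drv.Context.pop()   # pretend the program has moved on; ctx is no longer current
  del buf             # the free still lands inside ctx, thanks to the push/pop above

  ctx.push()          # make ctx current again so it can be torn down cleanly
  ctx.detach()

Is that roughly what happens under the hood, or is the bookkeeping attached
somewhere else entirely?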

Sorry for the long and somewhat confused mail, but this seems to be a subtle
issue that I haven't thought through.



-- 
Louis Theran
Research Assistant Professor
Math Department, Temple University
http://math.temple.edu/~theran/
+1.215.204.3974