That was it! I changed my code to carry the context, etc. around in an object to 
be reused each time... now it's much better. No need to inspect my crazy code...
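In case it helps anyone else, the pattern I ended up with is roughly this (names and structure are just illustrative, not my actual code):

```python
class CLSession:
    """Create the OpenCL context and queue once, then pass this object around."""

    def __init__(self):
        # Import here so the class can be defined even without CL installed.
        import pyopencl as cl
        self.cl = cl
        self.ctx = cl.create_some_context()
        self.queue = cl.CommandQueue(self.ctx)
        self._programs = {}  # cache compiled programs, keyed by source string

    def program(self, source):
        """Build each kernel source only once and reuse the compiled program."""
        if source not in self._programs:
            self._programs[source] = self.cl.Program(self.ctx, source).build()
        return self._programs[source]
```

Everything that needs CL now takes the session object as an argument instead of creating its own context.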

I do see that the CPU core running this process sits at 100% while this code is 
running. Does wait() do something that takes CPU cycles? I had hoped that with 
the GPU busy computing, I'd have most of my CPU left for other stuff.

thanks!
-steve

On Jan 27, 2012, at 9:05 AM, Andreas Kloeckner <[email protected]> wrote:

> On Fri, 27 Jan 2012 08:59:04 -0700, Steve Spicklemire <[email protected]> wrote:
>> Two more quick points...
>> 
>> If I let the code keep running on the ION2 system I get this:
>> 
>> <http://www.spvi.com/files/bccd-out-9.txt>
>> 
>> And... if I set the environment variable to show compiler output on the ION2 
>> system... I see this:
>> 
>> <http://www.spvi.com/files/bccd-compiler-output-9.txt>
>> 
>> I'm struggling to interpret what that all means. ;-)
>> 
>> Any hints appreciated.
> 
> I'll look through your code/results once I have more time, but these two
> things I can answer off the bat:
> 
>> BTW... is there a 'release' memory method needed when using pyopencl?
> 
> No, *if* you're ok with waiting for the point in time when the Python GC
> decides it's time to get rid of the host-side object managing the CL
> allocation. If you're very close to resource limits, you can use 
> 
> http://documen.tician.de/pyopencl/runtime.html#pyopencl.MemoryObject.release
> 
>> Do I need to create my context/queue only once and pass it around to
>> be reused all the time?
> 
> Definitely. Context creation (especially on Nv) is expensive, *and*
> there's a fairly low limit on the number of concurrent contexts.
> 
> Andreas
> 
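P.S. for the archives -- a minimal sketch of the explicit release Andreas points to (the buffer size, flags, and function name are made up for illustration):

```python
def use_scratch_buffer(ctx, queue, nbytes=1024):
    """Allocate a temporary device buffer and free it as soon as we're done,
    rather than waiting for the Python GC to collect the wrapper object."""
    # Import here so this sketch can be defined even without CL installed.
    import pyopencl as cl
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=nbytes)
    try:
        pass  # ... enqueue kernels that use buf here ...
    finally:
        buf.release()  # returns the device allocation immediately
```

Normally the GC handles this fine; release() only matters when you're close to the device's memory limits.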

_______________________________________________
PyOpenCL mailing list
[email protected]
http://lists.tiker.net/listinfo/pyopencl
