Gyorgy Vizkelethy
Sandia National Laboratories
Radiation-Solid Interactions
PO Box 5800, MS 1056
Albuquerque, NM 87185
USA
+1 (505) 284-3120
gviz...@sandia.gov

On Mar 15, 2016, at 12:28 PM, Andreas Kloeckner 
<li...@informa.tiker.net> wrote:

Gyorgy,

"Vizkelethy, Gyorgy" <gviz...@sandia.gov> writes:
I am very new to OpenCl, I am actually investigating if this is what I
need. So if I ask something obvious please just point me where the
information can be found. I installed pyopencl after some troubles. My
OS is 10.10 (Yosemite) but I have XCode 7, which has the 10.11 SDK. So
the install script did not find the SDK for the OS and crashed. The
solution was easy, just to make a link to the 10.11 SDK with the right
name. Then the first thing I did was to run the benchmark script. With
the default settings, 256 workers, it failed with
LogicError: clEnqueueNDRangeKernel failed: INVALID_WORK_GROUP_SIZE in
the CPU section. If I skipped the CPU it ran just fine on my Radeon
GPUs. I changed the number of workers to 128 and it ran just fine on
the CPU. When I query the CPU OpenCL properties it says the maximum
work group size is 1024, but it fails with 256. Is that normal? If yes
why? I tried it on four different Macs, a 24-core Mac Pro, an 8-core
iMac 5K, and two different 8-core MacBook Pros, with the same result.

Interesting. Apple's CPU CL implementation used to only support exactly
one work item per workgroup. It appears that they've removed that
particular limitation. When you say 'benchmark script', you mean
'dump-performance.py'?
It was the benchmark.py script; dump-performance.py works fine. When you
say exactly one work item per workgroup, do you mean the CPU test should
have failed for anything but workers = 1? Here is what I get for the CPU:
Device compute units: 24
Device max work group size: 1024
Device max work item sizes: [1024L, 1L, 1L]

For the Radeon GPUs I get:
Device compute units: 24
Device max work group size: 256
Device max work item sizes: [256L, 256L, 256L]
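[A guess on my part, not something these numbers prove: the limit that matters for a given kernel is not only the device-wide maximum but also the kernel-specific one, which PyOpenCL exposes via kernel.get_work_group_info(cl.kernel_work_group_info.WORK_GROUP_SIZE, device); on Apple's CPU runtime that per-kernel limit is often far below 1024. A pure-Python sketch of picking a local size that respects both limits (the helper name safe_local_size is made up here):]

```python
def safe_local_size(global_size, device_max, kernel_max):
    """Pick the largest power-of-two local (work-group) size that
    (a) stays within both the device-wide and the kernel-specific
    limit and (b) evenly divides the global size, as OpenCL 1.x
    requires for clEnqueueNDRangeKernel."""
    limit = min(device_max, kernel_max)
    size = 1
    while size * 2 <= limit and global_size % (size * 2) == 0:
        size *= 2
    return size

# e.g. with a device max of 1024 but a kernel-specific max of 128,
# a global size of 7488 ends up with a local size of 64, not 256
print(safe_local_size(7488, 1024, 128))
```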


One more thing. There are couple of problems with some of the examples.
narray fails on the CPU with clBuildProgram failed: BUILD_PROGRAM_FAILURE -

Sounds like a driver bug. Is there something about "CVMS server" in the
build log?
Yes, there is. Here is the rest of the error message:
Build on <pyopencl.Device 'Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz' on 
'Apple' at 0xffffffff>:

CVMS_ERROR_COMPILER_FAILURE: CVMS compiler has crashed or hung building an 
element.
(options: -I 
/Users/gvizkel/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pyopencl-2015.2.4-py2.7-macosx-10.6-x86_64.egg/pyopencl/cl)
(source saved as /var/folders/s1/y1wx_5cj49qdyk7043f80spw000blv/T/tmpormiK4.cl)

transpose fails on the CPU with clEnqueueNDRangeKernel failed:
INVALID_WORK_GROUP_SIZE after benchmarking Silly 7488 5.25572997632 GB/s

Same issue as you noted above--the code is written to use a workgroup
size that's not supported.
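[For OpenCL 1.x, a requested local size must both stay within every applicable limit and divide the global size evenly; a small sketch of the check a defensive example could perform (the function name is illustrative):]

```python
def work_group_size_ok(local_size, global_size, limits):
    """Return True if `local_size` is a legal work-group size:
    it must not exceed any of the given limits (e.g. the device
    max and the kernel-specific max) and, under OpenCL 1.x rules,
    must divide the global size evenly."""
    return local_size <= min(limits) and global_size % local_size == 0
```

In PyOpenCL you can also pass None as the local size when enqueuing a kernel, which lets the runtime choose a legal work-group size itself.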

gl_interop fails with AttributeError: type object 'context_properties'
has no attribute 'CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE' in
tools.py

It's likely that you built PyOpenCL without support for GL
interoperability.
I just did a pip install pyopencl. How do I build it with GL interoperability?
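[For what it's worth, the route described in the PyOpenCL install notes of that era was to build from a source checkout and enable GL interop via configure.py before installing; a sketch (flag and submodule names assumed from the 2015.x source tree):]

```shell
# Replace the plain pip install with a source build that has
# GL interop switched on.
pip uninstall pyopencl
git clone https://github.com/inducer/pyopencl.git
cd pyopencl
git submodule update --init
python configure.py --cl-enable-gl   # writes siteconf.py with GL enabled
pip install .
```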

gl_particle_animation.py fails with AttributeError: type object
'context_properties' has no attribute
'CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE', then many more error
messages follow, and since the particle window cannot be closed I
have to kill Python.

Same.

Did things change between now and when the examples were written?

No, I don't believe so. It's just that OpenCL devices vary considerably
in functionality. Core (documented) PyOpenCL (and the tests) go to great
lengths to support any and every machine you might care to run on, but
the examples do not (yet? patches welcome!) live up to that same
standard.

Andreas

_______________________________________________
PyOpenCL mailing list
PyOpenCL@tiker.net
https://lists.tiker.net/listinfo/pyopencl
