Hi Freddie, Peter, et al.,

I was testing out the latest release and came across a small issue. When 
partitioning a case across 4 OpenMP processes and 1 OpenCL process, I 
encountered the following warnings:

(venv) [zdavis@Minerva cubes_3d]$ mpirun -np 5 ./launcher.sh cube_hex24.pyfrm 
cube.ini
/usr/local/lib/python3.4/site-packages/pyopencl/__init__.py:59: 
CompilerWarning: Built kernel retrieved from cache. Original from-source build 
had warnings:
Build on <pyopencl.Device 'Iris Pro' on 'Apple' at 0x1024500> succeeded, but 
said:

<program source>:26:23: warning: double precision constant requires 
cl_khr_fp64, casting to single precision
       if (alpha0 == 0.0)
                     ^
<program source>:28:28: warning: double precision constant requires 
cl_khr_fp64, casting to single precision
       else if (alpha0 == 1.0)
                          ^

 warn(text, CompilerWarning)
/usr/local/lib/python3.4/site-packages/pyopencl/__init__.py:59: 
CompilerWarning: From-binary build succeeded, but resulted in non-empty logs:
Build on <pyopencl.Device 'Iris Pro' on 'Apple' at 0x1024500> succeeded, but 
said:

<program source>:26:23: warning: double precision constant requires 
cl_khr_fp64, casting to single precision
       if (alpha0 == 0.0)
                     ^
<program source>:28:28: warning: double precision constant requires 
cl_khr_fp64, casting to single precision
       else if (alpha0 == 1.0)
                          ^

 warn(text, CompilerWarning)
100.0% [===============================>] 0.10/0.10 ela: 00:08:25 rem: 00:00:00

Here you can see that PyOpenCL is picking up the integrated graphics card in 
this situation rather than the discrete card.  Given how long the simulation 
takes to complete, I am fairly certain it isn’t using the discrete card at 
all.  Is there a way to be more explicit in the invocation so that the 
integrated graphics chip is ignored and the discrete card is used?  I haven’t 
had this issue in the past with integrated graphics and NVIDIA cards using 
the CUDA backend, so I was curious about this scenario.  I realize this isn’t 
your typical use case, but if you have encountered this before I would be 
interested in any workarounds.
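
In case it helps narrow things down, something like the quick PyOpenCL snippet 
below should confirm which platform/device indices correspond to the discrete 
card.  It is only a diagnostic sketch using pyopencl.get_platforms() and 
get_devices(); the indices and names will obviously differ from machine to 
machine:

import pyopencl as cl

# Print every OpenCL platform and device visible to PyOpenCL, so the
# index corresponding to the discrete card can be identified.
for p_idx, platform in enumerate(cl.get_platforms()):
    print("Platform {0}: {1}".format(p_idx, platform.name))
    for d_idx, device in enumerate(platform.get_devices()):
        print("  Device {0}: {1}".format(d_idx, device.name))

That should at least show whether the discrete card is visible to PyOpenCL at 
all.  What I don’t know is whether PyFR’s OpenCL backend exposes a way to pick 
between devices, e.g. an option in the [backend-opencl] section of the .ini or 
the PYOPENCL_CTX environment variable that PyOpenCL itself understands (I’m 
only guessing at both), so any pointers there would be appreciated.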

Best Regards,

Zach

Zach Davis
Pointwise®, Inc.
Sr. Engineer, Sales & Marketing
213 South Jennings Avenue
Fort Worth, TX 76104-1107

E: [email protected]
P: (817) 377-2807 x102
F: (817) 377-2799
