Wednesday, 11 March 2015


Hi Freddie,

I can verify that the changes you made in your latest commit for PR-40 
resolve the issue I was running into.  I can also build the documentation 
without having to apply the 2to3 patch.  Thanks for addressing these so 
quickly.  I have noticed that with these mixed-backend simulations it can 
be a bit of a challenge to determine how many partitions to distribute 
across the GPU and CPUs in order to keep the GPU fully utilized. 
 Have you guys discovered any best practices or guidelines?  I imagine it 
varies with the underlying hardware, the backends in use, and the element 
types in the model, so it is probably difficult to nail down. 
 Otherwise, it looks awesome!
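
To give a concrete picture of what I mean by a mixed-backend run, our 
launcher.sh is essentially a per-rank backend selector.  A simplified 
sketch of the idea is below (the rank variable is Open MPI specific, and 
I'm using the pyfr run -b <backend> syntax for illustration; the exact 
invocation may differ between versions):

  #!/bin/bash
  # Simplified sketch of launcher.sh: rank 0 drives the GPU through the
  # OpenCL backend, while the remaining ranks run the OpenMP backend on
  # the CPU cores.  OMPI_COMM_WORLD_RANK is set by Open MPI.
  if [ "$OMPI_COMM_WORLD_RANK" -eq 0 ]; then
      exec pyfr run -b opencl "$@"
  else
      exec pyfr run -b openmp "$@"
  fi

The tricky part is then deciding how many CPU ranks to launch and how to 
weight the mesh partitions so that the GPU rank isn't left waiting on the 
CPU ranks.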

Best Regards,


Zach

On Tuesday, March 10, 2015 at 4:13:30 PM UTC-7, Freddie Witherden wrote:
>
> Hi Zach, 
>
> On 10/03/15 14:43, Zach Davis wrote: 
> > Here's what we have: 
> > 
> > (venv) [zdavis@Rahvin cubes]$ mpirun -np 5 ./launcher.sh 
> > cube_hex24.pyfrm cube.ini 
> > 
> >   99.8% [==============================> ] 0.10/0.10 ela: 00:06:55 rem: 
> > 00:00:00[<class 'int'>, <class 'int'>, <class 'memoryview'>, <class 
> > 'pyopencl._cl.Buffer'>, <class 'int'>, <class 'pyopencl._cl.Buffer'>, 
> > <class 'int'>, <class 'pyopencl._cl.Buffer'>, <class 'int'>] 
>
> As a follow-up I've submitted a pull request which resolves this issue: 
>
>   <https://github.com/vincentlab/PyFR/pull/40> 
>
> Let me know if it works. 
>
> Regards, Freddie. 
>
>

