> On Jul 5, 2018, at 8:28 AM, Mark Adams <[email protected]> wrote:
> 
> 
> Please share the results of your experiments that prove OpenMP does not 
> improve performance for Mark’s users. 
> 
> This obviously does not "prove" anything, but my users use OpenMP primarily 
> because they do not distribute their mesh metadata.

   I.e. my users have decided not to write scalable code. 

> They cannot replicate the mesh on every core for large-scale problems, and 
> shared memory allows them to survive. They have decided to use threads as 
> opposed to MPI shared memory. (Not a big deal; once you decide not to use 
> distributed memory the damage is done, and NERSC seems to be OMP-centric, so 
> they can probably get better support for OMP than for MPI shared memory.)
> 
> BTW, PETSc does support OMP; that is what I have been working on testing for 
> the last few weeks: first with Hypre (the numerics are screwed up by an 
> apparent compiler bug or a race condition of some sort; it fails at higher 
> optimization levels), and second with MKL kernels. The numerics are working 
> with MKL, and we are working on packaging this up to deliver to a user (they 
> will test performance).
>  
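In case it is useful context, this kind of OpenMP/Hypre/MKL combination is normally selected when configuring PETSc. A sketch only; the exact option spellings and the MKL path should be checked against `./configure --help` for the PETSc version in use:

```shell
# Illustrative PETSc configure line: enable OpenMP, download/build hypre,
# and use MKL for BLAS/LAPACK. $MKLROOT is assumed to point at an MKL install.
./configure --with-openmp=1 \
            --download-hypre \
            --with-blaslapack-dir=$MKLROOT
```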
