Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
And we found that the code runs fine on Haswell. A KNL compiler bug, not a PETSc/hypre bug. Mark On Wed, Feb 14, 2018 at 3:58 PM, Mark Adams wrote: > >> Your point about data decomposition is a good one. Even if you want to >> run with threads, you must decompose your data

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
> > > Your point about data decomposition is a good one. Even if you want to run > with threads, you must decompose your data intelligently > to get good performance. Can't you do the MPI shared work and still pass > it off as work necessary for threading anyway? > > We don't have any resources to
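As an aside on the decomposition point quoted above: even an OpenMP loop implicitly decomposes its data, and getting good performance means making that decomposition explicit (for example, first-touch page placement combined with a static schedule). The sketch below is a generic, hypothetical illustration of that idea, not code from this thread; the array size and kernel are made up.

```c
#include <stdlib.h>
#include <omp.h>

/* Hypothetical illustration: even with OpenMP, data should be decomposed
 * per thread. Touching each block first from the thread that will use it
 * places those pages in that thread's local memory (first-touch), which
 * is a decomposition decision just as in the MPI case. */
#define N 10000000L

int main(void)
{
    double *x = malloc(N * sizeof(double));
    double *y = malloc(N * sizeof(double));

    /* First touch: each thread initializes the block it will later own. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Same static schedule, so each thread works on the data it placed. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++) y[i] += 3.0 * x[i];

    free(x); free(y);
    return 0;
}
```

If the second loop used a different schedule than the first, threads would mostly touch remote pages and the benefit of the placement would be lost, which is the sense in which threaded runs still need an intelligent decomposition.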

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Matthew Knepley
On Wed, Feb 14, 2018 at 5:36 AM, Mark Adams wrote: > >> > >> > We have been tracking down what look like compiler bugs and we have >> only taken a peek at performance to make sure we are not wasting our time >> with threads. >> >> You are wasting your time. There are better

Re: [petsc-users] with-openmp error with hypre

2018-02-14 Thread Mark Adams
> > > > > > We have been tracking down what look like compiler bugs and we have only > taken a peek at performance to make sure we are not wasting our time with > threads. > > You are wasting your time. There are better ways to deal with global > metadata than with threads. > OK, while I agree with

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Smith, Barry F.
> On Feb 13, 2018, at 8:56 PM, Mark Adams wrote: > > I agree with Matt, flat 64 will be faster, I would expect, but this code has > global metadata that would have to be replicated in a full-scale run. Use MPI-3 shared memory to expose the "global metadata" and forget
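Barry's suggestion above is to keep the replicated "global metadata" in an MPI-3 shared-memory window so that all ranks on a node share a single copy, rather than turning to threads. The sketch below is a minimal illustration of that approach using standard MPI-3 calls (MPI_Comm_split_type, MPI_Win_allocate_shared, MPI_Win_shared_query); it is not taken from the application discussed in this thread, and the metadata size and layout are hypothetical.

```c
#include <mpi.h>

/* Minimal sketch (not from the application discussed in this thread):
 * keep ONE copy of read-mostly "global metadata" per node in an MPI-3
 * shared-memory window, instead of replicating it on every MPI rank. */
int main(int argc, char **argv)
{
    MPI_Comm  nodecomm;
    MPI_Win   win;
    double   *meta;                 /* hypothetical metadata array */
    MPI_Aint  n = 1000000;          /* hypothetical metadata size  */
    int       noderank;

    MPI_Init(&argc, &argv);

    /* Communicator of all ranks that can share memory (same node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);

    /* Node-rank 0 allocates the whole block; the others allocate 0 bytes. */
    MPI_Win_allocate_shared((noderank == 0) ? n * (MPI_Aint)sizeof(double) : 0,
                            sizeof(double), MPI_INFO_NULL, nodecomm,
                            &meta, &win);

    /* Non-root ranks get a direct pointer into node-rank 0's segment. */
    if (noderank != 0) {
        MPI_Aint qsize;
        int      qdisp;
        MPI_Win_shared_query(win, 0, &qsize, &qdisp, &meta);
    }

    MPI_Win_fence(0, win);
    if (noderank == 0) {            /* fill the metadata once per node */
        for (MPI_Aint i = 0; i < n; i++) meta[i] = 0.0;
    }
    MPI_Win_fence(0, win);          /* now every rank may read meta[] */

    /* ... read-only use of meta[] by every rank on the node ... */

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}
```

With this layout only one rank per node pays the memory cost of the metadata, which addresses the replication concern for a full-scale flat-MPI run without requiring a thread-safe hypre build.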

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
I agree with Matt, flat 64 will be faster, I would expect, but this code has global metadata that would have to be replicated in a full-scale run. We are just doing a single-socket test now (I think). We have been tracking down what look like compiler bugs and we have only taken a peek at performance

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Kong, Fande
Curious about the comparison of 16x4 vs. 64. Fande, On Tue, Feb 13, 2018 at 11:44 AM, Bakytzhan Kallemov wrote: > Hi, > > I am not sure about the 64 flat run, > > unfortunately I did not save the logs since it's easy to run, but for 16 - > here is the plot I got for different number

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Matthew Knepley
On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. wrote: > > > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > > > FYI, we were able to get hypre with threads working on KNL on Cori by > going down to -O1 optimization. We are getting about 2x speedup with

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
> > > > > > The error, flatlined or slightly diverging hypre solves, occurred even > in flat MPI runs with openmp=1. > > But the answers are wrong as soon as you turn on OpenMP? > > No, that is the funny thing, the problem occurs with flat MPI, no OMP. Just an openmp=1 build. I am trying to

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Smith, Barry F.
> On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > FYI, we were able to get hypre with threads working on KNL on Cori by going > down to -O1 optimization. We are getting about 2x speedup with 4 threads and > 16 MPI processes per socket. Not bad. In other words, using 16

Re: [petsc-users] with-openmp error with hypre

2018-02-13 Thread Mark Adams
FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad. The error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1.