Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Stefano Zampini
Eric, you should report these HYPRE issues upstream: https://github.com/hypre-space/hypre/issues > On Mar 14, 2021, at 3:44 AM, Eric Chamberland > wrote: > > For us it clearly creates problems in real computations... > > I understand the need to

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Eric Chamberland
For us it clearly creates problems in real computations... I understand the need to have clean tests for PETSc, but for me, it reveals that hypre isn't usable with more than one thread for now... Another solution: force a single-threaded configuration for hypre until this is fixed? Eric On
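In the meantime, one way to pin hypre to a single thread at run time, without rebuilding anything, is the standard OpenMP environment variable (the executable name below is only a placeholder):

  OMP_NUM_THREADS=1 mpiexec -n 2 ./your_app -pc_type hypre -pc_hypre_type boomeramg

Alternatively, reconfiguring PETSc with --with-openmp=0 together with --download-hypre rebuilds hypre without OpenMP altogether. Both are only sketches of Eric's suggestion, not something decided in this thread.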

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Mark Adams
On Sat, Mar 13, 2021 at 8:50 AM Pierre Jolivet wrote: > -pc_hypre_boomeramg_relax_type_all Jacobi => > Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 3 > FYI, you need to use Chebyshev "KSP" with Jacobi. > -pc_hypre_boomeramg_relax_type_all l1scaled-Jacobi => > OK,

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Pierre Jolivet
-pc_hypre_boomeramg_relax_type_all Jacobi => Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 3 -pc_hypre_boomeramg_relax_type_all l1scaled-Jacobi => OK, independently of the architecture it seems (Eric's Docker image with 1 or 2 threads, or my macOS), but contraction
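For context, the two smoothers being compared are selected through plain PETSc command-line options for the BoomerAMG wrapper; a typical pair of runs (the executable name is a placeholder) would look like:

  ./your_app -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_relax_type_all Jacobi
  ./your_app -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_relax_type_all l1scaled-Jacobi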

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Mark Adams
Hypre uses a multiplicative smoother by default. It has a Chebyshev smoother. That with a Jacobi PC should be thread-invariant. Mark On Sat, Mar 13, 2021 at 8:18 AM Pierre Jolivet wrote: > > On 13 Mar 2021, at 9:17 AM, Pierre Jolivet wrote: > > Hello Eric, > I’ve made an “interesting”
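A sketch of what Mark is suggesting, assuming "Chebyshev" is among the values accepted by -pc_hypre_boomeramg_relax_type_all: polynomial (Chebyshev) smoothing preconditioned by Jacobi has no ordering dependence, unlike the default multiplicative smoother, so the result should not change with the thread count.

  ./your_app -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_relax_type_all Chebyshev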

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Pierre Jolivet
> On 13 Mar 2021, at 9:17 AM, Pierre Jolivet wrote: > > Hello Eric, > I’ve made an “interesting” discovery, so I’ll put back the list in c/c. > It appears the following snippet of code which uses Allreduce() + lambda > function + MPI_IN_PLACE is: > - Valgrind-clean with MPICH; > -

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Eric Chamberland
Hi Pierre! On 2021-03-13 3:17 a.m., Pierre Jolivet wrote: Hello Eric, I’ve made an “interesting” discovery, so I’ll put back the list in c/c. It appears the following snippet of code which uses Allreduce() + lambda function + MPI_IN_PLACE is: - Valgrind-clean with MPICH; - Valgrind-clean with

Re: [petsc-dev] Argonne GPU Virtual Hackathon - Accepted

2021-03-13 Thread Patrick Sanan
Another thing perhaps of interest is the stencil-based GPU matrix assembly functionality that Mark introduced. > On 13.03.2021 at 07:59, Stefano Zampini wrote: > > The COO assembly is entirely based on thrust primitives; I don’t have much > experience to say we will get a serious speedup by
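Stefano is referring to PETSc's COO assembly path. Below is an illustrative sketch only, assuming the MatSetPreallocationCOO()/MatSetValuesCOO() interface and written against a recent PETSc (argument types and error-checking macros have changed between versions); nothing in it comes from the thread itself.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    /* Hypothetical 2x2 example: entries (0,0), (0,1), (1,1) given as COO triplets. */
    PetscInt    coo_i[] = {0, 0, 1};
    PetscInt    coo_j[] = {0, 1, 1};
    PetscScalar coo_v[] = {4.0, -1.0, 4.0};

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 2, 2));
    PetscCall(MatSetFromOptions(A)); /* e.g. -mat_type aijcusparse to assemble on the GPU */
    /* Hand the sparsity pattern to PETSc once... */
    PetscCall(MatSetPreallocationCOO(A, 3, coo_i, coo_j));
    /* ...then insert only the values; on GPU matrix types this step can run on
       the device, which is where the thrust-based implementation comes in. */
    PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));
    PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }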

Re: [petsc-dev] Petsc "make test" have more failures for --with-openmp=1

2021-03-13 Thread Pierre Jolivet
Hello Eric, I’ve made an “interesting” discovery, so I’ll put back the list in c/c. It appears the following snippet of code which uses Allreduce() + lambda function + MPI_IN_PLACE is: - Valgrind-clean with MPICH; - Valgrind-clean with OpenMPI 4.0.5; - not Valgrind-clean with OpenMPI 4.1.0. I’m not sure
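For readers skimming the archive, here is a minimal self-contained sketch of the pattern under discussion, MPI_Allreduce() with MPI_IN_PLACE and a user-defined reduction built from a capture-less C++ lambda. It is not Pierre's actual snippet (which is not included in this preview), just an illustration of the combination whose Valgrind behaviour reportedly differs between MPICH, OpenMPI 4.0.5, and OpenMPI 4.1.0.

  #include <mpi.h>
  #include <vector>

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);

    // Each rank contributes its own values; the reduction is done in place.
    std::vector<double> buf = {1.0, 2.0, 3.0};

    // Wrap a capture-less lambda (implicitly convertible to a plain C function
    // pointer) into a user-defined MPI_Op; here it simply sums element-wise.
    MPI_Op op;
    MPI_Op_create(
        [](void *in, void *inout, int *len, MPI_Datatype *) {
          const double *a = static_cast<const double *>(in);
          double       *b = static_cast<double *>(inout);
          for (int i = 0; i < *len; ++i) b[i] += a[i];
        },
        1 /* commutative */, &op);

    MPI_Allreduce(MPI_IN_PLACE, buf.data(), static_cast<int>(buf.size()),
                  MPI_DOUBLE, op, MPI_COMM_WORLD);

    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
  }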