Ok. MPICH or OpenMPI could be used for sequential runs and for parallel runs within a node.
[and not between nodes?]

Satish

On Fri, 16 Apr 2021, Mark Adams wrote:

> On Fri, Apr 16, 2021 at 11:33 AM Satish Balay <[email protected]> wrote:
>
> > Mark, Why a no-mpi build on Fugaku?
>
> Kokkos Kernels + OMP barfed and Kokkos suggested gcc. There does not seem
> to be an MPI for GCC here.
>
> > Toby - there is one more issue with p4est - build error when using
> > --with-batch=1
> >
> > Attaching configure.log
> >
> > Satish
> >
> > On Fri, 16 Apr 2021, Isaac, Tobin G wrote:
> >
> > > p4est has a mode where it can compile without MPI, I don't know if PETSc
> > > is using it, will check.
> > >
> > > ________________________________________
> > > From: petsc-dev <[email protected]> on behalf of Mark Adams
> > > <[email protected]>
> > > Sent: Friday, April 16, 2021 10:23
> > > To: For users of the development version of PETSc
> > > Subject: [petsc-dev] p4est w/o MPI
> > >
> > > I don't have MPI (Fugaku w/ gcc) and p4est seems to need it. Is there a
> > > work around?
> > > Thanks,
> > > Mark
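
For reference, the two build paths discussed above might look roughly like the
following configure lines. This is only a sketch: the compiler names are
assumptions about the GCC toolchain on Fugaku, and whether PETSc's p4est
package picks up p4est's non-MPI mode is exactly the open question in this
thread.

  # No-MPI build with gcc (what Mark is attempting); --with-batch=1 is the
  # option that triggered the p4est build error mentioned above.
  ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
      --with-mpi=0 --with-openmp --download-p4est --with-batch=1

  # Alternative per Satish: build MPICH for sequential and within-node runs.
  ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
      --download-mpich --with-openmp --download-p4est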
