Hello,

I’m scaling out my code on an HPC cluster, and the pressure-Poisson solve
stops scaling once I go beyond roughly 100 MPI processes.  I’m solving the
incompressible Navier-Stokes equations, so this solve has to be performed
every iteration.  I’m seeing around 120 µs/DOF/proc with 8 processes
(1M DOF), 600 µs/DOF/proc with 60 processes (2M DOF), and a scary
3,000 µs/DOF/proc with 256 processes (8M DOF).
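
For reference, here is roughly how I collect those timings (a minimal
sketch; solver, p, b, and ndofs are stand-ins for my actual solver object,
pressure function, right-hand side, and global DOF count, and I’m reading
µs/DOF/proc as wall time per locally owned DOF):

    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD

    comm.Barrier()                 # sync so we time the slowest rank
    t0 = time.perf_counter()
    solver.solve(p.vector(), b)    # the pressure-Poisson solve
    comm.Barrier()
    t1 = time.perf_counter()

    if comm.rank == 0:
        # microseconds per locally owned DOF
        print("pressure solve: %.1f us/DOF/proc"
              % ((t1 - t0) * 1e6 / (ndofs / comm.size)))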

The domain is a unit cube, meshed with third-order Lagrange tetrahedral
elements and partitioned by ParMETIS (default settings).
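
For concreteness, the setup is roughly the following (a sketch; N is a
placeholder resolution, and I believe the partitioner is chosen through the
global parameters, though the parameter name may vary between DOLFIN
versions):

    from dolfin import *

    parameters["mesh_partitioner"] = "ParMETIS"   # instead of default SCOTCH

    N = 64                                    # placeholder resolution
    mesh = UnitCubeMesh(N, N, N)              # distributed across ranks
    Q = FunctionSpace(mesh, "Lagrange", 3)    # cubic Lagrange on tets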

I’ve tried the ensemble of solvers and preconditioners available through
DOLFIN, and BiCGStab is the best-performing solver by a wide margin.  Does
BiCGStab scale poorly?  I saw an improved BiCGStab (ibcgs?) in the PETSc
documentation that reduces the number of global reductions during the
solve – is that worth pursuing?
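
If it matters, here is how I would try it, going through petsc4py rather
than the DOLFIN wrappers (a sketch; A, b, and x stand for the assembled
pressure-Poisson operator, right-hand side, and solution as PETSc objects):

    from petsc4py import PETSc

    ksp = PETSc.KSP().create()
    ksp.setType("ibcgs")          # improved BiCGStab, fewer global reductions
    ksp.getPC().setType("hypre")  # keep the BoomerAMG preconditioning
    ksp.setOperators(A)           # A: assembled pressure-Poisson matrix
    ksp.setFromOptions()
    ksp.solve(b, x)               # solve A x = b for the pressure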

I am using BoomerAMG from HYPRE as the preconditioner, with one up and one
down sweep per cycle.  I’ve found this does better than the default bjacobi
block-LU preconditioning.  (Thank you, Garth)
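
Expressed as PETSc options, my current preconditioner settings look like
this (a sketch; I believe these are the option names that control the
down/up smoothing sweeps):

    from dolfin import PETScOptions

    PETScOptions.set("pc_type", "hypre")
    PETScOptions.set("pc_hypre_type", "boomeramg")
    # one smoothing sweep down and one up per V-cycle
    PETScOptions.set("pc_hypre_boomeramg_grid_sweeps_down", 1)
    PETScOptions.set("pc_hypre_boomeramg_grid_sweeps_up", 1)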

I understand that this is a globally coupled elliptic problem being
distributed across processes.  Am I just out of luck here as far as solving
the pressure implicitly?

Any guidance on where I should do my reading/homework would be appreciated!

Thank you,

Charles