Start with P1 elements, make sure your matrix is symmetric positive
definite, and use CG. Also, to start, use 50k - 500k dofs per process. Check
that the iteration count grows only very mildly with problem size and is
(almost) independent of the number of processes.
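
Something along these lines, as a minimal sketch: it uses the legacy DOLFIN
Python API, assumes the PETSc backend built with Hypre, and the mesh size and
tolerance are just placeholders.

from dolfin import *

# Model Poisson problem: P1 elements on a unit cube, CG + BoomerAMG.
mesh = UnitCubeMesh(32, 32, 32)
V = FunctionSpace(mesh, "Lagrange", 1)

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)
a = inner(grad(u), grad(v))*dx
L = f*v*dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

# assemble_system applies the boundary condition symmetrically, so the
# assembled operator stays symmetric positive definite.
A, b = assemble_system(a, L, bc)

# CG preconditioned with Hypre's BoomerAMG. Watch the iteration count as
# you refine the mesh and increase the number of processes.
solver = KrylovSolver("cg", "hypre_amg")
solver.parameters["relative_tolerance"] = 1.0e-8
solver.parameters["monitor_convergence"] = True
solver.set_operator(A)

u_h = Function(V)
num_iterations = solver.solve(u_h.vector(), b)
info("CG iterations: %d" % num_iterations)

Once that behaves as described, move back to the higher-order discretisation
and the full problem.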

Garth

On 9 August 2015 at 21:05, Charles Cook <[email protected]> wrote:

> Hello,
>
> I’m scaling out my code on an HPC system and I am having trouble with the
> Pressure-Poisson equation when I use more than about 100 MPI processes.
> I’m solving Navier-Stokes and need to perform this solve every iteration.
> I’m getting around 120 µs/dof/proc with 8 processes (1M DOF), 600
> µs/dof/proc with 60 processes (2M DOF), and a scary 3,000 µs/dof/proc with
> 256 processes (8M DOF).
>
> The domain is a unit cube, discretised with third-order Lagrange
> tetrahedral elements, and the mesh is partitioned by ParMETIS (default
> settings).
>
> I’ve tried the ensemble of solvers and preconditioners available through
> DOLFIN, with bicgstab being by far the best-performing solver. Does
> bicgstab scale poorly?  I saw an improved BiCGStab variant (ibcgs) in the
> PETSc documentation which reduces the number of global reduction
> operations during the solve; is that worth pursuing?
>
> I am using BoomerAMG from HYPRE as a preconditioner, with 1 up and 1 down
> sweep.  I’ve found this does better than the default block Jacobi
> (bjacobi) preconditioning with LU on each block.  (Thank you, Garth.)
>
> I understand this is an elliptic problem being solved in a distributed
> setting; am I just out of luck here as far as solving for the pressure
> implicitly goes?
>
> Any guidance or direction as to where I should do my reading/homework
> would be appreciated!
>
> Thank you,
>
> Charles
>
>
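
For reference, the configuration described in the quoted message (bicgstab
preconditioned with BoomerAMG, one down and one up smoothing sweep) can be
expressed through PETSc options roughly as below. This is only a sketch, not
something tested against this exact setup: the option names come from the
PETSc/Hypre interface, "ibcgs" is PETSc's reduced-communication BiCGStab
variant, and whether a given KrylovSolver picks up the global PETSc options
database depends on the DOLFIN version.

from dolfin import PETScOptions

PETScOptions.set("ksp_type", "ibcgs")  # or "bcgs" for standard BiCGStab
PETScOptions.set("pc_type", "hypre")
PETScOptions.set("pc_hypre_type", "boomeramg")
PETScOptions.set("pc_hypre_boomeramg_grid_sweeps_down", 1)  # pre-smoothing
PETScOptions.set("pc_hypre_boomeramg_grid_sweeps_up", 1)    # post-smoothing
PETScOptions.set("ksp_monitor")  # print residuals to watch iteration counts
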
_______________________________________________
fenics-support mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics-support
