Ok, so piecewise constant pressures. The default DMDA multigrid uses piecewise linear interpolation, which isn't going to work well at all. You need to first add

   ierr = DMDASetInterpolationType(da, DMDA_Q0);CHKERRQ(ierr);

after you create the DM. You should then see much better convergence. See src/ksp/ksp/examples/tutorials/ex34.c.
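For example, something like the following (a minimal sketch against the PETSc 3.4-era API; the grid sizes, stencil choice, and boundary settings are placeholders for your actual setup, and the operator/rhs callbacks are elided):

   DM  da;
   KSP ksp;

   ierr = DMDACreate3d(PETSC_COMM_WORLD,
                       DMDA_BOUNDARY_PERIODIC, DMDA_BOUNDARY_PERIODIC,
                       DMDA_BOUNDARY_PERIODIC,          /* periodic BCs */
                       DMDA_STENCIL_STAR,
                       128, 128, 128,                   /* global grid size */
                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                       1, 1,                            /* dof, stencil width */
                       NULL, NULL, NULL, &da);CHKERRQ(ierr);
   /* the pressure unknowns are cell averages, so tell the DMDA to build
      piecewise constant (Q0) interpolation between multigrid levels */
   ierr = DMDASetInterpolationType(da, DMDA_Q0);CHKERRQ(ierr);

   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
   ierr = KSPSetDM(ksp, da);CHKERRQ(ierr);
   /* provide the matrix and right-hand side as in ex34.c, e.g. with
      KSPSetComputeOperators() and KSPSetComputeRHS() */
   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);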
   Barry

On Oct 2, 2013, at 5:25 PM, Matthew Knepley <[email protected]> wrote:

> On Wed, Oct 2, 2013 at 5:12 PM, Michele Rosso <[email protected]> wrote:
> Barry,
>
> sorry for not replying to your other e-mail earlier.
> The equation I am solving is:
>
>    \nabla \cdot \left( \frac{1}{\rho} \nabla p \right) = \nabla \cdot u
>
> where $p$ is the pressure field, $\rho$ the density field and $u$ the
> velocity field.
> Since I am using finite differences on a staggered grid, the pressure is
> defined on "cell" centers, while the velocity components live on "cell"
> faces, even if no cell is actually defined.
>
> If you just prescribe a constant rho, do you get 3-5 iterates? What about
> a simple step through the domain?
> Starting on the hardest problem does not sound like the way to understand
> what is going on.
>
>    Matt
>
> I am simulating a bi-phase flow, thus both density and pressure are
> discontinuous, but not the velocity (no mass transfer is included at the
> moment).
> Therefore the right-hand side (rhs) of the above equation does not have
> jumps, while $p$ and $\rho$ do.
> In order to deal with such jumps, I am using a Ghost Fluid approach.
> Therefore the resulting linear system is slightly different from the
> typical Poisson system, though symmetry and diagonal dominance of the
> matrix are maintained.
> The boundary conditions are periodic in all three space directions,
> therefore the system is singular. Thus I removed the null space of the
> matrix by using:
>
>    call MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, &
>         PETSC_NULL_OBJECT, nullspace, ierr)
>    call KSPSetNullSpace(ksp, nullspace, ierr)
>    call MatNullSpaceDestroy(nullspace, ierr)
>
> Hope this helps. Please let me know if you need any other info.
> Also, I use the pressure at the previous time step as the starting point
> for the solve. Could this be a reason why the convergence is so slow?
> Thanks a lot,
>
>    Michele
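For reference, the same null-space setup in C is only a few lines; a minimal sketch, assuming the PETSc 3.4-era API in which the null space is attached to the KSP (newer releases attach it to the Mat via MatSetNullSpace()):

   MatNullSpace nullspace;

   /* PETSC_TRUE: the null space is spanned by the constant vector,
      so no additional basis vectors are passed */
   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL,
                             &nullspace);CHKERRQ(ierr);
   ierr = KSPSetNullSpace(ksp, nullspace);CHKERRQ(ierr);
   ierr = MatNullSpaceDestroy(&nullspace);CHKERRQ(ierr);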
> On 10/02/2013 11:39 AM, Barry Smith wrote:
>> Something is wrong, you should be getting better convergence. Please
>> answer my other email.
>>
>> On Oct 2, 2013, at 1:10 PM, Michele Rosso <[email protected]> wrote:
>>
>>> Thank you all for your contribution.
>>> So far the fastest solution is still the initial one proposed by Jed in
>>> an earlier round:
>>>
>>>    -ksp_atol 1e-9 -ksp_monitor_true_residual -ksp_view -log_summary
>>>    -mg_coarse_pc_factor_mat_solver_package superlu_dist
>>>    -mg_coarse_pc_type lu -mg_levels_ksp_max_it 3
>>>    -mg_levels_ksp_type richardson -options_left -pc_mg_galerkin
>>>    -pc_mg_levels 5 -pc_mg_log -pc_type mg
>>>
>>> where I used -mg_levels_ksp_max_it 3 as Barry suggested instead of
>>> -mg_levels_ksp_max_it 1.
>>> I attached the diagnostics for this case. Any further ideas?
>>> Thank you,
>>>
>>>    Michele
>>>
>>> On 10/01/2013 11:44 PM, Barry Smith wrote:
>>>
>>>> On Oct 2, 2013, at 12:28 AM, Jed Brown <[email protected]> wrote:
>>>>
>>>>> "Mark F. Adams" <[email protected]> writes:
>>>>>
>>>>>> run3.txt uses:
>>>>>>
>>>>>>    -ksp_type richardson
>>>>>>
>>>>>> This is bad and I doubt anyone recommended it intentionally.
>>>>
>>>> Hell, this is normal multigrid without a Krylov accelerator. Under
>>>> normal circumstances with geometric multigrid this should be fine,
>>>> often the best choice.
>>>>
>>>>> I would have expected FGMRES, but Barry likes Krylov smoothers, and
>>>>> Richardson is one of the few methods that can tolerate nonlinear
>>>>> preconditioners.
>>>>>
>>>>>> You also have, in this file,
>>>>>>
>>>>>>    -mg_levels_ksp_type gmres
>>>>>>
>>>>>> Did you or the recommenders mean
>>>>>>
>>>>>>    -mg_levels_ksp_type richardson ???
>>>>>>
>>>>>> You are using gmres here, which forces you to use fgmres in the outer
>>>>>> solver. That is a safe thing to use, but if you apply your BCs
>>>>>> symmetrically with a low-order discretization, then
>>>>>>
>>>>>>    -ksp_type cg
>>>>>>    -mg_levels_ksp_type richardson
>>>>>>    -mg_levels_pc_type sor
>>>>>>
>>>>>> is what I'd recommend.
>>>>>>
>>>>> I thought that was tried in an earlier round.
>>>>>
>>>>> I don't understand why SOR preconditioning in the Krylov smoother is
>>>>> so drastically more expensive than BJacobi/ILU, and why SOR is called
>>>>> so many more times even though the number of outer iterations is
>>>>> comparable:
>>>>>
>>>>>             Event    Count    Time (s)     Flops      Mess    %T  Mflop/s
>>>>>    bjacobi: PCApply    322  4.1021e+01  6.44e+09  3.0e+07  74   20106
>>>>>    bjacobi: KSPSolve    46  4.6268e+01  7.52e+09  3.0e+07  83   20800
>>>>>    sor:     PCApply   1132  1.5532e+02  2.30e+10  1.0e+08  69   18987
>>>>>    sor:     KSPSolve   201  1.7101e+02  2.63e+10  1.1e+08  75   19652
>>>>>
>>> <best.txt>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
>    -- Norbert Wiener
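As a side note on the configuration being discussed: the multigrid options above can equivalently be set from code. A minimal sketch, assuming PETSc 3.4-era C APIs (PCMGSetGalerkin changed its signature in later releases), with the smoother settings left to the options database:

   PC pc;

   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
   ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);               /* -pc_type mg */
   ierr = PCMGSetLevels(pc, 5, NULL);CHKERRQ(ierr);        /* -pc_mg_levels 5 */
   ierr = PCMGSetGalerkin(pc, PETSC_TRUE);CHKERRQ(ierr);   /* -pc_mg_galerkin */
   /* -mg_levels_ksp_type richardson, -mg_levels_ksp_max_it 3, and the
      coarse-grid LU remain easiest to control from the command line */
   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);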
