Hello,


I'd like to solve a Stokes-like equation with PETSc, i.e.


div( mu * symgrad(u) ) = -grad(p) - grad(mu*q)

div(u) = q


with the spatially variable coefficients (mu, q) coming from another 
application, which advects and evolves its fields via the velocity 
field u from the Stokes solution and hands new (mu, q) back to PETSc 
in a loop, everything using finite differences. In preparation for 
this, and to get used to PETSc, I wrote a simple variable-coefficient 
Poisson solver, i.e.

  div( mu*grad(u) ) = -grad(mu*q),   u unknown,

based on src/ksp/ksp/tutorials/ex32.c, which converges really nicely 
even for mu contrasts of 10^10 using -ksp_type fgmres -pc_type mg. 
Since my coefficients later on can't be calculated from coordinates, I 
put them on a separate DM, attached it to the main DM via 
PetscObjectCompose, and used DMCoarsenHookAdd to coarsen the DM the 
coefficients live on, inspired by src/ts/tutorials/ex29.c.
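
Roughly, that part of my setup looks like the sketch below; the hook 
name, the variable names, and the "coefficients" named vector are 
placeholders ("coefficientdm" is the composed name I query later), and 
the way I restrict the coefficients is just one plausible choice.

  /* coarsen hook: coarsen the attached coefficient DM and restrict the
     coefficient vector so each level can re-discretize from it */
  static PetscErrorCode CoarsenCoefficients(DM dm_fine, DM dm_coarse, void *ctx)
  {
    DM  dmc_fine, dmc_coarse;
    Mat interp;
    Vec rscale, c_fine, c_coarse;

    PetscFunctionBeginUser;
    PetscCall(PetscObjectQuery((PetscObject)dm_fine, "coefficientdm", (PetscObject *)&dmc_fine));
    PetscCall(DMCoarsen(dmc_fine, MPI_COMM_NULL, &dmc_coarse));
    PetscCall(PetscObjectCompose((PetscObject)dm_coarse, "coefficientdm", (PetscObject)dmc_coarse));
    PetscCall(DMCreateInterpolation(dmc_coarse, dmc_fine, &interp, &rscale));
    PetscCall(DMGetNamedGlobalVector(dmc_fine, "coefficients", &c_fine));
    PetscCall(DMGetNamedGlobalVector(dmc_coarse, "coefficients", &c_coarse));
    PetscCall(MatRestrict(interp, c_fine, c_coarse));
    PetscCall(VecPointwiseMult(c_coarse, c_coarse, rscale));
    PetscCall(DMRestoreNamedGlobalVector(dmc_coarse, "coefficients", &c_coarse));
    PetscCall(DMRestoreNamedGlobalVector(dmc_fine, "coefficients", &c_fine));
    PetscCall(MatDestroy(&interp));
    PetscCall(VecDestroy(&rscale));
    PetscCall(DMDestroy(&dmc_coarse)); /* dm_coarse holds its own reference */
    PetscFunctionReturn(PETSC_SUCCESS);
  }

  /* in the setup code, on the fine DM: */
  PetscCall(PetscObjectCompose((PetscObject)da, "coefficientdm", (PetscObject)da_coeff));
  PetscCall(DMCoarsenHookAdd(da, CoarsenCoefficients, NULL, NULL));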

Adding another uncoupled DoF was simple enough and the solve converged 
according to -ksp_converged_reason, but the solution looked very odd: 
roughly constant for each DoF, when by symmetry it should be a function 
running from roughly -value to +value. This doesn't happen with a 
direct solver ( -ksp_type preonly -pc_type lu 
-pc_factor_mat_solver_type umfpack ), and from reading the archives it 
seems I ought to be using -pc_type fieldsplit because of the block 
structure of the matrix. I did that and the solution looked sensible 
again.
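
For context, both DoF simply live on the one DMDA, which is also what 
fieldsplit uses to define its two splits; roughly (sizes and field 
names are placeholders):

  /* main DMDA with 2 (so far uncoupled) DoF per grid point */
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                         2, 1, NULL, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));
  PetscCall(DMDASetFieldName(da, 0, "u0")); /* field names become the split prefixes */
  PetscCall(DMDASetFieldName(da, 1, "u1"));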

Now here comes the actual problem: once I try adding multigrid 
preconditioning to the split fields I get errors, probably because 
fieldsplit does not "inherit" (for lack of a better term) the 
interpolations, attached DMs, and hooks from the fine DM. That is, when 
I use the DMDA_Q0 interpolation, fieldsplit dies because it switches to 
DMDA_Q1 and the size ratio is wrong ( Ratio between levels: (mx - 
1)/(Mx - 1) must be integer: mx 64 Mx 32 ). When I use DMDA_Q1, once 
the KSP tries to set up the matrix on the coarsened problem, the DM no 
longer has the coefficient DM I previously associated with it, i.e.

  PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm",
                             (PetscObject *)&dm_coeff));

puts a NULL pointer into dm_coeff and PETSc dies when trying to get a 
named vector from it. Without fieldsplit it all works nicely.
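
For concreteness, my ComputeMatrix callback looks roughly like the 
sketch below (the "coefficients" vector name is again a placeholder); 
it is the query marked in the comment that comes back empty on the 
coarse levels once fieldsplit is involved:

  static PetscErrorCode ComputeMatrix(KSP ksp, Mat A, Mat B, void *ctx)
  {
    DM  da, dm_coeff;
    Vec coeff;

    PetscFunctionBeginUser;
    PetscCall(KSPGetDM(ksp, &da)); /* on coarse levels: the DM PCMG coarsened */
    PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm", (PetscObject *)&dm_coeff));
    /* with fieldsplit + mg this gives dm_coeff == NULL, presumably because the
       split's DM was not derived from my original DM with its composed objects
       and coarsen hooks */
    PetscCall(DMGetNamedGlobalVector(dm_coeff, "coefficients", &coeff));
    /* ... assemble the variable-coefficient stencil into B from coeff ... */
    PetscCall(DMRestoreNamedGlobalVector(dm_coeff, "coefficients", &coeff));
    PetscCall(MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY));
    PetscFunctionReturn(PETSC_SUCCESS);
  }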

Is there some way to get fieldsplit to automagically "inherit" those 
added parts, or do I need to modify the DMs the fieldsplit uses by 
hand? I've been using KSPSetComputeOperators since it allows 
re-discretization without my having to manage the levels myself, 
whereas some more involved examples like 
src/dm/impls/stag/tutorials/ex4.c build the matrices for each level in 
advance and set them with KSPSetOperators, which would also avoid the 
problem but means managing the levels manually.
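
For reference, my driver is essentially the ex32.c-style setup 
(ComputeRHS and the user context are placeholders here):

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetDM(ksp, da)); /* PCMG builds its levels by coarsening this DM */
  PetscCall(KSPSetComputeRHS(ksp, ComputeRHS, &user));
  PetscCall(KSPSetComputeOperators(ksp, ComputeMatrix, &user));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, NULL, NULL));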


Any advice on solving my target Stokes-like equation is welcome as 
well. I am coming from an explicit timestepping background, so reading 
up on saddle point problems and their efficient solution is all quite 
new to me.


Best regards,

Marco



