Re: [petsc-users] Fieldsplit, multigrid and DM interaction

2024-03-13 Thread Marco Seiz





Thank you, I already thought that HTML scrubbing made everyone shy away.
I hope I fixed Thunderbird properly this time around.

On 14.03.24 04:00, Barry Smith wrote:
> 
> 
>    Sorry no one responded to this email sooner.
> 
>> On Mar 12, 2024, at 4:18 AM, Marco Seiz  wrote:
>>
>> Hello,
>>
>>
>> I'd like to solve a Stokes-like equation with PETSc, i.e.
>>
>>
>> div( mu * symgrad(u) ) = -grad(p) - grad(mu*q)
>>
>> div(u) = q
>>
>>
>> with the spatially variable coefficients (mu, q) coming from another 
>> application, which will advect and evolve fields via the velocity field 
>> u from the Stokes solution, and throw back new (mu, q) to PETSc in a 
>> loop, everything using finite difference. In preparation for this and 
>> getting used to PETSc I wrote a simple inhomogeneous coefficient Poisson 
>> solver, i.e.
>>
>>   div( mu*grad(u) ) = -grad(mu*q), u unknown,
>>
>> based on src/ksp/ksp/tutorials/ex32.c which converges really nicely even 
>> for mu contrasts of 10^10 using -ksp_type fgmres -pc_type mg. Since my 
>> coefficients later on can't be calculated from coordinates, I put them 
>> on a separate DM and attached it to the main DM via PetscObjectCompose 
>> and used a DMCoarsenHookAdd to coarsen the DM the coefficients live on, 
>> inspired by src/ts/tutorials/ex29.c .
>>
>> Adding another uncoupled DoF was simple enough and it converged 
>> according to -ksp_converged_reason, but the solution started looking 
>> very weird; roughly constant for each DoF, when it should be some 
>> function going from roughly -value to +value due to symmetry. This 
>> doesn't happen when I use a direct solver ( -ksp_type preonly -pc_type 
>> lu -pc_factor_mat_solver_type umfpack ) and reading the archives, I 
>> ought to be using -pc_type fieldsplit due to the block nature of the 
>> matrix. I did that and the solution looked sensible again.
> Hmm, this sounds like the operator has the constant null space that is
> accumulating in the iterative method. 
> 
> The standard way to handle this is to use MatSetNullSpace() to provide
> the nullspace information so the iterative solver can remove it at each
> iteration.
I already provided the constant null space since with all zero Neumann
BCs (for now) the solution is only determined up to a constant.
According to MatNullSpaceTest that null space still holds for the two
independent div(mu*grad(u)) operators; at higher contrasts the test
fails even though the solution looks correct, which is probably just
finite precision at work.
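
For concreteness, the null space handling amounts to roughly the
following (sketch only; A stands for my assembled operator):

    MatNullSpace nullsp;
    PetscBool    isNull;

    /* all-zero Neumann BCs: the solution is only determined up to a constant */
    PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nullsp));
    PetscCall(MatSetNullSpace(A, nullsp));

    /* sanity check: does A really annihilate the constant vector? */
    PetscCall(MatNullSpaceTest(nullsp, A, &isNull));

    PetscCall(MatNullSpaceDestroy(&nullsp));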

> 
>> Now here comes the actual problem: Once I try adding multigrid 
>> preconditioning to the split fields I get errors probably relating to 
>> fieldsplit not "inheriting" (for lack of a better term) the associated 
>> interpolations/added DMs and hooks on the fine DM. That is, when I use 
>> the DMDA_Q0 interpolation, fieldsplit dies because it switches to 
>> DMDA_Q1 and the size ratio is wrong ( Ratio between levels: (mx - 1)/(Mx 
>> - 1) must be integer: mx 64 Mx 32 ). When I use DMDA_Q1, once the KSP 
>> tries to setup the matrix on the coarsened problem the DM no longer has 
>> the coefficient DMs which I previously had associated with it, i.e. 
>> PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm", 
>> (PetscObject *)_coeff)); puts a NULL pointer in dm_coeff and PETSc 
>> dies when trying to get a named vector from that, but it works nicely 
>> without fieldsplit.
>>
>> Is there some way to get fieldsplit to automagically "inherit" those 
>> added parts or do I need to manually modify the DMs the fieldsplit is 
>> using? I've been using KSPSetComputeOperators since it allows for 
>> re-discretization without having to manage the levels myself, whereas 
>> some more involved examples like src/dm/impls/stag/tutorials/ex4.c build 
>> the matrices in advance when re-discretizing and set them with 
>> KSPSetOperators, which would avoid the problem as well but also means 
>> managing the levels.
> 
> We don't have hooks to get your inner information automatically passed
> in from the outer DM but I think you can use
> 
> PCFieldSplitGetSubKSP() after KSPSetUp()
> 
>  to get your two sub-KSPs; you can then set the "sub" DM on each of
> these, etc., to get them to "largely" behave as in your previous
> "uncoupled" code. Hopefully you can also use KSPSetOperators().
Thank you, that seems to have done the trick while still using
KSPSetComputeOperators even with huge contrasts. I will probably run
into problems again once I add continuity/coupling, but ...

Re: [petsc-users] Fieldsplit, multigrid and DM interaction

2024-03-13 Thread Barry Smith


   Sorry no one responded to this email sooner.

> On Mar 12, 2024, at 4:18 AM, Marco Seiz  wrote:
> 
> Hello,
> 
> 
> I'd like to solve a Stokes-like equation with PETSc, i.e.
> 
> 
> div( mu * symgrad(u) ) = -grad(p) - grad(mu*q)
> 
> div(u) = q
> 
> 
> with the spatially variable coefficients (mu, q) coming from another 
> application, which will advect and evolve fields via the velocity field 
> u from the Stokes solution, and throw back new (mu, q) to PETSc in a 
> loop, everything using finite difference. In preparation for this and 
> getting used to PETSc I wrote a simple inhomogeneous coefficient Poisson 
> solver, i.e.
> 
>   div( mu*grad(u) ) = -grad(mu*q), u unknown,
> 
> based on src/ksp/ksp/tutorials/ex32.c which converges really nicely even 
> for mu contrasts of 10^10 using -ksp_type fgmres -pc_type mg. Since my 
> coefficients later on can't be calculated from coordinates, I put them 
> on a separate DM and attached it to the main DM via PetscObjectCompose 
> and used a DMCoarsenHookAdd to coarsen the DM the coefficients live on, 
> inspired by src/ts/tutorials/ex29.c .
> 
> Adding another uncoupled DoF was simple enough and it converged 
> according to -ksp_converged_reason, but the solution started looking 
> very weird; roughly constant for each DoF, when it should be some 
> function going from roughly -value to +value due to symmetry. This 
> doesn't happen when I use a direct solver ( -ksp_type preonly -pc_type 
> lu -pc_factor_mat_solver_type umfpack ) and reading the archives, I 
> ought to be using -pc_type fieldsplit due to the block nature of the 
> matrix. I did that and the solution looked sensible again.
Hmm, this sounds like the operator has the constant null space that is 
accumulating in the iterative method. 

The standard way to handle this is to use MatSetNullSpace() to provide the 
nullspace information so the iterative solver can remove it at each iteration.

> 
> Now here comes the actual problem: Once I try adding multigrid 
> preconditioning to the split fields I get errors probably relating to 
> fieldsplit not "inheriting" (for lack of a better term) the associated 
> interpolations/added DMs and hooks on the fine DM. That is, when I use 
> the DMDA_Q0 interpolation, fieldsplit dies because it switches to 
> DMDA_Q1 and the size ratio is wrong ( Ratio between levels: (mx - 1)/(Mx 
> - 1) must be integer: mx 64 Mx 32 ). When I use DMDA_Q1, once the KSP 
> tries to set up the matrix on the coarsened problem the DM no longer has 
> the coefficient DMs which I previously had associated with it, i.e. 
> PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm", 
> (PetscObject *)&dm_coeff)); puts a NULL pointer in dm_coeff and PETSc 
> dies when trying to get a named vector from that, but it works nicely 
> without fieldsplit.
> 
> Is there some way to get fieldsplit to automagically "inherit" those 
> added parts or do I need to manually modify the DMs the fieldsplit is 
> using? I've been using KSPSetComputeOperators since it allows for 
> re-discretization without having to manage the levels myself, whereas 
> some more involved examples like src/dm/impls/stag/tutorials/ex4.c build 
> the matrices in advance when re-discretizing and set them with 
> KSPSetOperators, which would avoid the problem as well but also means 
> managing the levels.

We don't have hooks to get your inner information automatically passed in from 
the outer DM but I think you can use

PCFieldSplitGetSubKSP() after KSPSetUp()

to get your two sub-KSPs; you can then set the "sub" DM on each of these, 
etc., to get them to "largely" behave as in your previous "uncoupled" code. 
Hopefully you can also use KSPSetOperators().
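
Something along these lines, as an untested sketch (da_field0/da_field1 are 
placeholders for whatever per-field DMs you already build in the uncoupled code):

    KSP      *subksp;
    PC        pc;
    PetscInt  nsplits;

    PetscCall(KSPSetUp(ksp));
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCFieldSplitGetSubKSP(pc, &nsplits, &subksp));

    /* hand each split the DM it had in the uncoupled code */
    PetscCall(KSPSetDM(subksp[0], da_field0));
    PetscCall(KSPSetDM(subksp[1], da_field1));
    /* then either set KSPSetComputeOperators() on each sub-KSP, or call
       KSPSetDMActive(subksp[i], PETSC_FALSE) and pass matrices via KSPSetOperators() */

    PetscCall(PetscFree(subksp)); /* PCFieldSplitGetSubKSP() allocates this array */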

  Barry

> 
> 
> Any advice concerning solving my target Stokes-like equation is welcome 
> as well. I am coming from an explicit timestepping background so reading 
> up on saddle point problems and their efficient solution is all quite 
> new to me.
> 
> 
> Best regards,
> 
> Marco
> 
> 
> 
> 
> 



[petsc-users] Fieldsplit, multigrid and DM interaction

2024-03-12 Thread Marco Seiz








Hello,


I'd like to solve a Stokes-like equation with PETSc, i.e.


div( mu * symgrad(u) ) = -grad(p) - grad(mu*q)

div(u) = q


with the spatially variable coefficients (mu, q) coming from another 
application, which will advect and evolve fields via the velocity field 
u from the Stokes solution, and throw back new (mu, q) to PETSc in a 
loop, everything using finite difference. In preparation for this and 
getting used to PETSc I wrote a simple inhomogeneous coefficient Poisson 
solver, i.e.

  div( mu*grad(u) ) = -grad(mu*q), u unknown,

based on src/ksp/ksp/tutorials/ex32.c which converges really nicely even 
for mu contrasts of 10^10 using -ksp_type fgmres -pc_type mg. Since my 
coefficients later on can't be calculated from coordinates, I put them 
on a separate DM and attached it to the main DM via PetscObjectCompose 
and used a DMCoarsenHookAdd to coarsen the DM the coefficients live on, 
inspired by src/ts/tutorials/ex29.c .
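
Concretely, the attach-and-coarsen pattern looks roughly like this
(simplified sketch; CoarsenCoefficientDM and the variable names are
placeholders for what I actually use):

    /* hook: coarsen the coefficient DM alongside the solution DM and re-attach it */
    static PetscErrorCode CoarsenCoefficientDM(DM fine, DM coarse, void *ctx)
    {
      DM fine_coeff, coarse_coeff;
      PetscFunctionBeginUser;
      PetscCall(PetscObjectQuery((PetscObject)fine, "coefficientdm", (PetscObject *)&fine_coeff));
      PetscCall(DMCoarsen(fine_coeff, MPI_COMM_NULL, &coarse_coeff));
      PetscCall(PetscObjectCompose((PetscObject)coarse, "coefficientdm", (PetscObject)coarse_coeff));
      PetscCall(DMDestroy(&coarse_coeff)); /* the compose call keeps its own reference */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    /* setup: attach the coefficient DM to the fine solution DM and register the hook */
    PetscCall(PetscObjectCompose((PetscObject)da, "coefficientdm", (PetscObject)da_coeff));
    PetscCall(DMCoarsenHookAdd(da, CoarsenCoefficientDM, NULL, NULL));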

Adding another uncoupled DoF was simple enough and it converged 
according to -ksp_converged_reason, but the solution started looking 
very weird; roughly constant for each DoF, when it should be some 
function going from roughly -value to +value due to symmetry. This 
doesn't happen when I use a direct solver ( -ksp_type preonly -pc_type 
lu -pc_factor_mat_solver_type umfpack ) and reading the archives, I 
ought to be using -pc_type fieldsplit due to the block nature of the 
matrix. I did that and the solution looked sensible again.

Now here comes the actual problem: Once I try adding multigrid 
preconditioning to the split fields I get errors probably relating to 
fieldsplit not "inheriting" (for lack of a better term) the associated 
interpolations/added DMs and hooks on the fine DM. That is, when I use 
the DMDA_Q0 interpolation, fieldsplit dies because it switches to 
DMDA_Q1 and the size ratio is wrong ( Ratio between levels: (mx - 1)/(Mx 
- 1) must be integer: mx 64 Mx 32 ). When I use DMDA_Q1, once the KSP 
tries to set up the matrix on the coarsened problem the DM no longer has 
the coefficient DMs which I previously had associated with it, i.e. 
PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm", 
(PetscObject *)&dm_coeff)); puts a NULL pointer in dm_coeff and PETSc 
dies when trying to get a named vector from that, but it works nicely 
without fieldsplit.
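
For context, the per-level assembly callback queries the composed DM
roughly like this (trimmed sketch; the named vector "mu" is a placeholder):

    static PetscErrorCode ComputeMatrix(KSP ksp, Mat J, Mat A, void *ctx)
    {
      DM  da, da_coeff;
      Vec mu;
      PetscFunctionBeginUser;
      PetscCall(KSPGetDM(ksp, &da));
      /* this query is what comes back NULL on coarse levels under fieldsplit */
      PetscCall(PetscObjectQuery((PetscObject)da, "coefficientdm", (PetscObject *)&da_coeff));
      PetscCheck(da_coeff, PETSC_COMM_SELF, PETSC_ERR_PLIB, "coefficient DM not attached on this level");
      PetscCall(DMGetNamedGlobalVector(da_coeff, "mu", &mu));
      /* ... assemble A from mu ... */
      PetscCall(DMRestoreNamedGlobalVector(da_coeff, "mu", &mu));
      PetscFunctionReturn(PETSC_SUCCESS);
    }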

Is there some way to get fieldsplit to automagically "inherit" those 
added parts or do I need to manually modify the DMs the fieldsplit is 
using? I've been using KSPSetComputeOperators since it allows for 
re-discretization without having to manage the levels myself, whereas 
some more involved examples like src/dm/impls/stag/tutorials/ex4.c build 
the matrices in advance when re-discretizing and set them with 
KSPSetOperators, which would avoid the problem as well but also means 
managing the levels.
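
For what it's worth, the driver side is just the usual re-discretization
setup (sketch; ComputeRHS/ComputeMatrix and user are placeholders for my
callbacks and context):

    PetscCall(KSPSetDM(ksp, da));
    PetscCall(KSPSetComputeRHS(ksp, ComputeRHS, &user));
    PetscCall(KSPSetComputeOperators(ksp, ComputeMatrix, &user));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, NULL, NULL)); /* PCMG re-discretizes each level via the callback */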


Any advice concerning solving my target Stokes-like equation is welcome 
as well. I am coming from an explicit timestepping background so reading 
up on saddle point problems and their efficient solution is all quite 
new to me.


Best regards,

Marco