Dear Jed,

The code I am working on is quite generic: at the solve step, the matrix C can be almost anything (though it is assumed to have full rank). In practice, however, in 99% of cases C contains MPCs that correspond to boundary conditions applied to subsets of the mesh boundary. These MPCs can couple several dofs, and a given dof can be involved in several MPCs. For example, one could impose that the average of the x-component of the solution is zero on a part of the boundary, and that this part of the boundary is in contact with another part of the boundary.

So yes, C*Ct is block diagonal, where each block corresponds to a set of MPCs that share dofs, and Ct*C is also block diagonal, where each block corresponds to a set of dofs that share MPCs. In the vast majority of cases these blocks involve dofs/MPCs attached to a subset of the boundary, so they are small compared to the total number of dofs (and their size grows more slowly than the total number of dofs as the mesh is refined).
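To make this concrete, here is a toy example of my own (not taken from the code): three MPCs on five dofs, where MPCs 1 and 2 share dof 2 and MPC 3 involves only dof 5:

C = [1 1 0 0 0]    C*Ct = [2 1 0]    Ct*C = [1 1 0 0 0]
    [0 1 1 0 0]           [1 2 0]           [1 2 1 0 0]
    [0 0 0 0 1]           [0 0 1]           [0 1 1 0 0]
                                            [0 0 0 0 0]
                                            [0 0 0 0 1]

C*Ct has a 2x2 block for the two MPCs that share dof 2 and a 1x1 block for the isolated MPC; Ct*C couples dofs {1,2,3}, has a 1x1 block for dof 5, and leaves dof 4 (not involved in any MPC) untouched.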

I am not sure I understand what you mean by computing the MPCs explicitly: do you mean eliminating them? For very simple Dirichlet conditions I see how to do that, but in the more general case I do not see how (though there may be techniques I am not aware of!).

I also don't understand what you mean by cleaning them up in a small number of Krylov iterations.

Many thanks,
Olivier

On 01/10/2020 20:47, Jed Brown wrote:
Olivier Jamond <[email protected]> writes:

Dear all,

I am working on a finite-elements/finite-volumes code, whose distributed
solver is based on petsc. For FE, it relies on Lagrange multipliers for
the imposition of various boundary conditions or interactions (simple
dirichlet, contact, ...). This results in saddle point problems:

[S Ct] [x]   [f]
[C 0 ] [y] = [g]

As discussed in this mailing list ("Saddle point problem with nested
matrix and a relatively small number of Lagrange multipliers"), the
fieldsplit/PC_COMPOSITE_SCHUR approach involves (2 + number of
iterations of the KSP for the Schur complement) calls to KSPSolve(S, Sp). I
would like to try the formulas given by Ainsworth in [1] to solve this
problem:

x = (Sp)^(-1) * fp
y = Rt * (f - S*x)

where:
Sp = Ct*C + Qt*S*Q
I just want to observe here that Ct*C lives in the big space and is low rank.  
It's kinda like what you would get from an augmented Lagrangian approach.

The second term involves these commutators that destroy sparsity in general, 
but the context of the paper (as I interpreted it in a quick skim) is such that 
C*Ct consists of small decoupled blocks associated with each MPC.  The 
suggestion is that these can either be computed explicitly (possibly at the 
element level) or cleaned up in a small number of Krylov iterations.

Q = I - P
P = R * C
R = Ct * (C*Ct)^(-1)
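
One way to apply R without ever forming (C*Ct)^(-1), in the spirit of the Krylov-cleanup remark above, is a small inner KSP on the assembled C*Ct; since its blocks are small and decoupled, a few CG iterations should suffice. A rough, untested sketch (variable names are mine; the PetscCall() error-checking macro assumes a recent PETSc):

Mat Ct, CCt;
KSP kspCCt;
PetscCall(MatTranspose(C, MAT_INITIAL_MATRIX, &Ct));
PetscCall(MatMatMult(C, Ct, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &CCt)); /* small, block diagonal */
PetscCall(KSPCreate(PETSC_COMM_WORLD, &kspCCt));
PetscCall(KSPSetOperators(kspCCt, CCt, CCt));
PetscCall(KSPSetType(kspCCt, KSPCG)); /* converges quickly on small decoupled blocks */
PetscCall(KSPSetFromOptions(kspCCt));

Applying R to a vector w is then a KSPSolve(kspCCt, w, z) followed by MatMultTranspose(C, z, y).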

My input matrices (S and C) are MPIAIJ matrices. I create a shell matrix
for Sp (because it involves (C*Ct)^(-1), so I think it would be a bad idea
to compute it explicitly...) with a MatMult operator, so that I can use it in a
KSPSolve. The C matrix and the g vector are scaled so that the condition
number of Sp is similar to that of S.
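
In outline, the MatMult callback of this shell looks roughly like the following (a simplified, untested sketch: names are mine, creation of the work vectors via MatCreateVecs is omitted, kspCCt is the inner solver for C*Ct sketched above, and note that Q is an orthogonal projector, so Qt = Q):

typedef struct {
  Mat S, C;   /* assembled MPIAIJ operator and constraint matrix */
  KSP kspCCt; /* inner solver for (C*Ct) z = w                   */
  Vec w, z;   /* work vectors in the constraint (MPC) space      */
  Vec t, u;   /* work vectors in the dof space                   */
} SpCtx;

/* qx = Q x = x - Ct (C*Ct)^(-1) C x */
static PetscErrorCode ApplyQ(SpCtx *ctx, Vec x, Vec qx)
{
  PetscFunctionBeginUser;
  PetscCall(MatMult(ctx->C, x, ctx->w));
  PetscCall(KSPSolve(ctx->kspCCt, ctx->w, ctx->z));
  PetscCall(MatMultTranspose(ctx->C, ctx->z, qx));
  PetscCall(VecAYPX(qx, -1.0, x));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* y = Sp x = Ct C x + Qt S Q x (with Qt = Q) */
static PetscErrorCode SpMult(Mat Sp, Vec x, Vec y)
{
  SpCtx *ctx;
  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(Sp, &ctx));
  PetscCall(ApplyQ(ctx, x, ctx->t));                   /* t = Q x     */
  PetscCall(MatMult(ctx->S, ctx->t, ctx->u));          /* u = S Q x   */
  PetscCall(ApplyQ(ctx, ctx->u, y));                   /* y = Q S Q x */
  PetscCall(MatMult(ctx->C, x, ctx->w));               /* w = C x     */
  PetscCall(MatMultTranspose(ctx->C, ctx->w, ctx->t)); /* t = Ct C x  */
  PetscCall(VecAXPY(y, 1.0, ctx->t));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* hook it up (m/M are the local/global sizes of x): */
PetscCall(MatCreateShell(PETSC_COMM_WORLD, m, m, M, M, &spctx, &Sp));
PetscCall(MatShellSetOperation(Sp, MATOP_MULT, (void (*)(void))SpMult));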

It works, but my main problem is that, because Sp is a shell matrix, as
far as I understand I deprive myself of all the PETSc
preconditioners... I tried to use S as the preconditioning matrix, but
it is not good: with a GAMG preconditioner, my iteration count is about
4 times higher than in a "debug" version where I compute Sp explicitly
as an MPIAIJ matrix and use it as the preconditioning matrix.
Are your coupling constraints nonlocal, such that C*Ct is not block diagonal?

Is there a way to use the PETSc preconditioners with shell matrices, or at
least to define a shell preconditioner that internally calls the PETSc
preconditioners?
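
From the documentation, PCSHELL looks like it could serve: its apply callback may itself call PCApply on an inner, assembled-matrix PC. Something along these lines is what I have in mind (an untested sketch; 'inner', 'spctx', and MyApply are names I made up, and ksp is the solver that owns the Sp shell):

PC inner, shell;
PetscCall(PCCreate(PETSC_COMM_WORLD, &inner)); /* inner PC on the assembled S */
PetscCall(PCSetType(inner, PCGAMG));
PetscCall(PCSetOperators(inner, S, S));
PetscCall(PCSetUp(inner));
PetscCall(KSPGetPC(ksp, &shell));
PetscCall(PCSetType(shell, PCSHELL));
PetscCall(PCShellSetContext(shell, &spctx)); /* spctx extended with a 'PC inner' member */
PetscCall(PCShellSetApply(shell, MyApply));  /* sketched below */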

In the end I would like to have something like GAMG(Ct*C + Qt*S*Q) as a
preconditioner (here Q is a shell matrix), or something
like Qt*GAMG(S)*Q (which, from MATLAB experimentation, could be a
good preconditioner).
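
For the Qt*GAMG(S)*Q variant, the apply callback would then be roughly (again an untested sketch, reusing ApplyQ from the Sp shell above and assuming SpCtx carries the 'inner' GAMG PC):

static PetscErrorCode MyApply(PC shell, Vec r, Vec y)
{
  SpCtx *ctx;
  PetscFunctionBeginUser;
  PetscCall(PCShellGetContext(shell, &ctx));
  PetscCall(ApplyQ(ctx, r, ctx->t));              /* t = Q r            */
  PetscCall(PCApply(ctx->inner, ctx->t, ctx->u)); /* u ~ GAMG(S) t      */
  PetscCall(ApplyQ(ctx, ctx->u, y));              /* y = Qt GAMG(S) Q r */
  PetscFunctionReturn(PETSC_SUCCESS);
}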

Many thanks,
Olivier

[1]: Ainsworth, M. (2001). Essential boundary conditions and multi-point
constraints in finite element analysis. Computer Methods in Applied
Mechanics and Engineering, 190(48), 6323-6339.
