Hum,
would *-pc_fieldsplit_schur_precondition self* use the full Schur complement as a preconditioner for itself? I made some special choices in order to keep A diagonal, which makes it cheap to invert. I am actually assuming that the Schur complement solve will be blazing fast with my type of discretization...
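
For reference, here is how I would expect to set that programmatically (just a sketch, assuming PCFieldSplitSetSchurPre() with PC_FIELDSPLIT_SCHUR_PRE_SELF is the code-side equivalent of that command-line option; "ksp" below stands for my existing outer solver):

    /* Sketch only: what I believe is the programmatic equivalent of
       -pc_type fieldsplit -pc_fieldsplit_type schur
       -pc_fieldsplit_schur_precondition self                        */
    PC             pc;
    PetscErrorCode ierr;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
    ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
    ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_SELF, NULL);CHKERRQ(ierr);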

Best,
Luc

On 03/11/2014 11:36 AM, Matthew Knepley wrote:
On Tue, Mar 11, 2014 at 9:56 AM, Luc Berger-Vergiat <[email protected]> wrote:

    Hi all,
    I am testing some preconditioners for a FEM problem involving
    different types of fields (displacements, temperature, stresses
    and plastic strain).
    To make sure that things are working correctly I am first solving
    this problem with:

        -ksp_type preonly -pc_type lu, which works fine obviously.


    Then I move on to do:

        -ksp_type gmres -pc_type lu, and I get very good convergence
        (one gmres linear iteration per time step) which I expected.


    So solving the problem exactly inside a preconditioner for gmres
    leads to optimal results.
    The same thing can be done with a Schur complement, but when I
    pass the following options:

        -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
        -pc_fieldsplit_schur_factorization_type full
        -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
        -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
        -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu

    My results are terrible: gmres does not converge, and my FEM code
    has to reduce the time step size in order to converge.
    This does not make much sense to me...


The problem is the Schur complement block. We have

  S = A11 - C A^{-1} B

PETSc does not form S explicitly, since that would require forming the dense
inverse of A explicitly. Thus we only apply the action of S, which needs a
solve with A rather than its explicit inverse. If you look at the -ksp_view
output, you will see that the preconditioner for S is formed from A_11, which
it sounds like is 0 in your case, so the LU of that is a crude preconditioner.
Once you wrap the inner solve in GMRES, it will eventually converge.
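
To make that concrete, here is roughly what the Schur complement operator does internally (a sketch only, not the actual source; the names A00, A01, A10, A11 are placeholders for the submatrices that PCFIELDSPLIT extracts):

    /* Sketch: apply S = A11 - A10 inv(A00) A01 without forming inv(A00). */
    Mat            S;
    KSP            innerksp;
    PetscErrorCode ierr;
    ierr = MatCreateSchurComplement(A00, A00, A01, A10, A11, &S);CHKERRQ(ierr);
    ierr = MatSchurComplementGetKSP(S, &innerksp);CHKERRQ(ierr); /* does the A00 solves */
    /* MatMult(S, x, y) now computes y = A11*x - A10*(A00 \ (A01*x)),
       where the A00 \ ... step is a KSP solve, not a dense inverse.   */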

You can try using the LSC stuff if you do not have a preconditioner matrix
for the Schur complement.
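
For example, something along these lines (a sketch of the options only, untested on your problem):

    -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
    -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
    -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type lsc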

  Thanks,

     Matt

    Curiously if I use the following options:

        -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
        -pc_fieldsplit_schur_factorization_type full
        -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
        -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type lu
        -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type lu

    then the global gmres converges in two iterations.

    So using a ksp gmres / pc lu pair on the A00 block and on the Schur
    complement works, but using lu directly doesn't.

    Because all this seems quite strange to me, I decided to dump some
    matrices out. Namely, I dumped the complete FEM Jacobian, and I
    also did a MatView on jac->B, jac->C and the result of
    KSPGetOperators on kspA. These give me three of the four blocks
    needed to form the Schur complement. They are correct, and I
    assume that the last block is also correct.
    When I import jac->B, jac->C and the matrix corresponding to kspA
    into MATLAB, compute the inverse of the Schur complement, and pass
    it to gmres as a preconditioner, the problem is solved in 1 iteration.

    So MATLAB says:

        -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
        -pc_fieldsplit_schur_factorization_type full
        -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
        -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
        -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu

    should yield only one iteration (maybe two, depending on the
    implementation).

    Any ideas why PETSc doesn't solve this correctly?

    Best,
    Luc




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
