Fande,

   I have done some work, mostly understanding and documentation, on handling 
singular systems with KSP in the branch barry/improve-matnullspace-usage. It 
also includes a new example, src/ksp/ksp/examples/tutorials/ex67.c, that solves 
both a symmetric system and a system where nullspace(A) != nullspace(A').

   My understanding is now documented in the manual page for KSPSolve(), part 
of which is quoted below:

-------
   If you provide a matrix that has had MatSetNullSpace() and 
MatSetTransposeNullSpace() called on it, KSP will use that information to solve 
singular
   systems in the least squares sense with a norm-minimizing solution.
$
$                   A x = b   where b = b_p + b_t, with b_t not in the range 
of A (and hence, by the fundamental theorem of linear algebra, in 
nullspace(A')), see MatSetTransposeNullSpace()
$
$    KSP first removes b_t, producing the linear system  A x = b_p (which has 
multiple solutions), and solves this to find the ||x||-minimizing solution (and 
hence
$    it finds the solution x orthogonal to nullspace(A)). The algorithm is 
simple: at each iteration of the Krylov method we remove nullspace(A) from the 
search
$    direction, so the solution, which is a linear combination of the search 
directions, has no component in nullspace(A).
$
$    We recommend always using GMRES for such singular systems.
$    If nullspace(A) = nullspace(A') (note that symmetric matrices always 
satisfy this property) then both left and right preconditioning will work.
$    If nullspace(A) != nullspace(A') then left preconditioning will work but 
right preconditioning may not work (or it may).

   Developer Note: The reason we cannot always solve  nullspace(A) != 
nullspace(A') systems with right preconditioning is that we would need to 
remove at each iteration
       the nullspace(AB) from the search direction. While we know nullspace(A), 
nullspace(AB) equals B^-1 times nullspace(A) (if ABz = 0 then Bz is in 
nullspace(A), hence z is in B^-1 nullspace(A)), and except for trivial 
preconditioners
       such as diagonal scaling we cannot apply the inverse of the 
preconditioner to a vector and thus cannot compute nullspace(AB).
------
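
   For concreteness, the intended usage looks roughly like the following 
sketch (error handling and matrix assembly elided; the null space vectors 
nullvec and tnullvec are placeholders, see ex67.c for a complete program):

    Mat            A;
    Vec            b,x,nullvec,tnullvec;
    KSP            ksp;
    MatNullSpace   nullsp,tnullsp;
    PetscErrorCode ierr;
    ...
    /* basis vector(s) for nullspace(A); they must be orthonormal. Pass
       PETSC_TRUE for the second argument instead if the null space is
       just the constant vector */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nullvec,&nullsp);CHKERRQ(ierr);
    ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);
    /* basis vector(s) for nullspace(A'); only needed when
       nullspace(A) != nullspace(A') */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&tnullvec,&tnullsp);CHKERRQ(ierr);
    ierr = MatSetTransposeNullSpace(A,tnullsp);CHKERRQ(ierr);

    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* e.g. -ksp_type gmres -ksp_pc_side left */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

    ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&tnullsp);CHKERRQ(ierr);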

Any feedback on the correctness or clarity of the material is appreciated. The 
punch line is that right preconditioning cannot be trusted with nullspace(A) != 
nullspace(A'); I don't see any fix for this.

  Barry



> On Oct 11, 2016, at 3:04 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> 
> 
> 
> On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> 
> > On Oct 11, 2016, at 12:01 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> > > On Oct 11, 2016, at 9:33 AM, Kong, Fande <fande.k...@inl.gov> wrote:
> > >
> > > Barry, Thanks so much for your explanation. It helps me a lot.
> > >
> > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> > >
> > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I know how to remove the null spaces from a singular system by 
> > > > creating a MatNullSpace and attaching it to the Mat.
> > > >
> > > > I was really wondering: what is the philosophy behind this? What exact 
> > > > algorithms are we using in PETSc right now? Where do we deal with 
> > > > this: the preconditioner, the linear solver, or the nonlinear solver?
> > >
> > >    It is in the Krylov solver.
> > >
> > >    The idea is very simple. Say you have a singular A with null space N 
> > > (that is, all vectors Ny are in the null space of A, so N is tall and 
> > > skinny) and you want to solve A x = b where b is in the range of A. This 
> > > problem has an infinite number of solutions  Ny + x*,  since A (Ny + x*) 
> > > = ANy + Ax* = Ax* = b, where x* is the "minimum norm" solution; that is, 
> > > Ax* = b and x* has the smallest norm of all solutions.
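> > >
> > >    For example, take A = diag(1,0), so N spans (0,1)', and b = (1,0)'. 
> > > Every x = (1,t)' solves A x = b; the minimum norm solution is x* = 
> > > (1,0)', the unique solution with no component in the null space.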
> > >
> > >       With left preconditioning   B A x = B b   GMRES, for example, 
> > > normally computes the solution as alpha_1 Bb + alpha_2 BABb + 
> > > alpha_3 BABABb + ...,  but the B operator will likely introduce some 
> > > component in the direction of the null space, so as GMRES continues the 
> > > "solution" computed will grow larger and larger, with a large component in 
> > > the null space of A. Hence we simply modify GMRES a tiny bit by building 
> > > the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> > >
> > >  Does "I" mean an identity matrix? Could you possibly send me a link for 
> > > this GMRES implementation, that is, how PETSc does this in the actual 
> > > code?
> >
> >    Yes.
> >
> >     It is in the helper routine KSP_PCApplyBAorAB()
> > #undef __FUNCT__
> > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec y,Vec w)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (!ksp->transpose_solve) {
> >     /* apply the preconditioned operator (BA or AB depending on the
> >        preconditioning side), then remove the null space from the result */
> >     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> >   } else {
> >     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> >
> > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   /* with left preconditioning, project out the null space of A (if one
> >      was attached with MatSetNullSpace()) from the new direction */
> >   if (ksp->pc_side == PC_LEFT) {
> >     Mat          A;
> >     MatNullSpace nullsp;
> >     ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
> >     ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
> >     if (nullsp) {
> >       ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
> >     }
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> > "ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov 
> > methods only? How about the right preconditioning ones? Are  they just 
> > magically right for the right preconditioning Krylov methods?
> 
>    This is a good question. I am working on a branch now where I will add 
> some more comprehensive testing of the various cases and fix anything that 
> comes up.
> 
>    Were you having trouble with ASM and bjacobi only for right 
> preconditioning?
> 
> 
> Yes. ASM and bjacobi work fine for left preconditioning but NOT for RIGHT 
> preconditioning. bjacobi converges, but produces a wrong solution. ASM needs 
> more iterations, but the solution is correct. 
> 
> 
>  
>     Note that when A is symmetric the range of A is orthogonal to the null 
> space of A (range(A) is the orthogonal complement of nullspace(A'), and for 
> symmetric A, nullspace(A') = nullspace(A)), so yes, I think in that case it 
> is just "magically right"; but if A is not symmetric then I don't think it 
> is "magically right". I'll work on it.
> 
> 
>    Barry
> 
> >
> > Fande Kong,
> >
> >
> > There is no null-space-handling code directly in GMRES or the other 
> > Krylov methods.
> >
> > >
> > > (I-N)BABABb + ...., where (I-N) denotes the projection that removes the 
> > > null space component; that is, we remove from each new direction anything 
> > > in the direction of the null space. Hence the null space doesn't directly 
> > > appear in the preconditioner, just in the KSP method.   If you attach a 
> > > null space to the matrix, the KSP just automatically uses it to do the 
> > > removal above.
> > >
> > >     With right preconditioning the solution is built from alpha_1 b   + 
> > > alpha_2 ABb + alpha_3 ABABb + .... and again we apply (I-N) to each term 
> > > to remove any part that is in the null space of A.
> > >
> > >    Now consider the case   A y = b where b is NOT in the range of A. So 
> > > the problem has no "true" solution, but one can find a least squares 
> > > solution by rewriting b = b_par + b_perp, where b_par is in the range of 
> > > A and b_perp is orthogonal to the range of A, and solving instead    A x 
> > > = b_par. If you provide a MatSetTransposeNullSpace() then KSP 
> > > automatically uses it to remove b_perp from the right hand side before 
> > > starting the KSP iterations.
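> > >
> > >    Schematically the removal amounts to something like this (a sketch of 
> > > the idea, not the exact source):
> > >
> > >   MatNullSpace tnullsp;
> > >   ierr = MatGetTransposeNullSpace(A,&tnullsp);CHKERRQ(ierr);
> > >   if (tnullsp) {
> > >     /* project out the part of b not in the range of A, i.e. the
> > >        component in nullspace(A') */
> > >     ierr = MatNullSpaceRemove(tnullsp,b);CHKERRQ(ierr);
> > >   }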
> > >
> > >   The manual pages for MatSetNullSpace() and 
> > > MatSetTransposeNullSpace() discuss this and explain how it relates to 
> > > the fundamental theorem of linear algebra.
> > >
> > >   Note that for symmetric matrices the two null spaces are the same.
> > >
> > >   Barry
> > >
> > >
> > >    A different note: This "trick" is not a cure-all for a totally 
> > > inappropriate preconditioner. For example, if one uses for a 
> > > preconditioner a direct (sparse or dense) solver or ILU(k), one can end 
> > > up with a very bad solver, because the direct solver will likely produce 
> > > a very small pivot at some point, and thus the triangular solve applied 
> > > in the preconditioner can produce HUGE changes in the solution (that are 
> > > not physical), so the preconditioner basically produces garbage. On the 
> > > other hand sometimes it works out ok.
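> > >
> > >    If you do want to try a factored preconditioner on such a system, the 
> > > factor shift options may help with the tiny pivots, something like (just 
> > > a suggestion, no guarantee it helps here):
> > >
> > >      -pc_type ilu -pc_factor_shift_type NONZERO -pc_factor_shift_amount 1.e-10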
> > >
> > > What preconditioners are appropriate? asm, bjacobi, amg? I have an 
> > > example that shows lu and ilu indeed work, but asm and bjacobi do not 
> > > at all. That is why I am asking questions about algorithms. I am trying 
> > > to figure out a default preconditioner for several singular systems.
> >
> >    Hmm, normally asm and bjacobi would be fine with this unless one or more 
> > of the subblocks are themselves singular (which normally won't happen). AMG 
> > can also work fine sometimes.
> >
> >    Can you send a sample code?
> >
> >   Barry
> >
> > >
> > > Thanks again.
> > >
> > >
> > > Fande Kong,
> > >
> > >
> > >
> 
> 
