Fande,

     Could you send me (petsc-ma...@mcs.anl.gov) a nonsymmetric matrix you 
have that has a different null space for A and A'. This would be one that is 
failing with right preconditioning. The smaller the better, but whatever size 
you have. Run the code with -ksp_view_mat binary and send the resulting file, 
called binaryoutput.

   I need a test matrix to update the PETSc code for this case.
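
   (For reference, on my end I would read the file back in with something like 
the following sketch; "binaryoutput" is the default file name written by 
-ksp_view_mat binary, and the rest is standard PETSc boilerplate.)

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat            A;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  /* open the binary file produced by -ksp_view_mat binary */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"binaryoutput",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatLoad(A,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  /* ... test the solvers on A here ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}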


   Barry

> On Oct 11, 2016, at 3:04 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> 
> 
> 
> On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> 
> > On Oct 11, 2016, at 12:01 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> > > On Oct 11, 2016, at 9:33 AM, Kong, Fande <fande.k...@inl.gov> wrote:
> > >
> > > Barry, Thanks so much for your explanation. It helps me a lot.
> > >
> > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> > >
> > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I know how to remove the null space from a singular system by 
> > > > creating a MatNullSpace and attaching it to the Mat.
> > > >
> > > > I was really wondering: what is the philosophy behind this? What exact 
> > > > algorithms is PETSc using right now? And where is this handled: in the 
> > > > preconditioner, the linear solver, or the nonlinear solver?
> > >
> > >    It is in the Krylov solver.
> > >
> > >    The idea is very simple. Say you have a singular A with null space N 
> > > (that is, all vectors Ny are in the null space of A, so N is tall and 
> > > skinny) and you want to solve A x = b where b is in the range of A. This 
> > > problem has an infinite number of solutions Ny + x*, since A (Ny + x*) = 
> > > ANy + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, 
> > > Ax* = b and x* has the smallest norm of all solutions.
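> > >
> > > (A tiny concrete example: A = [1 0; 0 0] has null space spanned by 
> > > (0,1)', so N is the single column (0,1)'. For b = (1,0)', which is in 
> > > the range of A, every (1,t)' solves A x = b, and the minimum norm 
> > > solution is x* = (1,0)'.)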
> > >
> > >       With left preconditioning B A x = B b, GMRES, for example, 
> > > normally builds the computed solution as alpha_1 Bb + alpha_2 BABb + 
> > > alpha_3 BABABb + ..., but the B operator will likely introduce some 
> > > component in the direction of the null space, so as GMRES continues, the 
> > > "solution" computed will grow larger and larger, with a large component 
> > > in the null space of A. Hence we simply modify GMRES a tiny bit by 
> > > building the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> > >
> > >  Does "I" mean the identity matrix? Could you possibly send me a link to 
> > > this GMRES implementation, that is, to how PETSc does this in the actual 
> > > code?
> >
> >    Yes.
> >
> >     It is in the helper routine KSP_PCApplyBAorAB():
> > #undef __FUNCT__
> > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec y,Vec w)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (!ksp->transpose_solve) {
> >     /* apply the preconditioned operator (BA or AB, depending on pc_side), ... */
> >     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >     /* ... then project any null space component out of the result */
> >     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> >   } else {
> >     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> >
> > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   /* note: the removal is only done for left preconditioning */
> >   if (ksp->pc_side == PC_LEFT) {
> >     Mat          A;
> >     MatNullSpace nullsp;
> >     ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
> >     ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
> >     if (nullsp) {
> >       /* orthogonal projection: strip the null space component from y */
> >       ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
> >     }
> >   }
> >   PetscFunctionReturn(0);
> > }
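> >
> > (For intuition, MatNullSpaceRemove() applies the orthogonal projection; 
> > for a single normalized null space vector n it is equivalent to this 
> > sketch, where y and n are placeholders:)
> >
> >   PetscScalar dot;
> >   ierr = VecDot(y,n,&dot);CHKERRQ(ierr);  /* dot = (y,n) */
> >   ierr = VecAXPY(y,-dot,n);CHKERRQ(ierr); /* y = y - dot*n = (I - nn')y */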
> >
> > "ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov 
> > methods only? How about the right preconditioning ones? Are  they just 
> > magically right for the right preconditioning Krylov methods?
> 
>    This is a good question. I am working on a branch now where I will add 
> some more comprehensive testing of the various cases and fix anything that 
> comes up.
> 
>    Were you having trouble with ASM and bjacobi only for right 
> preconditioning?
> 
> 
> Yes. ASM and bjacobi work fine for left preconditioning but NOT for RIGHT 
> preconditioning. bjacobi converges but produces a wrong solution; ASM needs 
> more iterations, however its solution is right. 
> 
> 
>  
>     Note that when A is symmetric the range of A is orthogonal to the null 
> space of A (by the fundamental theorem of linear algebra, the range of A is 
> the orthogonal complement of the null space of A', and here A' = A), so yes, 
> I think in that case it is just "magically right"; but if A is not symmetric 
> then I don't think it is "magically right". I'll work on it.
> 
> 
>    Barry
> 
> >
> > Fande Kong,
> >
> >
> > There is no code for this directly in GMRES or the other methods.
> >
> > >
> > > (I-N)BABABb + ...; that is, we remove from each new direction anything 
> > > in the direction of the null space. Hence the null space doesn't appear 
> > > directly in the preconditioner, just in the KSP method. If you attach a 
> > > null space to the matrix, the KSP just automatically uses it to do the 
> > > removal above, as in the sketch below.
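> > >
> > > (Concretely, attaching a null space looks something like the following 
> > > sketch for the common case of a constant null space; here A stands in 
> > > for your singular matrix, and PETSC_TRUE says the constant vector spans 
> > > the null space:)
> > >
> > >   MatNullSpace nullsp;
> > >   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr);
> > >   ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);
> > >   ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);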
> > >
> > >     With right preconditioning the solution is built from alpha_1 b + 
> > > alpha_2 ABb + alpha_3 ABABb + ..., and again we apply (I-N) to each term 
> > > to remove any part that is in the null space of A.
> > >
> > >    Now consider the case A x = b where b is NOT in the range of A. Then 
> > > the problem has no "true" solution, but one can find a least squares 
> > > solution by rewriting b = b_par + b_perp, where b_par is in the range of 
> > > A and b_perp is orthogonal to the range of A, and solving instead A x = 
> > > b_par. If you provide the null space of A' with MatSetTransposeNullSpace(), 
> > > then KSP automatically uses it to remove b_perp from the right hand side 
> > > before starting the KSP iterations, as in the sketch below.
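> > >
> > > (A sketch of that, assuming v is a normalized vector spanning the null 
> > > space of A'; the names are placeholders:)
> > >
> > >   MatNullSpace tnullsp;
> > >   ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&v,&tnullsp);CHKERRQ(ierr);
> > >   ierr = MatSetTransposeNullSpace(A,tnullsp);CHKERRQ(ierr);
> > >   ierr = MatNullSpaceDestroy(&tnullsp);CHKERRQ(ierr);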
> > >
> > >   The manual pages for MatSetNullSpace() and 
> > > MatSetTransposeNullSpace() discuss this and explain how it relates to 
> > > the fundamental theorem of linear algebra.
> > >
> > >   Note that for symmetric matrices the two null spaces are the same.
> > >
> > >   Barry
> > >
> > >
> > >    A different note: this "trick" is not a cure-all for a totally 
> > > inappropriate preconditioner. For example, if one uses as a 
> > > preconditioner a direct (sparse or dense) solver, or ILU(k), one can end 
> > > up with a very bad solver, because the direct solver will likely produce 
> > > a very small pivot at some point, and the triangular solves applied in 
> > > the preconditioner can then produce HUGE changes in the solution (that 
> > > are not physical), so the preconditioner basically produces garbage. On 
> > > the other hand, sometimes it works out ok; see the sketch below for one 
> > > common mitigation.
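> > >
> > > (If one does want a factorization-based preconditioner on such a 
> > > system, a pivot shift can sometimes paper over the tiny pivots; these 
> > > are standard options, though whether they help is problem dependent:)
> > >
> > >   -pc_type ilu -pc_factor_shift_type nonzero -pc_factor_shift_amount 1.e-10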
> > >
> > > What preconditioners are appropriate? asm, bjacobi, amg? I have an 
> > > example which shows lu and ilu indeed work, but asm and bjacobi do not 
> > > work at all. That is why I am asking about the algorithms: I am trying 
> > > to figure out a default preconditioner for several singular systems.
> >
> >    Hmm, normally asm and bjacobi would be fine with this, unless one or 
> > more of the subblocks are themselves singular (which normally won't 
> > happen). AMG can also work fine sometimes.
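> >
> > (For testing, something along these lines should exercise both cases; 
> > these are all standard options:)
> >
> >   -ksp_type gmres -ksp_pc_side left  -pc_type asm     -ksp_monitor_true_residual
> >   -ksp_type gmres -ksp_pc_side right -pc_type bjacobi -ksp_monitor_true_residual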
> >
> >    Can you send a sample code?
> >
> >   Barry
> >
> > >
> > > Thanks again.
> > >
> > >
> > > Fande Kong,
> > >
> > >
> > >
> > > >
> > > >
> > > > Fande Kong,
> 
> 
