Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Barry Smith

> On Oct 17, 2016, at 10:47 AM, Kong, Fande  wrote:
> 
> 
> 
> On Thu, Oct 13, 2016 at 8:21 PM, Barry Smith  wrote:
> 
>   Fande,
> 
>   What SNES method are you using?  If you use SNESKSPONLY I think it is ok, 
> it will solve for the norm-minimizing least squares solution during the one 
> KSPSolve() and then return.
> 
> The problem we are currently working on is a linear problem, but it could be 
> extended to be nonlinear.  Yes, you are right. "ksponly" indeed works, and 
> returns the right solution. But the norm of the residual could still confuse 
> users because it is not close to zero.
> 
>  
> 
>   Yes, if you use SNESNEWTONLS or others, though, the SNES solver will, as you 
> say, think that progress has not been made.
> 
>    I do not like what you propose to do; changing the right-hand side of the 
> system the user provides is a nasty and surprising side effect.
> 
> I do not like this way either. The reason I posted this code here is that I 
> want to let you know what is inconsistent between the nonlinear solvers and 
> the linear solvers. 

   You could have SNESSolve_KSPONLY subtract off the null space in the right-hand 
side initially, just like KSPSolve() does with code like

  ierr = MatGetTransposeNullSpace(pmat,&nullsp);CHKERRQ(ierr);
  if (nullsp) {
    ierr = VecDuplicate(ksp->vec_rhs,&btmp);CHKERRQ(ierr);
    ierr = VecCopy(ksp->vec_rhs,btmp);CHKERRQ(ierr);
    ierr = MatNullSpaceRemove(nullsp,btmp);CHKERRQ(ierr);  /* strip the inconsistent part of the rhs */
    vec_rhs      = ksp->vec_rhs;
    ksp->vec_rhs = btmp;
  }

It is not perfect (see my comment below), but it gets what you want and "kind 
of" makes the residuals decrease as in the direct KSPSolve() case.

> 
>  
> 
> What is your goal? To make it look like the SNES system has had a 
> residual norm reduction?
> 
> Yes, I would like to make SNES have a residual reduction.  Possibly, we could 
> add something in the converged_test function? For example, the null space 
> component could be temporarily subtracted from the residual vector when 
> evaluating the residual norm if the system has a null space?

There is likely to always be confusion, in the linear case or with any 
least-squares-type solver. The true residual is not really decreasing past a 
certain point, but if the solver only sees the consistent part then it looks 
like the residual is decreasing. 

   The problem is that none of this stuff (the PETSc model and API) was 
designed for the generality of inconsistent least squares problems, and we have 
just been bolting on more support over time without enhancing the model. For 
example, we could introduce the concept of a consistent residual and an 
inconsistent residual and have the default monitors display both when they 
differ; instead we just display "the residual norm" without clarifying which 
residual norm.  We should think about this and wait for Jed to come up 
with the ideal design :-)
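
   In the meantime, a minimal sketch of your converged_test idea (the name 
SNESConvergedConsistent and the wiring are hypothetical, not PETSc API): strip 
the transpose null space component from the current residual, measure only the 
consistent part, and defer to the default test.

  /* Hypothetical convergence test: measure only the consistent part of F(x). */
  static PetscErrorCode SNESConvergedConsistent(SNES snes,PetscInt it,PetscReal xnorm,PetscReal snorm,PetscReal fnorm,SNESConvergedReason *reason,void *ctx)
  {
    PetscErrorCode ierr;
    Mat            A;
    MatNullSpace   tnullsp;
    Vec            F,Ftmp;

    PetscFunctionBegin;
    ierr = SNESGetJacobian(snes,&A,NULL,NULL,NULL);CHKERRQ(ierr);
    ierr = MatGetTransposeNullSpace(A,&tnullsp);CHKERRQ(ierr);
    if (tnullsp) {
      ierr = SNESGetFunction(snes,&F,NULL,NULL);CHKERRQ(ierr);
      ierr = VecDuplicate(F,&Ftmp);CHKERRQ(ierr);
      ierr = VecCopy(F,Ftmp);CHKERRQ(ierr);
      ierr = MatNullSpaceRemove(tnullsp,Ftmp);CHKERRQ(ierr);  /* drop the b_n part */
      ierr = VecNorm(Ftmp,NORM_2,&fnorm);CHKERRQ(ierr);       /* consistent residual norm */
      ierr = VecDestroy(&Ftmp);CHKERRQ(ierr);
    }
    ierr = SNESConvergedDefault(snes,it,xnorm,snorm,fnorm,reason,NULL);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

One would install it with SNESSetConvergenceTest(snes,SNESConvergedConsistent,NULL,NULL); 
note this only changes what the test sees, not what the monitors print.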


  Barry




> 
>  
> 
>We could generalize your question and ask what about solving for nonlinear 
> problems: find the minimal norm solution of min_x || F(x) - b||. This may or 
> may not belong in Tao; currently SNES doesn't do any kind of nonlinear least 
> squares.
> 
> 
> It would be great if we could add this kind of solver. Tao does have one, I 
> think.  I would like to contribute something like this later (of course, if 
> you are OK with this algorithm), when we move to nonlinear problems in 
> our applications. 
> 
> Fande Kong,
>  
> 
>   Barry
> 
> 
> > On Oct 13, 2016, at 5:20 PM, Kong, Fande  wrote:
> >
> > One more question.
> >
> > Suppose that we are solving the singular linear system Ax = b. N(A) is the 
> > null space of A, and N(A^T) is the null space of the transpose of A.
> >
> > The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r - 
> > b_n. Here b_n is in N(A^T), and b_r is in R(A).  During each nonlinear 
> > iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to 
> > the Krylov space during the linear iterations. Before the actual solve 
> > "(*ksp->ops->solve)(ksp)" for \delta x, a temporary copy of F(x) is made, 
> > F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x ) 
> > = A(x+\delta x)-b_r - b_n.
> >
> > F(x+\delta x ) always contains the vector b_n, and then the algorithm never 
> > converges because the norm of F is at least 1.
> >
> > Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed 
> > from F?
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    VecDuplicate(ksp->vec_rhs,&btmp);
> >    VecCopy(ksp->vec_rhs,btmp);
> >    MatNullSpaceRemove(nullsp,btmp);
> >    vec_rhs  = ksp->vec_rhs;
> >    ksp->vec_rhs = btmp;
> > }
> >
> > should  be changed to
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> > }
> > ???
> >
> > Or other solutions to this issue?
> >
> >
> > Fande Kong,
> >
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 8:23 AM, Matthew 

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Kong, Fande
On Thu, Oct 13, 2016 at 8:21 PM, Barry Smith  wrote:

>
>   Fande,
>
>   What SNES method are you using?  If you use SNESKSPONLY I think it is
> ok, it will solve for the norm-minimizing least squares solution during the
> one KSPSolve() and then return.
>

The problem we are currently working on is a linear problem, but it could
be extended to be nonlinear.  Yes, you are right. "ksponly" indeed works,
and returns the right solution. But the norm of the residual could still
confuse users because it is not close to zero.



>
>   Yes, if you use SNESNEWTONLS or others, though, the SNES solver will, as
> you say, think that progress has not been made.
>
>    I do not like what you propose to do; changing the right-hand side of
> the system the user provides is a nasty and surprising side effect.
>

I do not like this way either. The reason I posted this code here is that I
want to let you know what is inconsistent between the nonlinear solvers
and the linear solvers.



>
> What is your goal? To make it look like the SNES system has had a
> residual norm reduction?
>

Yes, I would like to make SNES have a residual reduction.  Possibly, we
could add something in the converged_test function? For example, the null
space component could be temporarily subtracted from the residual vector
when evaluating the residual norm if the system has a null space?



>
>We could generalize your question and ask what about solving for
> nonlinear problems: find the minimal norm solution of min_x || F(x) - b||.
> This may or may not belong in Tao; currently SNES doesn't do any kind of
> nonlinear least squares.
>


It would be great if we could add this kind of solver. Tao does have one,
I think.  I would like to contribute something like this later (of
course, if you are OK with this algorithm), when we move to
nonlinear problems in our applications.

Fande Kong,


>
>   Barry
>
>
> > On Oct 13, 2016, at 5:20 PM, Kong, Fande  wrote:
> >
> > One more question.
> >
> > Suppose that we are solving the singular linear system Ax = b. N(A) is
> the null space of A, and N(A^T) is the null space of the transpose of A.
> >
> > The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r -
> b_n. Here b_n is in N(A^T), and b_r is in R(A). During each nonlinear
> iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to
> the Krylov space during the linear iterations. Before the actual solve
> "(*ksp->ops->solve)(ksp)" for \delta x, a temporary copy of F(x) is made,
> F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x )
> = A(x+\delta x)-b_r - b_n.
> >
> > F(x+\delta x ) always contains the vector b_n, and then the algorithm
> never converges because the norm of F is at least 1.
> >
> > Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed
> from F?
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    VecDuplicate(ksp->vec_rhs,&btmp);
> >    VecCopy(ksp->vec_rhs,btmp);
> >    MatNullSpaceRemove(nullsp,btmp);
> >    vec_rhs  = ksp->vec_rhs;
> >    ksp->vec_rhs = btmp;
> > }
> >
> > should  be changed to
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> > }
> > ???
> >
> > Or other solutions to this issue?
> >
> >
> > Fande Kong,
> >
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 8:23 AM, Matthew Knepley 
> wrote:
> > On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande  wrote:
> >
> >
> > On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
> > Barry Smith  writes:
> > >   I would make that a separate routine that the users would call first.
> >
> > We have VecMDot and VecMAXPY.  I would propose adding
> >
> >   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
> >
> > (where R can be NULL).
> >
> > What does R mean here?
> >
> > It means the coefficients of the old basis vectors in the new basis.
> >
> >   Matt
> >
> > If nobody is working on this, I am going to take a try.
> >
> > Fande,
> >
> >
> > Does anyone use the "Vecs" type?
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
>
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Kong, Fande
Hi Barry,

Thanks so much for this work. I will checkout your branch, and take a look.

Thanks again!

Fande Kong,

On Thu, Oct 13, 2016 at 8:10 PM, Barry Smith  wrote:

>
>   Fande,
>
>I have done some work, mostly understanding and documentation, on
> handling singular systems with KSP in the branch 
> barry/improve-matnullspace-usage.
> This also includes a new example that solves both a symmetric example and
> an example where nullspace(A) != nullspace(A'): src/ksp/ksp/examples/tutorials/ex67.c
>
>My understanding is now documented in the manual page for KSPSolve(),
> part of this is quoted below:
>
> ---
>If you provide a matrix that has a MatSetNullSpace() and
> MatSetTransposeNullSpace() this will use that information to solve singular
> systems
>in the least squares sense with a norm minimizing solution.
> $
> $   A x = b   where b = b_p + b_t, where b_t is not in the
> range of A (and hence, by the fundamental theorem of linear algebra, is in
> the nullspace(A'), see MatSetNullSpace())
> $
> $KSP first removes b_t, producing the linear system  A x = b_p (which
> has multiple solutions), and solves this to find the ||x|| minimizing
> solution (and hence
> $it finds the solution x orthogonal to the nullspace(A)). The algorithm
> is simple: in each iteration of the Krylov method we remove the nullspace(A)
> from the search
> $direction, thus the solution, which is a linear combination of the
> search directions, has no component in the nullspace(A).
> $
> $We recommend always using GMRES for such singular systems.
> $If nullspace(A) = nullspace(A') (note symmetric matrices always
> satisfy this property) then both left and right preconditioning will work.
> $If nullspace(A) != nullspace(A') then left preconditioning will work
> but right preconditioning may not work (or it may).
>
>Developer Note: The reason we cannot always solve  nullspace(A) !=
> nullspace(A') systems with right preconditioning is because we need to
> remove at each iteration
>the nullspace(AB) from the search direction. While we know the
> nullspace(A), the nullspace(AB) equals B^-1 times the nullspace(A), but,
> except for trivial preconditioners
>such as diagonal scaling, we cannot apply the inverse of the
> preconditioner to a vector and thus cannot compute the nullspace(AB).
> --
>
> Any feedback on the correctness or clarity of the material is
> appreciated. The punch line is that right preconditioning cannot be trusted
> with nullspace(A) != nullspace(A'); I don't see any fix for this.
>
>   Barry
>
>
>
> > On Oct 11, 2016, at 3:04 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith 
> wrote:
> >
> > > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> > >
> > >
> > >
> > > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith 
> wrote:
> > >
> > > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > > >
> > > > Barry, Thanks so much for your explanation. It helps me a lot.
> > > >
> > > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith 
> wrote:
> > > >
> > > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande 
> wrote:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I know how to remove the null spaces from a singular system by
> creating a MatNullSpace and attaching it to a Mat.
> > > > >
> > > > > I was really wondering: what is the philosophy behind this? What
> exact algorithms are we using in PETSc right now?  Where do we deal
> with this: preconditioner, linear solver, or nonlinear solver?
> > > >
> > > >It is in the Krylov solver.
> > > >
> > > >The idea is very simple. Say you have a singular A with null
> space N (that is, all vectors Ny are in the null space of A, so N is tall and
> skinny) and you want to solve A x = b where b is in the range of A. This
> problem has an infinite number of solutions Ny + x*  since A (Ny + x*)
> = ANy + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* =
> b and x* has the smallest norm of all solutions.
> > > >
> > > >   With left preconditioning   B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb   + alpha_2 BABb +
> alpha_3 BABABAb +   but the B operator will likely introduce some
> component into the direction of the null space, so as GMRES continues the
> "solution" computed will grow larger and larger with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3
> > > >
> > > >  Does "I" mean an identity matrix? Could you possibly send me a link
> for this GMRES implementation, that is, how PETSc does this in the actual
> code?
> > >
> > >Yes.
> > >
> > > It is in the helper routine KSP_PCApplyBAorAB()
> > > #undef __FUNCT__
> > > #define 

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Barry Smith

  Fande,

  What SNES method are you using?  If you use SNESKSPONLY I think it is ok, it 
will solve for the norm-minimizing least squares solution during the one 
KSPSolve() and then return.

  Yes, if you use SNESNEWTONLS or others, though, the SNES solver will, as you 
say, think that progress has not been made. 

   I do not like what you propose to do; changing the right-hand side of the 
system the user provides is a nasty and surprising side effect.

What is your goal? To make it look like the SNES system has had a residual 
norm reduction?

   We could generalize your question and ask what about solving for nonlinear 
problems: find the minimal norm solution of min_x || F(x) - b||. This may or 
may not belong in Tao; currently SNES doesn't do any kind of nonlinear least 
squares.

  Barry
 

> On Oct 13, 2016, at 5:20 PM, Kong, Fande  wrote:
> 
> One more question.
> 
> Suppose that we are solving the singular linear system Ax = b. N(A) is the 
> null space of A, and N(A^T) is the null space of the transpose of A.
> 
> The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r - b_n. 
> Here b_n is in N(A^T), and b_r is in R(A).  During each nonlinear iteration, a 
> linear system A \delta x = F(x) is solved. N(A) is applied to the Krylov space 
> during the linear iterations. Before the actual solve 
> "(*ksp->ops->solve)(ksp)" for \delta x, a temporary copy of F(x) is made, 
> F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x ) = 
> A(x+\delta x)-b_r - b_n.   
> 
> F(x+\delta x ) always contains the vector b_n, and then the algorithm never 
> converges because the norm of F is at least 1. 
> 
> Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed from 
> F?
> 
> MatGetTransposeNullSpace(pmat,&nullsp);
> if (nullsp) {
>    VecDuplicate(ksp->vec_rhs,&btmp);
>    VecCopy(ksp->vec_rhs,btmp);
>    MatNullSpaceRemove(nullsp,btmp);
>    vec_rhs  = ksp->vec_rhs;
>    ksp->vec_rhs = btmp;
> }
> 
> should  be changed to 
> 
> MatGetTransposeNullSpace(pmat,&nullsp);
> if (nullsp) {
>    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> }
> ???
> 
> Or other solutions to this issue?
> 
> 
> Fande Kong,
> 
> 
> 
> 
> 
> On Thu, Oct 13, 2016 at 8:23 AM, Matthew Knepley  wrote:
> On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande  wrote:
> 
> 
> On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
> Barry Smith  writes:
> >   I would make that a separate routine that the users would call first.
> 
> We have VecMDot and VecMAXPY.  I would propose adding
> 
>   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
> 
> (where R can be NULL).
> 
> What does R mean here?
> 
> It means the coefficients of the old basis vectors in the new basis.
> 
>   Matt
>  
> If nobody is working on this, I am going to take a try.
> 
> Fande,
>  
> 
> Does anyone use the "Vecs" type?
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 



Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Barry Smith

  Fande,

   I have done some work, mostly understanding and documentation, on handling 
singular systems with KSP in the branch barry/improve-matnullspace-usage. This 
also includes a new example that solves both a symmetric example and an example 
where nullspace(A) != nullspace(A'): src/ksp/ksp/examples/tutorials/ex67.c

   My understanding is now documented in the manual page for KSPSolve(), part 
of this is quoted below:

---
   If you provide a matrix that has a MatSetNullSpace() and 
MatSetTransposeNullSpace() this will use that information to solve singular 
systems
   in the least squares sense with a norm minimizing solution.
$
$   A x = b   where b = b_p + b_t, where b_t is not in the range 
of A (and hence, by the fundamental theorem of linear algebra, is in the 
nullspace(A'), see MatSetNullSpace())
$
$KSP first removes b_t, producing the linear system  A x = b_p (which has 
multiple solutions), and solves this to find the ||x|| minimizing solution (and 
hence
$it finds the solution x orthogonal to the nullspace(A)). The algorithm is 
simple: in each iteration of the Krylov method we remove the nullspace(A) from 
the search
$direction, thus the solution, which is a linear combination of the search 
directions, has no component in the nullspace(A).
$
$We recommend always using GMRES for such singular systems.
$If nullspace(A) = nullspace(A') (note symmetric matrices always satisfy 
this property) then both left and right preconditioning will work.
$If nullspace(A) != nullspace(A') then left preconditioning will work but 
right preconditioning may not work (or it may).

   Developer Note: The reason we cannot always solve  nullspace(A) != 
nullspace(A') systems with right preconditioning is because we need to remove 
at each iteration
   the nullspace(AB) from the search direction. While we know the 
nullspace(A), the nullspace(AB) equals B^-1 times the nullspace(A), but, except 
for trivial preconditioners
   such as diagonal scaling, we cannot apply the inverse of the 
preconditioner to a vector and thus cannot compute the nullspace(AB).
--

Any feedback on the correctness or clarity of the material is appreciated. The 
punch line is that right preconditioning cannot be trusted with nullspace(A) != 
nullspace(A'); I don't see any fix for this.
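
For concreteness, the intended usage is along these lines (a minimal sketch, 
not taken from the branch; it assumes orthonormal basis vectors nvec[0] for 
nullspace(A) and tvec[0] for nullspace(A'), and that A, b, x, and ksp were 
created elsewhere):

  MatNullSpace nsp,tnsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,nvec,&nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,tvec,&tnsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nsp);CHKERRQ(ierr);            /* removed from each search direction */
  ierr = MatSetTransposeNullSpace(A,tnsp);CHKERRQ(ierr);  /* removed from the right-hand side */
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);                 /* norm-minimizing least squares solution */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&tnsp);CHKERRQ(ierr);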

  Barry



> On Oct 11, 2016, at 3:04 PM, Kong, Fande  wrote:
> 
> 
> 
> On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith  wrote:
> 
> > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith  wrote:
> >
> > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > >
> > > Barry, Thanks so much for your explanation. It helps me a lot.
> > >
> > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith  wrote:
> > >
> > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I know how to remove the null spaces from a singular system by 
> > > > creating a MatNullSpace and attaching it to a Mat.
> > > >
> > > > I was really wondering: what is the philosophy behind this? What exact 
> > > > algorithms are we using in PETSc right now?  Where do we deal with 
> > > > this: preconditioner, linear solver, or nonlinear solver?
> > >
> > >It is in the Krylov solver.
> > >
> > >The idea is very simple. Say you have a singular A with null space N 
> > > (that is, all vectors Ny are in the null space of A, so N is tall and skinny) 
> > > and you want to solve A x = b where b is in the range of A. This problem 
> > > has an infinite number of solutions Ny + x*  since A (Ny + x*) = ANy 
> > > + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b 
> > > and x* has the smallest norm of all solutions.
> > >
> > >   With left preconditioning   B A x = B b, GMRES, for example, 
> > > normally computes the solution as alpha_1 Bb   + alpha_2 BABb + 
> > > alpha_3 BABABAb +   but the B operator will likely introduce some 
> > > component into the direction of the null space, so as GMRES continues the 
> > > "solution" computed will grow larger and larger with a large component in 
> > > the null space of A. Hence we simply modify GMRES a tiny bit by building 
> > > the solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3
> > >
> > >  Does "I" mean an identity matrix? Could you possibly send me a link for 
> > > this GMRES implementation, that is, how PETSc does this in the actual 
> > > code?
> >
> >Yes.
> >
> > It is in the helper routine KSP_PCApplyBAorAB()
> > #undef __FUNCT__
> > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec 
> > y,Vec w)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (!ksp->transpose_solve) {
> > ierr = 

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Kong, Fande
One more question.

Suppose that we are solving the singular linear system Ax = b. N(A) is the
null space of A, and N(A^T) is the null space of the transpose of A.

The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r -
b_n. Here b_n is in N(A^T), and b_r is in R(A). During each nonlinear
iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to
the Krylov space during the linear iterations. Before the actual solve
"(*ksp->ops->solve)(ksp)" for \delta x, a temporary copy of F(x) is made,
F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x )
= A(x+\delta x)-b_r - b_n.

F(x+\delta x ) always contains the vector b_n, and then the algorithm never
converges because the norm of F is at least 1.

Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed
from F?

MatGetTransposeNullSpace(pmat,&nullsp);
if (nullsp) {
   VecDuplicate(ksp->vec_rhs,&btmp);
   VecCopy(ksp->vec_rhs,btmp);
   MatNullSpaceRemove(nullsp,btmp);
   vec_rhs  = ksp->vec_rhs;
   ksp->vec_rhs = btmp;
}

should  be changed to

MatGetTransposeNullSpace(pmat,&nullsp);
if (nullsp) {
   MatNullSpaceRemove(nullsp,ksp->vec_rhs);
}
???

Or other solutions to this issue?


Fande Kong,





On Thu, Oct 13, 2016 at 8:23 AM, Matthew Knepley  wrote:

> On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande  wrote:
>
>>
>>
>> On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
>>
>>> Barry Smith  writes:
>>> >   I would make that a separate routine that the users would call first.
>>>
>>> We have VecMDot and VecMAXPY.  I would propose adding
>>>
>>>   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
>>>
>>> (where R can be NULL).
>>>
>>
>> What does R mean here?
>>
>
> It means the coefficients of the old basis vectors in the new basis.
>
>   Matt
>
>
>> If nobody is working on this, I am going to take a try.
>>
>> Fande,
>>
>>
>>>
>>> Does anyone use the "Vecs" type?
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Matthew Knepley
On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande  wrote:

>
>
> On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
>
>> Barry Smith  writes:
>> >   I would make that a separate routine that the users would call first.
>>
>> We have VecMDot and VecMAXPY.  I would propose adding
>>
>>   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
>>
>> (where R can be NULL).
>>
>
> What does R mean here?
>

It means the coefficients of the old basis vectors in the new basis.

  Matt


> If nobody is working on this, I am going to take a try.
>
> Fande,
>
>
>>
>> Does anyone use the "Vecs" type?
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Jed Brown
"Kong, Fande"  writes:

> On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
>
>> Barry Smith  writes:
>> >   I would make that a separate routine that the users would call first.
>>
>> We have VecMDot and VecMAXPY.  I would propose adding
>>
>>   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
>>
>> (where R can be NULL).
>>
>
> What does R mean here?

It's a QR factorization, where the (right triangular) R is optional.




Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-13 Thread Kong, Fande
On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:

> Barry Smith  writes:
> >   I would make that a separate routine that the users would call first.
>
> We have VecMDot and VecMAXPY.  I would propose adding
>
>   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
>
> (where R can be NULL).
>

What does R mean here?

If nobody is working on this, I am going to take a try.

Fande,


>
> Does anyone use the "Vecs" type?
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Barry Smith

   I also think it is best whenever possible to compute the orthonormal basis 
analytically rather than numerically. As Jed points out, numerical 
orthogonalization generally does not provide full precision, and that could 
matter.

> On Oct 12, 2016, at 11:06 PM, Barry Smith  wrote:
> 
> 
>> On Oct 12, 2016, at 10:52 PM, Fande Kong  wrote:
>> 
>> 
>> 
>> On Wed, Oct 12, 2016 at 9:24 PM, Jed Brown  wrote:
>> Fande Kong  writes:
>> 
>>> I think we need to make sure that the basis vectors are orthogonal to each
>>> other and they are normalized. Right?
>> 
>> Yes, that is clearly stated in the man page and checked for in debug
>> mode.  The relevant code to remove the null space is
>> 
>>  if (sp->n) {
>>ierr = VecMDot(vec,sp->n,sp->vecs,sp->alpha);CHKERRQ(ierr);
>>for (i=0; i<sp->n; i++) sp->alpha[i] = -sp->alpha[i];
>>ierr = VecMAXPY(vec,sp->n,sp->alpha,sp->vecs);CHKERRQ(ierr);
>>  }
>> 
>> 
>> Right now, we are forcing users to provide orthogonal basis vectors. Is 
>> there any issue if we orthogonalize the arbitrary basis vectors provided by 
>> users in PETSc?  And then users could pass arbitrary basis vectors without 
>> doing any preprocessing.
> 
>  I would make that a separate routine that the users would call first.
> 
>  Barry
> 
>> 
>> Fande,
>> 
>> 
>> 
> 



Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Jed Brown
Barry Smith  writes:
>   I would make that a separate routine that the users would call first.

We have VecMDot and VecMAXPY.  I would propose adding

  VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);

(where R can be NULL).
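
For reference, something along these lines could be built from the existing
Vec operations. A minimal modified Gram-Schmidt sketch (no handling of rank
deficiency; VecQR is only a proposal here, not existing PETSc API):

  PetscErrorCode VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R)
  {
    PetscErrorCode ierr;
    PetscInt       i,j;
    PetscReal      norm;
    PetscScalar    dot;

    PetscFunctionBegin;
    for (i=0; i<nvecs; i++) {
      for (j=0; j<i; j++) {  /* remove components along the already-orthonormal vectors */
        ierr = VecDot(vecs[i],vecs[j],&dot);CHKERRQ(ierr);
        ierr = VecAXPY(vecs[i],-dot,vecs[j]);CHKERRQ(ierr);
        if (R) R[j+i*nvecs] = dot;  /* column-major: coefficients of old vector i */
      }
      ierr = VecNormalize(vecs[i],&norm);CHKERRQ(ierr);
      if (R) R[i+i*nvecs] = norm;
    }
    PetscFunctionReturn(0);
  }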

Does anyone use the "Vecs" type?




Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Barry Smith

> On Oct 12, 2016, at 10:52 PM, Fande Kong  wrote:
> 
> 
> 
> On Wed, Oct 12, 2016 at 9:24 PM, Jed Brown  wrote:
> Fande Kong  writes:
> 
> > I think we need to make sure that the basis vectors are orthogonal to each
> > other and they are normalized. Right?
> 
> Yes, that is clearly stated in the man page and checked for in debug
> mode.  The relevant code to remove the null space is
> 
>   if (sp->n) {
> ierr = VecMDot(vec,sp->n,sp->vecs,sp->alpha);CHKERRQ(ierr);
> for (i=0; i<sp->n; i++) sp->alpha[i] = -sp->alpha[i];
> ierr = VecMAXPY(vec,sp->n,sp->alpha,sp->vecs);CHKERRQ(ierr);
>   }
> 
> 
> Right now, we are forcing users to provide orthogonal basis vectors. Is there 
> any issue if we orthogonalize the arbitrary basis vectors provided by users 
> in PETSc?  And then users could pass arbitrary basis vectors without doing 
> any preprocessing.

  I would make that a separate routine that the users would call first.

  Barry

> 
> Fande,
> 
>  
> 



Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Jed Brown
Fande Kong  writes:

> I think we need to make sure that the basis vectors are orthogonal to each
> other and they are normalized. Right?

Yes, that is clearly stated in the man page and checked for in debug
mode.  The relevant code to remove the null space is

  if (sp->n) {
ierr = VecMDot(vec,sp->n,sp->vecs,sp->alpha);CHKERRQ(ierr);
for (i=0; i<sp->n; i++) sp->alpha[i] = -sp->alpha[i];
ierr = VecMAXPY(vec,sp->n,sp->alpha,sp->vecs);CHKERRQ(ierr);
  }




Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Barry Smith

> On Oct 12, 2016, at 10:18 PM, Fande Kong  wrote:
> 
> 
> 
> On Wed, Oct 12, 2016 at 9:12 PM, Barry Smith  wrote:
> 
> > On Oct 12, 2016, at 10:00 PM, Amneet Bhalla  wrote:
> >
> >
> >
> > On Monday, October 10, 2016, Barry Smith  wrote:
> >
> > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > >
> > > Hi All,
> > >
> > > I know how to remove the null spaces from a singular system by 
> > > creating a MatNullSpace and attaching it to a Mat.
> > >
> > > I was really wondering: what is the philosophy behind this? What exact 
> > > algorithms are we using in PETSc right now?  Where do we deal with 
> > > this: preconditioner, linear solver, or nonlinear solver?
> >
> >It is in the Krylov solver.
> >
> >The idea is very simple. Say you have a singular A with null space N 
> > (that is, all vectors Ny are in the null space of A, so N is tall and skinny) 
> > and you want to solve A x = b where b is in the range of A. This problem 
> > has an infinite number of solutions Ny + x*  since A (Ny + x*) = ANy + 
> > Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and 
> > x* has the smallest norm of all solutions.
> >
> >   With left preconditioning   B A x = B b, GMRES, for example, normally 
> > computes the solution as alpha_1 Bb   + alpha_2 BABb + alpha_3 
> > BABABAb +   but the B operator will likely introduce some component 
> > into the direction of the null space, so as GMRES continues the "solution" 
> > computed will grow larger and larger with a large component in the null 
> > space of A. Hence we simply modify GMRES a tiny bit by building the 
> > solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3 (I-N)BABABAb 
> > +   that is, we remove from each new direction anything in the direction 
> > of the null space. Hence the null space doesn't directly appear in the 
> > preconditioner, just in the KSP method.   If you attach a null space to the 
> > matrix, the KSP just automatically uses it to do the removal above.
> >
> > Barry, if identity matrix I is of size M x M (which is also the size of A) 
> > then are you augmenting N (size M x R; R < M) by zero columns to make I - N 
> > possible? If so, it means that only the first R values of the vector Bb are 
> > used for scaling zero eigenvectors of A. Does this choice affect iteration 
> > count, meaning one can arbitrarily choose any R values of the vector Bb to 
> > scale zero eigenvectors of A?
> 
>   Yes, I wasn't very precise here. I should have written it as the 
> projection of the vector onto the complement of the null space, which I think 
> can be written as I - N(N'N)^-1N'.
> This is done with the routine MatNullSpaceRemove(). The basis of the null 
> space you provide to MatNullSpaceCreate() should not affect the convergence 
> of the Krylov method at all since the same null space is removed regardless 
> of the basis.
> 
> I think we need to make sure that the basis vectors are orthogonal to each 
> other and they are normalized. Right?

  Yes, MatNullSpaceCreate() should report an error if the vectors are not all 
orthonormal. 

#if defined(PETSC_USE_DEBUG)
  if (n) {
    PetscScalar *dots;
    for (i=0; i<n; i++) {
      PetscReal norm;
      ierr = VecNorm(vecs[i],NORM_2,&norm);CHKERRQ(ierr);
      if (PetscAbsReal(norm-1) > PETSC_SQRT_MACHINE_EPSILON) 
SETERRQ2(PetscObjectComm((PetscObject)vecs[i]),PETSC_ERR_ARG_WRONG,"Vector %D 
must have 2-norm of 1.0, it is %g",i,(double)norm);
    }
    if (has_cnst) {
      for (i=0; i<n; i++) {
        PetscScalar sum;
        ierr = VecSum(vecs[i],&sum);CHKERRQ(ierr);
        if (PetscAbsScalar(sum) > PETSC_SQRT_MACHINE_EPSILON) 
SETERRQ2(PetscObjectComm((PetscObject)vecs[i]),PETSC_ERR_ARG_WRONG,"Vector %D 
must be orthogonal to constant vector, inner product is 
%g",i,(double)PetscAbsScalar(sum));
      }
    }
    ierr = PetscMalloc1(n-1,&dots);CHKERRQ(ierr);
    for (i=0; i<n-1; i++) {
      PetscInt j;
      ierr = VecMDot(vecs[i],n-i-1,vecs+i+1,dots);CHKERRQ(ierr);
      for (j=0; j<n-i-1; j++) {
        if (PetscAbsScalar(dots[j]) > PETSC_SQRT_MACHINE_EPSILON) 
SETERRQ3(PetscObjectComm((PetscObject)vecs[i]),PETSC_ERR_ARG_WRONG,"Vector %D 
must be orthogonal to vector %D, inner product is 
%g",i,i+j+1,(double)PetscAbsScalar(dots[j]));
      }
    }
    ierr = PetscFree(dots);CHKERRQ(ierr);
  }
#endif


> 
> Fande,
>  
> 
>   Barry
> 
> 
> >
> > With right preconditioning the solution is built from alpha_1 b   + 
> > alpha_2 ABb + alpha_3 ABABb +  and again we apply (I-N) to each term to 
> > remove any part that is in the null space of A.
> >
> >Now consider the case   A y = b where b is NOT in the range of A. So the 
> > problem has no "true" solution, but one can find a least squares solution 
> > by rewriting b = b_par + b_perp where b_par is in the range of A and b_perp 
> > is orthogonal to the range of A and solve instead A x = 

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Fande Kong
On Wed, Oct 12, 2016 at 9:12 PM, Barry Smith  wrote:

>
> > On Oct 12, 2016, at 10:00 PM, Amneet Bhalla 
> wrote:
> >
> >
> >
> > On Monday, October 10, 2016, Barry Smith  wrote:
> >
> > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > >
> > > Hi All,
> > >
> > > I know how to remove the null spaces from a singular system by
> creating a MatNullSpace and attaching it to a Mat.
> > >
> > > I was really wondering: what is the philosophy behind this? What exact
> algorithms are we using in PETSc right now?  Where do we deal with
> this: preconditioner, linear solver, or nonlinear solver?
> >
> >It is in the Krylov solver.
> >
> >The idea is very simple. Say you have a singular A with null space
> N (that is, all vectors Ny are in the null space of A, so N is tall and skinny)
> and you want to solve A x = b where b is in the range of A. This problem
> has an infinite number of solutions Ny + x*  since A (Ny + x*) = ANy +
> Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and
> x* has the smallest norm of all solutions.
> >
> >   With left preconditioning   B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb   + alpha_2 BABb +
> alpha_3 BABABAb +   but the B operator will likely introduce some
> component into the direction of the null space, so as GMRES continues the
> "solution" computed will grow larger and larger with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3
> (I-N)BABABAb +   that is, we remove from each new direction anything in
> the direction of the null space. Hence the null space doesn't directly
> appear in the preconditioner, just in the KSP method.   If you attach a
> null space to the matrix, the KSP just automatically uses it to do the
> removal above.
> >
> > Barry, if identity matrix I is of size M x M (which is also the size of
> A) then are you augmenting N (size M x R; R < M) by zero columns to make I
> - N possible? If so, it means that only the first R values of the vector Bb are
> used for scaling zero eigenvectors of A. Does this choice affect iteration
> count, meaning one can arbitrarily choose any R values of the vector Bb to
> scale zero eigenvectors of A?
>
>   Yes, I wasn't very precise here. I should have written it as the
> projection of the vector onto the complement of the null space, which I
> think can be written as I - N(N'N)^-1N'.
> This is done with the routine MatNullSpaceRemove(). The basis of the null
> space you provide to MatNullSpaceCreate() should not affect the convergence
> of the Krylov method at all since the same null space is removed regardless
> of the basis.
>

I think we need to make sure that the basis vectors are orthogonal to each
other and they are normalized. Right?

Fande,


>
>   Barry
>
>
> >
> > With right preconditioning the solution is built from alpha_1 b   +
> alpha_2 ABb + alpha_3 ABABb +  and again we apply (I-N) to each term to
> remove any part that is in the null space of A.
> >
> >Now consider the case   A y = b where b is NOT in the range of A. So
> the problem has no "true" solution, but one can find a least squares
> solution by rewriting b = b_par + b_perp where b_par is in the range of A
> and b_perp is orthogonal to the range of A and solve instead A x =
> b_perp. If you provide a MatSetTransposeNullSpace() then KSP automatically
> uses it to remove b_perp from the right hand side before starting the KSP
> iterations.
> >
> >   The manual pages for MatNullSpaceAttach() and
> MatTransposeNullSpaceAttach() discuss this and explain how it relates to the
> fundamental theorem of linear algebra.
> >
> >   Note that for symmetric matrices the two null spaces are the same.
> >
> >   Barry
> >
> >
> >A different note: This "trick" is not a "cure all" for a totally
> inappropriate preconditioner. For example if one uses for a preconditioner
> a direct (sparse or dense) solver or an ILU(k) one can end up with a very
> bad solver because the direct solver will likely produce a very small pivot
> at some point, thus the triangular solver applied in the preconditioner can
> produce HUGE changes in the solution (that are not physical) and so the
> preconditioner basically produces garbage. On the other hand sometimes it
> works out ok.
> >
> >
> > >
> > >
> > > Fande Kong,
> >
> >
> >
> > --
> > --Amneet
> >
> >
> >
> >
>
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Barry Smith

> On Oct 12, 2016, at 10:00 PM, Amneet Bhalla  wrote:
> 
> 
> 
> On Monday, October 10, 2016, Barry Smith  wrote:
> 
> > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> >
> > Hi All,
> >
> > I know how to remove the null spaces from a singular system by creating 
> > a MatNullSpace and attaching it to a Mat.
> >
> > I was really wondering: what is the philosophy behind this? What exact 
> > algorithms are we using in PETSc right now?  Where do we deal with 
> > this: preconditioner, linear solver, or nonlinear solver?
> 
>It is in the Krylov solver.
> 
>The idea is very simple. Say you have a singular A with null space N 
> (that is, all vectors Ny are in the null space of A, so N is tall and skinny) and 
> you want to solve A x = b where b is in the range of A. This problem has an 
> infinite number of solutions Ny + x*  since A (Ny + x*) = ANy + Ax* = Ax* 
> = b, where x* is the "minimum norm solution"; that is, Ax* = b and x* has the 
> smallest norm of all solutions.
> 
>   With left preconditioning   B A x = B b, GMRES, for example, normally 
> computes the solution as alpha_1 Bb   + alpha_2 BABb + alpha_3 BABABAb 
> +   but the B operator will likely introduce some component into the 
> direction of the null space, so as GMRES continues the "solution" computed 
> will grow larger and larger with a large component in the null space of A. 
> Hence we simply modify GMRES a tiny bit by building the solution from alpha_1 
> (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3 (I-N)BABABAb +   that is, we 
> remove from each new direction anything in the direction of the null space. 
> Hence the null space doesn't directly appear in the preconditioner, just in 
> the KSP method.   If you attach a null space to the matrix, the KSP just 
> automatically uses it to do the removal above.
> 
> Barry, if identity matrix I is of size M x M (which is also the size of A) 
> then are you augmenting N (size M x R; R < M) by zero columns to make I - N 
> possible? If so, it means that only the first R values of the vector Bb are used 
> for scaling zero eigenvectors of A. Does this choice affect iteration count, 
> meaning one can arbitrarily choose any R values of the vector Bb to scale 
> zero eigenvectors of A?

  Yes, I wasn't very precise here. I should have written it as the projection 
of the vector onto the complement of the null space, which I think can be 
written as I - N(N'N)^-1N'.
This is done with the routine MatNullSpaceRemove(). The basis of the null space 
you provide to MatNullSpaceCreate() should not affect the convergence of the 
Krylov method at all since the same null space is removed regardless of the 
basis.
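
In symbols (a small LaTeX aside), since MatNullSpaceCreate() requires the basis 
vectors x_1,...,x_r (the columns of N) to be orthonormal, N'N = I and the 
projection reduces to

  P = I - N(N'N)^{-1}N' = I - NN', \qquad Py = y - \sum_{i=1}^{r} (y, x_i)\, x_i

which is exactly the combination MatNullSpaceRemove() computes with VecMDot() 
and VecMAXPY().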

  Barry


> 
> With right preconditioning the solution is built from alpha_1 b   + 
> alpha_2 ABb + alpha_3 ABABb +  and again we apply (I-N) to each term to 
> remove any part that is in the null space of A.
> 
>Now consider the case   A y = b where b is NOT in the range of A. So the 
> problem has no "true" solution, but one can find a least squares solution by 
> rewriting b = b_par + b_perp where b_par is in the range of A and b_perp is 
> orthogonal to the range of A and solve instead A x = b_perp. If you 
> provide a MatSetTransposeNullSpace() then KSP automatically uses it to remove 
> b_perp from the right hand side before starting the KSP iterations.
> 
>   The manual pages for MatNullSpaceAttach() and MatTransposeNullSpaceAttach() 
> discuss this and explain how it relates to the fundamental theorem of linear 
> algebra.
> 
>   Note that for symmetric matrices the two null spaces are the same.
> 
>   Barry
> 
> 
>A different note: This "trick" is not a "cure all" for a totally 
> inappropriate preconditioner. For example if one uses for a preconditioner a 
> direct (sparse or dense) solver or an ILU(k) one can end up with a very bad 
> solver because the direct solver will likely produce a very small pivot at 
> some point, thus the triangular solver applied in the preconditioner can produce 
> HUGE changes in the solution (that are not physical) and so the 
> preconditioner basically produces garbage. On the other hand sometimes it 
> works out ok.
> 
> 
> >
> >
> > Fande Kong,
> 
> 
> 
> -- 
> --Amneet 
> 
> 
> 
> 



Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Fande Kong
On Wed, Oct 12, 2016 at 9:00 PM, Amneet Bhalla 
wrote:

>
>
> On Monday, October 10, 2016, Barry Smith  wrote:
>
>>
>> > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
>> >
>> > Hi All,
>> >
>> > I know how to remove the null spaces from a singular system by
>> creating a MatNullSpace and attaching it to a Mat.
>> >
>> > I was really wondering: what is the philosophy behind this? What exact
>> algorithms are we using in PETSc right now?  Where do we deal with
>> this: preconditioner, linear solver, or nonlinear solver?
>>
>>It is in the Krylov solver.
>>
>>The idea is very simple. Say you have a singular A with null space N
>> (that is, all vectors Ny are in the null space of A, so N is tall and skinny)
>> and you want to solve A x = b where b is in the range of A. This problem
>> has an infinite number of solutions Ny + x*  since A (Ny + x*) = ANy +
>> Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and
>> x* has the smallest norm of all solutions.
>>
>>   With left preconditioning   B A x = B b, GMRES, for example,
>> normally computes the solution as alpha_1 Bb   + alpha_2 BABb +
>> alpha_3 BABABAb +   but the B operator will likely introduce some
>> component into the direction of the null space, so as GMRES continues the
>> "solution" computed will grow larger and larger with a large component in
>> the null space of A. Hence we simply modify GMRES a tiny bit by building
>> the solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3
>> (I-N)BABABAb +   that is, we remove from each new direction anything in
>> the direction of the null space. Hence the null space doesn't directly
>> appear in the preconditioner, just in the KSP method.   If you attach a
>> null space to the matrix, the KSP just automatically uses it to do the
>> removal above.
>
>
> Barry, if identity matrix I is of size M x M (which is also the size of A)
> then are you augmenting N (size M x R; R < M) by zero columns to make I - N
> possible? If so, it means that only the first R values of the vector Bb are used
> for scaling zero eigenvectors of A. Does this choice affect iteration count,
> meaning one can arbitrarily choose any R values of the vector Bb to scale
> zero eigenvectors of A?
>

I think it is just notation. Let us denote Bb as y, that is, y = Bb. N =
{x_0, x_1, ..., x_r}, where x_i is a vector. Applying I-N to y means y =
y - \sum (y, x_i) x_i, where (y, x_i) is the inner product of the vectors y and
x_i. Look at this code for more details:
http://www.mcs.anl.gov/petsc/petsc-current/src/mat/interface/matnull.c.html#MatNullSpaceRemove

Fande,



>
>> With right preconditioning the solution is built from alpha_1 b   +
>> alpha_2 ABb + alpha_3 ABABb +  and again we apply (I-N) to each term to
>> remove any part that is in the null space of A.
>>
>>Now consider the case   A y = b where b is NOT in the range of A. So
>> the problem has no "true" solution, but one can find a least squares
>> solution by rewriting b = b_par + b_perp where b_par is in the range of A
>> and b_perp is orthogonal to the range of A and solve instead A x =
>> b_perp. If you provide a MatSetTransposeNullSpace() then KSP automatically
>> uses it to remove b_perp from the right hand side before starting the KSP
>> iterations.
>>
>>   The manual pages for MatNullSpaceAttach() and
>> MatTransposeNullSpaceAttach() discuss this and explain how it relates to the
>> fundamental theorem of linear algebra.
>>
>>   Note that for symmetric matrices the two null spaces are the same.
>>
>>   Barry
>>
>>
>>A different note: This "trick" is not a "cure all" for a totally
>> inappropriate preconditioner. For example if one uses for a preconditioner
>> a direct (sparse or dense) solver or an ILU(k) one can end up with a very
>> bad solver because the direct solver will likely produce a very small pivot
>> at some point, thus the triangular solver applied in the preconditioner can
>> produce HUGE changes in the solution (that are not physical) and so the
>> preconditioner basically produces garbage. On the other hand sometimes it
>> works out ok.
>>
>>
>> >
>> >
>> > Fande Kong,
>>
>>
>
> --
> --Amneet
>
>
>
>
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-12 Thread Amneet Bhalla
On Monday, October 10, 2016, Barry Smith  wrote:

>
> > On Oct 10, 2016, at 4:01 PM, Kong, Fande  > wrote:
> >
> > Hi All,
> >
> > I know how to remove the null spaces from a singular system by
> creating a MatNullSpace and attaching it to a Mat.
> >
> > I was really wondering: what is the philosophy behind this? What
> exact algorithms are we using in PETSc right now?  Where do we deal with
> this: preconditioner, linear solver, or nonlinear solver?
>
>It is in the Krylov solver.
>
>The idea is very simple. Say you have a singular A with null space N
> (that is, all vectors Ny are in the null space of A, so N is tall and skinny)
> and you want to solve A x = b where b is in the range of A. This problem
> has an infinite number of solutions Ny + x*  since A (Ny + x*) = ANy +
> Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and
> x* has the smallest norm of all solutions.
>
>   With left preconditioning   B A x = B b, GMRES, for example, normally
> computes the solution as alpha_1 Bb   + alpha_2 BABb + alpha_3
> BABABAb +   but the B operator will likely introduce some component
> into the direction of the null space, so as GMRES continues the "solution"
> computed will grow larger and larger with a large component in the null
> space of A. Hence we simply modify GMRES a tiny bit by building the
> solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3 (I-N)BABABAb
> +   that is, we remove from each new direction anything in the direction
> of the null space. Hence the null space doesn't directly appear in the
> preconditioner, just in the KSP method.   If you attach a null space to the
> matrix, the KSP just automatically uses it to do the removal above.


Barry, if identity matrix I is of size M x M (which is also the size of A)
then are you augmenting N (size M x R; R < M) by zero columns to make I - N
possible? If so, it means that only the first R values of the vector Bb are used
for scaling zero eigenvectors of A. Does this choice affect iteration count,
meaning one can arbitrarily choose any R values of the vector Bb to scale
zero eigenvectors of A?

>
> With right preconditioning the solution is built from alpha_1 b   +
> alpha_2 ABb + alpha_3 ABABb +  and again we apply (I-N) to each term to
> remove any part that is in the null space of A.
>
>Now consider the case   A y = b where b is NOT in the range of A. So
> the problem has no "true" solution, but one can find a least squares
> solution by rewriting b = b_par + b_perp where b_par is in the range of A
> and b_perp is orthogonal to the range of A and solve instead A x =
> b_perp. If you provide a MatSetTransposeNullSpace() then KSP automatically
> uses it to remove b_perp from the right hand side before starting the KSP
> iterations.
>
>   The manual pages for MatNullSpaceAttach() and
> MatTransposeNullSpaceAttach() discuss this and explain how it relates to the
> fundamental theorem of linear algebra.
>
>   Note that for symmetric matrices the two null spaces are the same.
>
>   Barry
>
>
>A different note: This "trick" is not a "cure all" for a totally
> inappropriate preconditioner. For example if one uses for a preconditioner
> a direct (sparse or dense) solver or an ILU(k) one can end up with a very
> bad solver because the direct solver will likely produce a very small pivot
> at some point, thus the triangular solver applied in the preconditioner can
> produce HUGE changes in the solution (that are not physical) and so the
> preconditioner basically produces garbage. On the other hand sometimes it
> works out ok.
>
>
> >
> >
> > Fande Kong,
>
>

-- 
--Amneet


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Kong, Fande
Barry,

I am trying to reproduce this issue using a pure PETSc code. VecLoad does
not work for me. I do not know why. Anyway, I can reproduce this using a
very small system.  Here is some info:

Mat, A
Mat Object:() 2 MPI processes
  type: mpiaij
row 0: (0, 1.)
row 1: (0, -0.820827)  (1, 1.51669)  (2, -0.820827)
row 2: (1, -0.820827)  (2, 1.51669)  (3, -0.820827)
row 3: (2, -0.820827)  (3, 1.51669)  (4, -0.820827)
row 4: (3, -0.820827)  (4, 1.51669)  (5, -0.820827)
row 5: (4, -0.820827)  (5, 1.51669)  (6, -0.820827)
row 6: (5, -0.820827)  (6, 1.51669)  (7, -0.820827)
row 7: (6, -0.820827)  (7, 1.51669)  (8, -0.820827)
row 8: (8, 1.)


Right hand side b:

Vec Object: 2 MPI processes
  type: mpi
Process [0]
0.
-0.356693
-0.50444
-0.356693
-5.55112e-17
Process [1]
0.356693
0.50444
0.356693
0.


Mat Null space N(A):

Vec Object: 2 MPI processes
  type: mpi
Process [0]
0.
0.191342
0.353553
0.46194
0.5
Process [1]
0.46194
0.353553
0.191342
6.12323e-17


Please run with two MPI processes using -ksp_pc_side right -pc_type bjacobi
and -ksp_pc_side left -pc_type bjacobi. They will produce different solutions.
The one obtained using "left" is correct (we have an analytical
solution).

I also attached data for matrix, rhs and nullspace, but I am not sure if
you can read them or not. I can load mat.dat, but I could not read rhs.dat
and nullspace.dat.
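
For what it is worth, the load side would look like this minimal sketch
(assuming the .dat files were written with VecView()/MatView() to a binary
viewer; the file name is just the one attached here):

  PetscViewer viewer;
  Vec         rhs;
  ierr = VecCreate(PETSC_COMM_WORLD,&rhs);CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"rhs.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecLoad(rhs,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);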

Fande,



On Tue, Oct 11, 2016 at 3:44 PM, Barry Smith  wrote:

>
>   Fande,
>
>  Could you send me (petsc-ma...@mcs.anl.gov) a nonsymmetric matrix
> you have that has a different null space for A and A'. This would be one
> that is failing with right preconditioning. The smaller the better, but whatever
> size you have. Run the code with -ksp_view_mat binary and send the
> resulting file called binaryoutput.
>
>I need a test matrix to update the PETSc code for this case.
>
>
>Barry
>
> > On Oct 11, 2016, at 3:04 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith 
> wrote:
> >
> > > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> > >
> > >
> > >
> > > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith 
> wrote:
> > >
> > > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > > >
> > > > Barry, Thanks so much for your explanation. It helps me a lot.
> > > >
> > > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith 
> wrote:
> > > >
> > > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande 
> wrote:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I know how to remove the null spaces from a singular system by
> creating a MatNullSpace and attaching it to a Mat.
> > > > >
> > > > > I was really wondering: what is the philosophy behind this? What
> exact algorithms are we using in PETSc right now?  Where do we deal
> with this: preconditioner, linear solver, or nonlinear solver?
> > > >
> > > >It is in the Krylov solver.
> > > >
> > > >The idea is very simple. Say you have a singular A with null
> space N (that is, all vectors Ny are in the null space of A, so N is tall and
> skinny) and you want to solve A x = b where b is in the range of A. This
> problem has an infinite number of solutions Ny + x*  since A (Ny + x*)
> = ANy + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* =
> b and x* has the smallest norm of all solutions.
> > > >
> > > >   With left preconditioning   B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb   + alpha_2 BABb +
> alpha_3 BABABAb +   but the B operator will likely introduce some
> component into the direction of the null space, so as GMRES continues the
> "solution" computed will grow larger and larger with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb   + alpha_2 (I-N)BABb + alpha_3
> > > >
> > > >  Does "I" mean an identity matrix? Could you possibly send me a link
> for this GMRES implementation, that is, how PETSc does this in the actual
> code?
> > >
> > >Yes.
> > >
> > > It is in the helper routine KSP_PCApplyBAorAB()
> > > #undef __FUNCT__
> > > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec
> y,Vec w)
> > > {
> > >   PetscErrorCode ierr;
> > >   PetscFunctionBegin;
> > >   if (!ksp->transpose_solve) {
> > > ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> > > ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> > >   } else {
> > > ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);
> CHKERRQ(ierr);
> > >   }
> > >   PetscFunctionReturn(0);
> > > }
> > >
> > >
> > > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > > {
> > >   PetscErrorCode ierr;
> > >   PetscFunctionBegin;
> > >   if (ksp->pc_side == PC_LEFT) {
> > > Mat  A;
> > > MatNullSpace 

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Barry Smith

  Fande,

 Could you send me (petsc-ma...@mcs.anl.gov) a nonsymmetric matrix you 
have that has a different null space for A and A'. This would be one that is 
failing with right preconditioning. The smaller the better, but whatever size you 
have. Run the code with -ksp_view_mat binary and send the resulting file called 
binaryoutput.

   I need a test matrix to update the PETSc code for this case.


   Barry

> On Oct 11, 2016, at 3:04 PM, Kong, Fande  wrote:
> 
> 
> 
> On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith  wrote:
> 
> > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith  wrote:
> >
> > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > >
> > > Barry, Thanks so much for your explanation. It helps me a lot.
> > >
> > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith  wrote:
> > >
> > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I know how to remove the null space from a singular system by 
> > > > creating a MatNullSpace and attaching it to the Mat.
> > > >
> > > > I was really wondering: what is the philosophy behind this? What are the 
> > > > exact algorithms we are using in PETSc right now?  Where do we deal with 
> > > > this: the preconditioner, the linear solver, or the nonlinear solver?
> > >
> > >It is in the Krylov solver.
> > >
> > >The idea is very simple. Say you have a singular A with null space N 
> > > (that is, all vectors Ny are in the null space of A, so N is tall and skinny) 
> > > and you want to solve A x = b where b is in the range of A. This problem 
> > > has an infinite number of solutions Ny + x* since A (Ny + x*) = ANy 
> > > + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b 
> > > and x* has the smallest norm of all solutions.
> > >
> > >   With left preconditioning   B A x = B b, GMRES, for example, 
> > > normally computes the solution as alpha_1 Bb + alpha_2 BABb + 
> > > alpha_3 BABABAb + ... but the B operator will likely introduce some 
> > > component in the direction of the null space, so as GMRES continues, the 
> > > "solution" computed will grow larger and larger, with a large component in 
> > > the null space of A. Hence we simply modify GMRES a tiny bit by building 
> > > the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> > >
> > >  Does "I" mean an identity matrix? Could you possibly send me a link for 
> > > this GMRES implementation, that is, how PETSc does this in the actual 
> > > code?
> >
> >Yes.
> >
> > It is in the helper routine KSP_PCApplyBAorAB()
> > #undef __FUNCT__
> > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec 
> > y,Vec w)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (!ksp->transpose_solve) {
> >     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> >   } else {
> >     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> >
> > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (ksp->pc_side == PC_LEFT) {
> >     Mat          A;
> >     MatNullSpace nullsp;
> >     ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
> >     ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
> >     if (nullsp) {
> >       ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
> >     }
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> > "ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov 
> > methods only? How about the right preconditioning ones? Are  they just 
> > magically right for the right preconditioning Krylov methods?
> 
>This is a good question. I am working on a branch now where I will add 
> some more comprehensive testing of the various cases and fix anything that 
> comes up.
> 
>Were you having trouble with ASM and bjacobi only for right 
> preconditioning?
> 
> 
> Yes. ASM and bjacobi work fine for left preconditioning but NOT for RIGHT 
> preconditioning: bjacobi converges but produces a wrong solution, while ASM needs 
> more iterations but its solution is right. 
> 
> 
>  
> Note that when A is symmetric the range of A is orthogonal to the null space 
> of A, so yes, I think in that case it is just "magically right"; but if A is not 
> symmetric then I don't think it is "magically right". I'll work on it.
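> 
> A sketch of why the symmetric case is automatic: for any n in null(A) and 
> any vector v,
> 
>    n' (A v) = (A' n)' v = (A n)' v = 0   when A = A',
> 
> so every right-preconditioned Krylov direction ABb is already orthogonal to 
> null(A) and needs no explicit projection; when A is not symmetric, null(A') 
> generally differs from null(A), so ABb can retain a null space component.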
> 
> 
>Barry
> 
> >
> > Fande Kong,
> >
> >
> > There is no code directly in the GMRES or other methods.
> >
> > >
> > > (I-N)BABABAb + ... that is, we remove from each new direction anything 
> > > in the direction of the null space. Hence the null space doesn't directly 
> > > appear in the preconditioner, just in the KSP method.   If you attach a 
> > > null space to the matrix, the KSP just automatically uses it to do the 
> > > removal above.

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Kong, Fande
On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith  wrote:

>
> > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith 
> wrote:
> >
> > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > >
> > > Barry, Thanks so much for your explanation. It helps me a lot.
> > >
> > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith 
> wrote:
> > >
> > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I know how to remove the null space from a singular system by
> creating a MatNullSpace and attaching it to the Mat.
> > > >
> > > > I was really wondering: what is the philosophy behind this? What are
> the exact algorithms we are using in PETSc right now?  Where do we deal
> with this: the preconditioner, the linear solver, or the nonlinear solver?
> > >
> > >It is in the Krylov solver.
> > >
> > >The idea is very simple. Say you have a singular A with null
> space N (that is, all vectors Ny are in the null space of A, so N is tall and
> skinny) and you want to solve A x = b where b is in the range of A. This
> problem has an infinite number of solutions Ny + x* since A (Ny + x*)
> = ANy + Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* =
> b and x* has the smallest norm of all solutions.
> > >
> > >   With left preconditioning   B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb + alpha_2 BABb +
> alpha_3 BABABAb + ... but the B operator will likely introduce some
> component in the direction of the null space, so as GMRES continues, the
> "solution" computed will grow larger and larger, with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> > >
> > >  Does "I" mean an identity matrix? Could you possibly send me a link
> for this GMRES implementation, that is, how PETSc does this in the actual
> code?
> >
> >Yes.
> >
> > It is in the helper routine KSP_PCApplyBAorAB()
> > #undef __FUNCT__
> > #define __FUNCT__ "KSP_PCApplyBAorAB"
> > PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec
> y,Vec w)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (!ksp->transpose_solve) {
> >     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
> >   } else {
> >     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> >
> > PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
> > {
> >   PetscErrorCode ierr;
> >   PetscFunctionBegin;
> >   if (ksp->pc_side == PC_LEFT) {
> >     Mat          A;
> >     MatNullSpace nullsp;
> >     ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);
> >     ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
> >     if (nullsp) {
> >       ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);
> >     }
> >   }
> >   PetscFunctionReturn(0);
> > }
> >
> > "ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov
> methods only? How about the right preconditioning ones? Are  they just
> magically right for the right preconditioning Krylov methods?
>
>This is a good question. I am working on a branch now where I will add
> some more comprehensive testing of the various cases and fix anything that
> comes up.
>
>Were you having trouble with ASM and bjacobi only for right
> preconditioning?
>
>
Yes. ASM and bjacobi work fine for left preconditioning but NOT for RIGHT
preconditioning: bjacobi converges but produces a wrong solution, while ASM
needs more iterations but its solution is right.




> Note that when A is symmetric the range of A is orthogonal to the null
> space of A, so yes, I think in that case it is just "magically right"; but if
> A is not symmetric then I don't think it is "magically right". I'll work on
> it.
>
>
>Barry
>
> >
> > Fande Kong,
> >
> >
> > There is no code directly in the GMRES or other methods.
> >
> > >
> > > (I-N)BABABAb + ... that is, we remove from each new direction
> anything in the direction of the null space. Hence the null space doesn't
> directly appear in the preconditioner, just in the KSP method.   If you
> attach a null space to the matrix, the KSP just automatically uses it to do
> the removal above.
> > >
> > > With right preconditioning the solution is built from alpha_1 b
>  + alpha_2 ABb + alpha_3 ABABb + ... and again we apply (I-N) to each term
> to remove any part that is in the null space of A.
> > >
> > >Now consider the case   A y = b where b is NOT in the range of A.
> So the problem has no "true" solution, but one can find a least squares
> solution by rewriting b = b_par + b_perp where b_par is in the range of A
> and b_perp is orthogonal to the range of A and solve instead A x = b_par. If
> you provide a MatSetTransposeNullSpace() then KSP automatically uses it to
> remove b_perp from the right hand side before starting the KSP iterations.

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Kong, Fande
On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith  wrote:

>
> > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> >
> > Barry, Thanks so much for your explanation. It helps me a lot.
> >
> > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith  wrote:
> >
> > > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> > >
> > > Hi All,
> > >
> > > I know how to remove the null space from a singular system by
> creating a MatNullSpace and attaching it to the Mat.
> > >
> > > I was really wondering: what is the philosophy behind this? What are
> the exact algorithms we are using in PETSc right now?  Where do we deal
> with this: the preconditioner, the linear solver, or the nonlinear solver?
> >
> >It is in the Krylov solver.
> >
> >The idea is very simple. Say you have a singular A with null space
> N (that is, all vectors Ny are in the null space of A, so N is tall and skinny)
> and you want to solve A x = b where b is in the range of A. This problem
> has an infinite number of solutions Ny + x* since A (Ny + x*) = ANy +
> Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and
> x* has the smallest norm of all solutions.
> >
> >   With left preconditioning   B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb + alpha_2 BABb +
> alpha_3 BABABAb + ... but the B operator will likely introduce some
> component in the direction of the null space, so as GMRES continues, the
> "solution" computed will grow larger and larger, with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3
> >
> >  Does "I" mean an identity matrix? Could you possibly send me a link for
> this GMRES implementation, that is, how PETSc does this in the actual code?
>
>Yes.
>
> It is in the helper routine KSP_PCApplyBAorAB()
> #undef __FUNCT__
> #define __FUNCT__ "KSP_PCApplyBAorAB"
> PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec
> y,Vec w)
> {
>   PetscErrorCode ierr;
>   PetscFunctionBegin;
>   if (!ksp->transpose_solve) {
>     ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
>     ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
>   } else {
>     ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
>   }
>   PetscFunctionReturn(0);
> }
>


PETSC_STATIC_INLINE PetscErrorCode KSP_RemoveNullSpace(KSP ksp,Vec y)
{
  PetscErrorCode ierr;
  PetscFunctionBegin;
  if (ksp->pc_side == PC_LEFT) {
    Mat          A;
    MatNullSpace nullsp;
    ierr = PCGetOperators(ksp->pc,&A,NULL);CHKERRQ(ierr);  /* A is the matrix the null space is attached to */
    ierr = MatGetNullSpace(A,&nullsp);CHKERRQ(ierr);
    if (nullsp) {
      ierr = MatNullSpaceRemove(nullsp,y);CHKERRQ(ierr);   /* project the null space component out of y */
    }
  }
  PetscFunctionReturn(0);
}
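
For context, the removal above only fires if a null space has actually been
attached to the matrix. A minimal sketch of that setup, for a system whose
null space is the constant vector (e.g. a pure-Neumann Poisson problem;
assembly and error checking omitted):

  MatNullSpace nullsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr); /* PETSC_TRUE: null space contains the constant vector */
  ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);    /* KSP now applies the (I-N) removal each iteration */
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);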

"ksp->pc_side == PC_LEFT" deals with the left preconditioning Krylov
methods only? How about the right preconditioning ones? Are  they just
magically right for the right preconditioning Krylov methods?

Fande Kong,


>
> There is no code directly in the GMRES or other methods.
>
> >
> > (I-N)BABABAb + ... that is, we remove from each new direction anything
> in the direction of the null space. Hence the null space doesn't directly
> appear in the preconditioner, just in the KSP method.   If you attach a
> null space to the matrix, the KSP just automatically uses it to do the
> removal above.
> >
> > With right preconditioning the solution is built from alpha_1 b   +
> alpha_2 ABb + alpha_3 ABABb + ... and again we apply (I-N) to each term to
> remove any part that is in the null space of A.
> >
> >Now consider the case   A y = b where b is NOT in the range of A. So
> the problem has no "true" solution, but one can find a least squares
> solution by rewriting b = b_par + b_perp where b_par is in the range of A
> and b_perp is orthogonal to the range of A and solve instead A x =
> b_par. If you provide a MatSetTransposeNullSpace() then KSP automatically
> uses it to remove b_perp from the right hand side before starting the KSP
> iterations.
> >
> >   The manual pages for MatSetNullSpace() and
> MatSetTransposeNullSpace() discuss this and explain how it relates to the
> fundamental theorem of linear algebra.
> >
> >   Note that for symmetric matrices the two null spaces are the same.
> >
> >   Barry
> >
> >
> >A different note: This "trick" is not a "cure all" for a totally
> inappropriate preconditioner. For example, if one uses for a preconditioner
> a direct (sparse or dense) solver or an ILU(k), one can end up with a very
> bad solver because the direct solver will likely produce a very small pivot
> at some point, thus the triangular solver applied in the preconditioner can
> produce HUGE changes in the solution (that are not physical) and so the
> preconditioner basically produces garbage. On the other hand sometimes it
> works out ok.
> >
> > What preconditioners  are appropriate? asm, bjacobi, amg? I have an example
> > which shows lu and ilu indeed work, but asm and bjacobi do not at all. That
> > is why I am asking questions about algorithms. I am trying to figure out a
> > default preconditioner for several singular systems.

Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Barry Smith

> On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> 
> Barry, Thanks so much for your explanation. It helps me a lot.
> 
> On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith  wrote:
> 
> > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> >
> > Hi All,
> >
> > I know how to remove the null space from a singular system by creating 
> > a MatNullSpace and attaching it to the Mat.
> >
> > I was really wondering: what is the philosophy behind this? What are the 
> > exact algorithms we are using in PETSc right now?  Where do we deal with 
> > this: the preconditioner, the linear solver, or the nonlinear solver?
> 
>It is in the Krylov solver.
> 
>The idea is very simple. Say you have a singular A with null space N 
> (that is, all vectors Ny are in the null space of A, so N is tall and skinny) and 
> you want to solve A x = b where b is in the range of A. This problem has an 
> infinite number of solutions Ny + x* since A (Ny + x*) = ANy + Ax* = Ax* 
> = b, where x* is the "minimum norm solution"; that is, Ax* = b and x* has the 
> smallest norm of all solutions.
> 
>   With left preconditioning   B A x = B b, GMRES, for example, normally 
> computes the solution as alpha_1 Bb + alpha_2 BABb + alpha_3 BABABAb 
> + ... but the B operator will likely introduce some component in the 
> direction of the null space, so as GMRES continues, the "solution" computed 
> will grow larger and larger, with a large component in the null space of A. 
> Hence we simply modify GMRES a tiny bit by building the solution from alpha_1 
> (I-N)Bb + alpha_2 (I-N)BABb + alpha_3 
>  
>  Does "I" mean an identity matrix? Could you possibly send me a link for this 
> GMRES implementation, that is, how PETSc does this in the actual code?

   Yes. 

It is in the helper routine KSP_PCApplyBAorAB()
#undef __FUNCT__
#define __FUNCT__ "KSP_PCApplyBAorAB"
PETSC_STATIC_INLINE PetscErrorCode KSP_PCApplyBAorAB(KSP ksp,Vec x,Vec y,Vec w)
{
  PetscErrorCode ierr;
  PetscFunctionBegin;
  if (!ksp->transpose_solve) {
    ierr = PCApplyBAorAB(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
    ierr = KSP_RemoveNullSpace(ksp,y);CHKERRQ(ierr);
  } else {
    ierr = PCApplyBAorABTranspose(ksp->pc,ksp->pc_side,x,y,w);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

There is no code directly in the GMRES or other methods.

>   
> (I-N)BABABAb + ... that is, we remove from each new direction anything in 
> the direction of the null space. Hence the null space doesn't directly appear 
> in the preconditioner, just in the KSP method.   If you attach a null space 
> to the matrix, the KSP just automatically uses it to do the removal above.
> 
> With right preconditioning the solution is built from alpha_1 b   + 
> alpha_2 ABb + alpha_3 ABABb + ... and again we apply (I-N) to each term to 
> remove any part that is in the null space of A.
> 
>Now consider the case   A y = b where b is NOT in the range of A. So the 
> problem has no "true" solution, but one can find a least squares solution by 
> rewriting b = b_par + b_perp where b_par is in the range of A and b_perp is 
> orthogonal to the range of A and solve instead A x = b_par. If you 
> provide a MatSetTransposeNullSpace() then KSP automatically uses it to remove 
> b_perp from the right hand side before starting the KSP iterations.
> 
>   The manual pages for MatSetNullSpace() and MatSetTransposeNullSpace() 
> discuss this and explain how it relates to the fundamental theorem of linear 
> algebra.
> 
>   Note that for symmetric matrices the two null spaces are the same.
> 
>   Barry
> 
> 
>A different note: This "trick" is not a "cure all" for a totally 
> inappropriate preconditioner. For example, if one uses for a preconditioner a 
> direct (sparse or dense) solver or an ILU(k), one can end up with a very bad 
> solver because the direct solver will likely produce a very small pivot at 
> some point, thus the triangular solver applied in the preconditioner can produce 
> HUGE changes in the solution (that are not physical) and so the 
> preconditioner basically produces garbage. On the other hand sometimes it 
> works out ok.
> 
> What preconditioners  are appropriate? asm, bjacobi, amg? I have an example 
> which shows  lu and ilu indeed work, but asm and bjacobi do not at all. That 
> is why I am asking questions about algorithms. I am trying to figure out a 
> default preconditioner for several singular systems.

   Hmm, normally asm and bjacobi would be fine with this unless one or more of 
the subblocks are themselves singular (which normally won't happen). AMG can 
also work fine sometimes. 
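
   A rough way to compare these on the same system (a sketch, assuming an 
executable ./app that assembles the singular system and reads runtime options):

  ./app -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -ksp_monitor_true_residual
  ./app -ksp_type gmres -pc_type asm -sub_pc_type ilu -ksp_monitor_true_residual

-ksp_monitor_true_residual prints the true residual norm alongside the 
preconditioned one, which is exactly the distinction that matters for a 
singular system.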

   Can you send a sample code?

  Barry

> 
> Thanks again.
> 
> 
> Fande Kong,
>  
> 
> 
> >
> >
> > Fande Kong,



Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-11 Thread Kong, Fande
Barry, Thanks so much for your explanation. It helps me a lot.

On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith  wrote:

>
> > On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> >
> > Hi All,
> >
> > I know how to remove the null space from a singular system by
> creating a MatNullSpace and attaching it to the Mat.
> >
> > I was really wondering: what is the philosophy behind this? What are
> the exact algorithms we are using in PETSc right now?  Where do we deal
> with this: the preconditioner, the linear solver, or the nonlinear solver?
>
>It is in the Krylov solver.
>
>The idea is very simple. Say you have a singular A with null space N
> (that is, all vectors Ny are in the null space of A, so N is tall and skinny)
> and you want to solve A x = b where b is in the range of A. This problem
> has an infinite number of solutions Ny + x* since A (Ny + x*) = ANy +
> Ax* = Ax* = b, where x* is the "minimum norm solution"; that is, Ax* = b and
> x* has the smallest norm of all solutions.
>
>   With left preconditioning   B A x = B b, GMRES, for example, normally
> computes the solution as alpha_1 Bb + alpha_2 BABb + alpha_3
> BABABAb + ... but the B operator will likely introduce some component
> in the direction of the null space, so as GMRES continues, the "solution"
> computed will grow larger and larger, with a large component in the null
> space of A. Hence we simply modify GMRES a tiny bit by building the
> solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3


 Does "I" mean an identity matrix? Could you possibly send me a link for
this GMRES implementation, that is, how PETSc does this in the actual code?


> (I-N)BABABAb + ... that is, we remove from each new direction anything
> the direction of the null space. Hence the null space doesn't directly
> appear in the preconditioner, just in the KSP method.   If you attach a
> null space to the matrix, the KSP just automatically uses it to do the
> removal above.
>
> With right preconditioning the solution is built from alpha_1 b   +
> alpha_2 ABb + alpha_3 ABABb + ... and again we apply (I-N) to each term to
> remove any part that is in the null space of A.
>
>Now consider the case   A y = b where b is NOT in the range of A. So
> the problem has no "true" solution, but one can find a least squares
> solution by rewriting b = b_par + b_perp where b_par is in the range of A
> and b_perp is orthogonal to the range of A and solve instead A x =
> b_par. If you provide a MatSetTransposeNullSpace() then KSP automatically
> uses it to remove b_perp from the right hand side before starting the KSP
> iterations.
>
>   The manual pages for MatSetNullSpace() and
> MatSetTransposeNullSpace() discuss this and explain how it relates to the
> fundamental theorem of linear algebra.
>
>   Note that for symmetric matrices the two null spaces are the same.
>
>   Barry
>
>
>A different note: This "trick" is not a "cure all" for a totally
> inappropriate preconditioner. For example, if one uses for a preconditioner
> a direct (sparse or dense) solver or an ILU(k), one can end up with a very
> bad solver because the direct solver will likely produce a very small pivot
> at some point, thus the triangular solver applied in the preconditioner can
> produce HUGE changes in the solution (that are not physical) and so the
> preconditioner basically produces garbage. On the other hand sometimes it
> works out ok.
>

What preconditioners  are appropriate? asm, bjacobi, amg? I have an example
which shows  lu and ilu indeed work, but asm and bjacobi do not at all.
That is why I am asking questions about algorithms. I am trying to figure
out a default preconditioner for several singular systems.

Thanks again.


Fande Kong,


>
>
> >
> >
> > Fande Kong,
>
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-10 Thread Barry Smith

> On Oct 10, 2016, at 4:01 PM, Kong, Fande  wrote:
> 
> Hi All,
> 
> I know how to remove the null space from a singular system by creating a 
> MatNullSpace and attaching it to the Mat.
> 
> I was really wondering: what is the philosophy behind this? What are the exact 
> algorithms we are using in PETSc right now?  Where do we deal with this: the 
> preconditioner, the linear solver, or the nonlinear solver?

   It is in the Krylov solver.

   The idea is very simple. Say you have a singular A with null space N (that 
is, all vectors Ny are in the null space of A, so N is tall and skinny) and you 
want to solve A x = b where b is in the range of A. This problem has an infinite 
number of solutions Ny + x* since A (Ny + x*) = ANy + Ax* = Ax* = b, where 
x* is the "minimum norm solution"; that is, Ax* = b and x* has the smallest norm 
of all solutions.

  With left preconditioning   B A x = B b, GMRES, for example, normally 
computes the solution as alpha_1 Bb + alpha_2 BABb + alpha_3 BABABAb + ... 
but the B operator will likely introduce some component in the 
direction of the null space, so as GMRES continues, the "solution" computed will 
grow larger and larger, with a large component in the null space of A. Hence we 
simply modify GMRES a tiny bit by building the solution from alpha_1 (I-N)Bb 
+ alpha_2 (I-N)BABb + alpha_3 (I-N)BABABAb + ... that is, we remove from each 
new direction anything in the direction of the null space. Hence the null space 
doesn't directly appear in the preconditioner, just in the KSP method.   If you 
attach a null space to the matrix, the KSP just automatically uses it to do the 
removal above.
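
   To make the (I-N) notation precise: assuming the columns of N are 
orthonormal and span the null space, the removal applies the orthogonal 
projector

  P = I - N N',   with P(Ny) = 0 and Pv = v whenever v is orthogonal to null(A),

after each preconditioned matrix-vector product, so every Krylov basis vector, 
and hence the computed solution, stays free of null space components. This is 
what MatNullSpaceRemove() applies.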

With right preconditioning the solution is built from alpha_1 b   + alpha_2 
ABb + alpha_3 ABABb + ... and again we apply (I-N) to each term to remove any 
part that is in the null space of A.

   Now consider the case   A y = b where b is NOT in the range of A. So the 
problem has no "true" solution, but one can find a least squares solution by 
rewriting b = b_par + b_perp where b_par is in the range of A and b_perp is 
orthogonal to the range of A, and solve instead A x = b_par. If you provide 
a MatSetTransposeNullSpace() then KSP automatically uses it to remove b_perp 
from the right hand side before starting the KSP iterations. 
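
A minimal sketch of that setup (assuming nvec is a Vec, assembled and 
normalized elsewhere, that spans null(A'), the orthogonal complement of the 
range of A):

  MatNullSpace nullsp_t;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nvec,&nullsp_t);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A,nullsp_t);CHKERRQ(ierr); /* KSP now removes b_perp from b */
  ierr = MatNullSpaceDestroy(&nullsp_t);CHKERRQ(ierr);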

  The manual pages for MatSetNullSpace() and MatSetTransposeNullSpace() 
discuss this and explain how it relates to the fundamental theorem of linear 
algebra.

  Note that for symmetric matrices the two null spaces are the same.

  Barry


   A different note: This "trick" is not a "cure all" for a totally 
inappropriate preconditioner. For example, if one uses for a preconditioner a 
direct (sparse or dense) solver or an ILU(k), one can end up with a very bad 
solver because the direct solver will likely produce a very small pivot at some 
point, thus the triangular solver applied in the preconditioner can produce HUGE 
basically produces garbage. On the other hand sometimes it works out ok.
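
   (When a factorization runs into such a tiny pivot, a shifted factorization, 
e.g. the runtime option -pc_factor_shift_type nonzero, can sometimes rescue it; 
whether that helps is problem dependent.)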


> 
> 
> Fande Kong,