Re: [petsc-users] preconditioner for contact / mesh tying problem

2016-10-17 Thread Jed Brown
Hoang Giang Bui  writes:

> Dear PETSc folks,
>
> While searching literature on the preconditioner for contact/mesh tying
> problem, I saw the paper by Dr. Adams "Algebraic multigrid methods for
> constrained linear systems with applications to contact problems in solid
> mechanics", NLAA, 2004. Given the promising results the paper shows for
> constrained linear systems, I wonder if some code is also available in PETSc
> for testing/further extension?

This particular algorithm is not available within the PETSc library.
Mark might have some code around, but I doubt it's maintained or easy to
tinker with.  You should be able to implement the algorithm fairly
quickly using PETSc.  It would make a great example if you're willing to
contribute.  And we can advise if you decide to write such an example.
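A minimal PETSc starting point for such an example (not Adams' constrained AMG
itself, just a sketch of solving the contact/mesh-tying KKT system with a
Schur-complement fieldsplit; is_u and is_lambda are assumed index sets for the
displacement and Lagrange-multiplier unknowns, K the assembled saddle-point
matrix) might look like:

#include <petscksp.h>

KSP ksp;
PC  pc;
KSPCreate(PETSC_COMM_WORLD,&ksp);
KSPSetOperators(ksp,K,K);                    /* K = [A B'; B 0] */
KSPGetPC(ksp,&pc);
PCSetType(pc,PCFIELDSPLIT);
PCFieldSplitSetIS(pc,"u",is_u);              /* displacement block */
PCFieldSplitSetIS(pc,"lambda",is_lambda);    /* constraint block   */
PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR);  /* eliminate the constraints */
KSPSetFromOptions(ksp);                      /* e.g. -fieldsplit_u_pc_type gamg */
KSPSolve(ksp,b,x);

Using GAMG on the displacement block (-fieldsplit_u_pc_type gamg) at least
recovers the multigrid flavor of the paper while the constrained smoothers are
developed.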




Re: [petsc-users] large PetscCommDuplicate overhead

2016-10-17 Thread Barry Smith

   Hmm, none of these, sadly. There should be no barriers in the calls
(optimized versions), just MPI_Attr_get() and MPI_Attr_set().

   Maybe someone else has a better idea.

  Barry
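For reference, the attribute-caching pattern PetscCommDuplicate() relies on
looks roughly like the sketch below. This is a simplification, not the actual
PETSc source; petsc_keyval and the CommCache struct are stand-ins:

#include <mpi.h>
#include <stdlib.h>

typedef struct { MPI_Comm inner; int refct; } CommCache;

CommCache *cache;
int        flag;
/* MPI_Attr_get/MPI_Attr_set are the deprecated names for these calls */
MPI_Comm_get_attr(user_comm, petsc_keyval, &cache, &flag);
if (!flag) {                                 /* first use of this comm */
  cache = (CommCache*)malloc(sizeof(*cache));
  MPI_Comm_dup(user_comm, &cache->inner);    /* pay the dup cost once */
  cache->refct = 1;
  MPI_Comm_set_attr(user_comm, petsc_keyval, cache);
} else {
  cache->refct++;                            /* later calls: just a lookup */
}

After the first duplication every call should reduce to a keyval lookup, which
is why the observed overhead is surprising.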



> On Oct 17, 2016, at 3:41 PM, Matthew Overholt  wrote:
> 
> Barry,
> 
> If I look at the symbols available to trace I find the following.
>> nm xSYMMIC | grep " T MPI" | grep "attr"
> <#> T MPIR_Call_attr_copy
> <#> T MPIR_Call_attr_delete
> <#> T MPIR_Comm_delete_attr_impl
> <#> T MPIR_Comm_set_attr_impl
> <#> T MPIU_nem_gni_smsg_mbox_attr_init
> 
> => Are the two _Comm_ symbols the ones of interest?
> 
>> nm xSYMMIC | grep " T MPI" | grep "arrier"
> <#> T MPIDI_CRAY_dmapp_barrier_join
> <#> T MPIDI_Cray_shared_mem_coll_barrier
> <#> T MPIDI_Cray_shared_mem_coll_barrier_gather
> <#> T MPID_Sched_barrier
> <#> T MPID_nem_barrier
> <#> T MPID_nem_barrier_init
> <#> T MPID_nem_barrier_vars_init
> <#> T MPIR_Barrier
> <#> T MPIR_Barrier_impl
> <#> T MPIR_Barrier_inter
> <#> T MPIR_Barrier_intra
> <#> T MPIR_CRAY_Barrier
> <#> T MPIR_Ibarrier_impl
> <#> T MPIR_Ibarrier_inter
> <#> T MPIR_Ibarrier_intra
> 
> => Which of these barriers should I trace?
> 
> Finally, the current version of PETSc seems to be 3.7.2; I am not able to
> load 3.7.3.
> 
> Thanks,
> Matt Overholt
> 
> 
> -Original Message-
> From: Barry Smith [mailto:bsm...@mcs.anl.gov] 
> Sent: Thursday, October 13, 2016 11:46 PM
> To: overh...@capesim.com
> Cc: Jed Brown; PETSc
> Subject: Re: [petsc-users] large PetscCommDuplicate overhead
> 
> 
>  Matthew,
> 
>Thanks for the additional information. This is all very weird since the
> number of calls made to PetscCommDuplicate() is the same regardless of
> geometry, and the time per call shouldn't depend on the geometry.
> 
>Would you be able to do another set of tests where you track the time in
> MPI_Attr_get() and MPI_Barrier() instead of PetscCommDuplicate()? It could
> be that Cray did something "funny" in their build of PETSc.
> 
>   You could also try using the module petsc/3.7.3 instead of the cray-petsc
> module
> 
>   Thanks
> 
>  Barry
> 
> 
> 
> 
>> On Oct 12, 2016, at 10:48 AM, Matthew Overholt 
> wrote:
>> 
>> Jed,
>> 
>> I realize that the PetscCommDuplicate (PCD) overhead I am seeing must 
>> be only indirectly related to the problem size, etc., and I wouldn't 
>> be surprised if it was an artifact of some sort related to my specific 
>> algorithm.  So you may not want to pursue this much further.  However, 
>> I did make three runs using the same Edison environment and code but 
>> different input geometry files.  Earlier I found a strong dependence 
>> on the number of processes, so for this test I ran all of the tests on 
>> 1 node with 8 processes (N=1, n=8).  What I found was that the amount 
>> of PCD overhead was geometry dependent, not size dependent.  A 
>> moderately-sized simple geometry (with relatively few ghosted vertices 
>> at the simple-planar interfaces) had no PCD overhead, whereas both 
>> small and large complex geometries (with relatively more ghosted 
>> vertices at the more-complex interfaces) had 5 - 6% PCD overhead.  The log
> files follow.
>> 
>> Thanks,
>> Matt Overholt
>> 
> 
> 



Re: [petsc-users] large PetscCommDuplicate overhead

2016-10-17 Thread Matthew Overholt
Barry,

If I look at the symbols available to trace I find the following.
> nm xSYMMIC | grep " T MPI" | grep "attr"
<#> T MPIR_Call_attr_copy
<#> T MPIR_Call_attr_delete
<#> T MPIR_Comm_delete_attr_impl
<#> T MPIR_Comm_set_attr_impl
<#> T MPIU_nem_gni_smsg_mbox_attr_init

=> Are the two _Comm_ symbols the ones of interest?

> nm xSYMMIC | grep " T MPI" | grep "arrier"
<#> T MPIDI_CRAY_dmapp_barrier_join
<#> T MPIDI_Cray_shared_mem_coll_barrier
<#> T MPIDI_Cray_shared_mem_coll_barrier_gather
<#> T MPID_Sched_barrier
<#> T MPID_nem_barrier
<#> T MPID_nem_barrier_init
<#> T MPID_nem_barrier_vars_init
<#> T MPIR_Barrier
<#> T MPIR_Barrier_impl
<#> T MPIR_Barrier_inter
<#> T MPIR_Barrier_intra
<#> T MPIR_CRAY_Barrier
<#> T MPIR_Ibarrier_impl
<#> T MPIR_Ibarrier_inter
<#> T MPIR_Ibarrier_intra

=> Which of these barriers should I trace?

Finally, the current version of PETSc seems to be 3.7.2; I am not able to
load 3.7.3.

Thanks,
Matt Overholt
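One way to get the timings Barry asks for (quoted below), without picking
among the individual MPIDI_/MPIR_ symbols above, is an interposition shim
using the standard PMPI profiling interface. A sketch follows; the
report-at-finalize layout is just one option, and on Cray systems the link
order of the wrapper object matters:

#include <mpi.h>
#include <stdio.h>

static double t_barrier = 0.0, t_getattr = 0.0;

int MPI_Barrier(MPI_Comm comm)
{
  double t0  = MPI_Wtime();
  int    err = PMPI_Barrier(comm);          /* forward to the real call */
  t_barrier += MPI_Wtime() - t0;
  return err;
}

int MPI_Comm_get_attr(MPI_Comm comm, int keyval, void *val, int *flag)
{
  double t0  = MPI_Wtime();
  int    err = PMPI_Comm_get_attr(comm, keyval, val, flag);
  t_getattr += MPI_Wtime() - t0;
  return err;
}

int MPI_Finalize(void)
{
  printf("time in MPI_Barrier: %g s, in MPI_Comm_get_attr: %g s\n",
         t_barrier, t_getattr);
  return PMPI_Finalize();
}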


-Original Message-
From: Barry Smith [mailto:bsm...@mcs.anl.gov] 
Sent: Thursday, October 13, 2016 11:46 PM
To: overh...@capesim.com
Cc: Jed Brown; PETSc
Subject: Re: [petsc-users] large PetscCommDuplicate overhead


  Matthew,

Thanks for the additional information. This is all very weird since the
number of calls made to PetscCommDuplicate() is the same regardless of
geometry, and the time per call shouldn't depend on the geometry.

Would you be able to do another set of tests where you track the time in
MPI_Attr_get() and MPI_Barrier() instead of PetscCommDuplicate()? It could
be that Cray did something "funny" in their build of PETSc.

   You could also try using the module petsc/3.7.3 instead of the cray-petsc
module

   Thanks

  Barry




> On Oct 12, 2016, at 10:48 AM, Matthew Overholt 
wrote:
> 
> Jed,
> 
> I realize that the PetscCommDuplicate (PCD) overhead I am seeing must 
> be only indirectly related to the problem size, etc., and I wouldn't 
> be surprised if it was an artifact of some sort related to my specific 
> algorithm.  So you may not want to pursue this much further.  However, 
> I did make three runs using the same Edison environment and code but 
> different input geometry files.  Earlier I found a strong dependence 
> on the number of processes, so for this test I ran all of the tests on 
> 1 node with 8 processes (N=1, n=8).  What I found was that the amount 
> of PCD overhead was geometry dependent, not size dependent.  A 
> moderately-sized simple geometry (with relatively few ghosted vertices 
> at the simple-planar interfaces) had no PCD overhead, whereas both 
> small and large complex geometries (with relatively more ghosted 
> vertices at the more-complex interfaces) had 5 - 6% PCD overhead.  The log
files follow.
> 
> Thanks,
> Matt Overholt
> 




Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Barry Smith

> On Oct 17, 2016, at 10:47 AM, Kong, Fande  wrote:
> 
> 
> 
> On Thu, Oct 13, 2016 at 8:21 PM, Barry Smith  wrote:
> 
>   Fande,
> 
>   What SNES method are you using?  If you use SNESKSPONLY I think it is OK; 
> it will solve for the norm-minimizing least-squares solution during the one 
> KSPSolve() and then return.
> 
> The problem we are currently working on is a linear problem, but it could be 
> extended to be nonlinear.  Yes, you are right: "ksponly" indeed works, and 
> returns the right solution. But the norm of the residual could still confuse 
> users because it is not close to zero.
> 
>  
> 
>   Yes, if you use SNESNEWTONLS or others though the SNES solver will, as you 
> say, think that progress has not been made.
> 
>I do not like what you propose to do, changing the right hand side of the 
> system the user provides is a nasty and surprising side effect.
> 
> I do not like this way either. The reason I posted this code here is that I 
> want to let you know what is inconsistent between the nonlinear solvers and 
> the linear solvers. 

   You could have SNESSolve_KSPONLY subtract off the null-space component of 
the right-hand side initially, just as KSPSolve() does, with code like:

  ierr = MatGetTransposeNullSpace(pmat,&nullsp);CHKERRQ(ierr);
  if (nullsp) {
    ierr = VecDuplicate(ksp->vec_rhs,&btmp);CHKERRQ(ierr);
    ierr = VecCopy(ksp->vec_rhs,btmp);CHKERRQ(ierr);
    ierr = MatNullSpaceRemove(nullsp,btmp);CHKERRQ(ierr);
    vec_rhs      = ksp->vec_rhs;
    ksp->vec_rhs = btmp;
  }

It is not perfect (see my comment below), but it gets what you want and "kind 
of" makes the residuals decrease as they do when calling KSPSolve() directly.

> 
>  
> 
> What is your goal? To make it look like the SNES system has had a 
> residual norm reduction?
> 
> Yes, I would like to make SNES have a residual reduction.  Possibly, we could 
> add something in the converged_test function? For example, the null-space 
> component could be temporarily removed from the residual vector when 
> evaluating the residual norm, if the system has a null space?

There is likely to always be confusion, in the linear case or with any 
least-squares-type solver. The true residual is not really decreasing past a 
certain point, but if the solver only sees the consistent part then it looks 
like the residual is decreasing. 

   The problem is that none of this stuff (the PETSc model and API) was 
designed for the generality of inconsistent least squares problems and we have 
just been bolting on more support over time without enhancing the model. For 
example we could introduce the concept of a consistent residual and an 
inconsistent residual and have the default monitors display both when they are 
different; instead we just display "the residual norm" without clarity of 
"what" residual norm.  We should think about this and wait for Jed to come up 
with the ideal design :-)


  Barry




> 
>  
> 
>We could generalize your question and ask what about solving for nonlinear 
> problems: find the minimal norm solution of min_x || F(x) - b||. This may or 
> may not belong in Tao; currently SNES doesn't do any kind of nonlinear least 
> squares.
> 
> 
> It would be great if we could add this kind of solver. Tao does have one, I 
> think.  I would like to contribute something like this later (of course, if 
> you are OK with this algorithm), when we move to nonlinear problems in our 
> applications. 
> 
> Fande Kong,
>  
> 
>   Barry
> 
> 
> > On Oct 13, 2016, at 5:20 PM, Kong, Fande  wrote:
> >
> > One more question.
> >
> > Suppose that we are solving the singular linear system Ax = b. N(A) is the 
> > null space of A, and N(A^T) is the null space of the transpose of A.
> >
> > The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r - 
> > b_n. Here b_n  in N(A^T),  and b_r in R(A).  During each nonlinear 
> > iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to 
> > the Krylov space during the linear iterations. Before the actual solve 
> > "(*ksp->ops->solve)(ksp)" for \delta x,  a temporary copy of F(x) is made, 
> > F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x ) 
> > = A(x+\delta x)-b_r - b_n.
> >
> > F(x+\delta x ) always contains the vector b_n, and then the algorithm never 
> > converges because the norm of F is at least 1.
> >
> > Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed 
> > from F?
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    VecDuplicate(ksp->vec_rhs,&btmp);
> >    VecCopy(ksp->vec_rhs,btmp);
> >    MatNullSpaceRemove(nullsp,btmp);
> >    vec_rhs      = ksp->vec_rhs;
> >    ksp->vec_rhs = btmp;
> > }
> >
> > should  be changed to
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> > }
> > ???
> >
> > Or other solutions to this issue?
> >
> >
> > Fande Kong,
> >
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 8:23 AM, Matthew 

Re: [petsc-users] SLEPc: Convergence Problems

2016-10-17 Thread Julian Andrej
Do you precondition your eigenvalue problem? If not, you should. Let
us know what structure your matrix has and which blocks (if there are
any) include which physics.

Regards
Julian
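For eigenvalues clustered away from the extremes of the spectrum, the usual
SLEPc answer to Julian's question is a spectral transformation rather than a
plain preconditioner. A minimal sketch for the generalized problem follows;
the target 0.0 is a placeholder for wherever the wanted cluster actually sits:

#include <slepceps.h>

EPS eps;
ST  st;
EPSCreate(PETSC_COMM_WORLD, &eps);
EPSSetOperators(eps, A, B);                       /* A x = lambda B x */
EPSSetProblemType(eps, EPS_GHEP);                 /* symmetric definite pencil */
EPSGetST(eps, &st);
STSetType(st, STSINVERT);                         /* shift-and-invert: factor A - sigma B */
EPSSetTarget(eps, 0.0);                           /* sigma near the desired eigenvalues */
EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
EPSSetFromOptions(eps);                           /* e.g. -st_ksp_type preonly -st_pc_type lu */
EPSSolve(eps);

Shift-and-invert maps a cluster near sigma to well-separated extremal
eigenvalues, which is usually what Krylov-Schur needs in order to converge.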

On Mon, Oct 17, 2016 at 5:30 PM, Christopher Pierce  wrote:
> I've implemented my application using MatGetSubMatrix and the solvers
> appear to be converging correctly now, just slowly, which I assume is due
> to the clustering of eigenvalues inherent to the problem I'm solving.  I
> think that this should be enough to get me on track to solving problems
> with it.
>
> Thanks,
>
> Chris
>
>
> On 10/14/16 01:43, Christopher Pierce wrote:
>> Thank You,
>>
>> That looks like what I need to do if the highly degenerate eigenpairs
>> are my problem.  I'll try that out this week and see if that helps.
>>
>> Chris
>>
>>
>>
>>
>> On 10/13/16 20:01, Barry Smith wrote:
>>>   I would use MatGetSubMatrix() to pull out the part of the matrix you care 
>>> about and hand that matrix off to SLEPc.
>>>
>>>   Others prefer to remove the Dirichlet boundary value locations while 
>>> doing the finite element assembly; this way those locations never appear in 
>>> the matrix.
>>>
>>>The end result is the same: you have the slightly smaller matrix of 
>>> interest to compute the eigenvalues from.
>>>
>>>
>>> Barry
>>>
 On Oct 13, 2016, at 5:48 PM, Christopher Pierce  wrote:

 Hello All,

 As there isn't a SLEPc specific list, it was recommended that I bring my
 question here.  I am using SLEPc to solve a generalized eigenvalue
 problem generated as part of the Finite Element Method, but am having
 difficulty getting the diagonalizer to converge.  I am worried that the
 method used to set boundary conditions in the matrix is creating the
 problem and am looking for other people's input.

 In order to set the boundary conditions, I find the list of IDs that
 should be zero in the resulting eigenvectors and then use
 MatZeroRowsColumns to zero the rows and columns and in the matrix A
 insert a large value such as 1E10 on each diagonal element that was
 zeroed and likewise for the B matrix except with the value 1.0.  That
 way the eigenvalues resulting from those solutions are on the order of
 1E10 and are outside of the region of interest for my problem.

 When I tried to diagonalize the matrices I could only get converged
 solutions from the rqcg method, which I have found does not scale well with
 my problem.  When using any other method, the approximate error of the
 eigenpairs hovers between 1E00 and 1E01 until it reaches the max number
 of iterations.  Could having so many identical eigenvalues (~1,000) in
 the spectrum be causing this to happen even if they are far outside of
 the range of interest?

 Thanks,

 Chris Pierce
 WPI Center for Computation Nano-Science


>>
>
>


Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Kong, Fande
On Thu, Oct 13, 2016 at 8:21 PM, Barry Smith  wrote:

>
>   Fande,
>
>   What SNES method are you using?  If you use SNESKSPONLY I think it is
> OK; it will solve for the norm-minimizing least-squares solution during the
> one KSPSolve() and then return.
>

The problem we are currently working on is a linear problem, but it could
be extended to be nonlinear.  Yes, you are right: "ksponly" indeed works,
and returns the right solution. But the norm of the residual could still
confuse users because it is not close to zero.



>
>   Yes, if you use SNESNEWTONLS or others though the SNES solver will, as
> you say, think that progress has not been made.
>
>I do not like what you propose to do, changing the right hand side of
> the system the user provides is a nasty and surprising side effect.
>

I do not like this way either. The reason I posted this code here is that I
want to let you know what is inconsistent between the nonlinear solvers
and the linear solvers.



>
> What is your goal? To make it look like the SNES system has had a
> residual norm reduction?
>

Yes, I would like to make SNES have a residual reduction.  Possibly, we
could add something in the converged_test function? For example, the
null-space component could be temporarily removed from the residual vector
when evaluating the residual norm, if the system has a null space?
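A sketch of what that custom test might look like, assuming a context struct
carrying the transpose null space (the names here are invented for
illustration): the test copies F, strips the b_n component, and feeds the
consistent norm to the default test.

#include <petscsnes.h>

typedef struct { MatNullSpace tnullsp; } ConsistentCtx;   /* hypothetical */

static PetscErrorCode ConvergedConsistent(SNES snes, PetscInt it, PetscReal xnorm,
                                          PetscReal snorm, PetscReal fnorm,
                                          SNESConvergedReason *reason, void *ctx)
{
  ConsistentCtx *user = (ConsistentCtx*)ctx;
  Vec            F, Ftmp;
  PetscErrorCode ierr;

  ierr = SNESGetFunction(snes, &F, NULL, NULL);CHKERRQ(ierr);
  ierr = VecDuplicate(F, &Ftmp);CHKERRQ(ierr);
  ierr = VecCopy(F, Ftmp);CHKERRQ(ierr);
  ierr = MatNullSpaceRemove(user->tnullsp, Ftmp);CHKERRQ(ierr); /* strip b_n */
  ierr = VecNorm(Ftmp, NORM_2, &fnorm);CHKERRQ(ierr);           /* consistent norm */
  ierr = VecDestroy(&Ftmp);CHKERRQ(ierr);
  return SNESConvergedDefault(snes, it, xnorm, snorm, fnorm, reason, NULL);
}

/* installed with: SNESSetConvergenceTest(snes, ConvergedConsistent, &cctx, NULL); */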



>
>We could generalize your question and ask what about solving for
> nonlinear problems: find the minimal norm solution of min_x || F(x) - b||.
> This may or may not belong in Tao; currently SNES doesn't do any kind of
> nonlinear least squares.
>


It would be great if we could add this kind of solver. Tao does have one,
I think.  I would like to contribute something like this later (of
course, if you are OK with this algorithm), when we move to
nonlinear problems in our applications.

Fande Kong,


>
>   Barry
>
>
> > On Oct 13, 2016, at 5:20 PM, Kong, Fande  wrote:
> >
> > One more question.
> >
> > Suppose that we are solving the singular linear system Ax = b. N(A) is
> the null space of A, and N(A^T) is the null space of the transpose of A.
> >
> > The linear system is solved using SNES, that is, F(x) = Ax-b = Ax -b_r -
> b_n. Here b_n  in N(A^T),  and b_r in R(A).  During each nonlinear
> iteration, a linear system A \delta x = F(x) is solved. N(A) is applied to
> the Krylov space during the linear iterations. Before the actual solve
> "(*ksp->ops->solve)(ksp)" for \delta x,  a temporary copy of F(x) is made,
> F_tmp. N(A^T) is applied to F_tmp. We will get a \delta x.  F(x+\delta x )
> = A(x+\delta x)-b_r - b_n.
> >
> > F(x+\delta x ) always contains the vector b_n, and then the algorithm
> never converges because the norm of F is at least 1.
> >
> > Should we apply N(A^T) to F instead of F_tmp so that b_n can be removed
> from F?
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    VecDuplicate(ksp->vec_rhs,&btmp);
> >    VecCopy(ksp->vec_rhs,btmp);
> >    MatNullSpaceRemove(nullsp,btmp);
> >    vec_rhs      = ksp->vec_rhs;
> >    ksp->vec_rhs = btmp;
> > }
> >
> > should  be changed to
> >
> > MatGetTransposeNullSpace(pmat,&nullsp);
> > if (nullsp) {
> >    MatNullSpaceRemove(nullsp,ksp->vec_rhs);
> > }
> > ???
> >
> > Or other solutions to this issue?
> >
> >
> > Fande Kong,
> >
> >
> >
> >
> >
> > On Thu, Oct 13, 2016 at 8:23 AM, Matthew Knepley 
> wrote:
> > On Thu, Oct 13, 2016 at 9:06 AM, Kong, Fande  wrote:
> >
> >
> > On Wed, Oct 12, 2016 at 10:21 PM, Jed Brown  wrote:
> > Barry Smith  writes:
> > >   I would make that a separate routine that the users would call first.
> >
> > We have VecMDot and VecMAXPY.  I would propose adding
> >
> >   VecQR(PetscInt nvecs,Vec *vecs,PetscScalar *R);
> >
> > (where R can be NULL).
> >
> > What does R mean here?
> >
> > It means the coefficients of the old basis vectors in the new basis.
> >
> >   Matt
> >
> > If nobody is working on this, I will take a try.
> >
> > Fande,
> >
> >
> > Does anyone use the "Vecs" type?
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
>
>
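The VecQR routine floated in the quoted exchange above does not exist in
PETSc; a rough sketch of how one might prototype it on top of the existing
VecMDot/VecMAXPY kernels (classical Gram-Schmidt, R stored column-major,
nvecs assumed small) could look like:

#include <petscvec.h>

/* hypothetical VecQR: orthonormalize vecs[0..nvecs-1] in place */
PetscErrorCode VecQR(PetscInt nvecs, Vec *vecs, PetscScalar *R)
{
  PetscScalar    dots[64];   /* sketch assumes nvecs <= 64 */
  PetscReal      nrm;
  PetscInt       i, j;
  PetscErrorCode ierr;

  for (i = 0; i < nvecs; i++) {
    if (i > 0) {
      /* project out the already-orthonormalized vectors */
      ierr = VecMDot(vecs[i], i, vecs, dots);CHKERRQ(ierr);
      if (R) for (j = 0; j < i; j++) R[j + i*nvecs] = dots[j];
      for (j = 0; j < i; j++) dots[j] = -dots[j];
      ierr = VecMAXPY(vecs[i], i, dots, vecs);CHKERRQ(ierr);
    }
    ierr = VecNormalize(vecs[i], &nrm);CHKERRQ(ierr);
    if (R) R[i + i*nvecs] = nrm;
  }
  return 0;
}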


Re: [petsc-users] SLEPc: Convergence Problems

2016-10-17 Thread Christopher Pierce
I've implemented my application using MatGetSubMatrix and the solvers
appear to be converging correctly now, just slowly, which I assume is due
to the clustering of eigenvalues inherent to the problem I'm solving.  I
think that this should be enough to get me on track to solving problems
with it.

Thanks,

Chris
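For reference, the submatrix extraction Chris describes boils down to
something like the following sketch; ninterior and interior (the list of
non-Dirichlet rows) are placeholders for the application's own data:

IS  is;
Mat Asub, Bsub;
ISCreateGeneral(PETSC_COMM_WORLD, ninterior, interior, PETSC_COPY_VALUES, &is);
MatGetSubMatrix(A, is, is, MAT_INITIAL_MATRIX, &Asub); /* MatCreateSubMatrix() in PETSc >= 3.8 */
MatGetSubMatrix(B, is, is, MAT_INITIAL_MATRIX, &Bsub);
/* hand Asub/Bsub to SLEPc via EPSSetOperators(eps, Asub, Bsub) */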


On 10/14/16 01:43, Christopher Pierce wrote:
> Thank You,
>
> That looks like what I need to do if the highly degenerate eigenpairs
> are my problem.  I'll try that out this week and see if that helps.
>
> Chris
>
>
>
>
> On 10/13/16 20:01, Barry Smith wrote:
>>   I would use MatGetSubMatrix() to pull out the part of the matrix you care 
>> about and hand that matrix off to SLEPc.
>>
>>   Others prefer to remove the Dirichlet boundary value locations while doing 
>> the finite element assembly; this way those locations never appear in the 
>> matrix. 
>>
>>The end result is the same: you have the slightly smaller matrix of 
>> interest to compute the eigenvalues from.
>>
>>
>> Barry
>>
>>> On Oct 13, 2016, at 5:48 PM, Christopher Pierce  wrote:
>>>
>>> Hello All,
>>>
>>> As there isn't a SLEPc specific list, it was recommended that I bring my
>>> question here.  I am using SLEPc to solve a generalized eigenvalue
>>> problem generated as part of the Finite Element Method, but am having
>>> difficulty getting the diagonalizer to converge.  I am worried that the
>>> method used to set boundary conditions in the matrix is creating the
>>> problem and am looking for other people's input.
>>>
>>> In order to set the boundary conditions, I find the list of IDs that
>>> should be zero in the resulting eigenvectors and then use
>>> MatZeroRowsColumns to zero the rows and columns and in the matrix A
>>> insert a large value such as 1E10 on each diagonal element that was
>>> zeroed and likewise for the B matrix except with the value 1.0.  That
>>> way the eigenvalues resulting from those solutions are on the order of
>>> 1E10 and are outside of the region of interest for my problem.
>>>
>>> When I tried to diagonalize the matrices I could only get converged
>>> solutions from the rqcg method, which I have found does not scale well with
>>> my problem.  When using any other method, the approximate error of the
>>> eigenpairs hovers between 1E00 and 1E01 until it reaches the max number
>>> of iterations.  Could having so many identical eigenvalues (~1,000) in
>>> the spectrum be causing this to happen even if they are far outside of
>>> the range of interest?
>>>
>>> Thanks,
>>>
>>> Chris Pierce
>>> WPI Center for Computation Nano-Science
>>>
>>>
>






Re: [petsc-users] Algorithms to remove null spaces in a singular system

2016-10-17 Thread Kong, Fande
Hi Barry,

Thanks so much for this work. I will check out your branch and take a look.

Thanks again!

Fande Kong,

On Thu, Oct 13, 2016 at 8:10 PM, Barry Smith  wrote:

>
>   Fande,
>
>I have done some work, mostly understanding and documentation, on
> handling singular systems with KSP in the branch
> barry/improve-matnullspace-usage. This also includes a new example that
> solves both a symmetric case and a case where nullspace(A) != nullspace(A'):
> src/ksp/ksp/examples/tutorials/ex67.c
>
>My understanding is now documented in the manual page for KSPSolve();
> part of it is quoted below:
>
> ---
>If you provide a matrix on which you have called MatSetNullSpace() and
> MatSetTransposeNullSpace(), KSP will use that information to solve singular
> systems in the least squares sense with a norm-minimizing solution.
> $
> $   A x = b, where b = b_p + b_t and b_t is not in the range of A (and
> $   hence, by the fundamental theorem of linear algebra, is in the
> $   nullspace(A'); see MatSetNullSpace()).
> $
> $   KSP first removes b_t, producing the linear system A x = b_p (which
> $   has multiple solutions), and solves this to find the ||x||-minimizing
> $   solution (and hence it finds the solution x orthogonal to the
> $   nullspace(A)). The algorithm is simply: in each iteration of the
> $   Krylov method we remove the nullspace(A) from the search direction;
> $   thus the solution, which is a linear combination of the search
> $   directions, has no component in the nullspace(A).
> $
> $   We recommend always using GMRES for such singular systems.
> $   If nullspace(A) = nullspace(A') (note that symmetric matrices always
> $   satisfy this property) then both left and right preconditioning will
> $   work. If nullspace(A) != nullspace(A') then left preconditioning will
> $   work but right preconditioning may not work (or it may).
>
>Developer Note: The reason we cannot always solve nullspace(A) !=
> nullspace(A') systems with right preconditioning is that we need to remove
> at each iteration the nullspace(AB) from the search direction. While we
> know the nullspace(A), the nullspace(AB) equals B^-1 times the
> nullspace(A), but except for trivial preconditioners such as diagonal
> scaling we cannot apply the inverse of the preconditioner to a vector and
> thus cannot compute the nullspace(AB).
> --
>
> Any feedback on the correctness or clarity of the material is
> appreciated. The punch line is that right preconditioning cannot be trusted
> with nullspace(A) != nullspace(A'); I don't see any fix for this.
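For readers following along, attaching the information the quoted text relies
on looks roughly like this sketch (here the null space is just the constant
vector, as for a pure-Neumann Laplacian; a nonsymmetric problem would pass a
different basis for each call):

MatNullSpace nsp;
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp); /* constants */
MatSetNullSpace(A, nsp);          /* nullspace(A): deflates Krylov directions  */
MatSetTransposeNullSpace(A, nsp); /* nullspace(A'): strips inconsistent b part */
MatNullSpaceDestroy(&nsp);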
>
>   Barry
>
>
>
> > On Oct 11, 2016, at 3:04 PM, Kong, Fande  wrote:
> >
> >
> >
> > On Tue, Oct 11, 2016 at 12:18 PM, Barry Smith 
> wrote:
> >
> > > On Oct 11, 2016, at 12:01 PM, Kong, Fande  wrote:
> > >
> > >
> > >
> > > On Tue, Oct 11, 2016 at 10:39 AM, Barry Smith 
> wrote:
> > >
> > > > On Oct 11, 2016, at 9:33 AM, Kong, Fande  wrote:
> > > >
> > > > Barry, Thanks so much for your explanation. It helps me a lot.
> > > >
> > > > On Mon, Oct 10, 2016 at 4:00 PM, Barry Smith 
> wrote:
> > > >
> > > > > On Oct 10, 2016, at 4:01 PM, Kong, Fande 
> wrote:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I know how to remove the null spaces from a singular system by
> creating a MatNullSpace and attaching it to Mat.
> > > > >
> > > > > I was really wondering: what is the philosophy behind this? What are
> the exact algorithms we are using in PETSc right now?  Where do we deal
> with this: the preconditioner, linear solver, or nonlinear solver?
> > > >
> > > >It is in the Krylov solver.
> > > >
> > > >The idea is very simple. Say you have a singular A with null
> space N (meaning all vectors Ny are in the null space of A, so N is tall and
> skinny) and you want to solve A x = b where b is in the range of A. This
> problem has an infinite number of solutions Ny + x*, since A (Ny + x*)
> = ANy + Ax* = Ax* = b, where x* is the "minimum norm solution", that is, Ax* =
> b and x* has the smallest norm of all solutions.
> > > >
> > > >   With left preconditioning, B A x = B b, GMRES, for example,
> normally computes the solution in the Krylov space as alpha_1 Bb + alpha_2 BABb +
> alpha_3 BABABb + ..., but the B operator will likely introduce some
> component in the direction of the null space, so as GMRES continues, the
> "solution" computed will grow larger and larger, with a large component in
> the null space of A. Hence we simply modify GMRES a tiny bit by building
> the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb + alpha_3 (I-N)BABABb + ...
> > > >
> > > >  Does "I" mean an identity matrix? Could you possibly send me a link
> for this GMRES implementation, that is, how PETSc does this in the actual
> code?
> > >
> > >Yes.
> > >
> > > It is in the helper routine KSP_PCApplyBAorAB()
> > > #undef __FUNCT__
> > > #define