On Monday, October 10, 2016, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
> > On Oct 10, 2016, at 4:01 PM, Kong, Fande <fande.k...@inl.gov> wrote:
> >
> > Hi All,
> >
> > I know how to remove the null space from a singular system by
> > creating a MatNullSpace and attaching it to the Mat.
> >
> > I was really wondering: what is the philosophy behind this? What
> > exact algorithm is PETSc using right now? And where is this handled:
> > in the preconditioner, the linear solver, or the nonlinear solver?
>
>    It is in the Krylov solver.
>
>    The idea is very simple. Say you have a singular A with null space
> basis N (that is, every vector Ny is in the null space of A, so N is
> tall and skinny) and you want to solve A x = b where b is in the range
> of A. This problem has an infinite number of solutions Ny + x*, since
> A (Ny + x*) = A Ny + A x* = A x* = b, where x* is the "minimum norm
> solution"; that is, A x* = b and x* has the smallest norm of all
> solutions.
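
For intuition, take the 2x2 example A = [1 0; 0 0], so N = (0, 1)^T,
and say b = (1, 0)^T: every x = (1, t)^T solves A x = b, and the
minimum norm solution is x* = (1, 0)^T.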
>
>       With left preconditioning B A x = B b, GMRES, for example,
> normally computes the solution as alpha_1 Bb + alpha_2 BABb +
> alpha_3 BABABb + ..., but the B operator will likely introduce some
> component in the direction of the null space, so as GMRES continues
> the "solution" computed will grow larger and larger, with a large
> component in the null space of A. Hence we simply modify GMRES a tiny
> bit by building the solution from alpha_1 (I-N)Bb + alpha_2 (I-N)BABb
> + alpha_3 (I-N)BABABb + ...; that is, we remove from each new
> direction anything in the direction of the null space. Hence the null
> space doesn't appear directly in the preconditioner, just in the KSP
> method. If you attach a null space to the matrix, the KSP just
> automatically uses it to do the removal above.
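
For reference, the attach-and-solve pattern described above is only a
few calls. A minimal sketch, assuming A, b, x, and a KSP ksp already
exist, and assuming the null space is just the constant vector (e.g.,
a pure Neumann problem):

  MatNullSpace   nsp;
  PetscErrorCode ierr;
  /* null space = span of the constant vector (has_cnst = PETSC_TRUE) */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A, nsp);CHKERRQ(ierr);    /* KSP will now do the removal */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);  /* the Mat keeps its own reference */
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);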


Barry, if the identity matrix I is of size M x M (which is also the size
of A), then are you augmenting N (size M x R; R < M) with zero columns to
make I - N possible? If so, that means only the first R values of the
vector Bb are used to scale the zero eigenvectors of A. Does this choice
affect the iteration count, i.e., could one arbitrarily choose any R
values of the vector Bb to scale the zero eigenvectors of A?
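
Presumably "(I-N)" above is shorthand for the orthogonal projector
I - N N^T, which is M x M when N has orthonormal columns, so no
augmentation of N would be needed; this is the projection that
MatNullSpaceRemove() applies. A minimal sketch of that projection,
with a hypothetical helper and a hypothetical array q[0..R-1] assumed
to hold an orthonormal basis of the null space:

  #include <petscvec.h>

  /* Sketch: v <- (I - N N^T) v = v - sum_i (q_i, v) q_i,
     where q[0..R-1] is an orthonormal null space basis (illustration only). */
  PetscErrorCode ProjectOutNullSpace(Vec q[], PetscInt R, Vec v)
  {
    PetscScalar    dot;
    PetscInt       i;
    PetscErrorCode ierr;
    for (i = 0; i < R; i++) {
      ierr = VecDot(v, q[i], &dot);CHKERRQ(ierr);  /* dot = (q_i, v) */
      ierr = VecAXPY(v, -dot, q[i]);CHKERRQ(ierr); /* v -= dot * q_i */
    }
    return 0;
  }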

>
>     With right preconditioning the solution is built from alpha_1 b   +
> alpha_2 ABb + alpha_3 ABABb + .... and again we apply (I-N) to each term to
> remove any part that is in the null space of A.
>
>    Now consider the case A y = b where b is NOT in the range of A.
> Then the problem has no "true" solution, but one can find a least
> squares solution by rewriting b = b_par + b_perp, where b_par is in
> the range of A and b_perp is orthogonal to the range of A, and solving
> instead A x = b_par. If you provide a MatSetTransposeNullSpace() then
> KSP automatically uses it to remove b_perp from the right hand side
> before starting the KSP iterations.
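
A minimal sketch of that, assuming t is a Vec already holding a
normalized basis vector for the null space of A^T (a transpose null
space of dimension one):

  MatNullSpace tnsp;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &t, &tnsp);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A, tnsp);CHKERRQ(ierr); /* KSP strips b_perp from b */
  ierr = MatNullSpaceDestroy(&tnsp);CHKERRQ(ierr);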
>
>   The manual pages for MatSetNullSpace() and MatSetTransposeNullSpace()
> discuss this and explain how it relates to the fundamental theorem of
> linear algebra.
>
>   Note that for symmetric matrices the two null spaces are the same.
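
Presumably, for a symmetric A one can therefore reuse the same
MatNullSpace object for both attachments:

  ierr = MatSetNullSpace(A, nsp);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A, nsp);CHKERRQ(ierr); /* same object; it is reference counted */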
>
>   Barry
>
>
>    A different note: this "trick" is not a cure-all for a totally
> inappropriate preconditioner. For example, if one uses a direct
> (sparse or dense) solver or ILU(k) as the preconditioner, one can end
> up with a very bad solver: the factorization will likely produce a
> very small pivot at some point, so the triangular solves applied in
> the preconditioner can produce HUGE changes in the solution (that are
> not physical), and the preconditioner basically produces garbage. On
> the other hand, sometimes it works out ok.
>
>
> >
> >
> > Fande Kong,
>
>

-- 
--Amneet
