Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-10-05 Thread zakaryah
Sorry again about the formatting - I struggle with notation in email.

I think I see what you are saying - my augmented matrix should differ in
the last row between the one used to calculate the nullspace, u, and the
one the SNES uses as the Jacobian.  If I construct the SNES Jacobian A so
that u is truly in its nullspace, then I can just add u to the nullspace of
A and let PETSc take care of the projection?  I will try this - thanks for
the excellent suggestion.
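
For concreteness, a minimal sketch of that step, assuming u is a Vec holding
the computed tangent (the variable names here are illustrative, not from the
original code):

  MatNullSpace nullsp;

  VecNormalize(u,NULL);    /* nullspace basis vectors should have unit norm */
  MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&u,&nullsp);
  MatSetNullSpace(A,nullsp);
  MatNullSpaceDestroy(&nullsp);    /* A keeps its own reference */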

On Fri, Oct 5, 2018 at 4:42 PM Matthew Knepley  wrote:

> On Fri, Oct 5, 2018 at 4:15 PM zakaryah  wrote:
>
>> Thanks Matt - I was trying to write it in linear algebra notation, but
>> the nature of the problem might produce some confusion.  I was also
>> originally a bit confused by a fairly major typo in the paper I was
>> following, but I'm almost sure I have it clear now.  I'll try to be
>> concise - the meat is in the last two paragraphs.
>>
>> My system of nonlinear equations, F(x)=0, has dimension m.  Traditional
>> methods for solving, like Newton's method, tend to reach points at which
>> the Jacobian becomes singular.  I'm trying the homotopy trick, which
>> involves introducing an additional scalar variable, s, and solving the
>> system H(x,s) = 0, where H(x,s) = sF(x) + (1-s)(x-x_0), starting at s=0 and
>> following the zero curve to, hopefully, a solution at s=1.
>>
>> The homotopy map H only has m components, so its Jacobian J is m x
>> (m+1), and has rank m.  I need to find a vector u in its null space: J u
>> = 0.  Note that u is size m+1.  Rather than solve J u = 0, I make an
>> augmented matrix A, which is (m+1)x(m+1), and consists of J in its first m
>> rows, then a somewhat arbitrary vector in the last row.  Then the vector
>> u is the solution to A u = b, where b is zero everywhere except its last
>> element.  This solve seems to be working fine.
>>
>> The SNES should then solve for a particular solution y of the Newton
>> update, J y = -H(x,s).  A (unique) particular solution can also be found by
>> solving A y = [-H(x,s) 0]^T.  Finally - and this is the step I can't figure
>> out how to implement properly, I want to find the norm-minimizing solution
>> z for the Newton update:
>> z = y - u(y^T u)/(u^T u).
>>
>> I doubt there is any way to do this elegantly within the existing
>> routines, because u is not in the nullspace of A.  Rather, it's in the
>> nullspace of J, which is the submatrix I care about.  I think what I need
>> is a way to just apply the orthogonal projection in the preceding
>> paragraph, after the SNES solves the linear system but before the Newton
>> update is applied.  I don't know the SNES internals well at all.  Is it
>> possible to hook this in somehow, or put it in an existing function?
>>
>
> I do not claim to fully understand the above (it would be easier for me to
> write the equations explicitly on their own lines). However, the issue with
> nullspaces is soluble, I think. So you say that
>
>   J \in R^{m \times (m+1)} is full rank
>
> so that its nullspace has dimension 1 and is spanned by u,
>
>   J u = 0.
>
> Now you augment J to get a square matrix A by adding a row,
>
>   /  J  \       /  J u  \     / 0 \
>   \ v^T /  u =  \ v^T u /  =  \ 0 /
>
> if v is orthogonal to u. Thus, just choose v from the m-dimensional
> space orthogonal to u.
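
One way to construct such a v, as a sketch (w is any vector not parallel to
u, e.g. random; the names are illustrative, not from the thread):

  Vec         v,w;
  PetscScalar wu,uu;

  VecDuplicate(u,&w);
  VecSetRandom(w,NULL);      /* arbitrary starting vector */
  VecDuplicate(u,&v);
  VecCopy(w,v);
  VecDot(w,u,&wu);
  VecDot(u,u,&uu);
  VecAXPY(v,-wu/uu,u);       /* v = w - u (w^T u)/(u^T u), so v^T u = 0 */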
>
>   Thanks,
>
>  Matt
>
>
>> Thanks again for all your help.
>>
>>
>> On Oct 1, 2018 6:49 AM, "Matthew Knepley"  wrote:
>>
>> On Sun, Sep 30, 2018 at 6:06 PM zakaryah  wrote:
>>
>>> OK, thanks.
>>>
>>> I'm using a composite DM, DM_packer.  To make a separate KSP, I did the
>>> following in the FormJacobian() routine, after assembling the Jacobian
>>> matrix A and the RHS for the tangent vector, b:
>>>
>>>- KSPCreate(PETSC_COMM_WORLD,&ksp)
>>>- KSPSetDM(ksp,DM_packer)
>>>- KSPSetDMActive(ksp,PETSC_FALSE) - because I want to set the
>>>operators
>>>- KSPSetOperators(ksp,A,P)
>>>- VecSet(n,0) - set initial guess to zero
>>>- KSPSolve(ksp,b,n) - this solve works correctly
>>>- VecNormalize(n,NULL)
>>>-
>>>
>>> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>>>- MatSetNullSpace(A,nullsp)
>>>
>>> Then, with -snes_type newtonls -pc_type none -ksp_monitor
>>> -ksp_monitor_true_residual -ksp_view, the output for the KSPSolve described
>>> above, i.e. for the tangent vector, looks correct:
>>>
>>>   0 KSP preconditioned resid norm 1.e+03 true resid norm
>>> 1.e+03 ||r(i)||/||b|| 1.e+00
>>> ...
>>>
>>> 185 KSP preconditioned resid norm 9.900713131874e-03 true resid norm
>>> 9.900713131904e-03 ||r(i)||/||b|| 9.900713131904e-06
>>>
>>> Linear solve converged due to CONVERGED_RTOL iterations 185
>>>
>>> KSP Object: 1 MPI processes
>>>
>>>   type: gmres
>>>
>>> restart=30, using Classical (unmodified) Gram-Schmidt
>>> Orthogonalization with no iterative refinement
>>>
>>> happy breakdown tolerance 1e-30
>>>
>>>   maximum iterations=1, initial guess is zero
>>>
>>>   

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-10-05 Thread zakaryah
Thanks Matt - I was trying to write it in linear algebra notation, but the
nature of the problem might produce some confusion.  I was also originally
a bit confused by a fairly major typo in the paper I was following, but I'm
almost sure I have it clear now.  I'll try to be concise - the meat is in
the last two paragraphs.

My system of nonlinear equations, F(x)=0, has dimension m.  Traditional
methods for solving, like Newton's method, tend to reach points at which
the Jacobian becomes singular.  I'm trying the homotopy trick, which
involves introducing an additional scalar variable, s, and solving the
system H(x,s) = 0, where H(x,s) = sF(x) + (1-s)(x-x_0), starting at s=0 and
following the zero curve to, hopefully, a solution at s=1.
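
A sketch of that residual in code, with an assumed user context holding x_0
(the struct layout and names are my assumptions, not from the original post):

  typedef struct { Vec x0; } AppCtx;   /* hypothetical user context */

  /* H(x,s) = s F(x) + (1-s)(x - x0); Fx holds the already-computed F(x) */
  PetscErrorCode FormH(Vec x,PetscScalar s,Vec Fx,Vec H,AppCtx *ctx)
  {
    VecCopy(Fx,H);                 /* H  = F(x)     */
    VecScale(H,s);                 /* H  = s F(x)   */
    VecAXPY(H,1.0-s,x);            /* H += (1-s) x  */
    VecAXPY(H,-(1.0-s),ctx->x0);   /* H -= (1-s) x0 */
    return 0;
  }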

The homotopy map H only has m components, so its Jacobian J is m x (m+1),
and has rank m.  I need to find a vector u in its null space: J u = 0.
Note that u is size m+1.  Rather than solve J u = 0, I make an augmented
matrix A, which is (m+1)x(m+1), and consists of J in its first m rows, then
a somewhat arbitrary vector in the last row.  Then the vector u is the
solution to A u = b, where b is zero everywhere except its last element.
This solve seems to be working fine.
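
A sketch of that tangent solve, assuming A is assembled and a KSP has been
set up with it (b and u are Vecs of size m+1; the names are illustrative):

  PetscInt N;

  VecGetSize(b,&N);
  VecSet(b,0.0);
  VecSetValue(b,N-1,1.0,INSERT_VALUES);   /* b = e_{m+1} */
  VecAssemblyBegin(b);
  VecAssemblyEnd(b);
  KSPSolve(ksp,b,u);                      /* A u = b yields the tangent u */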

The SNES should then solve for a particular solution y of the Newton
update, J y = -H(x,s).  A (unique) particular solution can also be found by
solving A y = [-H(x,s) 0]^T.  Finally - and this is the step I can't figure
out how to implement properly, I want to find the norm-minimizing solution
z for the Newton update:
z = y - u(y^T u)/(u^T u).

I doubt there is any way to do this elegantly within the existing routines,
because u is not in the nullspace of A.  Rather, it's in the nullspace of
J, which is the submatrix I care about.  I think what I need is a way to
just apply the orthogonal projection in the preceding paragraph, after the
SNES solves the linear system but before the Newton update is applied.  I
don't know the SNES internals well at all.  Is it possible to hook this in
somehow, or put it in an existing function?
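
One possible place to hook this in is a line-search pre-check, which PETSc
exposes as SNESLineSearchSetPreCheck() and which sees the proposed step
before it is applied. A sketch, with an assumed context struct holding the
tangent u (not necessarily how the original problem was resolved):

  typedef struct { Vec u; } ProjCtx;   /* hypothetical context */

  /* replace the step y by z = y - u (y^T u)/(u^T u) */
  PetscErrorCode ProjectOutTangent(SNESLineSearch ls,Vec x,Vec y,
                                   PetscBool *changed,void *ptr)
  {
    ProjCtx     *ctx = (ProjCtx*)ptr;
    PetscScalar yu,uu;

    VecDot(y,ctx->u,&yu);
    VecDot(ctx->u,ctx->u,&uu);
    VecAXPY(y,-yu/uu,ctx->u);
    *changed = PETSC_TRUE;
    return 0;
  }

  /* registration, e.g. after SNESCreate():
     SNESLineSearch ls;
     SNESGetLineSearch(snes,&ls);
     SNESLineSearchSetPreCheck(ls,ProjectOutTangent,&projctx);  */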

Thanks again for all your help.


On Oct 1, 2018 6:49 AM, "Matthew Knepley"  wrote:

On Sun, Sep 30, 2018 at 6:06 PM zakaryah  wrote:

> OK, thanks.
>
> I'm using a composite DM, DM_packer.  To make a separate KSP, I did the
> following in the FormJacobian() routine, after assembling the Jacobian
> matrix A and the RHS for the tangent vector, b:
>
>- KSPCreate(PETSC_COMM_WORLD,&ksp)
>- KSPSetDM(ksp,DM_packer)
>- KSPSetDMActive(ksp,PETSC_FALSE) - because I want to set the operators
>- KSPSetOperators(ksp,A,P)
>- VecSet(n,0) - set initial guess to zero
>- KSPSolve(ksp,b,n) - this solve works correctly
>- VecNormalize(n,NULL)
>-
>
> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>- MatSetNullSpace(A,nullsp)
>
> Then, with -snes_type newtonls -pc_type none -ksp_monitor
> -ksp_monitor_true_residual -ksp_view, the output for the KSPSolve described
> above, i.e. for the tangent vector, looks correct:
>
>   0 KSP preconditioned resid norm 1.e+03 true resid norm
> 1.e+03 ||r(i)||/||b|| 1.e+00
> ...
>
> 185 KSP preconditioned resid norm 9.900713131874e-03 true resid norm
> 9.900713131904e-03 ||r(i)||/||b|| 9.900713131904e-06
>
> Linear solve converged due to CONVERGED_RTOL iterations 185
>
> KSP Object: 1 MPI processes
>
>   type: gmres
>
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
> happy breakdown tolerance 1e-30
>
>   maximum iterations=1, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>
>   left preconditioning
>
>   using PRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: none
>
>   linear system matrix = precond matrix:
>
>   Mat Object: 1 MPI processes
>
> type: seqaij
>
> rows=78247, cols=78247
>
> total: nonzeros=6063481, allocated nonzeros=6063481
>
> total number of mallocs used during MatSetValues calls =0
>
>   using I-node routines: found 26083 nodes, limit used is 5
>
>
> But the next KSP, i.e. the one in the SNES, doesn't converge in the true
> residual:
>
>
>   0 KSP preconditioned resid norm 2.045599092896e-04 true resid norm
> 2.803828296212e-04 ||r(i)||/||b|| 1.e+00
>
> 191 KSP preconditioned resid norm 2.009941278534e-09 true resid norm
> 1.636010142734e-04 ||r(i)||/||b|| 5.834915586465e-01
>
>
> KSP Object: 1 MPI processes
>
>   type: gmres
>
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
> happy breakdown tolerance 1e-30
>
>   maximum iterations=1, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>
>   left preconditioning
>
>   using PRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: none
>
>   linear system matrix = precond matrix:
>
>   Mat Object: 1 MPI 

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-10-01 Thread Matthew Knepley
On Sun, Sep 30, 2018 at 6:06 PM zakaryah  wrote:

> OK, thanks.
>
> I'm using a composite DM, DM_packer.  To make a separate KSP, I did the
> following in the FormJacobian() routine, after assembling the Jacobian
> matrix A and the RHS for the tangent vector, b:
>
>- KSPCreate(PETSC_COMM_WORLD,&ksp)
>- KSPSetDM(ksp,DM_packer)
>- KSPSetDMActive(ksp,PETSC_FALSE) - because I want to set the operators
>- KSPSetOperators(ksp,A,P)
>- VecSet(n,0) - set initial guess to zero
>- KSPSolve(ksp,b,n) - this solve works correctly
>- VecNormalize(n,NULL)
>-
>
> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>- MatSetNullSpace(A,nullsp)
>
> Then, with -snes_type newtonls -pc_type none -ksp_monitor
> -ksp_monitor_true_residual -ksp_view, the output for the KSPSolve described
> above, i.e. for the tangent vector, looks correct:
>
>   0 KSP preconditioned resid norm 1.e+03 true resid norm
> 1.e+03 ||r(i)||/||b|| 1.e+00
> ...
>
> 185 KSP preconditioned resid norm 9.900713131874e-03 true resid norm
> 9.900713131904e-03 ||r(i)||/||b|| 9.900713131904e-06
>
> Linear solve converged due to CONVERGED_RTOL iterations 185
>
> KSP Object: 1 MPI processes
>
>   type: gmres
>
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
> happy breakdown tolerance 1e-30
>
>   maximum iterations=1, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>
>   left preconditioning
>
>   using PRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: none
>
>   linear system matrix = precond matrix:
>
>   Mat Object: 1 MPI processes
>
> type: seqaij
>
> rows=78247, cols=78247
>
> total: nonzeros=6063481, allocated nonzeros=6063481
>
> total number of mallocs used during MatSetValues calls =0
>
>   using I-node routines: found 26083 nodes, limit used is 5
>
>
> But the next KSP, i.e. the one in the SNES, doesn't converge in the true
> residual:
>
>
>   0 KSP preconditioned resid norm 2.045599092896e-04 true resid norm
> 2.803828296212e-04 ||r(i)||/||b|| 1.e+00
>
> 191 KSP preconditioned resid norm 2.009941278534e-09 true resid norm
> 1.636010142734e-04 ||r(i)||/||b|| 5.834915586465e-01
>
>
> KSP Object: 1 MPI processes
>
>   type: gmres
>
> restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>
> happy breakdown tolerance 1e-30
>
>   maximum iterations=1, initial guess is zero
>
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>
>   left preconditioning
>
>   using PRECONDITIONED norm type for convergence test
>
> PC Object: 1 MPI processes
>
>   type: none
>
>   linear system matrix = precond matrix:
>
>   Mat Object: 1 MPI processes
>
> type: seqaij
>
> rows=78247, cols=78247
>
> total: nonzeros=6063481, allocated nonzeros=6063481
>
> total number of mallocs used during MatSetValues calls =0
>
>   has attached null space
>
>   using I-node routines: found 26083 nodes, limit used is 5
>
>
> My guess about what's going on is that the tangent vector n isn't really
> in the nullspace of A.
>

I do not understand the rest, but this is the problem. Maybe it would make
more sense if you wrote things
in linear algebraic notation.

  Thanks,

 Matt


> Rather, it's in the nullspace of the m x (m+1) submatrix of A.  So, A n = c
> e_{m+1}, where c is an arbitrary constant and e_{m+1} is the (m+1)-th basis
> vector.  The nonlinear function also has an added row, F_{m+1}, which is
> set to zero in the FormFunction() routine.  I don't care about the value of
> F_{m+1}, but I suppose that if it's included in the true residual, and NOT
> in the "preconditioned" residual, even with pc_type none, then it will be
> tricky to diagnose the performance.  Should I be using my own routine to
> evaluate the residual, so that F_{m+1} is not included?
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-30 Thread zakaryah
OK, thanks.

I'm using a composite DM, DM_packer.  To make a separate KSP, I did the
following in the FormJacobian() routine, after assembling the Jacobian
matrix A and the RHS for the tangent vector, b:

   - KSPCreate(PETSC_COMM_WORLD,&ksp)
   - KSPSetDM(ksp,DM_packer)
   - KSPSetDMActive(ksp,PETSC_FALSE) - because I want to set the operators
   - KSPSetOperators(ksp,A,P)
   - VecSet(n,0) - set initial guess to zero
   - KSPSolve(ksp,b,n) - this solve works correctly
   - VecNormalize(n,NULL)
   -
   MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
   - MatSetNullSpace(A,nullsp)

Then, with -snes_type newtonls -pc_type none -ksp_monitor
-ksp_monitor_true_residual -ksp_view, the output for the KSPSolve described
above, i.e. for the tangent vector, looks correct:

  0 KSP preconditioned resid norm 1.e+03 true resid norm
1.e+03 ||r(i)||/||b|| 1.e+00
...

185 KSP preconditioned resid norm 9.900713131874e-03 true resid norm
9.900713131904e-03 ||r(i)||/||b|| 9.900713131904e-06

Linear solve converged due to CONVERGED_RTOL iterations 185

KSP Object: 1 MPI processes

  type: gmres

restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization
with no iterative refinement

happy breakdown tolerance 1e-30

  maximum iterations=1, initial guess is zero

  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.

  left preconditioning

  using PRECONDITIONED norm type for convergence test

PC Object: 1 MPI processes

  type: none

  linear system matrix = precond matrix:

  Mat Object: 1 MPI processes

type: seqaij

rows=78247, cols=78247

total: nonzeros=6063481, allocated nonzeros=6063481

total number of mallocs used during MatSetValues calls =0

  using I-node routines: found 26083 nodes, limit used is 5


But the next KSP, i.e. the one in the SNES, doesn't converge in the true
residual:


  0 KSP preconditioned resid norm 2.045599092896e-04 true resid norm
2.803828296212e-04 ||r(i)||/||b|| 1.e+00

191 KSP preconditioned resid norm 2.009941278534e-09 true resid norm
1.636010142734e-04 ||r(i)||/||b|| 5.834915586465e-01


KSP Object: 1 MPI processes

  type: gmres

restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization
with no iterative refinement

happy breakdown tolerance 1e-30

  maximum iterations=1, initial guess is zero

  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.

  left preconditioning

  using PRECONDITIONED norm type for convergence test

PC Object: 1 MPI processes

  type: none

  linear system matrix = precond matrix:

  Mat Object: 1 MPI processes

type: seqaij

rows=78247, cols=78247

total: nonzeros=6063481, allocated nonzeros=6063481

total number of mallocs used during MatSetValues calls =0

  has attached null space

  using I-node routines: found 26083 nodes, limit used is 5


My guess about what's going on is that the tangent vector n isn't really in
the nullspace of A.  Rather, it's in the nullspace of the m x (m+1)
submatrix of A.  So, A n = c e_{m+1}, where c is an arbitrary constant and
e_{m+1} is the (m+1)-th basis vector.  The nonlinear function also has an
added row, F_{m+1}, which is set to zero in the FormFunction() routine.  I
don't care about the value of F_{m+1}, but I suppose that if it's included
in the true residual, and NOT in the "preconditioned" residual, even with
pc_type none, then it will be tricky to diagnose the performance.  Should I
be using my own routine to evaluate the residual, so that F_{m+1} is not
included?
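
That guess can be tested directly. A sketch, where nullsp is the
MatNullSpace built from n and r is a scratch Vec (names are illustrative):

  PetscBool isNull;
  PetscReal nrm;

  MatNullSpaceTest(nullsp,A,&isNull);   /* PETSC_TRUE only if A n ~ 0 */
  MatMult(A,n,r);
  VecNorm(r,NORM_2,&nrm);               /* ~|c| if A n = c e_{m+1} */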


Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-29 Thread Dave May
On Sat, 29 Sep 2018 at 16:09, Matthew Knepley  wrote:

> On Sat, Sep 29, 2018 at 9:47 AM zakaryah  wrote:
>
>> Hi Matt - thanks for all your help.
>>
>> Let's say I want exactly the same solver for the tangent vector and the
>> SNES update, so I should reuse the KSP.
>>
>
If you want to do this, there is no need or reason to call
KSPSetFromOptions() inside your Jacobian evaluator - just call
SNESSetFromOptions() once on the outer object.



>> My attempt to do this looks like the summary of FormJacobian() in the
>> previous message:
>>
>>- assemble Jacobian
>>- assemble RHS
>>- get KSP from the SNES passed to FormJacobian()
>>- KSPSetFromOptions()
>>- KSPSetOperators()
>>- KSPSolve()
>>
>> I'm not sure that's the right approach, but it doesn't work - the
>> KSPSolve() in the summary, i.e. the one for the tangent vector, seems to
>> work fine.  But the next KSPSolve() looks strange - it seems to use a
>> preconditioner even with -pc_type none, etc.  This makes me think I am
>> doing something seriously wrong.
>>
>
> 1) To get things going, just make them separate. Then we can optimize.
>
> 2) Give -ksp_view -ksp_monitor_true_residual -ksp_converged_reason to see
> what is happening
>
>   Matt
>
>
>> On Sat, Sep 29, 2018, 8:16 AM Matthew Knepley  wrote:
>>
>>> On Fri, Sep 28, 2018 at 11:13 PM zakaryah  wrote:
>>>
 I'm working on a homotopy solver which follows a zero curve through
 solution space.  A central aspect of this method is to calculate the vector
 tangent to the curve, predict the next point, then correct using iteration
 of, e.g. Newton's method, orthogonal to the tangent vector.

 Previously, we discussed the possibilities of implementing this within
 PETSc's SNES.  Within my FormJacobian() function, I construct the linear
 system which defines the tangent vector, solve it, then add the vector to
 the nullspace of the Jacobian.  I think that in principle this can work,
 but I suspect I'm doing something wrong.

 Here's a summary of the code within FormJacobian():

- Set values and assemble Jacobian matrix A - this is working fine
- Set values and assemble RHS vector b for linear system defining
tangent vector n - this is working fine
- SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
associated with the SNES, hoping that PCs which use a factorization 
 could
be reused when the SNES calls KSPSolve() to calculate the update
- KSPSetFromOptions(my_ksp) - not sure this is necessary but one of
my problems is setting options for this KSP from the command line and 
 even
with this call it doesn't seem to be working properly
- MatSetNullSpace(A,NULL) - remove any existing null space from
Jacobian
- KSPSetOperators(my_ksp,A,P) - P is the other matrix in
FormJacobian()
- VecSet(n,0) - set initial guess to zero
- KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
options passed from the command line with -ksp_XXX or -pc_XXX
- VecNormalize(n,NULL)
-

 MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
- MatSetNullSpace(A,nullsp)
- MatNullSpaceDestroy(&nullsp)
- return

 The immediate problem is that the subsequent KSPSolve(), i.e. the one
 called internally by SNESSolve(), behaves strangely.  For example, if I use
 -pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
 that I call within FormJacobian() looks correct - "preconditioned" norm and
 true norm are identical, and both converge as I expect (i.e. slowly but
 geometrically).  However, the subsequent KSPSolve(), internal to the
 SNESSolve(), has large differences between the preconditioned norm and the
 true norm.  In addition, the KSP does not converge in the true residual,
 but I'll have a hard time debugging that without knowing how to properly
 set the options.

>>>
>>> We need to clear up the usage first. If you want EXACTLY the same solver
>>> for both solvers, then reuse
>>> the KSP, otherwise do not do it. Does it work then?
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
 I hope someone can help me see what I'm doing wrong.

 On Sun, Jul 22, 2018 at 9:09 PM zakaryah  wrote:

> Thanks Matt and Barry,
>
> Matt - if I do the calculation in FormJacobian(), which makes by far
> the most sense and is as per your suggestion, do I need to set the
> operators of the SNES's KSP back to whatever they were before I set them? 
>  The
> linear system I want to solve within FormJacobian() involves the Jacobian
> matrix itself, and I want to remove the "nullspace" from that same matrix
> within FormFunction().
>
> Barry - I'm trying to implement a homotopy solver.  In short, I have a
> system of n nonlinear equations in n variables, F(x), 

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-29 Thread Matthew Knepley
On Sat, Sep 29, 2018 at 9:47 AM zakaryah  wrote:

> Hi Matt - thanks for all your help.
>
> Let's say I want exactly the same solver for the tangent vector and the
> SNES update, so I should reuse the KSP.
>
> My attempt to do this looks like the summary of FormJacobian() in the
> previous message:
>
>- assemble Jacobian
>- assemble RHS
>- get KSP from the SNES passed to FormJacobian()
>- KSPSetFromOptions()
>- KSPSetOperators()
>- KSPSolve()
>
> I'm not sure that's the right approach, but it doesn't work - the
> KSPSolve() in the summary, i.e. the one for the tangent vector, seems to
> work fine.  But the next KSPSolve() looks strange - it seems to use a
> preconditioner even with -pc_type none, etc.  This makes me think I am
> doing something seriously wrong.
>

1) To get things going, just make them separate. Then we can optimize.

2) Give -ksp_view -ksp_monitor_true_residual -ksp_converged_reason to see
what is happening

  Matt


> On Sat, Sep 29, 2018, 8:16 AM Matthew Knepley  wrote:
>
>> On Fri, Sep 28, 2018 at 11:13 PM zakaryah  wrote:
>>
>>> I'm working on a homotopy solver which follows a zero curve through
>>> solution space.  A central aspect of this method is to calculate the vector
>>> tangent to the curve, predict the next point, then correct using iteration
>>> of, e.g. Newton's method, orthogonal to the tangent vector.
>>>
>>> Previously, we discussed the possibilities of implementing this within
>>> PETSc's SNES.  Within my FormJacobian() function, I construct the linear
>>> system which defines the tangent vector, solve it, then add the vector to
>>> the nullspace of the Jacobian.  I think that in principle this can work,
>>> but I suspect I'm doing something wrong.
>>>
>>> Here's a summary of the code within FormJacobian():
>>>
>>>- Set values and assemble Jacobian matrix A - this is working fine
>>>- Set values and assemble RHS vector b for linear system defining
>>>tangent vector n - this is working fine
>>>- SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
>>>associated with the SNES, hoping that PCs which use a factorization could
>>>be reused when the SNES calls KSPSolve() to calculate the update
>>>- KSPSetFromOptions(my_ksp) - not sure this is necessary but one of
>>>my problems is setting options for this KSP from the command line and 
>>> even
>>>with this call it doesn't seem to be working properly
>>>- MatSetNullSpace(A,NULL) - remove any existing null space from
>>>Jacobian
>>>- KSPSetOperators(my_ksp,A,P) - P is the other matrix in
>>>FormJacobian()
>>>- VecSet(n,0) - set initial guess to zero
>>>- KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
>>>options passed from the command line with -ksp_XXX or -pc_XXX
>>>- VecNormalize(n,NULL)
>>>-
>>>
>>> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>>>- MatSetNullSpace(A,nullsp)
>>>- MatNullSpaceDestroy(&nullsp)
>>>- return
>>>
>>> The immediate problem is that the subsequent KSPSolve(), i.e. the one
>>> called internally by SNESSolve(), behaves strangely.  For example, if I use
>>> -pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
>>> that I call within FormJacobian() looks correct - "preconditioned" norm and
>>> true norm are identical, and both converge as I expect (i.e. slowly but
>>> geometrically).  However, the subsequent KSPSolve(), internal to the
>>> SNESSolve(), has large differences between the preconditioned norm and the
>>> true norm.  In addition, the KSP does not converge in the true residual,
>>> but I'll have a hard time debugging that without knowing how to properly
>>> set the options.
>>>
>>
>> We need to clear up the usage first. If you want EXACTLY the same solver
>> for both solvers, then reuse
>> the KSP, otherwise do not do it. Does it work then?
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> I hope someone can help me see what I'm doing wrong.
>>>
>>> On Sun, Jul 22, 2018 at 9:09 PM zakaryah  wrote:
>>>
 Thanks Matt and Barry,

 Matt - if I do the calculation in FormJacobian(), which makes by far
 the most sense and is as per your suggestion, do I need to set the
 operators of the SNES's KSP back to whatever they were before I set them?  
 The
 linear system I want to solve within FormJacobian() involves the Jacobian
 matrix itself, and I want to remove the "nullspace" from that same matrix
 within FormFunction().

 Barry - I'm trying to implement a homotopy solver.  In short, I have a
 system of n nonlinear equations in n variables, F(x), which is hard to
 solve because the Jacobian tends to become singular using standard
 methods.  I want to add an auxiliary variable, lambda, to create a
 homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
 solve", and the idea is that the Jacobian of the n+1 variable system will
 not become singular 

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-29 Thread zakaryah
Hi Matt - thanks for all your help.

Let's say I want exactly the same solver for the tangent vector and the
SNES update, so I should reuse the KSP.

My attempt to do this looks like the summary of FormJacobian() in the
previous message:

   - assemble Jacobian
   - assemble RHS
   - get KSP from the SNES passed to FormJacobian()
   - KSPSetFromOptions()
   - KSPSetOperators()
   - KSPSolve()

I'm not sure that's the right approach, but it doesn't work - the
KSPSolve() in the summary, i.e. the one for the tangent vector, seems to
work fine.  But the next KSPSolve() looks strange - it seems to use a
preconditioner even with -pc_type none, etc.  This makes me think I am
doing something seriously wrong.


On Sat, Sep 29, 2018, 8:16 AM Matthew Knepley  wrote:

> On Fri, Sep 28, 2018 at 11:13 PM zakaryah  wrote:
>
>> I'm working on a homotopy solver which follows a zero curve through
>> solution space.  A central aspect of this method is to calculate the vector
>> tangent to the curve, predict the next point, then correct using iteration
>> of, e.g. Newton's method, orthogonal to the tangent vector.
>>
>> Previously, we discussed the possibilities of implementing this within
>> PETSc's SNES.  Within my FormJacobian() function, I construct the linear
>> system which defines the tangent vector, solve it, then add the vector to
>> the nullspace of the Jacobian.  I think that in principle this can work,
>> but I suspect I'm doing something wrong.
>>
>> Here's a summary of the code within FormJacobian():
>>
>>- Set values and assemble Jacobian matrix A - this is working fine
>>- Set values and assemble RHS vector b for linear system defining
>>tangent vector n - this is working fine
>>- SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
>>associated with the SNES, hoping that PCs which use a factorization could
>>be reused when the SNES calls KSPSolve() to calculate the update
>>- KSPSetFromOptions(my_ksp) - not sure this is necessary but one of
>>my problems is setting options for this KSP from the command line and even
>>with this call it doesn't seem to be working properly
>>- MatSetNullSpace(A,NULL) - remove any existing null space from
>>Jacobian
>>- KSPSetOperators(my_ksp,A,P) - P is the other matrix in
>>FormJacobian()
>>- VecSet(n,0) - set initial guess to zero
>>- KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
>>options passed from the command line with -ksp_XXX or -pc_XXX
>>- VecNormalize(n,NULL)
>>-
>>
>> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>>- MatSetNullSpace(A,nullsp)
>>- MatNullSpaceDestroy(&nullsp)
>>- return
>>
>> The immediate problem is that the subsequent KSPSolve(), i.e. the one
>> called internally by SNESSolve(), behaves strangely.  For example, if I use
>> -pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
>> that I call within FormJacobian() looks correct - "preconditioned" norm and
>> true norm are identical, and both converge as I expect (i.e. slowly but
>> geometrically).  However, the subsequent KSPSolve(), internal to the
>> SNESSolve(), has large differences between the preconditioned norm and the
>> true norm.  In addition, the KSP does not converge in the true residual,
>> but I'll have a hard time debugging that without knowing how to properly
>> set the options.
>>
>
> We need to clear up the usage first. If you want EXACTLY the same solver
> for both solvers, then reuse
> the KSP, otherwise do not do it. Does it work then?
>
>   Thanks,
>
>  Matt
>
>
>> I hope someone can help me see what I'm doing wrong.
>>
>> On Sun, Jul 22, 2018 at 9:09 PM zakaryah  wrote:
>>
>>> Thanks Matt and Barry,
>>>
>>> Matt - if I do the calculation in FormJacobian(), which makes by far the
>>> most sense and is as per your suggestion, do I need to set the operators of
>>> the SNES's KSP back to whatever they were before I set them?  The
>>> linear system I want to solve within FormJacobian() involves the Jacobian
>>> matrix itself, and I want to remove the "nullspace" from that same matrix
>>> within FormFunction().
>>>
>>> Barry - I'm trying to implement a homotopy solver.  In short, I have a
>>> system of n nonlinear equations in n variables, F(x), which is hard to
>>> solve because the Jacobian tends to become singular using standard
>>> methods.  I want to add an auxiliary variable, lambda, to create a
>>> homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
>>> solve", and the idea is that the Jacobian of the n+1 variable system will
>>> not become singular along the curve H(lambda,x) = 0.
>>>
>>> The method involves adding an equation to H so that the Jacobian H' is
>>> square.  The "submatrix" refers to the n x (n+1) matrix which represents
>>> the Jacobian without the added equation, whereas my FormJacobian() forms
>>> the entire (n+1) x (n+1) matrix H'.  I only refer to the submatrix because
>>> it has a 

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-29 Thread Matthew Knepley
On Fri, Sep 28, 2018 at 11:13 PM zakaryah  wrote:

> I'm working on a homotopy solver which follows a zero curve through
> solution space.  A central aspect of this method is to calculate the vector
> tangent to the curve, predict the next point, then correct using iteration
> of, e.g. Newton's method, orthogonal to the tangent vector.
>
> Previously, we discussed the possibilities of implementing this within
> PETSc's SNES.  Within my FormJacobian() function, I construct the linear
> system which defines the tangent vector, solve it, then add the vector to
> the nullspace of the Jacobian.  I think that in principle this can work,
> but I suspect I'm doing something wrong.
>
> Here's a summary of the code within FormJacobian():
>
>- Set values and assemble Jacobian matrix A - this is working fine
>- Set values and assemble RHS vector b for linear system defining
>tangent vector n - this is working fine
>- SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
>associated with the SNES, hoping that PCs which use a factorization could
>be reused when the SNES calls KSPSolve() to calculate the update
>- KSPSetFromOptions(my_ksp) - not sure this is necessary but one of my
>problems is setting options for this KSP from the command line and even
>with this call it doesn't seem to be working properly
>- MatSetNullSpace(A,NULL) - remove any existing null space from
>Jacobian
>- KSPSetOperators(my_ksp,A,P) - P is the other matrix in FormJacobian()
>- VecSet(n,0) - set initial guess to zero
>- KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
>options passed from the command line with -ksp_XXX or -pc_XXX
>- VecNormalize(n,NULL)
>-
>
> MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
>- MatSetNullSpace(A,nullsp)
>- MatNullSpaceDestroy(&nullsp)
>- return
>
> The immediate problem is that the subsequent KSPSolve(), i.e. the one
> called internally by SNESSolve(), behaves strangely.  For example, if I use
> -pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
> that I call within FormJacobian() looks correct - "preconditioned" norm and
> true norm are identical, and both converge as I expect (i.e. slowly but
> geometrically).  However, the subsequent KSPSolve(), internal to the
> SNESSolve(), has large differences between the preconditioned norm and the
> true norm.  In addition, the KSP does not converge in the true residual,
> but I'll have a hard time debugging that without knowing how to properly
> set the options.
>

We need to clear up the usage first. If you want EXACTLY the same solver
for both solvers, then reuse
the KSP, otherwise do not do it. Does it work then?

  Thanks,

 Matt


> I hope someone can help me see what I'm doing wrong.
>
> On Sun, Jul 22, 2018 at 9:09 PM zakaryah  wrote:
>
>> Thanks Matt and Barry,
>>
>> Matt - if I do the calculation in FormJacobian(), which makes by far the
>> most sense and is as per your suggestion, do I need to set the operators of
>> the SNES's KSP back to whatever they were before I set them?  The linear
>> system I want to solve within FormJacobian() involves the Jacobian matrix
>> itself, and I want to remove the "nullspace" from that same matrix within
>> FormFunction().
>>
>> Barry - I'm trying to implement a homotopy solver.  In short, I have a
>> system of n nonlinear equations in n variables, F(x), which is hard to
>> solve because the Jacobian tends to become singular using standard
>> methods.  I want to add an auxiliary variable, lambda, to create a
>> homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
>> solve", and the idea is that the Jacobian of the n+1 variable system will
>> not become singular along the curve H(lambda,x) = 0.
>>
>> The method involves adding an equation to H so that the Jacobian H' is
>> square.  The "submatrix" refers to the n x (n+1) matrix which represents
>> the Jacobian without the added equation, whereas my FormJacobian() forms
>> the entire (n+1) x (n+1) matrix H'.  I only refer to the submatrix because
>> it has a nullspace, and I want to find it by solving a linear system
>> designed for this purpose, H' u = b, where b is not the zero vector.  H'
>> has no nullspace, but I want to remove the projection of u from my SNES
>> solution vector, as u IS in the nullspace of the submatrix.
>>
>> The RHS vector b can be calculated outside the SNES solve.  I guess my
>> FormJacobian() should look like this:
>>
>> FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx) {
>>   KSP ksp;
>>   MatNullSpace unull;
>>   user_struct *user = (user_struct*)ctx;
>>
>>   calculate Amat
>>   assemble Amat
>>
>>   if (Pmat != Amat) {
>> assemble Amat
>>   }
>>
>>   SNESGetKSP(snes,&ksp);
>>   KSPSetOperators(ksp,Amat,Pmat);
>>   KSPSolve(ksp,user->b,user->u);
>>
>>   MatNullSpaceCreate(PetscObjectComm((PetscObject)Amat), PETSC_FALSE, 1,
>> &(user->u),&unull);
>>   

Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-09-28 Thread zakaryah
I'm working on a homotopy solver which follows a zero curve through
solution space.  A central aspect of this method is to calculate the vector
tangent to the curve, predict the next point, then correct using iteration
of, e.g. Newton's method, orthogonal to the tangent vector.

Previously, we discussed the possibilities of implementing this within
PETSc's SNES.  Within my FormJacobian() function, I construct the linear
system which defines the tangent vector, solve it, then add the vector to
the nullspace of the Jacobian.  I think that in principle this can work,
but I suspect I'm doing something wrong.

Here's a summary of the code within FormJacobian():

   - Set values and assemble Jacobian matrix A - this is working fine
   - Set values and assemble RHS vector b for linear system defining
   tangent vector n - this is working fine
   - SNESGetKSP(snes,&my_ksp) - I thought it made sense to use the KSP
   associated with the SNES, hoping that PCs which use a factorization could
   be reused when the SNES calls KSPSolve() to calculate the update
   - KSPSetFromOptions(my_ksp) - not sure this is necessary but one of my
   problems is setting options for this KSP from the command line and even
   with this call it doesn't seem to be working properly
   - MatSetNullSpace(A,NULL) - remove any existing null space from Jacobian
   - KSPSetOperators(my_ksp,A,P) - P is the other matrix in FormJacobian()
   - VecSet(n,0) - set initial guess to zero
   - KSPSolve(my_ksp,b,n) - these solves appear to work, i.e. use the
   options passed from the command line with -ksp_XXX or -pc_XXX
   - VecNormalize(n,NULL)
   -
   MatNullSpaceCreate(PetscObjectComm((PetscObject)A),PETSC_FALSE,1,&n,&nullsp)
   - MatSetNullSpace(A,nullsp)
   - MatNullSpaceDestroy(&nullsp)
   - return

The immediate problem is that the subsequent KSPSolve(), i.e. the one
called internally by SNESSolve(), behaves strangely.  For example, if I use
-pc_type none -ksp_monitor -ksp_monitor_true_residual, then the KSPSolve()
that I call within FormJacobian() looks correct - "preconditioned" norm and
true norm are identical, and both converge as I expect (i.e. slowly but
geometrically).  However, the subsequent KSPSolve(), internal to the
SNESSolve(), has large differences between the preconditioned norm and the
true norm.  In addition, the KSP does not converge in the true residual,
but I'll have a hard time debugging that without knowing how to properly
set the options.

I hope someone can help me see what I'm doing wrong.

On Sun, Jul 22, 2018 at 9:09 PM zakaryah  wrote:

> Thanks Matt and Barry,
>
> Matt - if I do the calculation in FormJacobian(), which makes by far the
> most sense and is as per your suggestion, do I need to set the operators of
> the SNES's KSP back to whatever they were before I set them?  The linear
> system I want to solve within FormJacobian() involves the Jacobian matrix
> itself, and I want to remove the "nullspace" from that same matrix within
> FormFunction().
>
> Barry - I'm trying to implement a homotopy solver.  In short, I have a
> system of n nonlinear equations in n variables, F(x), which is hard to
> solve because the Jacobian tends to become singular using standard
> methods.  I want to add an auxiliary variable, lambda, to create a
> homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
> solve", and the idea is that the Jacobian of the n+1 variable system will
> not become singular along the curve H(lambda,x) = 0.
>
> The method involves adding an equation to H so that the Jacobian H' is
> square.  The "submatrix" refers to the n x (n+1) matrix which represents
> the Jacobian without the added equation, whereas my FormJacobian() forms
> the entire (n+1) x (n+1) matrix H'.  I only refer to the submatrix because
> it has a nullspace, and I want to find it by solving a linear system
> designed for this purpose, H' u = b, where b is not the zero vector.  H'
> has no nullspace, but I want to remove the projection of u from my SNES
> solution vector, as u IS in the nullspace of the submatrix.
>
> The RHS vector b can be calculated outside the SNES solve.  I guess my
> FormJacobian() should look like this:
>
> FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx) {
>   KSP ksp;
>   MatNullSpace unull;
>   user_struct *user = (user_struct*)ctx;
>
>   calculate Amat
>   assemble Amat
>
>   if (Pmat != Amat) {
> assemble Amat
>   }
>
>   SNESGetKSP(snes,&ksp);
>   KSPSetOperators(ksp,Amat,Pmat);
>   KSPSolve(ksp,user->b,user->u);
>
>   MatNullSpaceCreate(PetscObjectComm((PetscObject)Amat), PETSC_FALSE, 1,
> &(user->u),&unull);
>   MatSetNullSpace(Amat,unull);
>   MatNullSpaceDestroy(&unull);
> }
>
> Does this look right?
>
>


Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-07-22 Thread zakaryah
Thanks Matt and Barry,

Matt - if I do the calculation in FormJacobian(), which makes by far the
most sense and is as per your suggestion, do I need to set the operators of
the SNES's KSP back to whatever they were before I set them?  The linear
system I want to solve within FormJacobian() involves the Jacobian matrix
itself, and I want to remove the "nullspace" from that same matrix within
FormFunction().

Barry - I'm trying to implement a homotopy solver.  In short, I have a
system of n nonlinear equations in n variables, F(x), which is hard to
solve because the Jacobian tends to become singular using standard
methods.  I want to add an auxiliary variable, lambda, to create a
homotopy: H(lambda,x) = lambda*F(x) + (1-lambda)G(x), where G is "easy to
solve", and the idea is that the Jacobian of the n+1 variable system will
not become singular along the curve H(lambda,x) = 0.

The method involves adding an equation to H so that the Jacobian H' is
square.  The "submatrix" refers to the n x (n+1) matrix which represents
the Jacobian without the added equation, whereas my FormJacobian() forms
the entire (n+1) x (n+1) matrix H'.  I only refer to the submatrix because
it has a nullspace, and I want to find it by solving a linear system
designed for this purpose, H' u = b, where b is not the zero vector.  H'
has no nullspace, but I want to remove the projection of u from my SNES
solution vector, as u IS in the nullspace of the submatrix.

The RHS vector b can be calculated outside the SNES solve.  I guess my
FormJacobian() should look like this:

FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx) {
  KSP ksp;
  MatNullSpace unull;
  user_struct *user = (user_struct*)ctx;

  /* calculate and assemble Amat */

  if (Pmat != Amat) {
    /* assemble Pmat */
  }

  SNESGetKSP(snes,&ksp);
  KSPSetOperators(ksp,Amat,Pmat);
  KSPSolve(ksp,user->b,user->u);

  MatNullSpaceCreate(PetscObjectComm((PetscObject)Amat), PETSC_FALSE, 1,
 &(user->u),&unull);
  MatSetNullSpace(Amat,unull);
  MatNullSpaceDestroy(&unull);
}

Does this look right?


Re: [petsc-users] Fwd: Implementing a homotopy solver

2018-07-22 Thread Smith, Barry F.



> On Jul 22, 2018, at 4:31 PM, zakaryah  wrote:
> 
> OK, thanks, I understand now.
> 
> FormJacobian() has two matrices as arguments, the Jacobian Amat and the 
> preconditioner Pmat, which may be the same.  My FormJacobian() routine sets 
> the values of the Jacobian in Pmat.  So, to calculate the effective nullspace 
> of the submatrix,

  What "submatrix" are you referring to?  Are you calling Pmat a
submatrix?  If so, why?

  Is b here the zero vector, then?


> I want to solve Pmat u = b.  To do this within FormJacobian(), can I assemble 
> Pmat, call SNESGetKSP(snes,&ksp), and then KSPSolve(ksp,b,u)?  Is it safe to 
> assume that the KSP matrix is Pmat?
> 
> 

The KSPSolve() used by SNES solves Amat delta u = -F(u), building the
preconditioner from the matrix Pmat.


> 



[petsc-users] Fwd: Implementing a homotopy solver

2018-07-22 Thread zakaryah
OK, thanks, I understand now.

FormJacobian() has two matrices as arguments, the Jacobian Amat and the
preconditioner Pmat, which may be the same.  My FormJacobian() routine sets
the values of the Jacobian in Pmat.  So, to calculate the effective
nullspace of the submatrix, I want to solve Pmat u = b.  To do this within
FormJacobian(), can I assemble Pmat, call SNESGetKSP(snes,&ksp), and then
KSPSolve(ksp,b,u)?  Is it safe to assume that the KSP matrix is Pmat?