If I remember correctly, your problem is K(u) + kaa' = F(u).
You should start by creating a SNES and calling SNESSetPicard(); read its
manual page. Your matrix should be a MatCreateLRC() for the Mat argument to
SNESSetPicard(), and the Pmat should be just your K matrix.
If you run with -snes_mf_operator -pc_type lu, it will use K to
precondition K + kaa' + dF(u)/du, and Newton's method should converge
quadratically. You can use -snes_mf_operator with any other -pc_type to
build the preconditioner from K iteratively instead of with a direct
factorization.
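The reason K alone preconditions the low-rank-updated operator so well can be seen in a few lines of numpy (a hypothetical dense sketch with synthetic data, not PETSc code): the preconditioned operator K^{-1}(K + k a a^T) = I + k K^{-1} a a^T is a rank-one perturbation of the identity, so a Krylov method needs at most two iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3.0

# Hypothetical SPD stand-in for the stiffness matrix K, plus the rank-one vector a
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)
a = rng.standard_normal(n)

# Preconditioned operator: K^{-1} (K + k a a^T) = I + k K^{-1} a a^T
M = np.eye(n) + k * np.linalg.solve(K, np.outer(a, a))

# A rank-one perturbation of the identity has n-1 unit eigenvalues, so a
# Krylov method preconditioned by K converges in at most two iterations.
num_unit = int(np.sum(np.isclose(np.linalg.eigvals(M), 1.0)))
print(num_unit)
```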
If you really want to use the Sherman-Morrison formula, then you would create a
PC shell and do

  typedef struct {
    KSP innerksp;
    Vec u_1, u_2;
  } YourStruct;

  YourStruct yourstruct;

  SNESGetKSP(snes, &ksp);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCSHELL);
  PCShellSetApply(pc, YourPCApply);
  PetscMemzero(&yourstruct, sizeof(yourstruct));
  PCShellSetContext(pc, &yourstruct);
Then
  PetscErrorCode YourPCApply(PC pc, Vec in, Vec out)
  {
    YourStruct *yourstruct;

    PCShellGetContext(pc, (void **)&yourstruct);
    if (!yourstruct->innerksp) {
      Mat A, B;

      KSPCreate(PetscObjectComm((PetscObject)pc), &yourstruct->innerksp);
      KSPSetOptionsPrefix(yourstruct->innerksp, "yourpc_");
      PCGetOperators(pc, &A, &B);
      KSPSetOperators(yourstruct->innerksp, A, B);
      KSPSetFromOptions(yourstruct->innerksp);
      /* create the work vectors u_1 and u_2 here */
    }
    /* apply the solve as you do in the linear case with the Sherman-Morrison formula */
  }
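The body of that apply routine is just the Sherman-Morrison identity built from two inner solves. Here is a plain-numpy sketch of the arithmetic (dense np.linalg.solve calls stand in for the inner KSP solves; names are illustrative, and it is written for the K + k a a^T sign convention, so flip the sign of k for K - k a a^T):

```python
import numpy as np

def sherman_morrison_apply(K, a, k, x):
    """Return (K + k a a^T)^{-1} x using only solves with K.

    In the PC shell, each np.linalg.solve(K, .) would be a KSPSolve
    with the inner "yourpc_" solver.
    """
    u_1 = np.linalg.solve(K, x)   # u_1 = K^{-1} x
    u_2 = np.linalg.solve(K, a)   # u_2 = K^{-1} a
    beta = k * (a @ u_1) / (1.0 + k * (a @ u_2))
    return u_1 - beta * u_2

# Synthetic SPD test data (illustrative only)
rng = np.random.default_rng(1)
n, k = 6, 2.0
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)
a = rng.standard_normal(n)
x = rng.standard_normal(n)

# Check against a direct dense solve with the full updated matrix
y = sherman_morrison_apply(K, a, k, x)
y_direct = np.linalg.solve(K + k * np.outer(a, a), x)
print(np.allclose(y, y_direct))
```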
You can then run with, for example, -yourpc_pc_type cholesky.
Barry
It looks complicated, but it is conceptually simple.
> 2) Call KSPSetOperators(ksp, K, K)
>
> 3) Solve the first system KSPSolve(ksp, -F, u_1)
>
> 4) Solve the second system KSPSolve(ksp, a, u_2)
>
> 5) Calculate beta VecDot(a, u_2, &gamma); beta = 1./(1. + k*gamma);
>
> 6) Update the guess VecDot(u_2, F, &delta); VecAXPBYPCZ(u, 1.0, beta*delta,
> 1.0, u_1, u_2)
No
> On Feb 6, 2020, at 9:02 AM, Olek Niewiarowski <[email protected]> wrote:
>
> Hi Matt,
>
> What you suggested in your last email was exactly what I did on my very first
> attempt at the problem, and while it "worked," convergence was not
> satisfactory due to the Newton step being fixed in step 6. This is the reason
> I would like to use the linesearch in SNES instead. Indeed in your manual you
> "recommend most PETSc users work directly with SNES, rather than using PETSc
> for the linear problem within a nonlinear solver." Ideally I'd like to
> create a SNES solver, pass in the functions to evaluate K, F, a, and k, and
> set up the underlying KSP object as in my first message. Is this possible?
>
> Thanks,
>
> Alexander (Olek) Niewiarowski
> PhD Candidate, Civil & Environmental Engineering
> Princeton University, 2020
> Cell: +1 (610) 393-2978
> From: Matthew Knepley <[email protected]>
> Sent: Thursday, February 6, 2020 5:33
> To: Olek Niewiarowski <[email protected]>
> Cc: Smith, Barry F. <[email protected]>; [email protected]
> <[email protected]>
> Subject: Re: [petsc-users] Implementing the Sherman Morisson formula (low
> rank update) in petsc4py and FEniCS?
>
> On Wed, Feb 5, 2020 at 8:53 PM Olek Niewiarowski <[email protected]> wrote:
> Hi Barry and Matt,
>
> Thank you for your input and for creating a new issue in the repo.
> My initial question was more basic (how to configure the SNES's KSP solver as
> in my first message with a and k), but now I see there's more to the
> implementation. To reiterate, for my problem's structure, a good solution
> algorithm (on the algebraic level) is the following "double
> back-substitution":
> For each nonlinear iteration:
> • define intermediate vectors u_1 and u_2
> • solve Ku_1 = -F --> u_1 = -K^{-1}F (this solve is cheap, don't
> actually need K^-1)
> • solve Ku_2 = -a --> u_2 = -K^{-1}a (ditto)
> • define \beta = 1/(1 + k a^Tu_2)
> • \Delta u = u_1 + \beta k u_2^T F u_2
> • u = u + \Delta u
> This is very easy to set up:
>
> 1) Create a KSP object KSPCreate(comm, &ksp)
>
> 2) Call KSPSetOperators(ksp, K, K)
>
> 3) Solve the first system KSPSolve(ksp, -F, u_1)
>
> 4) Solve the second system KSPSolve(ksp, a, u_2)
>
> 5) Calculate beta VecDot(a, u_2, &gamma); beta = 1./(1. + k*gamma);
>
> 6) Update the guess VecDot(u_2, F, &delta); VecAXPBYPCZ(u, 1.0, beta*delta,
> 1.0, u_1, u_2)
>
> Thanks,
>
> Matt
>
> I don't need the Jacobian inverse, [K − kaa^T]^{-1} = K^{-1} − (k K^{-1}aa^TK^{-1})/(1 + k a^TK^{-1}a),
> just the solution Δu = [K − kaa^T]^{-1}F = K^{-1}F − (k K^{-1}a a^TK^{-1}F)/(1 + k a^TK^{-1}a)
> = u_1 + beta k u_2^T F u_2 (so I never need to invert K either). (To Matt's
> point on gitlab, K is a symmetric sparse matrix arising from a bilinear form.
> ) Also, eventually, I want to have more than one low-rank update to K, but
> again, Sherman-Morrison-Woodbury should still work.
>
> Being new to PETSc, I don't know if this algorithm directly translates into
> an efficient numerical solver. (I'm also not sure if Picard iteration would
> be useful here.) What would it take to set up the KSP solver in SNES like I
> did below? Is it possible "out of the box"? I looked at MatCreateLRC() -
> what would I pass this to? (A pointer to demo/tutorial would be very
> appreciated.) If there's a better way to go about all of this, I'm open to
> any and all ideas. My only limitation is that I use petsc4py exclusively
> since I/future users of my code will not be comfortable with C.
>
> Thanks again for your help!
>
>
> Alexander (Olek) Niewiarowski
> PhD Candidate, Civil & Environmental Engineering
> Princeton University, 2020
> Cell: +1 (610) 393-2978
> From: Smith, Barry F. <[email protected]>
> Sent: Wednesday, February 5, 2020 15:46
> To: Matthew Knepley <[email protected]>
> Cc: Olek Niewiarowski <[email protected]>; [email protected]
> <[email protected]>
> Subject: Re: [petsc-users] Implementing the Sherman Morisson formula (low
> rank update) in petsc4py and FEniCS?
>
>
> https://gitlab.com/petsc/petsc/issues/557
>
>
> > On Feb 5, 2020, at 7:35 AM, Matthew Knepley <[email protected]> wrote:
> >
> > Perhaps Barry is right that you want Picard, but suppose you really want
> > Newton.
> >
> > "This problem can be solved efficiently using the Sherman-Morrison formula"
> > Well, maybe. The main assumption here is that inverting K is cheap. I see
> > two things you can do in a straightforward way:
> >
> > 1) Use MatCreateLRC()
> > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateLRC.html
> > to create the Jacobian
> > and solve using an iterative method. If you pass just K as the
> > preconditioning matrix, you can use common PCs.
> >
> > 2) We only implemented MatMult() for LRC, but you could stick your SMW
> > code in for MatSolve_LRC if you really want to factor K. We would
> > of course help you do this.
> >
> > Thanks,
> >
> > Matt
> >
> > On Wed, Feb 5, 2020 at 1:36 AM Smith, Barry F. via petsc-users
> > <[email protected]> wrote:
> >
> > I am not sure of everything in your email, but it sounds like you want to
> > use a "Picard" iteration to solve [K(u) − kaa^T]Δu = −F(u). That is, solve
> >
> > A(u^{n}) (u^{n+1} - u^{n}) = F(u^{n}) - A(u^{n}) u^{n}, where A(u) = K(u) - kaa^T
> >
> > PETSc provides code for this with SNESSetPicard() (see the manual pages); I
> > don't know if petsc4py has bindings for it.
> >
> > Adding missing python bindings is not terribly difficult and you may be
> > able to do it yourself if this is the approach you want.
> >
> > Barry
> >
> >
> >
> > > On Feb 4, 2020, at 5:07 PM, Olek Niewiarowski <[email protected]> wrote:
> > >
> > > Hello,
> > > I am a FEniCS user but new to petsc4py. I am trying to modify the KSP
> > > solver through the SNES object to implement the Sherman-Morrison
> > > formula (e.g. http://fourier.eng.hmc.edu/e176/lectures/algebra/node6.html
> > > ). I am solving a nonlinear system of the form [K(u) − kaa^T]Δu = −F(u). Here
> > > the Jacobian matrix K is modified by the term kaa^T, where k is a scalar.
> > > Notably, K is sparse, while the term kaa^T results in a full matrix. This
> > > problem can be solved efficiently using the Sherman-Morrison formula:
> > >
> > > [K − kaa^T]^{-1} = K^{-1} − (k K^{-1}aa^TK^{-1})/(1 + k a^TK^{-1}a)
> > > I have managed to successfully implement this at the linear solve level
> > > (by modifying the KSP solver) inside a custom Newton solver in python by
> > > following an incomplete tutorial at
> > > https://www.firedrakeproject.org/petsc-interface.html#defining-a-preconditioner
> > > :
> > >     while (norm(delU) > alpha):  # while not converged
> > >
> > >         self.update_F()  # call to method to update r.h.s form
> > >         self.update_K()  # call to update the jacobian form
> > >         K = assemble(self.K)  # assemble the jacobian matrix
> > >         F = assemble(self.F)  # assemble the r.h.s vector
> > >         a = assemble(self.a_form)  # assemble the a_form (see Sherman-Morrison formula)
> > >
> > >         for bc in self.mem.bc:  # apply boundary conditions
> > >             bc.apply(K, F)
> > >             bc.apply(K, a)
> > >
> > >         B = PETSc.Mat().create()
> > >
> > >         # Assemble the bilinear form that defines A and get the concrete
> > >         # PETSc matrix
> > >         A = as_backend_type(K).mat()  # get the PETSc objects for K and a
> > >         u = as_backend_type(a).vec()
> > >
> > >         # Build the matrix "context"  (see firedrake docs)
> > >         Bctx = MatrixFreeB(A, u, u, self.k)
> > >
> > >         # Set up B; B is the same size as A
> > >         B.setSizes(*A.getSizes())
> > >         B.setType(B.Type.PYTHON)
> > >         B.setPythonContext(Bctx)
> > >         B.setUp()
> > >
> > >         ksp = PETSc.KSP().create()  # create the KSP linear solver object
> > >         ksp.setOperators(B)
> > >         ksp.setUp()
> > >         pc = ksp.pc
> > >         pc.setType(pc.Type.PYTHON)
> > >         pc.setPythonContext(MatrixFreePC())
> > >         ksp.setFromOptions()
> > >
> > >         solution = delU  # the incremental displacement at this iteration
> > >
> > >         b = as_backend_type(-F).vec()
> > >         delu = solution.vector().vec()
> > >
> > >         ksp.solve(b, delu)
> > >
> > >         self.mem.u.vector().axpy(0.25, self.delU.vector())  # poor man's linesearch
> > >         counter += 1
> > > Here is the corresponding petsc4py code adapted from the firedrake docs:
> > >
> > >     class MatrixFreePC(object):
> > >
> > >         def setUp(self, pc):
> > >             B, P = pc.getOperators()
> > >             # extract the MatrixFreeB object from B
> > >             ctx = B.getPythonContext()
> > >             self.A = ctx.A
> > >             self.u = ctx.u
> > >             self.v = ctx.v
> > >             self.k = ctx.k
> > >             # Here we build the PC object that uses the concrete,
> > >             # assembled matrix A. We will use this to apply the action
> > >             # of A^{-1}
> > >             self.pc = PETSc.PC().create()
> > >             self.pc.setOptionsPrefix("mf_")
> > >             self.pc.setOperators(self.A)
> > >             self.pc.setFromOptions()
> > >             # Since u and v do not change, we can build the denominator
> > >             # and the action of A^{-1} on u only once, in the setup phase.
> > >             tmp = self.A.createVecLeft()
> > >             self.pc.apply(self.u, tmp)
> > >             self._Ainvu = tmp
> > >             self._denom = 1 + self.k*self.v.dot(self._Ainvu)
> > >
> > >         def apply(self, pc, x, y):
> > >             # y <- A^{-1}x
> > >             self.pc.apply(x, y)
> > >             # alpha <- k (v^T A^{-1} x) / (1 + k v^T A^{-1} u)
> > >             alpha = (self.k*self.v.dot(y)) / self._denom
> > >             # y <- y - alpha * A^{-1}u
> > >             y.axpy(-alpha, self._Ainvu)
> > >
> > >     class MatrixFreeB(object):
> > >
> > >         def __init__(self, A, u, v, k):
> > >             self.A = A
> > >             self.u = u
> > >             self.v = v
> > >             self.k = k
> > >
> > >         def mult(self, mat, x, y):
> > >             # y <- A x
> > >             self.A.mult(x, y)
> > >
> > >             # alpha <- v^T x
> > >             alpha = self.v.dot(x)
> > >
> > >             # y <- y + alpha*u
> > >             y.axpy(alpha, self.u)
> > > However, this approach is not efficient, as it requires many iterations
> > > due to the fixed Newton step, so I would like to implement it using SNES
> > > with a line search. Unfortunately, I have not been able to find any
> > > documentation or tutorials on how to do so. Provided I have the FEniCS
> > > forms for F, K, and a, I'd like to do something along the lines of:
> > > solver = PETScSNESSolver() # the FEniCS SNES wrapper
> > > snes = solver.snes() # the petsc4py SNES object
> > > ## ??
> > > ksp = snes.getKSP()
> > > # set ksp option similar to above
> > > solver.solve()
> > >
> > > I would be very grateful if anyone could help or point me to a
> > > reference or demo that does something similar (or maybe a completely
> > > different way of solving the problem!).
> > > Many thanks in advance!
> > > Alex
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which their
> > experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>