On Sat, Nov 5, 2011 at 3:45 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Sat, Nov 5, 2011 at 08:06, Matthew Knepley <knepley at gmail.com> wrote:
>
>>> I want to start small by porting a very simple code using fixed-point
>>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0),
>>> then solved by KSP for x, then x0 is updated to x, then repeat until
>>> convergence.
>>>
>
> Run the usual "Newton" methods with A(x) in place of the true Jacobian.
> You can compute A(x) in the residual
>
>   F(x) = A(x) x - b(x)
>
> and cache it in your user context, then pass it back when asked to compute
> the Jacobian.
>
> This runs your algorithm (often called Picard) in "defect correction
> mode", but once you write your equations this way, you can try Newton
> iteration using -snes_mf_operator.
>
>>> In chapter 5 of the documentation I see all sorts of sophisticated
>>> Newton-type methods requiring computation of the Jacobian. Is the simple
>>> method defined above still accessible somehow in PETSc, or can such a
>>> trivial scheme only be done by hand? Which of the existing nonlinear
>>> solvers would be the closest match in both simplicity and robustness
>>> (even at the cost of performance)?
>>>
>>
>> You want -snes_type nrichardson. All you need to do is define the residual.
>>
>
> Matt, were the 1000 emails we exchanged over this last month not enough to
> prevent you from spreading misinformation under a different name?

Tell people whatever you want. The above is just Newton, or "iterative
refinement" in thousands of NA papers.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
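
[The iteration being discussed can be sketched in a few lines. This is plain
Python rather than PETSc, and A(x), b(x), and the 2x2 test problem below are
hypothetical, chosen only so the loop has something to converge on; the small
direct solve stands in for the KSP solve. The defect_correction_step function
illustrates Jed's point: a "Newton" step that uses A(x) in place of the true
Jacobian reproduces the Picard step exactly.]

```python
# Picard (fixed-point) iteration for A(x) x = b(x):
# freeze x at x_k, solve the *linear* system A(x_k) x_{k+1} = b(x_k),
# and repeat until the update stalls.

def A(x):
    # hypothetical state-dependent matrix (diagonally dominant, so Picard converges)
    return [[4.0 + x[1] * x[1], 1.0],
            [1.0, 4.0 + x[0] * x[0]]]

def b(x):
    # hypothetical state-dependent right-hand side
    return [1.0 + 0.5 * x[0], 2.0]

def solve2x2(M, rhs):
    # tiny direct solve, standing in for the KSP solve in the email
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

def residual(x):
    # F(x) = A(x) x - b(x), the residual from the email
    return [sum(A(x)[i][j] * x[j] for j in range(2)) - b(x)[i]
            for i in range(2)]

def picard(x0, tol=1e-12, maxit=100):
    x = x0
    for k in range(maxit):
        x_new = solve2x2(A(x), b(x))       # solve A(x_k) x_{k+1} = b(x_k)
        if max(abs(x_new[i] - x[i]) for i in range(2)) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

def defect_correction_step(x):
    # One "Newton" step with A(x) used as the Jacobian:
    #   x_new = x - A(x)^{-1} F(x),  F(x) = A(x) x - b(x).
    # Algebraically x_new = A(x)^{-1} b(x), i.e. exactly one Picard step,
    # which is why Picard can be run as Newton in "defect correction mode".
    d = solve2x2(A(x), residual(x))
    return [x[i] - d[i] for i in range(2)]

x, its = picard([0.0, 0.0])
print(its, max(abs(ri) for ri in residual(x)))
```

Once the problem is written this way inside a SNES residual/Jacobian pair, the
only change needed to try true Newton is the -snes_mf_operator option, with
A(x) kept as the preconditioning matrix.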
