On Sun, Nov 6, 2011 at 6:59 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Sun, Nov 6, 2011 at 5:52 PM, Dominik Szczerba <dominik at itis.ethz.ch> wrote:
>>
>> >>> I want to start small by porting a very simple code using fixed point
>> >>> iterations as follows: A(x)x = b(x) is approximated as A(x0)x = b(x0),
>> >>> then solved by KSP for x, then x0 is updated to x, then repeat until
>> >>> convergence.
>> >
>> > Run the usual "Newton" methods with A(x) in place of the true Jacobian.
>>
>> When I substitute A(x) into eq. 5.2 I get:
>>
>> A(x) dx = -F(x)             (1)
>> A(x) dx = -A(x) x + b(x)    (2)
>> A(x) dx + A(x) x = b(x)     (3)
>> A(x) (x+dx) = b(x)          (4)
>>
>> My questions:
>>
>> * Will the procedure somehow optimally group the two A(x) terms into
>> one, as in 3-4? This requires knowledge, will this be efficiently
>> handled?
>
> There is no grouping. You solve for dx and do a vector addition.
>
>>
>> * I am solving for x+dx, while eq. 5.3 solves for dx. Is this, and
>> how, correctly handled? Should I somehow disable the update myself?
>
> Do not do any update yourself, just give the correct A at each iteration in
> your FormJacobian routine.
>
>    Matt
OK, no manual update, this is clear now. What is still not clear: by substituting A for F' I arrive at an equation in x+dx (my eq. 4), not in dx (PETSc eq. 5.3)...

Dominik
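
For what it's worth, eqs. 1-4 above already contain the answer: SNES solves
A(x) dx = -F(x) for dx and then applies x <- x + dx itself, and (for a full,
undamped step) those two operations together are exactly A(x)(x+dx) = b(x),
i.e. the Picard update, even though the linear solve is only ever formulated
in dx. Below is a minimal sketch of the two callbacks, assuming the residual
is F(x) = A(x) x - b(x). AppCtx, AssembleA() and AssembleB() are hypothetical
placeholders for the application's own data and assembly code, and the exact
FormJacobian signature differs between PETSc versions (this uses the newer
Mat-based form):

#include <petscsnes.h>

/* Illustrative application context; names are placeholders only. */
typedef struct {
  Mat A;   /* holds A(x) */
  Vec b;   /* holds b(x) */
} AppCtx;

/* User-supplied assembly routines (hypothetical, not PETSc API). */
extern PetscErrorCode AssembleA(Mat A, Vec x, AppCtx *user);
extern PetscErrorCode AssembleB(Vec b, Vec x, AppCtx *user);

/* Residual F(x) = A(x) x - b(x). */
PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = AssembleA(user->A, x, user);CHKERRQ(ierr);  /* rebuild A(x)      */
  ierr = AssembleB(user->b, x, user);CHKERRQ(ierr);  /* rebuild b(x)      */
  ierr = MatMult(user->A, x, f);CHKERRQ(ierr);       /* f = A(x) x        */
  ierr = VecAXPY(f, -1.0, user->b);CHKERRQ(ierr);    /* f = A(x) x - b(x) */
  PetscFunctionReturn(0);
}

/* "Jacobian": hand back A(x) itself instead of the true derivative,
   as suggested above. Older PETSc versions pass Mat* and a MatStructure
   flag here instead. */
PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat Jpre, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = AssembleA(Jpre, x, user);CHKERRQ(ierr);     /* Jpre = A(x) */
  if (J != Jpre) {
    ierr = AssembleA(J, x, user);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

After registering these with SNESSetFunction() and SNESSetJacobian() and
calling SNESSolve(), each nonlinear iteration reproduces the original
fixed-point step A(x0) x = b(x0) without any manual update.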
