On a related thread,
I think there is a bug in the Newton solvers (in 0.8.1): they no longer zero the correction dx before each linear solve. For direct solvers this isn't a problem, but for Krylov methods (in particular PETScKrylovSolver) it means the last correction is passed as the initial guess for the next iteration. Eventually, when the residual becomes sufficiently small, this triggers a divergence_tolerance error in PETSc. (To see this, just change the solver in demo/nls/nonlinearpoisson to something like gmres/ilu and use PETSc as the backend.)
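In case it helps, here is a minimal, self-contained sketch of where the zeroing belongs in a Newton loop. This is plain C++, not the DOLFIN source; the toy scalar problem and the Richardson inner iteration are only stand-ins for the real Jacobian system and Krylov solver.
-----------------------------------------
// Toy Newton iteration whose inner linear solve is iterative and uses the
// correction dx as its starting guess. If dx is not reset to zero before
// each solve, the previous Newton correction is silently reused as the
// initial guess, which is what a Krylov solver such as GMRES would see.
#include <cmath>
#include <cstdio>

int main()
{
  // Solve f(x) = x^3 - 2 = 0 with Newton's method.
  double x = 1.5;
  double dx = 0.0;

  for (int newton_it = 0; newton_it < 20; ++newton_it)
  {
    const double f = x*x*x - 2.0;  // residual
    const double J = 3.0*x*x;      // Jacobian

    if (std::fabs(f) < 1e-12)
      break;

    // Reset the correction before the linear solve. Without this line an
    // iterative solver started from dx would begin at the *previous*
    // Newton correction instead of zero.
    dx = 0.0;

    // Stand-in for an iterative linear solve of J*dx = -f, started from dx:
    // a few Richardson iterations dx <- dx + omega*(-f - J*dx).
    const double omega = 1.0/J;
    for (int k = 0; k < 5; ++k)
      dx += omega*(-f - J*dx);

    x += dx;  // apply the Newton update
    std::printf("Newton %2d: |f| = %g\n", newton_it, std::fabs(f));
  }
  return 0;
}
-----------------------------------------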
Let me know if I'm being incoherent as usual.
cheers
marc
On Sep 27, 2008, at 4:00 PM, Garth N. Wells wrote:
Nuno David Lopes wrote:
Well, in my Stokes/Uzawa codes I get much worse times, more iterations, and the global error diverges. (But I have to think a little more about it; even in the first iteration I get different times.) So they don't work anymore. :(
So I tested with convection-diffusion; please take a look at the solutions. Even with the LU solver.
This should be fixed now.
Garth
On Friday 26 September 2008, Garth N. Wells wrote:
Garth N. Wells wrote:
Nuno David Lopes wrote:
Is there a simple way of setting an initial guess for an iterative LinearSolver? In UMFPACK and PETSc the default initial guess is the zero vector, right?
At the moment, yes (note that UMFPACK is an LU solver, so an initial guess doesn't do anything).
It's very simple, and I've been meaning to add an option for using an initial guess. It's also useful for Newton solvers. I'll add something in the next few days.
I've added this, although it's untested. Let me know if it works ok.
Garth
Garth
For time-dependent problems it would be great if one could use the last time step's solution as the initial guess. I was thinking of something like
-----------------------------------------
solver.solve(A, x_(n+1), x_n, b)
-----------------------------------------
where x_n is the initial guess for the iterative solver.
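For what it's worth, here is a hedged sketch of that behaviour at the PETSc level (C API, recent signatures). The matrix and vectors A, b, x and x_prev are assumed to be assembled elsewhere and error checking is omitted; the point is that once the initial guess is flagged as nonzero, KSPSolve starts from whatever is already stored in x.
-----------------------------------------
#include <petscksp.h>

// Start a Krylov solve from the previous time step's solution.
void solve_time_step(Mat A, Vec b, Vec x, Vec x_prev)
{
  KSP ksp;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);   // recent PETSc signature
  KSPSetType(ksp, KSPGMRES);

  // Tell PETSc not to zero x before solving; KSPSolve will then use the
  // values already stored in x as the initial guess.
  KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);

  VecCopy(x_prev, x);           // seed x with the last time step's solution
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);          // x now holds the new solution

  KSPDestroy(&ksp);
}
-----------------------------------------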
Another doubt (I'm guessing this is just my math ignorance): is it usual that PETSc gmres with the hypre amg preconditioner doesn't work? (After reading that it is so powerful...)
On a Stokes subsystem of size 566000^2 it simply blows up, consuming all of the RAM (16 GB).
(I tried it on simpler problems with bigger systems and it worked perfectly, converging in <10 iterations.)
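For reference, a hedged sketch of how that solver/preconditioner combination is typically selected through the PETSc C API (it requires a PETSc build with Hypre; this only shows how the combination is set up, not whether AMG is suitable for the Stokes system itself):
-----------------------------------------
#include <petscksp.h>

// Select GMRES preconditioned with Hypre BoomerAMG on an existing KSP.
void configure_gmres_amg(KSP ksp)
{
  PC pc;
  KSPSetType(ksp, KSPGMRES);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCHYPRE);           // requires PETSc built with Hypre
  PCHYPRESetType(pc, "boomeramg");  // algebraic multigrid preconditioner
  KSPSetFromOptions(ksp);           // allow command-line overrides
}
-----------------------------------------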
Sorry for disturbing you in these troubled, hard development times, and thanks again.
----------------------------------------------------
Marc Spiegelman
Lamont-Doherty Earth Observatory
Dept. of Applied Physics/Applied Math
Columbia University
http://www.ldeo.columbia.edu/~mspieg
tel: 845 704 2323 (SkypeIn)
----------------------------------------------------
_______________________________________________
DOLFIN-dev mailing list
[email protected]
http://www.fenics.org/mailman/listinfo/dolfin-dev