I'm working on getting the unit tests to pass for the case when Epetra is enabled but PETSc is not. This configuration has not been tested before, so things are failing for the Epetra backend. I came across the following.
This works:

    solve(A, x, b, "lu")

This works:

    solve(A, x, b, "gmres", "ilu")

This does not work:

    solve(A, x, b, "lu")
    solve(A, x, b, "gmres", "ilu")

The reason is that the solution vector after the first call to solve is already exact (to within machine precision), and the Epetra Krylov solver then gets confused:

    Ifpack_AdditiveSchwarz, ov = 0, local solver =
    ***** `IFPACK ILU (fill=0, relax=0.000000, athr=0.000000, rthr=1.000000)'
    number of iterations: 5
    Actual residual = 5.2388e-17
    Recursive residual = 5.6421e-25
    Calculated Norms        Requested Norm
    ||r||_2 / ||r0||_2: 1.125463e+00

The residuals are tiny, but after 5 unnecessary iterations the solver bails out, thinking that the residual (as a result of round-off errors) is increasing. Setting the option nonzero_initial_guess to True does not help, since we never pass that option on to Epetra.

So this is two bugs at once:

1. A bug in Epetra (it gets confused when the initial guess is already good)
2. A bug in the DOLFIN wrapper, which ignores nonzero_initial_guess

Can someone familiar with Trilinos report (1) to the right people? And should we explicitly set the vector to zero in EpetraKrylovSolver.cpp, or is there some option we should set, like we do in PETScKrylovSolver? Two sketches follow below: a minimal reproduction, and the kind of fix I have in mind for the wrapper.
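For reference, here is a minimal sketch that should reproduce the problem. It assumes the UFL-generated forms from the standard Poisson demo (the Poisson.h header) and a build where Epetra is available as the linear algebra backend; the mesh size and right-hand side are arbitrary:

    #include <dolfin.h>
    #include "Poisson.h"  // assumed: UFL-generated forms, as in the Poisson demo

    using namespace dolfin;

    int main()
    {
      // Assumption: run with Epetra as the active backend
      parameters["linear_algebra_backend"] = "Epetra";

      UnitSquare mesh(32, 32);
      Poisson::FunctionSpace V(mesh);
      Poisson::BilinearForm a(V, V);
      Poisson::LinearForm L(V);
      Constant f(1.0);
      L.f = f;

      Matrix A;
      Vector b;
      assemble(A, a);
      assemble(b, L);

      Function u(V);
      GenericVector& x = *u.vector();

      // Each call works in isolation; the sequence does not.
      solve(A, x, b, "lu");            // x is now exact to machine precision
      solve(A, x, b, "gmres", "ilu");  // Epetra mistakes round-off for divergence

      return 0;
    }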
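And here is a sketch of the kind of fix I have in mind for (2) in EpetraKrylovSolver.cpp. The parameter nonzero_initial_guess and GenericVector::zero() exist in DOLFIN; the exact signature and the surrounding code are only illustrative:

    dolfin::uint EpetraKrylovSolver::solve(GenericVector& x, const GenericVector& b)
    {
      // AztecOO iterates from whatever happens to be stored in x, so unless
      // the user explicitly asks to keep the initial guess, start from zero.
      // This mirrors PETScKrylovSolver, where the opposite case is handled
      // by KSPSetInitialGuessNonzero(ksp, PETSC_TRUE).
      const bool nonzero_guess = parameters["nonzero_initial_guess"];
      if (!nonzero_guess)
        x.zero();

      // ... configure AztecOO/Ifpack and solve as before ...
    }

-- Anders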