Hi Nils,

As far as I can tell, the code is converging well. For example, when I
modify the sweep loop to be

~~~~
        print
        print 'step: {0}'.format(tt)
        res = 1.0  # reset so the sweep loop is entered at each time step
        while res > 1e-9:  # sweep to tolerance rather than a fixed Solve.sweeps count
            res = eq.sweep(var=rho, dt=Solve.step,
                           solver=Solve.customSolver)
            print res
~~~~

every time step seems to converge in 5 sweeps to around 1e-10:

step: 20.0
0.00192315625797
4.5626520206e-05
1.08935792171e-06
2.60975675747e-08
6.26480480874e-10

step: 21.0
0.00195977695607
4.651749367e-05
1.11103436368e-06
2.66255569394e-08
6.3934852818e-10

step: 22.0
0.00199768415442
4.74384979099e-05
1.13342888153e-06
2.71708361386e-08
6.52634229319e-10

The non-linear residual only really tells you how well each individual
time step is converging. It seems like you were only using the very
first non-linear residual, which is calculated with the old value and
doesn't tell you anything useful. Basically, "A_new * x_old - b_new" is
not a measure of anything useful; it is only useful as a quantity for
normalization. FiPy uses the old value, or the previous sweep's value,
to calculate the non-linear residual, so the residual is

   A(n) * x(n-1) - b(n)

where n is the sweep number and x(-1) is x_old if the first sweep is
n=0. If only one sweep is executed, the residual isn't a useful
quantity. The linear residual, "A(n) * x(n) - b(n)", is not returned
from the "sweep" method; it is assumed to be as small as the tolerance
of the linear solve. Basically, collecting the first non-linear
residual at each time step is not a useful way to generate a metric
for convergence across time steps. In fact, I'm not even sure how to
do that or what it would mean. I only know how to assess convergence
within a given time step.
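
For what it's worth, here is roughly how I would use the first sweep's
residual purely for normalization within a time step. This is only a
sketch: eq, rho, Solve.step and Solve.customSolver are borrowed from
your snippet, and the relative tolerance and sweep cap are arbitrary
choices on my part.

~~~~
# Sketch only: eq, rho, Solve.step and Solve.customSolver are the
# objects from your snippet; 1e-6 and the 20-sweep cap are arbitrary.
res0 = eq.sweep(var=rho, dt=Solve.step, solver=Solve.customSolver)
res = res0
sweeps = 1
while res > 1e-6 * res0 and sweeps < 20:
    # sweep() returns the norm of A(n) * x(n-1) - b(n), i.e. the
    # non-linear residual computed with the previous sweep's value
    # before the linear solve.
    res = eq.sweep(var=rho, dt=Solve.step, solver=Solve.customSolver)
    sweeps += 1
    print('sweep {0}: relative residual {1}'.format(sweeps, res / res0))
~~~~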

One thing that I'm confused about: it takes 5 sweeps to converge, but
a fully linear equation should converge in a single sweep. Is the
equation actually linear? I couldn't tell from the code.
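
For reference, here is a minimal sketch of what I'd expect from a
purely linear problem (standard FiPy transient diffusion, not your
actual equation): the first sweep already solves the system, so from
the second sweep onward the reported residual should sit down at the
linear solver's tolerance.

~~~~
from fipy import Grid1D, CellVariable, TransientTerm, DiffusionTerm

# A purely linear transient diffusion problem (illustrative only).
mesh = Grid1D(nx=50, dx=1.0)
phi = CellVariable(mesh=mesh, value=0.0)
phi.constrain(1.0, mesh.facesLeft)
phi.constrain(0.0, mesh.facesRight)

eq = TransientTerm() == DiffusionTerm(coeff=1.0)

# The first residual is computed from the old value; after one sweep a
# linear equation is solved, so subsequent residuals are tiny.
for sweep in range(3):
    res = eq.sweep(var=phi, dt=0.1)
    print(res)
~~~~

If your equation needs 5 sweeps to reach that point, something in the
matrix or the right-hand side must be changing between sweeps, which
is why I'm asking whether it's really linear.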

Cheers,

Daniel




On Fri, Aug 5, 2016 at 8:30 AM, Nils Becker
<nils.bec...@bioquant.uni-heidelberg.de> wrote:
>>> I'm not sure if using ||lhs|| is a good way to normalize the residual.
>>> Surely that could be very small. Is ||lhs - rhs|| getting smaller
>>> without normalization?
>
> If you don't see any obvious errors in building the solution in FiPy, I
> could continue poking around if I knew how to evaluate the various terms
> of the solution and their errors at different times. I see methods such as
> eq_diffprolif.justErrorVector, but I don't quite understand what exactly
> this returns. Is it lhs - rhs? What is justResidualVector? Can I get those
> for the individual terms on the rhs as well?
>
> cheers, n.
>



-- 
Daniel Wheeler