On Sat, 24 Sept 2022 at 09:24, Peter Stahlecker
<peter.stahlec...@gmail.com> wrote:
>
> Thanks! I did not know about nsolve.
> Would nsolve be 'better' than converting the equations to numpy using
> lambdify, and then using scipy.optimize.fsolve?

When you use nsolve more or less the same thing happens: the equations
are lambdified, but converted to mpmath rather than numpy, and then
solved with mpmath's findroot function rather than scipy's fsolve. The
tradeoffs between using mpmath vs numpy are the usual ones for
variable precision vs fixed precision: accuracy vs speed.
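As a rough sketch of the manual route you described (the two-equation
system here is a made-up example, not from your code), lambdify produces
plain numerical functions that fsolve can call directly:

```python
from sympy import symbols, lambdify
from scipy.optimize import fsolve

# Hypothetical example system: x**2 + y**2 = 4 and x = y,
# with a root at x = y = sqrt(2).
x, y = symbols('x y')
eqs = [x**2 + y**2 - 4, x - y]

# lambdify the symbolic equations to numpy-compatible functions,
# then hand them to scipy's fsolve with a starting guess.
f = lambdify((x, y), eqs)
sol = fsolve(lambda v: f(*v), [1.0, 1.0])
```

nsolve does essentially the same internally, except the lambdified
functions use mpmath numbers and findroot does the iteration.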

Since numpy/scipy use fixed precision hardware level floating point
(i.e. 64 bit floats) they can be faster than mpmath which uses
software level arbitrary precision floating point. Likewise since
numpy/scipy use lots of C and Fortran code they can also be faster
than mpmath which is pure Python. On the other hand there may not be
much of a speed difference for relatively "small" systems that do not
benefit significantly from vectorisation and where the overheads of
the Python wrappers in numpy/scipy dominate over the routines that are
implemented in the compiled parts.

Since mpmath uses variable precision arithmetic it can calculate
answers that are more accurate than is possible when using numpy/scipy
or any other fixed precision libraries e.g.:

In [3]: from sympy import nsolve, symbols

In [4]: x = symbols('x')

In [5]: nsolve(x**2 - 2, 1)
Out[5]: 1.41421356237310

In [6]: nsolve(x**2 - 2, 1, prec=50)
Out[6]: 1.4142135623730950488016887242096980785696718753769

The default precision used by nsolve is 15 decimal digits, i.e.
essentially the precision of 64 bit floats (53 binary digits, ~15
decimal digits), so it should return a result with the same nominal
precision as fsolve. That does not mean there is no accuracy gain by
default, though. Solvers like fsolve that work *within* the same
precision as the output format they return will typically give a
result that is less accurate than the precision implies. In other
words, the fact that fsolve uses 64 bit floats does not mean that it
returns the most accurate possible 64 bit float:

In [7]: from scipy.optimize import fsolve

In [8]: fsolve(lambda x: x**2 - 2, 1) # the digits shown are correct
Out[8]: array([1.41421356])

In [9]: fsolve(lambda x: x**2 - 2, 1)[0] # last digits not quite right
Out[9]: 1.4142135623730947

With mpmath's findroot, even if you only want the accuracy of 64 bit
floats, it will use higher precision internally and round the result
to the desired precision at the end.
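For example, you can set mpmath's working precision and call findroot
directly (a minimal sketch; findroot is what nsolve uses under the
hood):

```python
from mpmath import mp, findroot

# Ask for 50 significant digits; findroot works at (slightly above)
# this precision internally and rounds the result at the end.
mp.dps = 50
root = findroot(lambda x: x**2 - 2, 1)
```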

The example shown above of x**2 - 2 is a relatively well-behaved one
where both nsolve and fsolve easily find an accurate answer, although
the answer from nsolve is slightly more accurate. If you try something
more numerically difficult then you can see bigger errors from both
functions, as well as cases where nsolve raises an error because it
doesn't think that it has enough precision to be able to converge.

Lastly, nsolve is more convenient simply because you don't have to
call lambdify explicitly. Ideally nsolve would also be able to call
fsolve, which would make it easier to switch between findroot and
fsolve.
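For completeness, a small system solved with nsolve needs no explicit
lambdify at all (the two equations here are a made-up example with a
root at (x, y) = (1, 2)):

```python
from sympy import symbols, nsolve

x, y = symbols('x y')

# nsolve lambdifies the equations to mpmath and runs findroot
# internally, so the symbolic expressions can be passed directly
# along with a starting guess for each variable.
sol = nsolve((x + y - 3, x*y - 2), (x, y), (0.9, 2.1))
```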

--
Oscar

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/CAHVvXxRqaf2xOiL%2BuM6zDMLpk8UYgwX44ay%2BGb%2Bbi-8KwqiPUA%40mail.gmail.com.

Reply via email to