2016-04-28 22:48 GMT+02:00 Mosè Giordano <[email protected]>:
> Hi Steven,
>
> 2016-04-28 14:58 GMT+02:00 Steven G. Johnson <[email protected]>:
>> On Thursday, April 28, 2016 at 8:46:06 AM UTC-4, Steven G. Johnson wrote:
>>>
>>> Out of curiosity, how does the performance and accuracy compare to the
>>> "roots" function in the Polynomials.jl package, which uses the standard
>>> companion-matrix approach?
>>
>>
>> Just checked: it is much faster than Polynomials.roots, which I guess is not
>> surprising for a Newton-like method compared to an eigenvalue method,
>
> Yes, as I wrote in my first mail, I found my package to be ~20 times
> faster than Polynomials.jl and usually more precise (**very** roughly,
> abs2 of the polynomial evaluated at a computed root comes out around
> 10^-29 for Polynomials.jl and 10^-30 for CmplxRoots.jl).  I probably
> wouldn't have published the package if it had been both slower and
> less precise.
>
>> although perhaps the latter is more reliable in some cases?
>>
>> I'm not sure what the state of the art here is, though.
>
> Most of the methods I found are based on finding the eigenvalues of
> the companion matrix.  A somewhat popular alternative is the Jenkins–Traub
> algorithm (https://en.wikipedia.org/wiki/Jenkins%E2%80%93Traub_algorithm),
> but, to start with, I didn't find a FLOSS implementation of that
> algorithm in any language.
>
> In my test suite I added a couple of tests on random polynomials to
> catch major flaws in the root-finding algorithm.
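
Regarding the accuracy comparison and the random-polynomial tests
mentioned above, such a check could look roughly like the sketch below.
This is only an illustration: it assumes PolynomialRoots.roots takes the
coefficients ordered from the constant term upward, and the residual
threshold is arbitrary.

    using PolynomialRoots

    # Horner evaluation of a polynomial given its coefficients ordered
    # from the constant term upward.
    evalp(coeffs, z) = foldr((c, acc) -> acc*z + c, coeffs)

    for _ in 1:100
        poly = rand(Complex{Float64}, 6)   # random degree-5 polynomial
        for r in roots(poly)
            # abs2 of the polynomial at each computed root should be tiny.
            @assert abs2(evalp(poly, r)) < 1e-20
        end
    end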

I hit a limitation of the algorithm used in PolynomialRoots.jl (the new
name of CmplxRoots.jl, though the rename hasn't been merged into
METADATA.jl yet): it gives highly inaccurate results for large-degree
polynomials (above ~30).  This isn't systematic, it only happens from
time to time, but the higher the degree of the polynomial, the more
failures you'll get.  Fortunately, this can be mitigated with the new
multiple-precision feature.  For the record, Polynomials.jl has the same
problem as PolynomialRoots.jl, and the former still performs worse than
the latter at high degrees as well.
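
The multiple-precision workaround can be sketched roughly as follows,
assuming that roots() simply dispatches on the coefficient type, so that
passing Complex{BigFloat} coefficients makes the whole iteration run at
whatever precision is set for BigFloat:

    using PolynomialRoots

    poly = rand(Complex{Float64}, 41)   # degree-40 polynomial, constant term first

    # Run the solver with 256-bit coefficients, then round the roots back
    # down to double precision.
    rts = setprecision(BigFloat, 256) do
        roots(Complex{BigFloat}.(poly))
    end
    rts64 = Complex{Float64}.(rts)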

Regarding interesting algorithms, the Aberth–Ehrlich method
(https://en.wikipedia.org/wiki/Aberth_method) looks promising, and it
has already been implemented in multiple precision (see
https://en.wikipedia.org/wiki/MPSolve and
http://numpi.dm.unipi.it/mpsolve-2.2/, though that version isn't FLOSS).
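
For the curious, the basic Aberth–Ehrlich iteration itself is quite
short.  Here is a minimal sketch of it in Julia (a plain illustration of
the method, not of MPSolve, assuming Complex{Float64} coefficients
ordered from the constant term upward and naive starting points on a
circle):

    # Evaluate p(z) and p'(z) together with Horner's rule; coefficients
    # are ordered from the constant term upward.
    function horner2(coeffs, z)
        p, dp = coeffs[end], zero(coeffs[end])
        for c in reverse(coeffs[1:end-1])
            dp = dp*z + p
            p  = p*z + c
        end
        return p, dp
    end

    # Aberth–Ehrlich iteration: update all root approximations until the
    # largest correction falls below `tol`.
    function aberth(coeffs; iters = 100, tol = 1e-14)
        n = length(coeffs) - 1                       # degree
        z = [2 * cis(2π*k/n + 0.4) for k in 1:n]     # crude starting points
        for _ in 1:iters
            maxstep = 0.0
            for i in 1:n
                p, dp = horner2(coeffs, z[i])
                w = p/dp                             # Newton correction
                s = sum((1/(z[i] - z[j]) for j in 1:n if j != i); init = zero(w))
                dz = w/(1 - w*s)                     # Aberth correction
                z[i] -= dz
                maxstep = max(maxstep, abs(dz))
            end
            maxstep < tol && break
        end
        return z
    end

    # For example, aberth([-6.0+0im, 11, -6, 1]) should return values
    # close to 1, 2 and 3, the roots of (x-1)*(x-2)*(x-3).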

Bye,
Mosè
