Existing methods for evaluation of polynomials do not produce tight bounds. 
There is indeed a mountain of papers written about doing it, all with the 
flavor of "our method sucks less" because it usually gets a somewhat tighter 
bound, though still not as tight as possible. I was surprised to learn 
this, because I expected to just Google the evaluation of polynomials of 
intervals and make a unum version of it. That's when I discovered that no one 
had solved the problem, and that proofs had even been published that it is NP-hard. 
Gulp! So it took me an extra *three months* to solve that problem, and I 
wondered if I should publish it as a standalone paper, but no, I needed to 
get the book done.
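For anyone who hasn't seen why naive interval evaluation fails to give tight bounds, here is a minimal Python sketch of the dependency problem (my own illustration, not the method from the book; a real implementation would also round outward):

```python
# A bare-bones interval type, just enough to show the dependency problem.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
one = Interval(1.0, 1.0)

# Evaluate p(x) = x^2 - x over x in [0, 1] two ways.
# Each occurrence of x is treated as independent, so both overestimate:
naive = x * x - x        # term by term gives [-1, 1]
factored = x * (x - one) # factoring helps, but still gives [-1, 0]
print(naive, factored)   # true range is [-0.25, 0], minimum at x = 0.5
```

Rewriting the expression (here, factoring) tightens the bound, which is the "sucks less" game the papers play; neither form reaches the exact range [-0.25, 0].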

I strongly agree that people cannot be asked to solve a new numerical 
analysis problem every time they program a new function. That is *exactly* the 
point I make over and over in Part 2. We need to simply express what we 
want calculated, and let the *computer* do all the work to get an answer 
that satisfies the environment setting of maximum allowed relative error. 
The last thing we need is a set of "numerical recipes" to choose from.

On Thursday, July 30, 2015 at 3:46:52 PM UTC-7, Steven G. Johnson wrote:
>
> People have devised methods for evaluation of polynomials with interval 
> arithmetic, too (google it).  Not sure how his method compares.  It is well 
> known that you can often work around the dependency problem for very 
> specific expressions.
>
> However, it is not practical to tell people that they need to solve a new 
> numerical-analysis research problem every time they have a new program in 
> which a variable is used more than once (used more than once => dependency 
> problem).
>
> And if you don't solve the dependency problem, your error bounds are 
> useless for general-purpose tasks.  And without accurate error bounds, 
> adaptive precision is a non-starter.
>
