On Thursday, July 30, 2015 at 3:10:24 PM UTC-4, Job van der Zwan wrote:
>
> On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:
>>
>> The problem is that if you interpret an exact unum as the open interval 
>> between two adjacent exact values, what you have is essentially the same as 
>> interval arithmetic.  The result of each operation will produce intervals 
>> that are broader and broader (necessitating lower and lower precision 
>> unums), with the well known problem that the interval quickly becomes 
>> absurdly pessimistic in real problems (i.e. you quickly and prematurely 
>> discard all of your precision in a variable-precision format like unums).
>>
>> The real problem with interval arithmetic is not open vs. closed 
>> intervals, it is this growth of the error bounds in realistic computations 
>> (due to the dependency problem and similar).  (The focus on infinite and 
>> semi-infinite open intervals is a sideshow.  If you want useful error 
>> bounds, the important things are the *small* intervals.)
>>
>> If you discard the interval interpretation with its rapid loss of 
>> precision, what you are left with is an inexact flag per value, but with no 
>> useful error bounds. And I don't believe that this is much more useful than 
>> a single inexact flag for a set of computations as in IEEE.
>>
>
> The thing is, these are *exactly* the criticisms Gustafson has of 
> traditional interval arithmetic. In fact, he's even more critical of 
> interval arithmetic than he is of floats, as far as I can see. However, he 
> claims that ubounds don't share the "absurd pessimism" problem. Supposedly, 
> traditional interval arithmetic is by necessity more pessimistic about its 
> bounds because of rounding and because it uses only closed endpoints 
> instead of allowing for open intervals. Unums instead are (supposedly) more 
> precise about the information loss they incur, and thus (supposedly) don't 
> blow up as badly. Again, his claims, not mine. I'm not saying you're wrong, 
> or even sure that you disagree as much as you might think you do (although 
> I'm pretty sure you wouldn't like the tone he uses when describing 
> traditional methods).
>
> I agree with the others about the grain of salt (unums/ubounds/uboxes 
> *always* come out on top in his examples, which does make you wonder), but 
> on the other hand: given that the Mathematica implementation of his 
> methods is open source, his claims *should* be verifiable (it can be found 
> under Downloads/Updates at 
> <https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867>; 
> Simon Byrne linked it earlier. I also found a Python port 
> <https://github.com/jrmuizel/pyunum>).
>

If you inspect the specific examples of challenge problems that Gustafson 
gives in Chapter 14 of his book, the open vs. closed interval distinction 
doesn't actually make an important appearance. The main ways that ubounds 
do better than Mathematica's Intervals are:

1. Fused operations allow getting around specific cases of the dependency 
problem. E.g. using squareu[x] instead of x*x allows putting 0 as a lower 
bound of the result, and fdotu allows avoiding cancellation in dot products 
(and, as special cases, sums and differences); see the sketches below.
2. Sometimes the unums are allowed to have more bits in their mantissa than 
the bounds of the float Intervals.
3. The ubounds often use fewer bits in their representation (averaged over 
a whole calculation) than Interval alternatives.

Only the first two are relevant to correctness/precision. Number 3 is a 
performance issue that I don't want to discuss.

Looking at the specific examples,

* "Wrath of Kahan, 1"
Intervals eventually blow up to [-Inf, Inf]. Ubounds also diverge (from the 
text, "If you keep going, the left endpoint falls below 6, and then 
diverges towards -Inf, but remember that a unum environment can 
automatically detect when relative width gets too high..."). They diverge 
slightly more slowly in this case because he has allowed 2^6=64 bits in the 
mantissa of the ubound endpoints, whereas the double Interval bounds have 
only 52 bits in their mantissa.
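
For reference, assuming the recurrence here is the well-known Kahan/Muller 
one, u_{n+1} = 111 - 1130/u_n + 3000/(u_n*u_{n-1}) with u_0 = 2 and 
u_1 = -4 (exact limit 6, repelling fixed point 100), the behavior is easy 
to reproduce:

from fractions import Fraction

def step(u_prev, u):
    # u_{n+1} = 111 - 1130/u_n + 3000/(u_n * u_{n-1})
    return 111 - 1130 / u + 3000 / (u * u_prev)

uf, vf = 2.0, -4.0                  # double-precision iterates
ur, vr = Fraction(2), Fraction(-4)  # exact rational iterates
for n in range(2, 25):
    uf, vf = vf, step(uf, vf)
    ur, vr = vr, step(ur, vr)
    print(n, vf, float(vr))
# The doubles are pulled to 100; the exact iterates keep approaching 6,
# which is the value the interval/ubound endpoints are trying to bracket.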

* "Wrath of Kahan, 2"
In the unum computation, squaring operations are fused with squareu, but 
the float Interval calculations are not fused. I also believe the check for 
the ubounds containing zero in e[z] is erroneous in a way that makes this 
example very misleading, but I don't want to go into that detail here.
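
For context, and hedged because I'm going from memory of Kahan's example 
rather than the book's exact code: I believe e[z] is Kahan's 
E(z) = (e^z - 1)/z with E(0) = 1. The function is smooth and bounded near 
zero, but an interval implementation that divides by an interval containing 
zero blows up to (-Inf, Inf) unless it special-cases that input, which is 
where the zero check comes in:

import math

def E_point(z):
    # Kahan's E(z): the removable singularity at z = 0 handled explicitly.
    return 1.0 if z == 0.0 else math.expm1(z) / z

def E_interval(lo, hi):
    # E is monotonically increasing, so endpoint evaluation gives a tight
    # enclosure -- but only because the z == 0 case is handled above.
    # Naively dividing the interval exp(z) - 1 by an interval straddling
    # zero would return (-Inf, Inf) instead.
    return (E_point(lo), E_point(hi))

print(E_interval(-1e-3, 1e-3))  # a tight bracket around 1.0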

* "Rump's royal pain"
The unum version uses fused powu and squareu operations, and allows up to 
2^7=128 bits in the mantissa for ubounds, whereas the double Intervals are 
computed without fused operations and allow only 52 bits in their mantissa. 
The fused operations are critical here.
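
Rump's expression itself is short, so this one is checkable without unums 
at all. A sketch using Python's exact rationals in place of the fused 
operations (333.75 = 1335/4 and 5.5 = 11/2, so every quantity is rational):

from fractions import Fraction

def rump_exact(a, b):
    a, b = Fraction(a), Fraction(b)
    return (Fraction(1335, 4) * b**6
            + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + Fraction(11, 2) * b**8
            + a / (2 * b))

def rump_double(a, b):
    return (333.75 * b**6
            + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + 5.5 * b**8
            + a / (2 * b))

print(float(rump_exact(77617, 33096)))  # -0.827396..., the true value
print(rump_double(77617.0, 33096.0))    # off by roughly 21 orders of magnitude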

* "Quadratic formula"
The main advantage comes from using fused squareu operations and from 
allowing more bits in the mantissa of the unums than in the mantissa of the 
single-precision floats. There is no comparison to float Intervals here.
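
This is not the book's remedy, but for comparison, the classic float-side 
fix for the same cancellation multiplies through by the conjugate so that 
no nearly equal quantities are subtracted:

import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    # (-b + d) cancels badly when b*b >> 4*a*c and b > 0
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))  # same-sign sum: no cancellation
    return q / a, c / q                   # second root via x1 * x2 = c/a

print(roots_naive(1.0, 1e8, 1.0))   # the small root loses most of its digits
print(roots_stable(1.0, 1e8, 1.0))  # (-1e8, -1e-8), both roots accurate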

* "Bailey's numerical nightmare"
The unum computation is first scaled and then done almost entirely with 
fused dot products, which turns it into an integer computation until the 
final division. The float Interval computation was never scaled; the plain 
float computation was scaled, but used no fused operations.
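
The fused dot product idea is easy to emulate in software: form every 
product and the whole sum exactly, and round once at the end. A sketch (my 
function names; the data is a toy example, not Bailey's actual matrix):

from fractions import Fraction

def fused_dot(xs, ys):
    # Exact products and an exact sum, rounded once at the very end --
    # the behavior a fused dot product like fdotu provides.
    return float(sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys)))

def naive_dot(xs, ys):
    s = 0.0
    for x, y in zip(xs, ys):
        s += x * y  # rounds the product and the running sum at every term
    return s

xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(naive_dot(xs, ys))  # 0.0: the 1.0 is swallowed and then cancelled away
print(fused_dot(xs, ys))  # 1.0: exact accumulation keeps it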

* "Fast Fourier Transforms"
I haven't read this example very closely, so no comment.

My conclusion is that in all of these specific test cases, Intervals with 
float endpoints could get answers that are just as precise/accurate, if 
they used the same fused operations and if their mantissa size were as 
large as the maximum mantissa size allowed for the unums. Even so, this is 
a useful observation. Environments that provide Intervals should provide a 
useful set of fused operations over them to reduce the dependency problem 
in a usefully large set of cases. If open vs. closed endpoints are 
important, those could easily be added to any type of Interval 
representation at the cost of 2 extra bits per interval.
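
On that last point, a sketch of what those two extra bits could look like 
(my own naming, not an existing library's): one open/closed flag per 
endpoint, propagated through each operation:

from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Interval:
    lo: Fraction
    hi: Fraction
    lo_open: bool = False  # the first extra bit
    hi_open: bool = False  # the second extra bit

    def __add__(self, other):
        # An endpoint of a sum is attained only if both contributing
        # endpoints are attained, so "open" propagates by logical or.
        return Interval(self.lo + other.lo, self.hi + other.hi,
                        self.lo_open or other.lo_open,
                        self.hi_open or other.hi_open)

    def __repr__(self):
        return (("(" if self.lo_open else "[")
                + f"{self.lo}, {self.hi}"
                + (")" if self.hi_open else "]"))

a = Interval(Fraction(0), Fraction(1))                 # [0, 1]
b = Interval(Fraction(2), Fraction(3), hi_open=True)   # [2, 3)
print(a + b)  # [2, 4)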

The performance claims might eventually be legitimate (with new hardware), 
but I can't personally evaluate them.
