On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:
>
> On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:
>>
>> Job, I'm basing my judgement on the presentation.
>>
>
> Ah ok, I was wondering. I feel like those presentations give a general 
> impression, but don't really explain the details enough. And like I said, 
> your critique overlaps with Gustafson's own critique of traditional 
> interval arithmetic, so I wasn't sure whether you meant that you don't buy 
> his suggested alternative ubox method after reading the book, or were 
> expressing scepticism based on earlier experience, without full knowledge 
> of what his suggested alternative is.
>

From the presentation, it seemed pretty explicit that the "ubox" method 
replaces a single interval or pair of intervals with a rapidly expanding 
set of boxes.  I just don't see any conceivable way that this could be 
practical for large-scale problems involving many variables.
 

> Well.. we give up one bit of *precision* in the fraction, but *our set of 
> representations is still the same size*. We still have the same number of 
> floats as before! It's just that half of them are now exact (with one bit 
> less precision), and the other half represent the open intervals between 
> these exact numbers. This lets you represent the entire real number line 
> accurately (though with limited precision, unless the value happens to be 
> an exact float). 
>
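For concreteness, here is how I understand the scheme you're describing, as a toy Python model (the 4-bit width and the names are mine, not Gustafson's):

```python
from fractions import Fraction

# Toy model: a 4-bit pattern whose last bit is the "ubit".
# ubit = 0: the remaining 3 bits name an exact value.
# ubit = 1: the open interval between that exact value and the next one up.
ULP = Fraction(1, 8)  # spacing of the exact values in this toy format

def decode(bits):
    exact = (bits >> 1) * ULP
    if bits & 1 == 0:
        return ("exact", exact)
    return ("open", exact, exact + ULP)

# 16 patterns total: 8 exact points plus the 8 open intervals between
# them -- the same number of representations as a plain 4-bit format,
# at the cost of one bit of precision.
for b in range(16):
    print(f"{b:04b} -> {decode(b)}")
```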

Sorry, but that just does not and cannot work.

The problem is that if you interpret an inexact unum as the open interval 
between two adjacent exact values, what you have is essentially the same as 
interval arithmetic.  Each operation produces broader and broader intervals 
(necessitating lower- and lower-precision unums), with the well-known 
problem that the intervals quickly become absurdly pessimistic in real 
problems (i.e. you quickly and prematurely discard all of your precision in 
a variable-precision format like unums).

The real problem with interval arithmetic is not open vs. closed intervals, 
it is this growth of the error bounds in realistic computations (due to the 
dependency problem and similar).  (The focus on infinite and semi-infinite 
open intervals is a sideshow.  If you want useful error bounds, the 
important things are the *small* intervals.)
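The dependency problem shows up already in the simplest possible expression. A two-line sketch (exact rationals, so rounding plays no role at all):

```python
from fractions import Fraction as F

def isub(a, b):
    # Interval subtraction must assume its operands are independent.
    return (a[0] - b[1], a[1] - b[0])

x = (F(0), F(1))
# x - x is identically zero, but the subtraction cannot know that both
# operands are the same variable, so the bound is maximally pessimistic:
d = isub(x, x)   # (-1, 1) rather than (0, 0)
```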

If you discard the interval interpretation with its rapid loss of 
precision, what you are left with is an inexact flag per value, but with no 
useful error bounds.   And I don't believe that this is much more useful 
than a single inexact flag for a set of computations as in IEEE.
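That "inexact flag per value" reading can be made concrete (again a toy sketch of my reading, not Gustafson's actual proposal):

```python
# A value tagged only with "is this exact?" -- the ubit stripped of its
# interval meaning.  The flag propagates, but carries no error bound.
class Flagged:
    def __init__(self, v, exact=True):
        self.v, self.exact = v, exact

    def __add__(self, other):
        return Flagged(self.v + other.v, self.exact and other.exact)

    def __mul__(self, other):
        return Flagged(self.v * other.v, self.exact and other.exact)

a = Flagged(1.0)                 # exactly representable
b = Flagged(0.1, exact=False)    # 0.1 is not exactly representable
c = (a + b) * b
# c.exact is False, but that one bit says nothing about whether the
# accumulated error is 1e-17 or 1e+17 -- which is exactly what IEEE's
# single sticky inexact flag already tells you about a whole computation.
```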
