Scott:  Is your number format a public (open source) specification?  How
does it differ from decimal floating point?

On Wed, Jul 29, 2015 at 10:30 AM, Tom Breloff <[email protected]> wrote:

> Correct me if I'm wrong, but (fixed-size) decimal floating-point has most
> of the same issues as floating point in terms of accumulation of errors,
> right?  For certain cases, such as adding 2 prices together, I agree that
> decimal floating point would work ($1.01 + $2.02 == $3.03), but for those
> cases it's easier to represent the values as integers: (101 + 202 == 303 ;
> prec=2), which is basically what I do now.
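The scaled-integer scheme described above ("101 + 202 == 303; prec=2") can be sketched as follows. This is an illustrative Python sketch, not code from the thread: it contrasts accumulating a binary float (which cannot represent 0.01 exactly) with accumulating integer cents (which stays exact).

```python
# Illustrative sketch (not from the thread): binary floating point cannot
# represent most decimal fractions exactly, so repeated addition drifts,
# while scaled integers ("prec=2", i.e. cents) remain exact.
total_float = 0.0
total_cents = 0
for _ in range(1000):
    total_float += 0.01   # 0.01 has no exact binary representation
    total_cents += 1      # integer cents: always exact

print(total_float == 10.0)        # False: accumulated rounding error
print(total_cents / 100 == 10.0)  # True: one final exact division
```

A fixed-size decimal float would also keep $1.01 + $2.02 exact, but as Tom notes, for sums of same-precision prices the integer representation gets you the same guarantee with ordinary integer hardware.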
>
> In terms of storage and complexity, I would expect that decimal floating
> point numbers are bloated as compared to floats.  You're giving up speed
> and memory in order to guarantee an exact representation falls in base
> 10... I could understand how this is occasionally useful, but I can't
> imagine you'd want that in the general case.
>
> In terms of hardware support... obviously it doesn't exist today, but it
> could in the future:
> http://www.theplatform.net/2015/03/12/the-little-chip-that-could-disrupt-exascale-computing/
>
> Either way, I would think there's enough potential to the idea to at least
> prototype and test, and maybe it will prove to be more useful than you
> expect.
>
> On Wed, Jul 29, 2015 at 10:10 AM, Job van der Zwan <
> [email protected]> wrote:
>
>> On Wednesday, 29 July 2015 16:50:21 UTC+3, Steven G. Johnson wrote:
>>>
>>> Regarding, unums, without hardware support, at first glance they don't
>>> sound practical compared to the present alternatives (hardware or software
>>> fixed-precision float types, or arbitrary precision if you need it). And
>>> the "ubox" method for error analysis, even if it overcomes the problems of
>>> interval arithmetic as claimed, sounds too expensive to use on anything
>>> except for the smallest-scale problems because of the large number of boxes
>>> that you seem to need for each value whose error is being tracked.
>>>
>>
>> Well, I don't know enough about traditional methods to say whether they're
>> really as limited as Gustafson claims in his book, or whether he's just
>> cherry-picking. The same goes for the cost of using uboxes.
>>
>> However, ubound arithmetic tells you that 1 / (0, 1] = [1, inf), and that
>> [1, inf) / inf = 0. The ubounds describing those interval results are
>> effectively just a pair of floating point numbers, plus a ubit to signal
>> whether an endpoint is open or not. That's a very simple thing to
>> implement. I'm not sure any arbitrary-precision method deals with this
>> so elegantly - you probably know better than I do.
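The "pair of endpoints plus an open/closed bit" representation Job describes is simple enough to sketch directly. The following is a hypothetical illustration (the tuple layout and function name are mine, not Gustafson's): a ubound-like value as `(lo, lo_open, hi, hi_open)`, with a reciprocal that reproduces 1 / (0, 1] = [1, inf).

```python
import math

# Hypothetical sketch: a ubound-like interval as (lo, lo_open, hi, hi_open),
# where the booleans play the role of the ubit marking open endpoints.

def recip(lo, lo_open, hi, hi_open):
    """Reciprocal of a nonnegative interval: 1 / (lo, hi] = [1/hi, 1/lo)."""
    assert lo >= 0.0, "sketch handles the nonnegative case only"
    new_lo = 0.0 if math.isinf(hi) else 1.0 / hi
    new_hi = math.inf if lo == 0.0 else 1.0 / lo
    # Endpoints swap under reciprocal, and openness swaps with them:
    # an open endpoint at 0 maps to an open endpoint at inf, and vice versa.
    return (new_lo, hi_open, new_hi, lo_open)

print(recip(0.0, True, 1.0, False))        # 1 / (0, 1]  ->  [1, inf)
print(recip(1.0, False, math.inf, True))   # 1 / [1, inf) ->  (0, 1]
```

The appeal is that, unlike classical closed-interval arithmetic, the open-endpoint bit lets 1 / (0, 1] come out as the half-open [1, inf) rather than forcing the ambiguous closed [1, inf].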
>>
>
>
