This seems interesting; I'd like to know what David Sanders 
(https://github.com/dpsanders) thinks of the math. (I missed his talk at 
JuliaCon and am waiting for the video, but the description made it sound 
relevant.)

There also doesn't seem to be any representation of -0.0, which, from what 
I've read, is important for representing negative underflows.
(However, I really don't understand why there isn't a corresponding value 
for positive underflows in the IEEE formats, in addition to an exact 0. 
Why it is displayed as -0.0, instead of something like -Und, 0, and Und, 
analogous to -Inf and Inf, I just don't get. If any mathematicians would 
please explain that to me, I'd appreciate it!)
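For what it's worth, the signed-zero behavior I mean is easy to see in any IEEE 754 implementation; here's a quick illustration in Python (Julia behaves the same way), where a negative result underflows to -0.0, which compares equal to 0.0 but keeps its sign bit:

```python
import math

x = -1e-300 * 1e-300          # true result is -1e-600, far below the
                              # smallest subnormal, so it underflows
print(x)                      # -> -0.0 (the sign of the true result survives)
print(x == 0.0)               # -> True (-0.0 and 0.0 compare equal)
print(math.copysign(1.0, x))  # -> -1.0 (but the sign bit is still there)
```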

I also wonder how well this would work for all the array-based math used in 
Julia, where you'd really like all the values to have a fixed size
for fast indexing.
I can think of some ways around that, e.g. using an extra bit to say that 
the real value is not stored in place but rather in an overflow vector, 
with the overflow entries allocated with a big enough size to handle the 
larger precision. I'm not sure how that would perform, though; it would 
depend a lot on how many values had to be promoted to a larger size.
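To make the overflow-vector idea concrete, here's a rough sketch in Python of what I have in mind (the class and names are just mine for illustration, not anything from the unum proposal): fixed-size 16-bit slots for fast indexing, with the top bit flagging that the slot holds an index into a side vector of wider values.

```python
# Hypothetical sketch: each element is a fixed 16-bit slot; if the top
# bit is set, the remaining 15 bits index into an overflow vector that
# holds values promoted to a larger size.
OVERFLOW_FLAG = 1 << 15
SLOT_MAX = OVERFLOW_FLAG - 1  # largest value stored in place

class OverflowArray:
    def __init__(self):
        self.slots = []      # fixed-size slots (fast, directly indexable)
        self.overflow = []   # arbitrarily wide values, promoted on demand

    def append(self, value):
        if 0 <= value <= SLOT_MAX:
            self.slots.append(value)                   # fits in place
        else:
            self.slots.append(OVERFLOW_FLAG | len(self.overflow))
            self.overflow.append(value)                # promoted out of line

    def __getitem__(self, i):
        slot = self.slots[i]
        if slot & OVERFLOW_FLAG:
            return self.overflow[slot & SLOT_MAX]
        return slot

a = OverflowArray()
a.append(42)       # stays in its 16-bit slot
a.append(10**20)   # too wide; goes to the overflow vector
print(a[0], a[1])  # -> 42 100000000000000000000
```

The performance question is exactly the branch in `__getitem__`: every read pays for the flag test, and the cost of the extra indirection grows with the fraction of promoted values.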

Finally, I wonder, in the absence of hardware that could directly handle 
unums, whether this would really be any better than the packing I've been 
doing for decades (to save space in a database and in in-memory structures, 
where things are read much more often than written or modified).

In my old format, I used length/type bytes (normally 1 or 2 bytes, or more 
to handle up to 8-byte lengths), followed by the packed data. For example, 
non-negative integers were represented by 0-n bytes after the type/length 
info; negative integers by 0-n bytes without any trailing 0xFF bytes; and 
scaled decimals by a 1- or 2-byte signed scale, followed by 0-n bytes (with 
the same separation of negative/non-negative for ease of 
packing/unpacking). The format also handles Null, packed strings (binary, 
8-bit text, and Unicode), and binary floating-point values
(also packed: first using float format instead of double, if (float)x == x, 
and then eliminating LSBs of 0, which means a 0.0 doesn't take any extra 
bytes, and many small values take just 1 or 2 extra bytes).
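The floating-point step of that scheme, as I described it, can be sketched in a few lines of Python (the function names are mine; the tag here stands in for the type/length byte):

```python
import struct

def fits_float32(x):
    # does x survive a round trip through 4-byte float format?
    try:
        return struct.unpack('>f', struct.pack('>f', x))[0] == x
    except OverflowError:
        return False  # magnitude too large for float32

def pack_float(x):
    # use float format instead of double when (float)x == x ...
    tag = 'f' if fits_float32(x) else 'd'
    raw = struct.pack('>' + tag, x)
    # ... then eliminate LSBs of 0 (trailing zero bytes of the
    # big-endian encoding), so 0.0 takes no extra bytes at all
    return tag, raw.rstrip(b'\x00')

def unpack_float(tag, data):
    size = 4 if tag == 'f' else 8
    return struct.unpack('>' + tag, data.ljust(size, b'\x00'))[0]
```

With this, 0.0 packs to zero payload bytes, 1.5 packs to just 2 bytes, and a value like 0.1 (which doesn't round-trip through float32) falls back to the full double encoding.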

-Scott

On Saturday, July 25, 2015 at 5:46:32 PM UTC-4, Job van der Zwan wrote:
>
> On Saturday, 25 July 2015 23:34:45 UTC+3, Simon Byrne wrote:
>>
>> Some HN discussion here:
>> https://news.ycombinator.com/item?id=9943589
>>
>
> Oh, hadn't seen that. The linked presentation is also more recent! I found 
> the "slidecast" version of it, where he presents the slides in podcast 
> form. <https://www.youtube.com/watch?v=jN9L7TpMxeA> He's evangelizing a 
> bit, but, well... I guess that makes sense given the topic.
>
>> I'd be keen to know more but he hasn't really published any details other 
>> than his book 
>> <http://www.amazon.com/The-End-Error-Computing-Computational/dp/1482239868>. 
>> Based 
>> on the free preview, it looks like a bit of a diatribe rather than a 
>> detailed technical proposal, but you can look at his mathematica code 
>> here 
>> <https://www.google.com/url?q=https%3A%2F%2Fwww.crcpress.com%2FThe-End-of-Error-Unum-Computing%2FGustafson%2F9781482239867&sa=D&sntz=1&usg=AFQjCNG9ezAr5A_BTmpUT6WdVBIYDvaIhA>
>> .
>>
>
> Well, if you decide to go against something as well-established as the way 
> we've been doing integer and floating point arithmetic, you're probably 
> going to need a lot of explanation in a very accessible style - because you 
> definitely won't have the experts on your side.
>
