On Sunday, 26 July 2015 05:00:44 UTC+3, Scott Jones wrote:

> There also doesn't seem to be any representation of -0.0, which from what 
> I've read, is important to represent negative underflows.
>

Apparently, his format doesn't have underflow or overflow. I'm still 
trying to wrap my head around it myself, but I *think* the trick is 
that it alternates between exact numbers and the open intervals *between* 
those exact numbers. To do so it uses an "uncertainty" bit to indicate 
the open ranges *between* the exact numbers. The result is that all 
numbers can be represented *accurately*, but not necessarily *precisely*. 
He kinda explains it here 
<https://www.youtube.com/watch?v=jN9L7TpMxeA&feature=youtu.be&t=1284>.
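To make the idea concrete, here's a toy sketch (my own illustration, *not* Gustafson's actual bit layout; the grid spacing and field names are made up) of how an uncertainty bit could distinguish an exact grid point from the open interval up to the next one:

```python
# Toy illustration of the "uncertainty bit" idea: a value is either an
# exact point on a grid, or the open interval between two grid points.
# NOT the real unum encoding -- spacing and fields are invented here.

from fractions import Fraction

def decode(significand, ubit, scale=Fraction(1, 8)):
    """Decode a toy unum-like value.

    significand: integer position on a grid with spacing `scale`
    ubit == 0  -> the exact number significand * scale
    ubit == 1  -> the open interval between that grid point and the next

    Returns (lo, hi, is_exact).
    """
    lo = significand * scale
    if ubit == 0:
        return (lo, lo, True)            # exact value
    hi = (significand + 1) * scale
    return (lo, hi, False)               # open interval (lo, hi)

# 3/8 is representable exactly, so it decodes as a point...
exact = decode(3, 0)
# ...while any real number strictly between 3/8 and 4/8 decodes to one
# open interval: accurate (it contains the true value), not precise.
between = decode(3, 1)
```

The point is that every real number lands in *some* representation, so there's no value that silently rounds to a neighboring exact number the way floats do.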

Supposedly, this solves a lot of mathematical issues.

> I also wonder how well this would work for all the array based math used in 
> Julia, where you'd really like all the values to have a fixed size
> for fast indexing.
>

You could also "box" them in slots of a fixed maximum unum size, then 
load/store them bytewise according to how big the actual numbers are. 
Memory-wise you wouldn't lose anything compared to floating point (unless 
you're at the far ends of the dynamic range *and* all bits are significant 
digits, which I think is unlikely), but you wouldn't gain anything either. 
So most of the claimed energy/speed benefits would vanish, I guess, since 
the prefetcher still loads the whole memory chunk. But in-cache there might 
be some performance benefit. And keeping track of significant figures 
might be worth it.
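Something like this boxing scheme could be sketched as follows (again purely illustrative: the slot size and the length-prefix layout are my assumptions, not part of the unum proposal):

```python
# Toy sketch of "boxing" variable-length values in fixed-size slots:
# each element gets a fixed maximum-size slot, so indexing stays O(1),
# but only the bytes a value actually needs are written/read.
# Layout assumption (mine): first byte of each slot = payload length.

MAX_BYTES = 8  # assumed fixed "box" size per element

def store(buf, index, payload):
    """Write a variable-length payload into slot `index` of `buf`."""
    assert len(payload) < MAX_BYTES, "payload must fit after length byte"
    off = index * MAX_BYTES
    buf[off] = len(payload)
    buf[off + 1 : off + 1 + len(payload)] = payload

def load(buf, index):
    """Read back the payload stored in slot `index` of `buf`."""
    off = index * MAX_BYTES
    n = buf[off]
    return bytes(buf[off + 1 : off + 1 + n])

buf = bytearray(4 * MAX_BYTES)     # room for 4 boxed elements
store(buf, 2, b"\x01\x02\x03")     # a short, 3-byte encoded value
store(buf, 0, b"\xff")             # a 1-byte one
assert load(buf, 2) == b"\x01\x02\x03"
```

This keeps random access cheap, but as noted above, the memory traffic is that of the full slot size, so the compactness only helps once the data is in cache.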

> Finally, I'd wonder, in the absence of hardware that could directly handle 
> UNUMs, if this would really be any better than the packing I've been doing 
> for decades (to save space in a database and in in-memory structures, where 
> things are read much more than written or modified).
>

I guess that all depends on what you mean by "better". Better 
compression? Probably not. But if the automatic-significant-figures part is 
appealing, then maybe?
