On Sunday, July 26, 2015 at 4:06:05 AM UTC-4, Job van der Zwan wrote:
>
> On Sunday, 26 July 2015 05:00:44 UTC+3, Scott Jones wrote:
>
>> There also doesn't seem to be any representation of -0.0, which from
>> what I've read, is important to represent negative underflows.
>
> Apparently, his format doesn't have underflow or overflow. I'm still
> trying to wrap my head around it myself, but I *think* the trick is
> that it alternates between exact numbers and intervals *between* exact
> numbers. To do so, it uses an "uncertainty" bit to indicate the open
> ranges *between* the exact numbers. The result is that all numbers can
> be represented *accurately*, but not necessarily *precisely*. He kinda
> explains it here
> <https://www.youtube.com/watch?v=jN9L7TpMxeA&feature=youtu.be&t=1284>.
>
> Supposedly, this solves a lot of mathematical issues.
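The exact/inexact alternation described above can be sketched as follows. This is only a toy model on a tiny made-up lattice of exact values, not Gustafson's actual unum encoding; `decode` and the lattice spacing are inventions for illustration:

```python
from fractions import Fraction

def decode(bits: int, ubit: int):
    """Toy unum-style decoding on a tiny lattice of exact values.

    `bits` selects an exact point v on the lattice; if the ubit is set,
    the pair denotes the *open interval* (v, next(v)) between two
    neighbouring exact points, rather than v itself.
    """
    lattice = [Fraction(n, 4) for n in range(8)]  # exact points 0, 1/4, ..., 7/4
    v = lattice[bits]
    if ubit == 0:
        return ("exact", v, v)  # the exact number v itself
    hi = lattice[bits + 1] if bits + 1 < len(lattice) else None
    return ("open", v, hi)      # the open interval between exact neighbours

# 0.3 is not on the lattice, but the open interval (1/4, 1/2) contains it,
# so it is represented accurately (the interval is true) if not precisely:
kind, lo, hi = decode(1, 1)
assert kind == "open" and lo < Fraction(3, 10) < hi
```

This is why there is no underflow or overflow in the scheme: a result too small or too large for any exact point still falls inside some open interval, which is an accurate (if imprecise) answer.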
Ah, yes, and this also solves my issue about really wanting -Underflow, 0,
+Underflow instead of -0.0 and +0.0, by having the inexact bit.

>> I also wonder how well this would work for all the array-based math used
>> in Julia, where you'd really like all the values to have a fixed size
>> for fast indexing.
>
> You could also "box" them in a fixed maximum unum size, then load/store
> them bytewise according to how big the actual numbers are. Memory-wise
> you wouldn't lose anything over floating point (unless you're at the far
> ends of the dynamic range *and* with all bits being significant digits,
> which I think is unlikely), but you wouldn't gain anything either. So
> most of the claimed energy/speed benefits would vanish, I guess, since
> the prefetcher still loads in the whole memory chunk. But perhaps
> in-cache there might be some benefits to performance. And keeping track
> of significant figures might be worth it.

Yes, I wasn't thinking so much about what happens during the calculation;
good point about keeping track of the significant figures.

>> Finally, I'd wonder, in the absence of hardware that could directly
>> handle UNUMs, if this would really be any better than the packing I've
>> been doing for decades (to save space in a database and in in-memory
>> structures, where things are read much more than written or modified).
>
> I guess that all depends on what you mean by "better". Better
> compression? Probably not. But if the automatic-significant-figures part
> is appealing, then maybe?

Yes, but I could add the information about "inexact" vs. "exact" and
keeping track of significant figures to my format as well, while still
storing many common values in just 1 byte (including Null, "", and markers
for binary and packed Unicode text). In my work, performance was more
related to how much information you could keep in cache (not L1/L2/L3
cache so much, but in buffers in RAM as opposed to on disk).
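The kind of byte-wise packing being discussed can be sketched like this. The tag scheme below is invented purely for illustration (it is not the actual database format mentioned, whose details aren't given): a one-byte tag encodes Null, the empty string, and small integers inline, while rarer large values pay the full 8-byte width:

```python
import struct

# Hypothetical tag bytes (illustrative only, not the format discussed above):
#   0x00        -> Null
#   0x01        -> "" (empty string)
#   0x10..0x6F  -> small integers -16..79, stored inline in the tag itself
#   0xF0        -> a full 64-bit float follows in the next 8 bytes

def pack(value):
    if value is None:
        return b"\x00"
    if value == "":
        return b"\x01"
    if isinstance(value, int) and -16 <= value <= 79:
        return bytes([0x10 + value + 16])
    return b"\xF0" + struct.pack("<d", float(value))

def unpack(buf):
    """Return (value, bytes_consumed) for the value at the start of buf."""
    tag = buf[0]
    if tag == 0x00:
        return None, 1
    if tag == 0x01:
        return "", 1
    if 0x10 <= tag <= 0x6F:
        return tag - 0x10 - 16, 1
    (x,) = struct.unpack_from("<d", buf, 1)
    return x, 9

# Common values cost one byte; only rare large values take the full width.
assert pack(None) == b"\x00" and len(pack(7)) == 1
assert unpack(pack(2.5)) == (2.5, 9)
```

The trade-off matches the point made above: variable-width records compress well on disk and in RAM buffers, but give up the fixed stride that fast array indexing wants, so a reader must walk the buffer element by element.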
