Regarding REX Computing (the processor in the reference http://www.theplatform.net/2015/03/12/the-little-chip-that-could-disrupt-exascale-computing/): the two founders are putting unum arithmetic into their processor design, and I am working closely with them. But the first thing to do is to convert the Mathematica definition into a language much closer to the hardware level, like C or Julia, and *then* think about the processor architecture.
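To give a flavor of what that conversion might produce, here is a minimal sketch of a unum container in Julia. The fields follow the layout in the book (sign, exponent, fraction, the ubit, and the self-describing exponent and fraction sizes), but the type and names here are only illustrative, not the actual prototype code, and it simplifies a few things (the real format stores es-1 and fs-1, and packs everything into one bit string):

    # A minimal, illustrative unum container; not the real prototype.
    struct Unum
        sign::Bool    # sign bit
        exp::UInt64   # exponent bits (biased)
        frac::UInt64  # fraction bits (hidden bit not stored)
        ubit::Bool    # uncertainty bit: true means "somewhere in the
                      # open interval past the last fraction bit"
        es::UInt8     # exponent size in bits
        fs::UInt8     # fraction size in bits
    end

    # Exact (ubit == false) case only; the subnormal case (exp == 0)
    # is omitted to keep the sketch short.
    function Base.float(u::Unum)
        bias = 2^(Int(u.es) - 1) - 1
        mag  = 2.0^(Int(u.exp) - bias) * (1 + u.frac / 2.0^Int(u.fs))
        return u.sign ? -mag : mag
    end

    float(Unum(false, 0b011, 0b01, false, 3, 2))  # 2^0 * (1 + 1/4) = 1.25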
An ideal processor architecture would use bit addressing instead of byte or word addressing, and would support variable-size integers up to some limit, like 128 bits. It would also embrace variable execution time for operations, something that old-school CPU architects have long preached against. With bandwidth and watts the big design constraints in computing in 2015, it's time to get away from the tyranny of fixed-size data and fixed-time execution.

On Wednesday, July 29, 2015 at 7:52:17 AM UTC-7, Scott Jones wrote:
>
> On Wednesday, July 29, 2015 at 10:30:41 AM UTC-4, Tom Breloff wrote:
>>
>> Correct me if I'm wrong, but (fixed-size) decimal floating-point has most
>> of the same issues as floating point in terms of accumulation of errors,
>> right? For certain cases, such as adding 2 prices together, I agree that
>> decimal floating point would work ($1.01 + $2.02 == $3.03), but for those
>> cases it's easier to represent the values as integers (101 + 202 == 303;
>> prec=2), which is basically what I do now.
>
> Programming-wise, it's a lot easier to deal with decimal floating point
> than with decimal fixed point: you don't need to worry about making sure
> you've scaled things correctly.
> I implemented a full decimal floating-point package (one 32-bit version in
> IBM/370 assembly, another for the 16-bit 8086/8, and also one in pure
> pre-ANSI C, which worked in 16- or 32-bit, and later 64-bit, chunks),
> where all the numbers were either native-sized signed integers (16-bit,
> 32-bit, and later 64-bit) or scaled decimal (a 64-bit value, scaled by a
> 1-byte signed value (10**x)).
> This was actually faster than binary floating point for most things,
> because at the time many of the machines did not have floating-point
> hardware (think PDP-11, or PCs without the 8087 coprocessor), and because
> of the use case: frequently you were adding/subtracting things that had
> the same scale (like your $1.01 + $2.02 case).
>
> You don't have to rewrite your code if the currency uses 1000ths instead
> of 100ths, for example, or doesn't use fractions at all. All platforms,
> no matter the native machine word size, got exactly the same results,
> with up to ~19 digits of precision, *and* you avoided all binary <->
> decimal conversion issues.
> (Note: at the time, using binary floating-point hardware on the machines
> where it was available, like the VAX or IBM or PCs with an 8087 chip,
> would have given different results on each platform, which was not
> acceptable. Even with the IEEE standard, I think you can still get
> varying results on different platforms :-( )
>
>> In terms of storage and complexity, I would expect that decimal
>> floating-point numbers are bloated as compared to floats. You're giving
>> up speed and memory in order to guarantee an exact representation falls
>> in base 10... I could understand how this is occasionally useful, but I
>> can't imagine you'd want that in the general case.
>
> Why bloated? There's not really that much difference, a few bits I think,
> if you are comparing the IEEE 64-bit binary and IEEE 64-bit decimal float
> formats.
> As far as storage goes, numbers actually took less space in the decimal
> format, because it was also the format for integers (talking about my
> storage format here).
>
>> In terms of hardware support... obviously it doesn't exist today, but it
>> could in the future:
>> http://www.theplatform.net/2015/03/12/the-little-chip-that-could-disrupt-exascale-computing/
>>
>> Either way, I would think there's enough potential to the idea to at
>> least prototype and test, and maybe it will prove to be more useful than
>> you expect.
>
> I really would like to see this!
>
>> On Wed, Jul 29, 2015 at 10:10 AM, Job van der Zwan <[email protected]>
>> wrote:
>>
>>> On Wednesday, 29 July 2015 16:50:21 UTC+3, Steven G. Johnson wrote:
>>>>
>>>> Regarding unums: without hardware support, at first glance they don't
>>>> sound practical compared to the present alternatives (hardware or
>>>> software fixed-precision float types, or arbitrary precision if you
>>>> need it). And the "ubox" method for error analysis, even if it
>>>> overcomes the problems of interval arithmetic as claimed, sounds too
>>>> expensive to use on anything except the smallest-scale problems,
>>>> because of the large number of boxes that you seem to need for each
>>>> value whose error is being tracked.
>>>
>>> Well, I don't know enough about traditional methods to say if they're
>>> really as limited as Gustafson claims in his book, or if he's just
>>> cherry-picking. Same about the cost of using uboxes.
>>>
>>> However, ubound arithmetic tells you that 1 / (0, 1] = [1, inf), and
>>> that [1, inf) / inf = 0. The ubounds describing those interval results
>>> are effectively just a pair of floating-point numbers, plus a ubit to
>>> signal whether an endpoint is open or not. That's a very simple thing
>>> to implement. I'm not sure there's any arbitrary-precision method that
>>> deals with this so elegantly - you probably know better than I do.
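Job has the right intuition, and it really is that simple to implement. Here is a rough Julia sketch of the endpoint representation he describes, a float plus an "open" flag per endpoint; the types and names are my own illustration, and it covers only the positive-interval case needed for the 1 / (0, 1] example:

    # One interval endpoint: a float plus an open/closed flag.
    struct Endpoint
        value::Float64
        open::Bool    # true means excluded, like the '(' in (0, 1]
    end

    struct Ubound
        lo::Endpoint
        hi::Endpoint
    end

    # Reciprocal of a positive interval: endpoints swap, the open flags
    # travel with them, and a zero endpoint becomes an open endpoint at
    # Inf (and 1/Inf = 0.0 handles the other direction for free).
    function recip(u::Ubound)
        hi = u.lo.value == 0 ? Endpoint(Inf, true) : Endpoint(1/u.lo.value, u.lo.open)
        lo = Endpoint(1/u.hi.value, u.hi.open)
        return Ubound(lo, hi)
    end

    recip(Ubound(Endpoint(0.0, true), Endpoint(1.0, false)))
    # -> the ubound for [1, Inf): closed at 1.0, open at Inf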

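And since Scott's scaled-decimal representation came up above, that idea is also only a few lines of Julia: a 64-bit integer coefficient plus a 1-byte signed power-of-ten scale. Again, this is just an illustrative sketch; the names and the rescaling rule are mine, and overflow checking is omitted:

    # Scaled decimal: value == coeff * 10^scale, exact to ~19 digits.
    struct Dec
        coeff::Int64  # integer significand
        scale::Int8   # signed power-of-ten exponent, one byte
    end

    # Align both operands to the smaller scale, then add exactly as
    # integers. When the scales already match (the common $1.01 + $2.02
    # case), both multipliers are 1 and this is a single integer add.
    function Base.:+(a::Dec, b::Dec)
        s = min(a.scale, b.scale)
        Dec(a.coeff * 10^Int(a.scale - s) + b.coeff * 10^Int(b.scale - s), s)
    end

    Dec(101, -2) + Dec(202, -2)  # $1.01 + $2.02 -> Dec(303, -2), i.e. $3.03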