Mmm... require it where, though?

Thanks,
--
Raul

On Sun, Feb 12, 2017 at 3:38 PM, Robert Bernecky <[email protected]> wrote:
> There is a hint of how he claims that 64-bit IEEE-754 is wrong,
> on his penultimate slide:
>
> "IEEE floats require 80-bit precision to get it right."
>
> I think 80 bits is the extended precision the x87 FPU uses for
> intermediates, and the default gcc behavior on 32-bit x86. If so,
> then Gustafson's claim is correct. Now, it would have been nice
> if he had clarified that on slide 0...
>
> Bob
>
>
> On 2017-02-09 12:34 PM, William Tanksley, Jr wrote:
>>
>> His slides claim that happens in 32 and 64 bits. He'll have to answer how
>> he got that result -- it DOES seem incredibly unlikely.
>>
>> http://web.stanford.edu/class/ee380/Abstracts/170201-slides.pdf
>>
>> He does have a neural network and an FFT in the video (not mentioned in
>> the slides) -- they're actually most of the video, showing off a Julia
>> implementation (the Julia language is designed to allow alternate number
>> systems, so many of its built-in and library functions will work with any
>> type of number you've defined).
>>
>> Personally, I'm very impressed with his past work, but it's taking me a
>> lot of mental effort to figure out this one. It does seem strictly
>> superior to his original unum design, and faster than unum2 (nobody ever
>> did build a fast implementation of that; most of the work was done on
>> VERY incomplete implementations).
>>
>> -Wm
>>
>> Raul Miller <[email protected]> wrote:
>>
>>> My guess, looking at those numbers, but not mustering enough interest
>>> to plow through the video, is that by IEEE-754 he meant 32-bit
>>> IEEE-754, while J uses 64-bit IEEE-754.
>>>
>>> I'm having trouble mustering up interest because while focusing on
>>> specific cases is useful for working with simple algorithms, for
>>> something like this you really need to be considering much larger
>>> fields of values. And I'm not going to see anything like that in this
>>> video.
>>>
>>> --
>>> Raul
>>>
>>>
>>> On Thu, Feb 9, 2017 at 10:51 AM, Raul Miller <[email protected]>
>>> wrote:
>>>>
>>>> Does this answer your question?
>>>>
>>>>    a=. 3.2e7 1 _1 8.0e7
>>>>    b=. 4.0e7 1 _1 _1.6e7
>>>>    a +/ .* b
>>>> 2
>>>>    (a +/ .* b) - 2
>>>> 0
>>>>    mm=: +/@(*"1 _)
>>>>    a mm b
>>>> 2
>>>>    (a mm b) - 2
>>>> 0
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> Raul
>>>>
>>>> On Thu, Feb 9, 2017 at 10:44 AM, William Tanksley, Jr
>>>> <[email protected]> wrote:
>>>>>
>>>>> I'd be curious about whether J is automatically using one of the
>>>>> matrix multiplication algorithms that avoid the problem.
>>>>>
>>>>> On Thu, Feb 9, 2017 at 7:26 AM Skip Cave <[email protected]>
>>>>> wrote:
>>>>>>
>>>>>> I posted the results Brian & Robert got from Gustafson's matrix
>>>>>> multiply example in J and APL to the Unum Computing forum on Google
>>>>>> Groups <https://groups.google.com/forum/#!forum/unum-computing>.
>>>>>> Gustafson occasionally posts on that group. We'll see what he says.
>>>>>>
>>>>>> Skip Cave
>>>>>> Cave Consulting LLC
>>>>>> ----------------------------------------------------------------------
>>>>>> For information about J forums see http://www.jsoftware.com/forums.htm
>
> --
> Robert Bernecky
> Snake Island Research Inc
> 18 Fifth Street
> Ward's Island
> Toronto, Ontario M5J 2B9
>
> [email protected]
> tel: +1 416 203 0854
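[Editor's note: for anyone who wants to reproduce the J result above outside J, here is a minimal Python sketch, my own reconstruction rather than anything posted in the thread. Python floats are IEEE-754 binary64, so the 32-bit case is emulated by round-tripping every intermediate through `struct`'s binary32 format. The true dot product is 2; binary64 gets it exactly, while rounding every step to binary32 collapses it to 0.]

```python
import struct

def f32(x):
    """Round a binary64 Python float to binary32 and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Gustafson's example: the true dot product of a and b is 2.
a = [3.2e7, 1.0, -1.0, 8.0e7]
b = [4.0e7, 1.0, -1.0, -1.6e7]

# binary64: every product and partial sum here is an integer below 2^53,
# so the arithmetic is exact and we get the true answer.
dot64 = 0.0
for x, y in zip(a, b):
    dot64 += x * y

# binary32 emulation: round after every operation, as 32-bit hardware would.
# The big products need 31 significand bits (binary32 has 24), so they round,
# and the two 1s are absorbed into them, leaving nothing but cancellation.
dot32 = 0.0
for x, y in zip(a, b):
    dot32 = f32(dot32 + f32(f32(x) * f32(y)))

print(dot64)  # 2.0
print(dot32)  # 0.0
```

This matches the thread: J's 64-bit IEEE-754 returns 2 exactly, so the failure Gustafson shows must come from narrower arithmetic.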

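[Editor's note on the quoted "80-bit precision" remark: in this particular example the two large binary32 products round to exact negatives of each other, so the damage is done in the accumulation, not the multiplies. A wider (or error-free) accumulator therefore recovers the true answer even from rounded products. A hedged sketch, using Python's `math.fsum` as a stand-in for an extended-precision accumulator such as the x87's 80-bit registers:]

```python
import math
import struct

def f32(x):
    """Round a binary64 Python float to binary32 and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = [3.2e7, 1.0, -1.0, 8.0e7]
b = [4.0e7, 1.0, -1.0, -1.6e7]

# Products rounded to binary32, as a 32-bit FPU would produce them.
prods = [f32(x * y) for x, y in zip(a, b)]

# Naive left-to-right binary32 accumulation absorbs the two 1s entirely:
naive = 0.0
for p in prods:
    naive = f32(naive + p)
print(naive)  # 0.0

# An error-free accumulator (fsum here; a wide register in hardware)
# recovers the true answer, because the two big rounded products are
# exact negatives of each other: p + 1 + 1 - p = 2.
print(math.fsum(prods))  # 2.0
```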