J does have extended precision:
   e7=: 10x^7 0 0 7
   A=: 32 1 _1 8.0*e7
   B=: 4 1 _1 _16*e7
   A +/ .* B
2
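
The individual extended products show why the exact answer is 2. This is how I'd expect the session to continue (extended integers display in full; not a pasted transcript):

   A * B
12800000000000000 1 1 _12800000000000000
   +/ A * B
2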

But extended precision is not quite what's going on when the example is typed with e-notation constants:

   a=: 32e7 1 _1 8.0e7
   b=: 4.0e7 1 _1 _16e7
   a +/ .* b
2

   datatype A +/ .*B
extended
   datatype a +/ .*b
integer
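
If I read that right, J stores those e-notation constants as 64-bit integers (they are exact integer values), so this dot product is exact integer arithmetic rather than floating point. A quick check (output shown as I'd expect it):

   datatype a
integer
   datatype b
integer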

Still, it looks like J's 64-bit floats do illustrate the rounding issue Gustafson describes, once the constants are forced to floating point:

   a=: 32e7 1 _1 8.0e7+1.1-1.1
   b=: 4.0e7 1 _1 _16e7+1.1-1.1
   datatype a +/ .*b
floating
   a+/ .*b
0
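
Writing out the multiplies and adds separately, as Gustafson suggests, shows where the 1s get absorbed. Here p is just a local name for the products, and the output is what I'd expect rather than a pasted session:

   p=: a * b            NB. the four products, now 64-bit floats
   p
1.28e16 1 1 _1.28e16
   (0{p) + 1{p          NB. 1.28e16 + 1 rounds back to 1.28e16: the 1 is absorbed
1.28e16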

Thanks,

-- 
Raul


On Mon, Feb 13, 2017 at 12:46 PM, William Tanksley, Jr
<[email protected]> wrote:
> Gustafson made two posts in response. I'm not sure whether he's right.
>
> Here's the complete discussion:
> https://groups.google.com/forum/#!topic/unum-computing/FCataJnoxFc
>
>  Here's the second post:
> Thank you for catching a transcription error. The example should be 32e7,
> not 3.2e7, and –16e7, not –1.6e7. Then the rounding behaves as described. I
> will correct my slide set!
>
> Here's the first post:
> The language is not using standard hardware floating point. J has
> provisions for extended precision, especially with numbers that happen to
> be integers as in my example. Try writing out the dot product as multiplies
> and adds, from left to right, and I think you will get a very different
> answer from J. If you enter the problem in Mathematica, expressing the
> numbers as integers, it will also compute the correct answer 2; but if you
> put a decimal point at the end of every integer, then Mathematica uses
> "machine precision" (IEEE 754 64-bit binary floats) and makes the error.
>
> 3.2e7 * 4.0e7 = 1.28e15 (exact)
>
> add 1*1, get 1.28e15 again, by round-to-nearest rules
>
> add (–1)*(–1), get 1.28e15 again,
>
> The product 8.0e7 * (–1.6e7) = –1.28e15 exactly, so adding that brings the
> sum to zero.
>
> APL and J are very savvy about linear algebra accuracy, so I suspect they
> both may use the exact accumulator idea to prevent such problems. Exact
> accumulators are part of every unum-type environment.
>
> On Mon, Feb 13, 2017 at 4:40 AM Robert Bernecky <[email protected]>
> wrote:
>
>> Gustafson claims on his slide 2 that
>> 64-bit IEEE-754 gives the wrong answer.
>>
>> Yet, naive execution of the algorithm in APL, J, and C (gcc)
>> produces the correct answer using 64-bit reals. So, is he wrong?
>>
>> His penultimate slide mentions 80-bit precision, which is
>> what X86 boxes use for IEEE-754 math internally, only
>> converting back to 64-bit words when you have to store
>> a result. The above systems use 80-bit precision for intermediate
>> computations, so they get the correct result with 64-bit reals.
>>
>> Therefore, Gustafson's claim is correct, but I think it
>> is somewhat misleading.
>>
>> Bob
>>
>> On 2017-02-12 06:26 PM, Raul Miller wrote:
>> > Mmm... require it where, though?
>> >
>> > Thanks,
>> >
>>
>> --
>> Robert Bernecky
>> Snake Island Research Inc
>> 18 Fifth Street
>> Ward's Island
>> Toronto, Ontario M5J 2B9
>>
>> [email protected]
>> tel: +1 416 203 0854
>>
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
