> On Behalf Of John Randall
> 
> Roger Hui wrote:
> > a. It would be useful to compile a list of the cases
> > where you feel J primitives return results that differ
> > from what "IEEE support" would provide.  For example:
> >
> > 0. 0%0
> > 1. tolerant comparison
> > etc.
> >
> > b. One possible approach for a programming language
> > to take is, for the arithmetic primitives, just
> > return what the hardware produces.  For example,
> > 1234567890 + 2223334440 would be a negative integer
> > rather than the 64-bit floating point number 3.4579e9.
> >
> >
> 
> I remain unconvinced that "IEEE support" is completely relevant to
> J, since one of the strengths of the language is the ability to ignore
> actual types of numerical data.
> 
> J can already represent _ and __ .  I assume that NaN results in
> domain error.  There is some argument for representing +/-0, but I am
> not sure this is completely helpful.  It would change 0%0 in
> incompatible ways.  I am happy with tolerant comparison.  Most of the

I do not understand why this necessarily has to be the case.  The result of
0%0 from the chip would be NaN; but what would be the disadvantage of
overwriting it with 0, with the sign taken according to the customary
division rules?  (I am probably missing something here.)
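For concreteness, here is the contrast sketched in Python, whose doubles follow IEEE 754 (the function names ieee_div and overwritten_div are mine, purely for illustration; Python itself raises on float division by zero, so the IEEE behaviour is emulated):

```python
import math

def ieee_div(x, y):
    """Division with IEEE 754 semantics (emulated, since Python raises
    ZeroDivisionError on float division by zero)."""
    if y == 0.0:
        if x == 0.0:
            return math.nan  # invalid operation: 0/0 -> NaN on the chip
        # nonzero/0 -> infinity with the usual sign rule
        return math.copysign(math.inf, x) * math.copysign(1.0, y)
    return x / y

def overwritten_div(x, y):
    """The rule suggested above: take the chip's NaN for 0/0 and
    overwrite it with a zero signed by the customary division rule."""
    if x == 0.0 and y == 0.0:
        return math.copysign(0.0, math.copysign(1.0, x) * math.copysign(1.0, y))
    return ieee_div(x, y)
```

(If I recall correctly, J already defines 0%0 as 0, which is what overwritten_div returns for positive zeros.)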

> concerns about floating-point arithmetic come from inaccurate
> libraries, and some platform inconsistencies like the "Fused MAC"
> problem.
> 
> J's policy of changing integer to floating-point rather than the
> overflow value is very useful.  It gives a plausible rather than an
> unsignalled erroneous result. (Actually, with the example given, I get
> the integer 3457902330 on a 64-bit machine.)  You can use x: if you
> really want bigger integers.  If you are twiddling the bits of
> integers based on their hardware representation, J's numerical model
> is not completely helpful.  On the other hand, sticking with hardware
> floating-point values seems fine.  Any implementations of arbitrary
> precision floating-point arithmetic I have used (for example, in Maple
> and Mathematica) have been painfully slow.
> 
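To make the quoted overflow point concrete: on a 32-bit machine the raw hardware sum of those two integers wraps around to a negative number, while the exact sum still fits in 64 bits.  A Python sketch (the wraparound has to be emulated, since Python integers are unbounded; add_int32_wrap is my own name):

```python
def add_int32_wrap(a, b):
    """Two's-complement 32-bit addition: 'just return what the
    hardware produces' on a 32-bit machine."""
    s = (a + b) & 0xFFFFFFFF
    return s - 0x100000000 if s >= 0x80000000 else s

# the exact sum, which a 64-bit integer (or J's promotion to float) preserves:
assert 1234567890 + 2223334440 == 3457902330
# the unsignalled 32-bit wraparound:
assert add_int32_wrap(1234567890, 2223334440) == -837064966
```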
> I would be reluctant to change the language, but I would be concerned
> about the quality of libraries, both in software and the microcoded
> functions like FSIN.

I tried to see where J stands on some issues raised in 'Floating Points:
IEEE Standard Unifies Arithmetic Model' by Cleve Moler:

   NB. MATLAB vs. J FP Implementations
   NB. Examples from http://www.mathworks.com/company/newsletters/news_notes/pdf/Fall96Cleve.pdf
   
   
   NB. Machine Epsilon (same for both)
   
   ((+: @ (-: ^: (1 (~:!.0) 1 +]) ^: _) @ 1:) ; (-.@:(3&*)@<:@(4 % 3:)) ; (2^_52 "_)) ''
+-----------+-----------+-----------+
|2.22045e_16|2.22045e_16|2.22045e_16|
+-----------+-----------+-----------+
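The three J expressions above agree on the usual IEEE-754 double epsilon.  The same two computations in Python, for comparison (the halving loop mirrors the first verb, the 4%3 trick the second):

```python
import sys

# (1) halving loop: keep halving while 1 + eps/2 still differs from 1
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2

# (2) the 4%3 trick: 4/3 has no finite binary expansion, and the
# rounding error resurfaces as exactly one unit in the last place of 1.0
trick = abs(3 * (4 / 3 - 1) - 1)

assert eps == trick == 2 ** -52 == sys.float_info.epsilon
```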
   
   
   NB. "Singular" System
   
   m=. +/ .*
   
   (A=. 2 2$ 10 1 3 0.3) ; (B=. 11 3.3) 
+------+------+
|10   1|11 3.3|
| 3 0.3|      |
+------+------+
   X ; (A m (X=.A %. B))   NB. J answer (should it have given one at all?)
+------------------+---------------+
|0.909091 0.0909091|9.18182 2.75455|
+------------------+---------------+
   X ; (A m (X=. _0.5 16)) NB. MATLAB Answer (which is acceptable)
+-------+------+
|_0.5 16|11 3.3|
+-------+------+
 
  
(SCILAB returns a different but also acceptable answer.)
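A pure-Python sketch of why this system is so delicate, assuming IEEE doubles: the stored 0.3 is not exactly 3/10, yet the computed 2x2 determinant still vanishes to rounding error, so any finite answer comes from the solver's factorization rather than from the data.  The check on [-0.5, 16] is MATLAB's reported answer from above:

```python
A = [[10.0, 1.0],
     [3.0,  0.3]]
B = [11.0, 3.3]

# Row 2 is 0.3 * row 1 in exact decimal arithmetic.  In doubles the
# computed determinant still cancels to (essentially) zero, so a naive
# Cramer solve has nothing meaningful to divide by:
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert abs(det) < 1e-15

# MATLAB's answer reproduces B to rounding error; J's 0.909091 0.0909091
# does not (A m X gave 9.18182 2.75455 above):
xm, ym = -0.5, 16.0
residual = [A[0][0] * xm + A[0][1] * ym - B[0],
            A[1][0] * xm + A[1][1] * ym - B[1]]
assert max(abs(r) for r in residual) < 1e-12
```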

   
   NB. Seventh Degree Polynomial
   
   X=. 0.988+10e_5*i.241 NB. X=. 0.988 to 1.012 step 0.0001
   Y0=. (X-1)^7
   Y1=. _1 + (7*X) + (_21*X^2) + (35*X^3) + (_35*X^4) + (21*X^5) + (_7*X^6) + (X^7)
   NB. Y2=. (X^7) + (_7*X^6) + (21*X^5) + (_35*X^4) + (35*X^3) + (_21*X^2) + (7*X) + _1
   
   load'plot'  
   plot X;Y0,:Y1
   NB. plot X;Y1,:Y2

(SCILAB plot is similar; I wonder what a MATLAB plot would show.) 
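The same cancellation shows up in any IEEE-double environment, not just J or MATLAB.  A Python sketch of the two evaluations (y_direct and y_expanded are my names): near x = 1 the true values are at most 0.012^7, about 3.6e-14, while the expanded form cancels terms of size ~35 and leaves rounding noise of comparable or larger magnitude, which is what the plot shows.

```python
def y_direct(x):
    return (x - 1) ** 7

def y_expanded(x):
    return (-1 + 7*x - 21*x**2 + 35*x**3 - 35*x**4
            + 21*x**5 - 7*x**6 + x**7)

xs = [0.988 + 1e-4 * i for i in range(241)]   # 0.988 to 1.012 step 0.0001
direct   = [y_direct(x) for x in xs]
expanded = [y_expanded(x) for x in xs]

# the direct form stays tiny; the expanded form differs by rounding noise
assert max(abs(d) for d in direct) < 1e-13
assert max(abs(d - e) for d, e in zip(direct, expanded)) > 1e-16
```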

Nevertheless, the floating-point model that MATLAB follows has apparent
advantages for some applications, beyond +/- 0 (and it would be nice to
squeeze out extra representation bits where possible).  Incidentally, I
assume that nowadays all the Pocket PC chips follow the IEEE standard.  Is
that correct?
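On the +/- 0 point, a few Python one-liners show what signed zero buys under IEEE 754: the two zeros compare equal, but the sign survives and matters for directed limits and branch cuts.

```python
import math

assert -0.0 == 0.0                            # the zeros compare equal...
assert math.copysign(1.0, -0.0) == -1.0       # ...but the sign is kept
# underflow from below keeps the sign: 1/-inf is -0.0, not +0.0
assert math.copysign(1.0, 1.0 / float('-inf')) == -1.0
# atan2 uses the sign of zero to pick the side of the branch cut
assert math.atan2(0.0, -1.0) > 0              # approaches +pi
assert math.atan2(-0.0, -1.0) < 0             # approaches -pi
```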

According to Murphy, any change to J would make some programs crash (e.g.,
the x. and y. change, etc.).  Alternatively, instead of changing J's FP
model, an interface a la R to an open-source MATLAB clone (e.g., SCILAB or
SciPy) could come in handy when it matters, provided one of them is
suitable on the FP front.  Is there any?

> 
> Most of the topics raised on this forum about floating-point numbers
> have involved misunderstandings about their nature (e.g. printing
> numbers to greater precision than they are represented, the value of
> pi, etc.) rather than the way J implements them, other than that it
> ought to be magic.
>

Besides, most of the hard-hitting issues, if not all, can be circumvented
by clever programming.  However, one has to be aware of them beforehand,
and there can sometimes be a substantial price to pay in performance and
programming time.  (I know, I am preaching to the preacher.)

 
> Best wishes,
> 
> John
> 


----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
