I believe your speculation that the trouble arises
from a near-singular problem is likely true. John
Randall was more mathematical about it in his
response!
Playing with the numbers, it is interesting to
show more precision. Here I enter the values from
your message (which seems to have rather
low-precision output for the kind of thing you are
trying to do) and format them to 20 decimal places
with 0j20 ": (the ,. merely arranges each list as
a column):
Larg1
0.204761 0.164397 0.157506 0.157351 0.15296 0.143288 0.12958 0.113711 0.0972323 0.0813807 0.0668824
0j20 ": ,. Larg1
0.20476099999999999857
0.16439699999999998759
0.15750600000000000711
0.15735099999999999087
0.15296000000000001262
0.14328799999999999870
0.12958000000000000074
0.11371100000000000652
0.09723229999999999373
0.08138070000000000026
0.06688239999999999463
NB. Similarly
Larg2
0.0387946 0.0263726 0.0159726 0.0108677 0.00739584 0.00503257 0.00340808 0.00233151 0.00157923 0.00107516 0.000729004
0j20 ": ,. Larg2
0.03879459999999999853
0.02637259999999999951
0.01597259999999999999
0.01086769999999999924
0.00739583999999999988
0.00503257000000000010
0.00340807999999999996
0.00233151000000000018
0.00157923000000000004
0.00107515999999999992
0.00072900400000000000
J is nice in that it will give you rational
numbers for the machine representation of the
decimals you display:
x: ,.Larg1
204761r1000001
76254939150r463846293727
78753r500000
157351r1000002
478r3125
17911r125000
6479r50000
113711r1000002
213737815765r2198218243988
52117898282r640420864925
83603r1250004
x: ,.Larg2
157206474956r4052277248787
131863r4999992
79863r5000000
29126153078r2680065982479
2889r390625
129930091733r25817840930823
3277r961539
1067271874r457759938239
1881218819r1191225355927
26879r24999928
7904396765r10842734422918
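(An aside on reading the NrD display: N is the
numerator and D the denominator. If I recall the
dyad correctly, 2 x: takes a rational apart into
those two parts, so you can get at them as
ordinary numbers; a small sketch:)

   2 x: 1r3              NB. numerator and denominator
1 3
   2 x: x: Larg2         NB. one (numerator,denominator) row per element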
Looking at Rarg in the same way:
Rarg
0.273501 33.6993 _15.6811
0.505944 50.5733 _10.6639
0.700719 52.4143 _6.45982
0.796474 47.0372 _4.393
0.861592 39.7353 _2.98746
0.905876 32.2908 _2.03162
0.935991 25.5423 _1.38161
0.95647 19.8067 _0.939561
0.970398 15.1266 _0.638949
0.979869 11.4137 _0.434517
0.98631 8.52817 _0.295493
0j20 ": Rarg
0.27350099999999999412 33.69930000000000092086 _15.68110000000000070486
0.50594399999999994932 50.57330000000000325144 _10.66389999999999993463
0.70071899999999998077 52.41429999999999722604 _6.45981999999999967343
0.79647400000000001530 47.03719999999999856755 _4.39299999999999979394
0.86159200000000002451 39.73530000000000228511 _2.98746000000000000441
0.90587600000000001454 32.29079999999999728288 _2.03162000000000020350
0.93599100000000001742 25.54230000000000089244 _1.38161000000000000476
0.95647000000000004238 19.80669999999999930651 _0.93956099999999997952
0.97039799999999998281 15.12659999999999982379 _0.63894899999999998919
0.97986899999999999000 11.41370000000000040075 _0.43451699999999998658
0.98631000000000002004 8.52816999999999936222 _0.29549300000000000566
x: Rarg
273501r1000000 336993r10000 _156811r10000
63243r125000 505733r10000 _106639r10000
23727502460838r33861651333613 524143r10000 _322991r50000
5609069600642r7042376274231 117593r2500 _4393r1000
107699r125000 397353r10000 _149373r50000
226469r250000 80727r2500 _101581r50000
4076720230969r4355512212157 255423r10000 _138161r100000
95647r100000 198067r10000 _5985016364203r6370013617214
485199r500000 75633r5000 _1238730296301r1938699796543
2980707641521r3041945037062 114137r10000 _345816761526r795864745283
98631r100000 852817r100000 _49249r166667
Then you could get "exact answers" for the second
case with:
(x: Larg2) %. (x: Rarg)
(I'll not display the quite long results...)
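If you do want to look at them, one option (just a
sketch) is to demote the exact rationals back to
floating point for display; the inverse x:^:_1
converts rationals back to floats:

   exact =: (x: Larg2) %. x: Rarg
   0j20 ": x:^:_1 exact    NB. the exact answer, viewed as ordinary floats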
The nice thing about J here is that if you can
express your input data as rationals, then you can
count on the result you get from %.: it may still
be nonsensical, but at least it is exact. While I
don't think you can do the x: trick in APL, you
could look at the expanded (internal)
representations using format. You might even find
that the two interpreters record the numbers
differently in their floating-point
representations.
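As a tiny illustration of that exactness (the
classic example, not your data):

   0j20 ": 0.1 + 0.2     NB. the floating-point sum is not exactly 0.3
0.30000000000000004441
   1r10 + 2r10           NB. the rational sum is exact
3r10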
That J's %. gives a different result from APL's
Domino (especially near edge conditions) isn't
surprising, since the machine arithmetic
underneath is almost certainly quite different.
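If you want a rough numeric check on how close the
second problem is to singular, here is a sketch
using the standard dot-product idioms (determinant
magnitude is only a crude proxy for conditioning,
but it is quick):

   mp  =: +/ . *             NB. matrix product
   det =: -/ . *             NB. determinant
   det (|: Rarg) mp Rarg     NB. the 3x3 matrix behind the least-squares solve;
                             NB. a tiny value relative to its entries hints at near-singularity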
- joey
At 16:09 -0400 2009/01/11, Robert O'Boyle wrote:
>I have come across a difference between APL and J in matrix divide. I went
>on the J Forum and see that there has been some discussion on this but
>couldn't see the answer to my question (which I might have missed). I have
>two situations. In the first case, my left and right arguments to matrix
>divide are:
>
>Larg
>
>0.204761 0.164397 0.157506 0.157351 0.15296 0.143288 0.12958 0.113711
>0.0972323 0.0813807 0.0668824
>
>and
>
>Rarg
>
>0.273501 33.6993 _15.6811
>0.505944 50.5733 _10.6639
>0.700719 52.4143 _6.45982
>0.796474 47.0372 _4.393
>0.861592 39.7353 _2.98746
>0.905876 32.2908 _2.03162
>0.935991 25.5423 _1.38161
> 0.95647 19.8067 _0.939561
>0.970398 15.1266 _0.638949
>0.979869 11.4137 _0.434517
> 0.98631 8.52817 _0.295493
>
>Larg %. Rarg gives me
>
>0.0763215 0.00125631 _0.008049
>
>which is what Domino in APL III ver 1.2 estimates as well.
>
>I am writing code to estimate parameter bias when some of the parameters are
>conditionally linear. In this second situation, the two arguments are
>
>Larg
>
>0.0387946 0.0263726 0.0159726 0.0108677 0.00739584 0.00503257 0.00340808
>0.00233151 0.00157923 0.00107516 0.000729004
>
>And
>
>Rarg (same matrix as above)
>
>Here Larg %. Rarg gives me
>
>0.000002152066984 _0.000000087245596 _0.002473889682694
>
>While APL's Domino gives me
>
>¯0.000007023400888 2.333431673E¯7 ¯0.002473197439
>
>It could be that the Rarg matrix in the second situation is close to
>singular and that both languages are giving nonsensical results, but I am
>not sure. Could anyone explain the difference between what %. and Domino are
>doing?
>
>Thanks
>
>Bob
>