Hi Bob -
I don't have J 5.04 handy, but I ran (a variation of) your benchmarks on my
Win XP laptop (T2500 at 2 GHz, 2 GB RAM, an Intel dual-core chip), and the
thing I noticed was the initial variation, to wit:
6!:2 'BENCH_CAT 10000 10000 10000000'
1734.1024
qts''
2006 10 12 22 27 48.437
6!:2 'BENCH_CAT 10000 1000 10000000'
286.92399
6!:2 'BENCH_CAT 10000 1000 10000000'
272.86227
6!:2 'BENCH_CAT 10000 10000 10000000'
273.74305
I modified your code to parametrize the hard-coded 100 and 10000 to be
(0{y.) and (1{y.), with the original argument as (2{y.). A measurement at a
single point is nowhere near as interesting as how that measurement scales
as the problem size increases.
The interesting thing I see here is the variation between the initial and
subsequent runs.
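To make the same point concrete outside J, here is a rough Python analogue (my own sketch; `arith` is a hypothetical stand-in for ARITH, not the original code) that times each repetition separately and varies the problem size, so both a first-run penalty and the scaling behavior are visible rather than averaged away:

```python
import random
import time

def arith(v):
    """Hypothetical stand-in for ARITH: a few elementwise arithmetic passes."""
    t = [2 * 3 * (x + x + x + x + x - x - x - x - x) for x in v]
    return len(t)

def timed_runs(size, reps=4):
    """Time each repetition separately, so a slow initial run
    (warm-up, allocation, caching) stands out instead of vanishing
    into an average."""
    v = [random.randrange(10000) for _ in range(size)]
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        arith(v)
        times.append(time.perf_counter() - t0)
    return times

# Vary the problem size to see how the measurement scales,
# not just what it is at a single point.
for size in (10000, 20000, 40000):
    print(size, [round(t, 5) for t in timed_runs(size)])
```

The shape of the output matters more than the absolute numbers: compare the first element of each list against the rest, and compare rows as the size doubles.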
Here's my version of your code:
BENCH_ARITH =: 3 : 0
NB. Expects (2{y.) to be 10000 for integer and 10000.0 for floating
for. i. (0{y.)
do.
ARITH (1{y.) ? (2{y.)
end.
)
ARITH =: 3 : 0
t =. 2 * 3 * y. + y. + y. + y. + y. - y. - y. - y. - y.
t =. t >.(t <. (y. >. y.))
t =. y. <: y.
t =. y. = y.
t =. y. ~: y.
)
NB. ===============================================================
NB. Transcendentals
BENCH_TRANS =: 3 : 0 NB. Use a float such as 10000.0 for (2{y.)
for. i. (0{y.)
do.
TRANS (1{y.) ? (2{y.)
end.
)
TRANS =: 3 : 0
q =. ^.(^(|(1&o. y.)))
)
NB. ================================================================
NB. Memory allocation
BENCH_CAT =: 3 : 0 NB. Use 0 for (2{y.)
for. i. (0{y.)
do.
x =: 0$(2{y.)
for_a. i. (0{y.)
do.
x =: x, a
end.
end.
)
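An aside on BENCH_CAT itself: each `x =: x, a` builds a new vector, so the cost of one append grows with the current length of x, and the inner loop as a whole is roughly quadratic in its count. A quick Python analogue (my sketch, not part of the original benchmark) makes that growth visible:

```python
import time

def grow(n):
    """Repeatedly catenate one element, like x =: x, a in BENCH_CAT.
    Each x + [a] copies all of x, so total work is on the order of
    n^2 / 2 element copies."""
    x = []
    for a in range(n):
        x = x + [a]
    return len(x)

t = {}
for n in (2000, 4000):
    t0 = time.perf_counter()
    grow(n)
    t[n] = time.perf_counter() - t0

# Doubling n should roughly quadruple the time if the cost is quadratic.
print(t[4000] / t[2000])
```

That quadratic shape is one reason the catenation benchmark is so sensitive to how the interpreter manages its buffers between runs.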
However, the most important point I have to make is that
WHAT YOU ARE MEASURING HERE IS NOT THE MOST IMPORTANT THING TO MEASURE
(pardon the all-caps).
To explain briefly: how much time the computer uses is usually irrelevant;
my own time is infinitely more precious.
If the notational advantages of J6 surpass those of J5 by even 10 seconds
per year, that far outweighs any so-called performance advantage of these
vintage, resurrected benchmarks, measured against how much of my time is
spent getting something done.
Regards,
Devon
On 10/12/06, O'Boyle, Robert <[EMAIL PROTECTED]> wrote:
I am an APL programmer from many years ago and have recently been exploring
J as an analytical tool. For instance, I am particularly interested in its
automatic differentiation possibilities. But that's another story.
I have benchmark programs that were originally used with STSC APL and have
been used to compare performance across platforms, languages and the like.
They have lots of looping in them (a J & APL no-no) necessitated by the
comparisons that have been conducted. That should not be important here (I
don't think, but...). I used the script to check the relative performance of
J 504 and J 601 using the same computer (running Win XP Pro). Here are the
J 504 scripts (with y. modified to y for J 601) to test performance of
integer and floating operations, as well as those with transcendentals and
memory allocation:
NB.=============================================================
NB. Integers & floats
BENCH_ARITH =: 3 : 0 NB. Expects y. to be 10000 for integer and 10000.0 for floating
for. i. 100
do.
ARITH 10000 ? y.
end.
)
ARITH =: 3 : 0
t =. 2 * 3 * y. + y. + y. + y. + y. - y. - y. - y. - y.
t =. t >.(t <. (y. >. y.))
t =. y. <: y.
t =. y. = y.
t =. y. ~: y.
)
NB. ===============================================================
NB. Transcendentals
BENCH_TRANS =: 3 : 0 NB. Use 10000.0 for y.
for. i. 100
do.
TRANS 10000 ? y.
end.
)
TRANS =: 3 : 0
q =. ^.(^(|(1&o. y.)))
)
NB. ================================================================
NB. Memory allocation
BENCH_CAT =: 3 : 0 NB. Use 0 for y.
for. i. 100
do.
x =: 0$y.
for_a. i. 100
do.
x =: x, a
end.
end.
)
NB. ================================================================
With J 504, integer and floating operations took about 0.313 seconds, while
in J 601 the same script took 0.547 seconds.
For transcendentals, the times were 0.535 (J 504) and 0.759 (J 601).
For memory allocation, the times were very similar (0.088 and 0.082), with
J 601 being a bit faster.
I was surprised that J 601 was slower than J 504 in number-crunching
operations. Is there any explanation for this? Is J 601 more optimised for
array operations than J 504 was? Any info would be useful.
Thanks
Bob O'Boyle
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
--
Devon McCormick
^me^ at acm.org is my preferred e-mail