Roger Hui wrote:
> Yes it's silly, but there were some worthwhile points:
>
> a. Benchmarks require care. e.g. exclude the time to
> generate the random numbers; in fact, exclude
> everything you can except for the operations you
> are comparing.
>
OK
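For what it's worth, point (a) is easy to illustrate in Python (my own
sketch, not from the thread): generate the random inputs once, outside
the timed region, and time only the operation being compared.

```python
import random
import timeit

n = 100_000
# Generate the random inputs once, OUTSIDE the timed region,
# so that setup cost does not pollute the comparison.
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]

def dot():
    # The one operation we actually want to compare.
    return sum(a * b for a, b in zip(xs, ys))

# Time only dot(), repeated to smooth out noise.
elapsed = timeit.timeit(dot, number=10)
```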

> b. Regarding your second paragraph ("Does J reduce
> thinking time? Not really in this case. ...")
>
> Depends on what you mean by "this case".
>

I was looking specifically at the matrix*vector case.

> b.1 It used to be that MatLab had a small limit (2 or 3)
> on the rank (dimensionality) of an array, so that if you
> want to do inner product of higher-ranked arrays J does
> have a thinking advantage.  In fact, knowing that there
> is such a limit, the thinking is more likely to be,
> why would I want to do that?
>
I do not know what the limit is: it is higher than 2 or 3, but it is still
there.  Indexing (and even declaring) arrays of rank higher than 2 is very
clumsy in MATLAB, and clearly an add-on.
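As an aside, here is the kind of higher-rank inner product in question,
sketched in pure Python with a hypothetical helper of my own (J's +/ .*
handles this directly): contracting the last axis of a rank-3 array
against a vector.

```python
# A rank-3 array (2 x 3 x 4) as nested lists, plus a hypothetical
# helper that contracts its last axis against a vector.
A = [[[i + j + k for k in range(4)] for j in range(3)] for i in range(2)]
v = [1, 0, 2, 1]

def contract_last(a, vec):
    """Sum of products over the last axis, recursing through the ranks."""
    if isinstance(a[0], list):
        return [contract_last(sub, vec) for sub in a]
    return sum(x * y for x, y in zip(a, vec))

result = contract_last(A, v)  # rank-2 result, shape 2 x 3
```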


> b.2 J permits inner products on other operations, such
> as ~:/ .* or *./ .> .

Good point.  Not only are there other operations, but their expression is
illuminating: I like the analogy between +/ .* and -/ .* .
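The f/ . g pattern is easy to mimic in other languages.  Here is a rough
Python sketch (the inner helper is my own invention) of a generalized
inner product, with analogues of +/ .* and a boolean product.  It is
only the naive pairwise form; it does not reproduce J's -/ .*
determinant.

```python
from functools import reduce

def inner(f, g, A, B):
    """Generalized matrix inner product: for each row/column pair,
    combine the pairwise g-results with f, mimicking J's f/ . g
    (naive left-fold form only)."""
    cols = list(zip(*B))
    return [[reduce(f, [g(a, b) for a, b in zip(row, col)])
             for col in cols] for row in A]

add = lambda x, y: x + y
mul = lambda x, y: x * y
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
M = inner(add, mul, A, B)       # ordinary matrix product, like +/ .*
# M == [[19, 22], [43, 50]]

or_  = lambda x, y: x or y
and_ = lambda x, y: x and y
R = [[True, False], [False, True]]
Rstep = inner(or_, and_, R, R)  # boolean product, like J's +./ .*.
```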

> b.3 If you consider matrix inversion as part of "this case"
> then the Hilbert matrix example may be a plus on the
> J side.  I don't whether MatLab can invert the 40-by-40
> Hilbert matrix but if it does not have extended precision
> it would not be able to.
>
It cannot: it only deals with 64-bit floating point.

I like using extended integers and rationals: they tend to be ignored in
scientific computing, where the interest is almost exclusively in
floating point.
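To illustrate: with exact rationals, even the notoriously
ill-conditioned Hilbert matrix inverts without error.  A Python sketch
using the standard fractions module (the helper names are my own):

```python
from fractions import Fraction

def hilbert(n):
    # Exact Hilbert matrix, H[i][j] = 1/(i+j+1), as rationals.
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def invert(M):
    """Gauss-Jordan inversion in exact rational arithmetic:
    no rounding, so the ill-conditioning costs nothing."""
    n = len(M)
    # Augment M with the identity matrix.
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = A[col][col]  # nonzero for a Hilbert matrix
        A[col] = [x / piv for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

Hinv = invert(hilbert(5))
# The inverse of a Hilbert matrix has integer entries.
```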

Best wishes,

John

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
