It seems to me the computing cost of interpreted languages lies in
looking up variables, managing memory, and checking and converting data
types.  Amortized over large array operations, this cost all but
vanishes.  Furthermore, Roger chose optimal, or at least reasonable,
algorithms to implement features.  For instance, I believe red-black
trees serve the symbol computations.  Therefore, when possible, avoid
working in boxes, whose contents must be individually type checked, and
arrange your programs to compute with large arrays.  And if you're
really on the ball you'll use the special phrases indicated at the link
in Roger Hui's recent post.
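As a rough sketch of the array-operation point (sumloop and ts are just
names I made up for this illustration), you can see the per-item
overhead by timing a whole-array sum against an explicit loop, using
the 6!:2 time foreign:

  a =: ? 1e6 $ 0            NB. a million random floats in [0,1)

  sumloop =: 3 : 0          NB. explicit verb: add up y one item at a time
   s =. 0
   for_item. y do. s =. s + item end.
   s
  )

  ts =: 6!:2                NB. time (in seconds) to run a sentence
  ts '+/ a'                 NB. one primitive applied to the whole array
  ts 'sumloop a'            NB. interpreter overhead paid once per item

The first timing should be far smaller, because +/ pays the lookup and
dispatch cost once for the whole array rather than once per element.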

>Date: Thu, 30 May 2013 20:34:18 +0800
>From: Robert Herman <[email protected]>
>To: [email protected]
>Subject: Re: [Jchat] J - speed, algorithms and structure
>Message-ID:
><cabvy+wsaq-ctvavjr0k0jrfa1hwz3+kn-dmnkr+o4odevp0...@mail.gmail.com>
>Content-Type: text/plain; charset=ISO-8859-1
>
>Yes, different tools do make you see the same problem from different
>angles. I am trying to repeat my exercises in J also in Mathematica and
>vice versa. Some colleagues are saying I should learn F#; however, I
>find J to be more mind-opening in the sense that I have dabbled with
>some functional languages and you can do functional in J too.
>My speed question is not simply a general benchmark question. I am
>curious about how things work beneath the IDE and the J scripts. For
>instance, how pi or 'o. 1' is implemented. Is it calling a C routine
>that uses a standard way of calculating pi? Or sin (1 o. 1r3p1 =
>0.866)? I am not an HFT looking to shave calculation times by
>milliseconds (nanoseconds?), but when I dabbled with the programming
>language Oz, some calculations took minutes!
>Thanks again.
>
>Rob
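
(For reference, here are those two expressions in a J session at the
default print precision; this only shows what they denote, not how the
primitives are implemented underneath:)

  o. 1              NB. pi times 1
3.14159
  1 o. 1r3p1        NB. sine of pi/3
0.866025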

