A comparative timing on Mac OS X, 2.1 GHz PowerPC G5:

      erf    =: (1 H. 1.5)@*: * 2p_0.5&* % ^@:*:
      n01cdf =: -: @ >: @ erf @ %&(%:2)
      ny     =: n01cdf nx=: i:5j2000
      randn  =: nx {~ ny I. ?@$&0

      matrix =: randn(312,256)
      vector =: randn(256,1)
   1000 * 1000 (6!:2) 'matrix +/ .* vector'
0.529984
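For readers new to these verbs: `erf` builds the error function from the hypergeometric conjunction `H.`, `n01cdf` is the standard normal CDF, and `randn` draws uniform variates and maps them through a 2001-point table of CDF values on [-5, 5] (interval index `I.`, then lookup `{~`). The same table-based inverse-transform sampling can be sketched in Python/NumPy (the names `randn_table`, `nx`, `ny` are illustrative, not from the thread):

```python
from math import erf, sqrt

import numpy as np

# 2001-point table of the standard normal CDF on [-5, 5],
# mirroring  nx =: i:5j2000  and  ny =: n01cdf nx  in the J session.
nx = np.linspace(-5.0, 5.0, 2001)
ny = np.array([0.5 * (1.0 + erf(x / sqrt(2.0))) for x in nx])

def randn_table(shape, rng=np.random.default_rng(0)):
    """Table-based inverse-transform sampling of N(0, 1)."""
    u = rng.random(shape)                    # uniform draws on [0, 1)
    idx = np.searchsorted(ny, u)             # like J's interval index  ny I. u
    return nx[np.clip(idx, 0, nx.size - 1)]  # table lookup, like  nx {~ idx

sample = randn_table((312, 256))
```

The samples are clipped to the table's range, so values beyond ±5 standard deviations cannot occur; for a timing benchmark that truncation is harmless.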


Intel machines might be faster, e.g. on a 3.0 GHz dual-core
Linux machine:

      erf    =: (1 H. 1.5)@*: * 2p_0.5&* % ^@:*:
      n01cdf =: -: @ >: @ erf @ %&(%:2)
      ny     =: n01cdf nx=: i:5j2000
      randn  =: nx {~ ny I. ?@$&0

      matrix =: randn(312,256)
      vector =: randn(256,1)
   1000 * 1000 (6!:2) 'matrix +/ .* vector'
0.229005

Both of these were run in j602 beta.
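For a rough cross-check outside J and MATLAB (my addition, not part of the thread), the same 1000-iteration matrix-vector benchmark in Python/NumPy, with the 312x256 dimensions from the quoted code below:

```python
import time

import numpy as np

rng = np.random.default_rng(0)
matrix = rng.standard_normal((312, 256))
vector = rng.standard_normal((256, 1))

start = time.perf_counter()
for _ in range(1000):
    output = matrix @ vector   # J: matrix +/ .* vector ; MATLAB: matrix*vector
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"1000 products: {elapsed_ms:.3f} ms total")
```

All three environments dispatch this to a floating-point inner product, so timings of the same order are expected.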

- joey

At 10:13  -0800 2007/11/08, [EMAIL PROTECTED] wrote:
I told my friend about how nice J was.  Am I wrong?

He said: how fast can you multiply a 1000x1000 matrix by a
1000x1 vector to get a 1000x1 result vector?

His MATLAB code ran in about 0.6 seconds.

His code follows..

matrix = randn(312,256);
vector = randn(256,1);

tic
for i = 1:1000
    output = matrix*vector;
end
toc

I wrote my code, which runs in about 2.5 minutes.

Here is my code:

#!/home/efittery/bin/jconsole

A =: 256 312 $ 1+ i. 312 * 256x
B =: 312   1 $ 1+ i. 312x

jumbo =: monad define
    for. i. y do.
        yVector =: A +/ .* B
    end.
    )

jumbo 1000
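A likely culprit for the slowdown (my reading, not stated in the thread): the trailing `x` in `256x` and `312x` makes A and B extended-precision integers, so `+/ .*` runs in exact arbitrary-precision arithmetic rather than hardware floating point. A rough Python/NumPy sketch of the same effect, with object-dtype arrays of Python ints standing in for J's extended integers:

```python
import time

import numpy as np

n, m = 256, 312
# Same data as the J code: 1 + i. 312 * 256, shaped 256 x 312.
A_float = np.arange(1, n * m + 1, dtype=np.float64).reshape(n, m)
A_exact = np.arange(1, n * m + 1, dtype=object).reshape(n, m)  # Python ints
b_float = np.arange(1, m + 1, dtype=np.float64)
b_exact = np.arange(1, m + 1, dtype=object)

t0 = time.perf_counter(); y_float = A_float.dot(b_float); t_f = time.perf_counter() - t0
t0 = time.perf_counter(); y_exact = A_exact.dot(b_exact); t_x = time.perf_counter() - t0
print(f"float64: {t_f:.6f} s   exact ints: {t_x:.6f} s")
```

Dropping the `x` suffixes (i.e. using ordinary floating-point data, as in the session at the top of this message) should bring the J timing into line with the MATLAB run.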

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
