On 26/09/2012 19:42, Paul Carrico wrote:

Dear All,

A funny result when calculating the norm of a tensor ... for once a "traditional" method (which probably uses vectorization behind the scenes) is faster than the norm function ...

Paul

mode(0);

A = [1 2 3; 4 5 6; 7 8 9]

n = 1000

for i = 1 : n
    for j = 1 : n
        B(i,j) = i*j;
    end
end

// Scilab function
tic();
norm(B)
t1 = toc()

// "traditional" method
// norm = sqrt(trace(A_transp * A))
tic();
norm_ = (trace(B'*B))**0.5
t2 = toc()


Hi
Yep, same result for me: the handmade computation is 4 times faster. Maybe norm is written assuming B is a complex hypermatrix or something.
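A side note (my own observation, not something established in this thread): Scilab's norm(B) with no second argument is the 2-norm (largest singular value), whereas sqrt(trace(B'*B)) is the Frobenius norm. The two agree here only because this particular B = (1:n)'*(1:n) has rank one. A quick sketch to check this on a generic matrix:

// norm(C) is the 2-norm; sqrt(trace(C'*C)) is the Frobenius norm.
// For a rank-one matrix they coincide; for a generic one they do not:
C = rand(3,3);
norm(C) - sqrt(trace(C'*C))        // nonzero in general
norm(C,'fro') - sqrt(trace(C'*C))  // zero, up to rounding

So part of the timing gap may simply be that norm(B) computes singular values, which is far more work than a trace.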

But the real improvement would be to use a "vectorized" code to generate B, try this:


tic();
n = 1000
for i = 1 : n
    for j = 1 : n
        B(i,j) = i*j;
    end
end
toc()

tic();
BB = (1:n)'*(1:n);
toc()

The second method is several hundred times faster than the first.


Note that the first method uses loops where matrix computations can do the job, and, worse, B is not initialized.

This means that every so often Scilab will notice that B is larger than expected, will ask for a new, larger block of memory to store B, and will copy the existing B from the old location to the new one, only to find out later that B is actually even bigger. I do not know exactly how this is implemented, but this can happen basically for each new value of i, so 1000 times.

The take-away message is: when building a matrix with a loop, initialize your matrix:

tic();
n = 1000
for i = 1 : n
    for j = 1 : n
        Bad(i,j) = i*j;
    end
end
bad = toc()


tic();
n = 1000
Good = zeros(n,n);
for i = 1 : n
    for j = 1 : n
        Good(i,j) = i*j;
    end
end
good = toc()

bad/good

// (on my PC this is already 3x faster)

tic();
Best = (1:n)'*(1:n);
best = toc()

bad/best
_______________________________________________
users mailing list
[email protected]
http://lists.scilab.org/mailman/listinfo/users
