Hello,
Ok, I will try dynamic link, since I want the project to stay in the
Scilab world, but another alternative is Julia (http://julialang.org/).
I made some tests last night and got a 25x speedup; the explanation is
JIT compilation. Maybe we will have JIT compilation in Scilab 6?
Best regards,
S.
On 24/04/2015 09:30, [email protected] wrote:
Hello Stephane,
We have a Scilab program which performs a numerical integration on
data points in three dimensions - it has two nested loops. When the
number of data points was large this was slow, so we implemented the
calculation function in C and got a speed improvement of about 24 times!
We also found three other improvements:
- using pointer arithmetic was faster than array indexing in 'for' loops,
- 'pow(x, 2)' was faster than x*x,
- handling the data as three (N x 1) vectors was faster than using one
  (N x 3) matrix,
each giving something like a 3-4% improvement - small compared to the
x24, but still worth having (a rough sketch of these ideas follows below).
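To give a flavour of what that looked like, here is a minimal sketch (not
our actual code) of that kind of inner loop, assuming the data is passed
as three separate (N x 1) arrays and the kernel just accumulates squared
distances to a reference point:

#include <math.h>

/* Hypothetical inner loop: walk the three coordinate arrays with
   pointers rather than indexing them, and use pow(dx, 2) for the
   squares (which came out slightly faster in our tests). */
double sum_sq_dist(const double *x, const double *y, const double *z,
                   int n, double x0, double y0, double z0)
{
    const double *px = x, *py = y, *pz = z;
    const double *end = x + n;
    double acc = 0.0;
    while (px < end) {
        acc += pow(*px++ - x0, 2)
             + pow(*py++ - y0, 2)
             + pow(*pz++ - z0, 2);
    }
    return acc;
}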
If you don't mind tackling the dynamic linking, it's probably worth the
effort if you'll use this program a few times - good luck.
Adrian.
Adrian Weeks
Development Engineer, Hardware Engineering EMEA
Office: +44 (0)2920 528500 | Desk: +44 (0)2920 528523 | Fax: +44 (0)2920 520178
[email protected]
Unit 3, Cae Gwyrdd,
Green meadow Springs,
Cardiff, UK,
CF15 7AB
www.hidglobal.com <http://www.hidglobal.com/>
From: Stéphane Mottelet <[email protected]>
To: "International users mailing list for Scilab."
<[email protected]>
Date: 23/04/2015 22:52
Subject: [Scilab-users] Ways to speed up simple things in Scilab ?
Sent by: "users" <[email protected]>
------------------------------------------------------------------------
Hello,
I am currently working on a project where Scilab code is automatically
generated, and after many code optimizations, the remaining bottleneck is
the time that Scilab spends executing simple code like this (the full
script, where the vector has 839 lines, is attached with timings):
M1_v=[v(17)
v(104)
v(149)
-(v(18)+v(63)+v(103))
-(v(18)+v(63)+v(103))
v(17)
...
v(104)
v(149)
]
Large vectors of this kind are then used to build a sparse matrix each
time the vector v changes, but with a constant sparsity pattern.
Actually, the time spent by Scilab in the statement
M1=sparse(M1_ij,M1_v,[n1,n2])
is negligible compared to the time spent building M1_v...
I have also noticed that if you need to define such a matrix with more
than one column, the elapsed time is not linear in the number of
columns: typically 4 times slower for 2 columns. In fact, the
statement
v=[1 1
...
1000 1000]
is even two times slower than
v1=[1
...
1000];
v2=[1
...
1000];
v=[v1 v2];
So my question to users who have experience with dynamic linking of user
code: do you think that dynamically linking compiled, generated C code
could improve the timings?
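To make the question concrete, the generated C code would be something
like the following hypothetical sketch (0-based indices, mirroring the
Scilab snippet above), which I would then compile and load through
Scilab's dynamic link facilities (e.g. ilib_for_link / link / call):

/* Hypothetical generated C function: fill the nonzero values M1_v from
   the current vector v. The sparsity pattern M1_ij never changes, so
   only these assignments have to be rerun when v changes. */
void build_M1_v(const double *v, double *M1_v)
{
    M1_v[0] = v[16];
    M1_v[1] = v[103];
    M1_v[2] = v[148];
    M1_v[3] = -(v[17] + v[62] + v[102]);
    M1_v[4] = -(v[17] + v[62] + v[102]);
    M1_v[5] = v[16];
    /* ... one assignment per nonzero entry (839 in the real script) ... */
}

The Scilab side would then only have to rebuild the sparse matrix from
M1_ij and this refreshed M1_v.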
Thanks in advance for your help!
S.
[attachment "test.sce" deleted by Adrian Weeks/CWL/EU/ITG]
--
Département de Génie Informatique
EA 4297 Transformations Intégrées de la Matière Renouvelable
Université de Technologie de Compiègne - CS 60319
60203 Compiègne cedex
_______________________________________________
users mailing list
[email protected]
http://lists.scilab.org/mailman/listinfo/users