Hello,
I need to read in a CSV of about 360,000 lines with date and numerical
values. Attached is a sample excerpt of that file.
So far I did:
==== CODE ====
// read the numeric columns (skip the 6 header lines)
tic
mydat = csvRead('dat04-2011.csv', ';', ',', 'double', [], [], [], 6);
toc  // 5.213 s
mydat = mydat(:,2:6);

// read the same file again as strings to get the date column
tic
mystring = csvRead('dat04-2011.csv', ';', ',', 'string', [], [], [], 6);
toc  // 3.077 s
mystring = mystring(:,1);

// split each date string on '.', ' ' and ':' and convert to numbers
tic
for i = 1:size(mydat, 1)
    mydate(i,:) = strtod(strsplit(mystring(i,1), ['.'; ' '; ':']))';
end
toc  // 186.473 s
==== CODE ====
(I filled in the toc values).
As you can see, this is unfortunately very slow: the reading of the CSV,
but especially the for loop.
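For reference, here is what the per-row parse computes, sketched in Python for illustration (the regex and variable names are mine, not part of the Scilab code): each timestamp is split on '.', ' ' and ':' and each piece is converted to a number, mirroring the strtod(strsplit(...)) call above.

```python
import re

# One sample timestamp field, taken from the excerpt below.
stamp = "03.01.2004 14:20"

# Split on '.', ' ' and ':' and convert each piece to a number,
# mirroring strtod(strsplit(mystring(i,1), ['.'; ' '; ':'])).
parts = [float(p) for p in re.split(r"[. :]", stamp)]
print(parts)  # [3.0, 1.0, 2004.0, 14.0, 20.0]
```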
So I have several questions:
1)
Is there a faster way to read in the csv? Note that I need the 'header'
option.
2)
Instead of the loop I would like to use
mydate = strtod(strsplit(mystring(:,1),['.';' ';':']))';
but this doesn't work. Is there another way to avoid the loop?
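One workaround I have seen suggested for this kind of row-by-row split is to concatenate all the strings, split once, and reshape the flat result. A minimal Python sketch of the idea (the sample data and all names here are hypothetical, and this is not Scilab code):

```python
import re

# Stand-in for mystring(:,1); the real file has ~360,000 of these rows.
stamps = ["03.01.2004 14:20", "05.01.2004 13:40", "10.01.2004 13:10"]

# Join everything into one string, split once on '.', ' ' and ':',
# then reshape the flat list into rows of 5 numbers each.
blob = " ".join(stamps)
flat = [float(p) for p in re.split(r"[. :]", blob)]
mydate = [flat[i:i + 5] for i in range(0, len(flat), 5)]
print(mydate[0])  # [3.0, 1.0, 2004.0, 14.0, 20.0]
```

In Scilab the analogous approach would presumably be a strcat over mystring followed by a single strsplit/strtod and a matrix() reshape, but I have not benchmarked that.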
3)
The raw CSV file is around 15 MB, but when I try to read it in a second
time, Scilab says this would exceed the stacksize, which defaults to 76 MB.
I don't quite understand why reading the 15 MB file twice takes so much
memory. I have raised the stacksize for now, but I would rather not have to.
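For scale, a back-of-envelope estimate of the double matrix alone (assuming 360,000 rows and 6 numeric columns of 8-byte doubles; how Scilab accounts for this on its stack is my assumption, not something I have verified):

```python
# Rough memory estimate for the numeric matrix alone.
# Assumptions: 360,000 rows, 6 columns, 8-byte doubles.
rows, cols, bytes_per_double = 360_000, 6, 8
mb = rows * cols * bytes_per_double / 1e6
print(mb)  # 17.28
```

So the doubles alone already exceed the raw file size, and the string copy of the file plus any intermediates created during the loop come on top of that.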
Any help is appreciated.
Thanks!
Richard

# S
# Parame
# Unit:
# Titles:
Timity
# Data:
03.01.2004 14:20;9,33;6,96;11,1;0,75;2
05.01.2004 13:40;8,58;7,34;9,56;0,38;2
10.01.2004 13:10;7,33;6,19;8,79;0,58;2
13.01.2004 06:10;16,07;12,92;20,62;1,27;2
25.01.2004 18:20;4,15;3,88;4,46;0,15;2
15.02.2004 00:30;3,49;3,11;3,78;0,17;2
27.02.2004 03:10;8,33;7,34;9,46;0,36;2
15.03.2004 08:50;15,04;13,31;17,16;0,49;2
19.03.2004 06:00;14,4;13,02;15,62;0,38;2
_______________________________________________
users mailing list
[email protected]
http://lists.scilab.org/mailman/listinfo/users