On 20 Mar 2012, at 14:40, Chao YUE wrote:
> I would agree, thanks!
> I used gawk to split the file into many files by year; those are easier
> to handle.
> Anyway, it's not good practice to produce txt files with that many lines.
Indeed it's not, but it's also not good practice to
I would agree, thanks!
I used gawk to split the file into many files by year; those are easier to
handle.
Anyway, it's not good practice to produce txt files with that many lines.
Chao
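For the record, the gawk-style split can also be done in a few lines of Python, reading one line at a time so memory use stays flat. This is only a sketch: the comma-separated "lat,lon,var1,year" layout is taken from the original post, but the function and file names are made up.

```python
# Sketch: split a big "lat,lon,var1,year" text file into one file per year.
# Reads one line at a time, so memory use does not depend on file size.
# Function and file names are illustrative, not from the thread.

def split_by_year(src_path, prefix="by_year_"):
    handles = {}  # year (as a string) -> open output file
    try:
        with open(src_path) as src:
            for line in src:
                # year is the last comma-separated column
                year = line.rsplit(",", 1)[-1].strip()
                if year not in handles:
                    # NB: years 1001-2006 means ~1000 open files; raise the
                    # OS open-file limit or close/reopen in batches if needed.
                    handles[year] = open(f"{prefix}{year}.txt", "w")
                handles[year].write(line)
    finally:
        for fh in handles.values():
            fh.close()
    return sorted(handles)  # the years encountered
```

This mirrors what a gawk one-liner printing to per-year files would do, just with explicit handle management.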
2012/3/20 David Froger
Hi,
I think writing a Python script that converts your txt file to one netCDF
file, reading the txt file one line at a time, and then using the netCDF
file normally would be a good solution!
Best,
David
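A minimal sketch of the reading half of this suggestion, assuming the comma-separated "lat,lon,var1,year" layout from the original post: count the records first, preallocate numpy arrays, then parse one line at a time, so neither genfromtxt's per-row overhead nor a giant in-memory string is needed. The netCDF write itself is only indicated in a comment, since which netCDF package is available was not stated in the thread.

```python
import numpy as np

def txt_to_arrays(path):
    """Parse a big 'lat,lon,var1,year' text file one line at a time.

    First pass counts the records so the arrays can be preallocated;
    second pass fills them in place. Peak memory is just the arrays.
    """
    with open(path) as fh:
        n = sum(1 for _ in fh)
    lat  = np.empty(n, dtype=np.float64)
    lon  = np.empty(n, dtype=np.float64)
    var1 = np.empty(n, dtype=np.float64)
    year = np.empty(n, dtype=np.int32)
    with open(path) as fh:
        for i, line in enumerate(fh):
            a, b, c, d = line.split(",")
            lat[i], lon[i] = float(a), float(b)
            var1[i], year[i] = float(c), int(d)
    return lat, lon, var1, year

# Writing the arrays out would then be one more step, e.g. with the
# netCDF4 package (not shown; package availability is an assumption):
#   ds = netCDF4.Dataset("out.nc", "w")
#   ds.createDimension("record", len(lat)) ... etc.
```

For ~30 million records this needs roughly 30e6 * (3*8 + 4) bytes, i.e. under 1 GB, versus the much larger intermediate structures genfromtxt builds.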
Excerpts from Chao YUE's message of Tue Mar 20 13:33:56 +0100 2012:
Dear all,
I received a file from someone else which contains ~30 million lines and is
~500 MB in size.
I tried to read it with numpy.genfromtxt in an interactive IPython session,
and IPython crashed.
The data contains lat, lon, var1, and year; the year ranges from 1001 to 2006.
Finally, I want to write the data to a netCDF file.
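One more option worth noting: numpy.genfromtxt also accepts a list of lines instead of a filename, so the file can be parsed in bounded-size chunks rather than all at once. A sketch (the chunk size is arbitrary, and the final concatenated array still has to fit in memory):

```python
import numpy as np
from itertools import islice

def read_in_chunks(path, chunksize=1_000_000):
    """Read a huge delimited text file with numpy.genfromtxt in chunks,
    so only `chunksize` lines are being parsed at any one time."""
    chunks = []
    with open(path) as fh:
        while True:
            block = list(islice(fh, chunksize))
            if not block:
                break
            # atleast_2d keeps the shape consistent when the last
            # chunk happens to contain a single line
            chunks.append(np.atleast_2d(np.genfromtxt(block, delimiter=",")))
    return np.concatenate(chunks)
```

This avoids genfromtxt's whole-file intermediate structures, which is the usual cause of the crash on a ~30-million-line file, though the result itself (~30e6 rows * 4 float64 columns, about 1 GB) still needs to fit in RAM.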