On 02.09.2011, at 1:47AM, Russell E. Owen wrote:

I've made a pull request
https://github.com/numpy/numpy/pull/144
implementing that option as a switch 'prescan'; could you review it, in
particular regarding the following: Is the option reasonably named and
documented? In the case the …
In article
<781af0c6-b761-4abb-9798-938558253...@astro.physik.uni-goettingen.de>,
Derek Homeier <de...@astro.physik.uni-goettingen.de> wrote:
On 11.08.2011, at 8:50PM, Russell E. Owen wrote:
> It seems a shame that loadtxt has no argument for predicted length,
> which would allow preallocation.
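The preallocation Russell asks for can be sketched as a two-pass reader: count the lines first, then parse into an array allocated once. This is only an illustrative sketch (the function name and tab-delimited format are assumptions), not numpy's actual 'prescan' implementation:

```python
# Two-pass preallocation sketch: count rows cheaply, then parse into a
# preallocated array.  Hypothetical helper, not numpy's 'prescan' code.
import io
import numpy as np

def loadtxt_prealloc(f, ncols):
    nrows = sum(1 for _ in f)        # pass 1: count lines only
    f.seek(0)
    out = np.empty((nrows, ncols))   # allocate the result once
    for i, line in enumerate(f):     # pass 2: parse each row in place
        out[i] = [float(x) for x in line.split('\t')]
    return out

# Usage with an in-memory stand-in for a real file:
buf = io.StringIO("1\t2\n3\t4\n5\t6\n")
a = loadtxt_prealloc(buf, ncols=2)   # no temporary list of rows
```

The win is that nothing but the final array is ever held in memory, at the cost of reading the file twice.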
In article
<ca+rwobwjyy_abjijnxepkseraeom608uimywffgag-6xdgs...@mail.gmail.com>,
Torgil Svensson <torgil.svens...@gmail.com> wrote:
Try the fromiter function, which will allow you to pass an iterator
that can read the file line by line and not preload the whole file.

import itertools
import numpy as np

file_iterator = iter(open('filename.txt'))
line_parser = lambda x: map(float, x.split('\t'))
a = np.fromiter(itertools.imap(line_parser, file_iterator), dtype=float)
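(`itertools.imap` is Python 2. A Python 3 re-sketch of the same fromiter idea flattens each parsed line into a single scalar stream, since `np.fromiter` consumes scalars; the helper name and column count here are illustrative assumptions:)

```python
import io
import numpy as np

def fromiter_columns(lines, ncols):
    # Flatten tab-separated fields into one scalar stream so fromiter
    # can build the array incrementally, without preloading the file.
    flat = (float(field) for line in lines for field in line.split('\t'))
    return np.fromiter(flat, dtype=np.float64).reshape(-1, ncols)

# Usage with an in-memory stand-in for open('filename.txt'):
text = io.StringIO("1.0\t2.0\n3.0\t4.0\n")
arr = fromiter_columns(text, ncols=2)
```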
On 8/10/2011 1:01 PM, Anne Archibald wrote:
> There was also some work on a semi-mutable array type that allowed
> appending along one axis, then 'freezing' to yield a normal numpy
> array (unfortunately I'm not sure how to find it in the mailing list
> archives).

That was me, and here is the thread.

aarrgg!
I cleaned up the doc string a bit, but didn't save before sending --
here it is again. Sorry about that.
-Chris
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE  (206) 526-6329   fax
A coworker is trying to load a 1Gb text data file into a numpy array
using numpy.loadtxt, but he says it is using up all of his machine's 6Gb
of RAM. Is there a more efficient way to read such text data files?
-- Russell
___
NumPy-Discussion mailing list
On 10 Aug 2011, at 19:22, Russell E. Owen wrote:
> A coworker is trying to load a 1Gb text data file into a numpy array
> using numpy.loadtxt, but he says it is using up all of his machine's 6Gb
> of RAM. Is there a more efficient way to read such text data files?

The npyio routines (loadtxt as …
There was also some work on a semi-mutable array type that allowed
appending along one axis, then 'freezing' to yield a normal numpy
array (unfortunately I'm not sure how to find it in the mailing list
archives). One could write such a setup by hand, using mmap() or
realloc(), but I'd be inclined
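The appendable-then-freeze idea Anne describes might be sketched along these lines, using amortized doubling in place of realloc(); the class and method names are hypothetical, not the actual type from the archives:

```python
import numpy as np

class AppendableArray:
    """Grows along axis 0 by doubling its buffer; freeze() yields a
    plain ndarray.  Hypothetical sketch of the idea described above."""
    def __init__(self, row_shape, dtype=float):
        self._buf = np.empty((4,) + tuple(row_shape), dtype=dtype)
        self._n = 0

    def append(self, row):
        if self._n == len(self._buf):
            # Double the capacity so appends are amortized O(1).
            self._buf = np.concatenate([self._buf, np.empty_like(self._buf)])
        self._buf[self._n] = row
        self._n += 1

    def freeze(self):
        # Return only the filled rows as an ordinary array.
        return self._buf[:self._n].copy()

# Usage:
acc = AppendableArray(row_shape=(2,))
for i in range(5):
    acc.append([i, i + 0.5])
frozen = acc.freeze()   # an ordinary (5, 2) ndarray
```

A real implementation would worry about freeing the oversized buffer (or returning a view) and about thread safety, but the growth pattern is the core of it.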
On Wed, Aug 10, 2011 at 04:01:37PM -0400, Anne Archibald wrote:
> A 1 Gb text file is a miserable object anyway, so it might be desirable
> to convert to (say) HDF5 and then throw away the text file.

+1

G
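A minimal sketch of the suggested text-to-HDF5 conversion, assuming the third-party h5py package is available; the function name, file names, and chunk size are illustrative, not a recipe from this thread:

```python
# Convert a tab-separated text file to HDF5 in bounded memory by
# appending chunk_lines rows at a time to a resizable dataset.
import os
import tempfile
import numpy as np
import h5py

def text_to_hdf5(text_path, h5_path, ncols, chunk_lines=10000):
    with open(text_path) as f, h5py.File(h5_path, 'w') as h5:
        dset = h5.create_dataset('data', shape=(0, ncols),
                                 maxshape=(None, ncols), dtype='f8')
        while True:
            rows = [line.split('\t')
                    for _, line in zip(range(chunk_lines), f)]
            if not rows:
                break
            block = np.asarray(rows, dtype='f8')
            dset.resize(dset.shape[0] + len(block), axis=0)
            dset[-len(block):] = block

# Usage: write a tiny text file, convert it, and read it back.
tmp = tempfile.mkdtemp()
txt, h5f = os.path.join(tmp, 'a.txt'), os.path.join(tmp, 'a.h5')
with open(txt, 'w') as f:
    f.write("1.0\t2.0\n3.0\t4.0\n")
text_to_hdf5(txt, h5f, ncols=2)
with h5py.File(h5f, 'r') as h5:
    data = h5['data'][...]
```

Once converted, the HDF5 file can be memory-mapped or read in slices, so the 1 Gb-of-text problem only has to be paid once.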
On 10 Aug 2011, at 22:03, Gael Varoquaux wrote:
> On Wed, Aug 10, 2011 at 04:01:37PM -0400, Anne Archibald wrote:
>> A 1 Gb text file is a miserable object anyway, so it might be desirable
>> to convert to (say) HDF5 and then throw away the text file.
>
> +1

There might be concerns about ensuring data …
On 10. aug. 2011, at 21.03, Gael Varoquaux wrote:
> On Wed, Aug 10, 2011 at 04:01:37PM -0400, Anne Archibald wrote:
>> A 1 Gb text file is a miserable object anyway, so it might be desirable
>> to convert to (say) HDF5 and then throw away the text file.
>
> +1
>
> G

+1 and a very warm recommendation.