/libmkl_intel_thread.a
$(MKLROOT)/lib/em64t/libmkl_core.a -Wl,--end-group -openmp -lpthread
On Mon, Mar 14, 2011 at 11:58 PM, Ralf Gommers ralf.gomm...@googlemail.com wrote:
On Tue, Mar 15, 2011 at 8:12 AM, Mag Gam magaw...@gmail.com wrote:
Trying to compile Numpy with Intel's MKL. I have exported the proper
paths for BLAS and LAPACK and I think the build script found it.
However, I am having a lot of trouble with ATLAS. What library file
should I use for it?
tia
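For a build like the one quoted above, the MKL libraries are usually picked up through NumPy's site.cfg rather than ATLAS settings. A rough sketch of the [mkl] section, assuming a typical /opt/intel layout (paths and library names below are guesses for that MKL era — the scipy.org page linked later in this thread has the authoritative values):

```ini
[mkl]
library_dirs = /opt/intel/mkl/lib/em64t
include_dirs = /opt/intel/mkl/include
mkl_libs = mkl_intel_lp64, mkl_intel_thread, mkl_core
lapack_libs =
```

With MKL configured this way, ATLAS is not needed at all — MKL provides both BLAS and LAPACK.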
Planning to compile Numpy with Intel C compiler
(http://www.scipy.org/Installing_SciPy/Linux#head-7ce43956a69ec51c6f2cedd894a4715d5bfff974).
I was wondering if there was a benefit.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
be faster than the multiple dictionary accesses. But don't forget: you are trading an O(n) algorithm for an O(n log n) one with a lower constant, so n should not be too big. Just try different values.
Frédéric Bastien
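The trade-off Frédéric describes can be sketched like this: sort the keys once, then answer all lookups in bulk with a binary search, instead of hitting a Python dict one key at a time. The data below is a made-up example:

```python
import numpy as np

# Toy key/value data standing in for the real 200+ million-row array.
keys = np.array([3, 1, 4, 1, 5, 9, 2, 6])
vals = np.array([30, 10, 40, 10, 50, 90, 20, 60])
queries = [9, 4, 2]

# dict-based lookup: O(1) per key, but with Python-level overhead per element.
d = dict(zip(keys.tolist(), vals.tolist()))
dict_result = [d[q] for q in queries]

# array-based lookup: sort once (O(n log n)), then binary-search all
# queries in one vectorized call with a much lower per-element constant.
order = np.argsort(keys)
sorted_keys, sorted_vals = keys[order], vals[order]
idx = np.searchsorted(sorted_keys, queries)
bulk_result = sorted_vals[idx].tolist()

print(dict_result)  # [90, 40, 20]
print(bulk_result)  # [90, 40, 20]
```

Both approaches agree; which one wins in wall-clock time depends on n and on how many lookups are batched together, so (as the post says) it is worth timing both.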
On Thu, Jul 9, 2009 at 7:14 AM, Mag Gam magaw...@gmail.com wrote:
The problem is the array is very large. We are talking about 200+ million rows.
On Thu, Jul 9, 2009 at 4:41 AM, David Warde-Farley d...@cs.toronto.edu wrote:
On 9-Jul-09, at 1:12 AM, Mag Gam wrote:
Hey All
I am reading thru a file and trying to store the values into another
array, but instead of storing the values 1 by 1, I would like to store
them in bulk sets for optimization purposes.
Here is what I have, which does it 1x1:

z = {}  # dictionary
r = csv.reader(file)
for i, row in enumerate(r):
    z[i] = row  # store each row one at a time
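One way to store the values in bulk rather than one by one is to let csv.reader drain into a list and convert to a NumPy array in a single call. A minimal sketch, using a small in-memory file as a stand-in for the real data:

```python
import csv
import io

import numpy as np

# In-memory stand-in for the real CSV file (layout is hypothetical).
data = io.StringIO("1,2,3\n4,5,6\n7,8,9\n")

# Accumulate every row, then build one contiguous array in a single
# conversion instead of inserting element by element into a dict.
rows = list(csv.reader(data))
arr = np.array(rows, dtype=float)

print(arr.shape)  # (3, 3)
```

The single np.array call avoids the per-element Python overhead of the dict approach, though it still needs enough memory to hold the row list and the array at once.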
Is it possible to use loadtxt in a multithreaded way? Basically, I want to process a very large CSV file (100+ million records); instead of loading the whole file, I'd like to load a thousand elements into a buffer, process them, then load another thousand, process those, and so on...
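The buffered loading described above can be done with plain np.loadtxt, since it accepts any iterable of lines, not just a filename. A sketch, with a tiny in-memory file and a made-up two-column layout standing in for the real data:

```python
import io
from itertools import islice

import numpy as np

# Stand-in for the huge CSV file: ten rows of "i,2*i".
f = io.StringIO("".join(f"{i},{i * 2}\n" for i in range(10)))

chunk_size = 4  # in practice something like 100_000
total = 0.0
while True:
    lines = list(islice(f, chunk_size))  # pull at most chunk_size lines
    if not lines:
        break
    # np.loadtxt parses each slice of lines independently, so only one
    # chunk is ever held in memory; ndmin=2 keeps single-row chunks 2-D.
    chunk = np.loadtxt(lines, delimiter=",", ndmin=2)
    total += chunk[:, 1].sum()

print(total)  # 90.0
```

This is single-threaded, but it bounds memory use; the parsing of one chunk could in principle be handed to a worker while the next chunk is read.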
I was wondering if there is a technique or some sample code for mapping a compressed CSV file into memory, and loading the dataset into a dset (HDF5 structure)?
TIA
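The compressed file never has to be decompressed to disk: gzip.open streams it line by line. A sketch using only the standard library, with a tiny generated archive standing in for the real one (file name and column layout are assumptions); the HDF5 step is only indicated in a comment, since it would need h5py or PyTables:

```python
import csv
import gzip
import os
import tempfile

# Build a small gzipped CSV to stand in for the real compressed file.
fd, path = tempfile.mkstemp(suffix=".csv.gz")
os.close(fd)
with gzip.open(path, "wt") as f:
    f.write("a,b\n1,2\n3,4\n")

# gzip.open decompresses on the fly, so only the current buffer of the
# (potentially 50 GB) archive is ever in memory.
with gzip.open(path, "rt") as f:
    reader = csv.reader(f)
    header = next(reader)
    rows = [tuple(map(int, row)) for row in reader]
    # Here each batch of rows could be appended to a resizable HDF5
    # dataset (e.g. an h5py dataset with maxshape=(None, ...)).

os.remove(path)
print(header)  # ['a', 'b']
print(rows)    # [(1, 2), (3, 4)]
```

In real use the rows would be consumed in fixed-size batches (as in the chunked-loading examples elsewhere in this thread) rather than collected into one list.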
On Thu, Jun 25, 2009 at 9:50 PM, Anne Archibald peridot.face...@gmail.com wrote:
2009/6/25 Mag Gam magaw...@gmail.com:
Hello.
I am very new to NumPy and Python. We
On Friday 26 June 2009 12:38:11, Mag Gam wrote:
Thanks everyone for the great and well thought out responses!
To make matters worse, this is actually a 50 GB compressed CSV file. So it looks like this: 2009.06.01.plasmasub.csv.gz. We get this data from another lab on the West Coast every night.
On Fri, Jun 26, 2009 at 7:31 AM, Francesc Alted fal...@pytables.org wrote:
On Friday 26 June 2009 13:09:13, Mag Gam wrote:
I really like the slice by slice idea!
Hmm, after looking at the np.loadtxt() docstring it seems it works by loading
the complete file at once, so you shouldn't use