### [Numpy-discussion] ANN: Numexpr 2.0 released

to Mark Wiebe for such an important contribution! For some benchmarks on the new virtual machine, see: http://code.google.com/p/numexpr/wiki/NewVM Also, Gaëtan de Menten contributed important bug fixes, code cleanup as well as speed enhancements. Francesc Alted contributed some fixes, and added

### Re: [Numpy-discussion] ANN: Numexpr 2.0 released

that ufuncs provide: namely reduceat, accumulate, reduce? It is entirely possible that they are already in there, but I could not figure out how to use them. If they aren't, it would be great to have them. No, these are not implemented, but we will gladly accept contributions ;) -- Francesc Alted

### [Numpy-discussion] ANN: Numexpr 2.0.1 released

. Enjoy! -- Francesc Alted ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion

### Re: [Numpy-discussion] ANN: Numexpr 2.0.1 released

-boun...@scipy.org] On Behalf Of Francesc Alted [fal...@gmail.com] Sent: 08 January 2012 12:49 To: Discussion of Numerical Python; numexpr Subject: [Numpy-discussion] ANN: Numexpr 2.0.1 released == Announcing Numexpr 2.0.1 == Numexpr

### Re: [Numpy-discussion] simple manipulations of numpy arrays

of the tutorial: https://github.com/FrancescAlted/carray/blob/master/doc/tutorial.rst Hope it helps, -- Francesc Alted

### Re: [Numpy-discussion] simple manipulations of numpy arrays

On Feb 10, 2012, at 4:50 PM, Francesc Alted wrote: https://github.com/FrancescAlted/carry Hmm, this should be: https://github.com/FrancescAlted/carray Blame my (too) smart spell corrector. -- Francesc Alted

### Re: [Numpy-discussion] Commit rights to NumPy for Francesc Alted

On Feb 12, 2012, at 12:07 AM, Ralf Gommers wrote: On Sat, Feb 11, 2012 at 11:06 PM, Fernando Perez fperez@gmail.com wrote: On Sat, Feb 11, 2012 at 11:11 AM, Travis Oliphant tra...@continuum.io wrote: I propose to give Francesc Alted commit rights to the NumPy project. +1. Thanks

### Re: [Numpy-discussion] Index Array Performance

that the indices were integers, so this is probably the reason why it is that much faster. This is not to say that indexing in NumPy could not be accelerated, but it won't be trivial, IMO. -- Francesc Alted

### Re: [Numpy-discussion] Change in scalar upcasting rules for 1.6.x?

in the code base, but anyway, I think that would be a nice thing to support for NumPy 2.0. Just a thought, -- Francesc Alted

### [Numpy-discussion] David M. Cooke?

if somebody knows about him. If so, please tell me. Thanks! -- Francesc Alted

### Re: [Numpy-discussion] Numpy governance update

clear, I'm a Continuum guy. -- Francesc Alted

### Re: [Numpy-discussion] Numpy governance update

the capacity and dedication of a single individual can shape the world. -- Francesc Alted

### Re: [Numpy-discussion] Proposed Roadmap Overview

as with lazy: arr = A + B + C # with all of these NumPy arrays # compute upon exiting… Hmm, that would be cute indeed. Do you have an idea on how the code in the with context could be passed to the Python AST compiler (à la numexpr.evaluate(A + B + C))? -- Francesc Alted
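As a toy sketch of the idea (not numexpr's actual machinery), the stdlib `ast` module can already parse such an expression text, which can then be compiled and evaluated against the named arrays; all names and values below are invented for illustration:

```python
import ast
import numpy as np

A = np.arange(3.0)      # [0., 1., 2.]
B = np.ones(3)
C = np.full(3, 2.0)

# Parse "A + B + C" into an AST, compile it, and evaluate it against
# the arrays by name -- a toy stand-in for numexpr.evaluate("A + B + C").
expr = "A + B + C"
tree = ast.parse(expr, mode="eval")
code = compile(tree, "<expr>", "eval")
result = eval(code, {"__builtins__": {}}, {"A": A, "B": B, "C": C})
```

The open question in the thread is how a `with` block's body could be captured and routed through such a compilation step instead of being executed eagerly.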

### Re: [Numpy-discussion] ndarray and lazy evaluation (was: Proposed Rodmap Overview)

this further. See you, -- Francesc Alted

### Re: [Numpy-discussion] Proposed Roadmap Overview

the issue that made David Cooke create numexpr. A more in-depth explanation of this problem can be seen in: http://www.euroscipy.org/talk/1657 which includes some graphical explanations. -- Francesc Alted

### Re: [Numpy-discussion] np.longlong casts to int

? float128 128 bits, platform? Exactly. I'd update this to read: float96 96 bits. Only available on 32-bit (i386) platforms. float128 128 bits. Only available on 64-bit (AMD64) platforms. -- Francesc Alted
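A quick way to see which of these a given platform provides is to inspect `np.longdouble`, the platform-dependent type behind these names (a sketch; the width varies by platform and the itemsize may include padding):

```python
import numpy as np

# np.longdouble is behind float96 on 32-bit i386 (12 bytes with padding)
# and float128 on AMD64 (16 bytes); on some platforms it is plain float64.
width_bits = np.dtype(np.longdouble).itemsize * 8
```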

### Re: [Numpy-discussion] np.longlong casts to int

On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote: On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted franc...@continuum.io wrote: Exactly. I'd update this to read: float96 96 bits. Only available on 32-bit (i386) platforms. float128 128 bits. Only available on 64-bit (AMD64

### Re: [Numpy-discussion] np.longlong casts to int

On Feb 23, 2012, at 6:06 AM, Francesc Alted wrote: On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote: On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted franc...@continuum.io wrote: Exactly. I'd update this to read: float96 96 bits. Only available on 32-bit (i386) platforms

### Re: [Numpy-discussion] mkl usage

, see some speedups in a numexpr linked against MKL here: http://code.google.com/p/numexpr/wiki/NumexprVML See also how the native multi-threading implementation in numexpr beats MKL's (at least for this particular example). -- Francesc Alted

### Re: [Numpy-discussion] mkl usage

On Feb 23, 2012, at 2:19 PM, Neal Becker wrote: Pauli Virtanen wrote: 23.02.2012 20:44, Francesc Alted wrote: On Feb 23, 2012, at 1:33 PM, Neal Becker wrote: Is mkl only used for linear algebra? Will it speed up e.g., elementwise transcendental functions? Yes, MKL comes with VML

### Re: [Numpy-discussion] Possible roadmap addendum: building better text file readers

to a Mac, this is good to know. Thanks! -- Francesc Alted

### Re: [Numpy-discussion] Possible roadmap addendum: building better text file readers

interface could matter: it's good to set up your code so it can use mmap() instead of read(), since this can reduce overhead. read() has to copy the data from the disk into OS memory, and then from OS memory into your process's memory; mmap() skips the second step. Cool. Nice trick! -- Francesc
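A minimal sketch of that trick using NumPy's own `np.memmap` (file name and sizes here are invented for illustration):

```python
import os
import tempfile
import numpy as np

# Write a small binary file, then map it rather than read() it.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(1000, dtype=np.float64).tofile(path)

# np.memmap maps the file pages straight into the process's address
# space, skipping the extra copy from OS memory into a user buffer.
arr = np.memmap(path, dtype=np.float64, mode="r")
total = arr.sum()
```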

### Re: [Numpy-discussion] [Numpy] quadruple precision

, it should not be defined. Uh, I foresee many portability problems for people using this, but perhaps it is worth the mess. -- Francesc Alted

### Re: [Numpy-discussion] subclassing array in c

-- Francesc Alted

### Re: [Numpy-discussion] numpy videos

tables with an unlimited number of rows on-disk and, by using its integrated indexing engine (OPSI), you can perform quick lookups based on strings (or any other type). Look into these examples: http://www.pytables.org/moin/HowToUse#Selectingvalues HTH, -- Francesc Alted

### Re: [Numpy-discussion] numpy videos

://pytables.github.com/usersguide/optimization.html#accelerating-your-searches for a more detailed rationale and benchmarks on big datasets. -- Francesc Alted

### Re: [Numpy-discussion] numpy + MKL problems

hours. Could you please bisect (http://webchick.net/node/99) and tell us which commit is the bad one? Thanks! -- Francesc Alted

### Re: [Numpy-discussion] numpy + MKL problems

? Have you upgraded MKL? GCC? Installed Intel C compiler? -- Francesc Alted

### Re: [Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

the big picture on this. But the general idea is really appealing. Thanks, -- Francesc Alted

### Re: [Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

On Mar 20, 2012, at 2:29 PM, Dag Sverre Seljebotn wrote: Francesc Alted franc...@continuum.io wrote: On Mar 20, 2012, at 12:49 PM, mark florisson wrote: Cython and Numba certainly overlap. However, Cython requires: 1) learning another language 2) creating an extension module

### Re: [Numpy-discussion] Numpy for IronPython 2.7 DLR app?

On 4/2/12 10:46 AM, William Johnston wrote: Hello, My email server went down. Did anyone respond to this post? You can check the mail archive here: http://mail.scipy.org/pipermail/numpy-discussion -- Francesc Alted ___ NumPy-Discussion mailing

### Re: [Numpy-discussion] Why is numpy.abs so much slower on complex64 than complex128 under windows 32-bit?

are considering the memory fetch time too (which is often much more realistic). -- Francesc Alted

### Re: [Numpy-discussion] Why is numpy.abs so much slower on complex64 than complex128 under windows 32-bit?

On 4/10/12 9:55 AM, Henry Gomersall wrote: On 10/04/2012 16:36, Francesc Alted wrote: In [10]: timeit c = numpy.complex64(numpy.abs(numpy.complex128(b))) 100 loops, best of 3: 12.3 ms per loop In [11]: timeit c = numpy.abs(b) 100 loops, best of 3: 8.45 ms per loop in your windows box

### Re: [Numpy-discussion] Why is numpy.abs so much slower on complex64 than complex128 under windows 32-bit?

On 4/10/12 11:43 AM, Henry Gomersall wrote: On 10/04/2012 17:57, Francesc Alted wrote: I'm using numexpr in the end, but this is slower than numpy.abs under linux. Oh, you mean the windows version of abs(complex64) in numexpr is slower than a pure numpy.abs(complex64) under linux? That's

### Re: [Numpy-discussion] sparse array data

object in PyTables [2] and indexing the dimensions for getting much improved speed for accessing elements in big sparse arrays. Using a table in a relational database (indexed for dimensions) could be an option too. [2] https://github.com/PyTables/PyTables Hope this helps, -- Francesc Alted

### Re: [Numpy-discussion] sparse array data

: -- 129 raise TypeError('invalid input format') 130 131 try: TypeError: invalid input format -- Francesc Alted

### Re: [Numpy-discussion] sparse array data

. -- Francesc Alted

### Re: [Numpy-discussion] sparse array data

-trees for sparse ops a while ago; did you ever talk to him about those ideas? Yup, the b-tree idea fits very well for indexing the coordinates. Although one problem with b-trees is that they do not compress well in general. -- Francesc Alted

### Re: [Numpy-discussion] question about in-place operations

-node.org/python-autumnschool-2010/materials/starving_cpus Of course numexpr has less overhead (and can use multiple cores) than using plain NumPy. -- Francesc Alted

### Re: [Numpy-discussion] question about in-place operations

to destination buffers. Still, one can see that using several threads can accelerate this copy well beyond memcpy speed. So, definitely, several cores can make your memory I/O bound computations go faster. -- Francesc Alted

### Re: [Numpy-discussion] SSE Optimization

://gruntthepeon.free.fr/ssemath/ I'd say that NumPy could benefit a lot from integrating optimized versions of transcendental functions (as in the link above). Good luck! -- Francesc Alted

### [Numpy-discussion] [ANN] carray 0.5 released

://carray.pytables.org/docs/manual Home of Blosc compressor: http://blosc.pytables.org User's mail list: car...@googlegroups.com http://groups.google.com/group/carray Enjoy! -- Francesc Alted

### [Numpy-discussion] ANN: python-blosc 1.0.4 released

There is an official mailing list for Blosc at: bl...@googlegroups.com http://groups.google.es/group/blosc -- Francesc Alted

### [Numpy-discussion] [ANN] python-blosc 1.0.5 released

...@googlegroups.com http://groups.google.es/group/blosc -- Francesc Alted

### Re: [Numpy-discussion] testing with amd libm/acml

): http://software.intel.com/sites/products/documentation/hpc/mkl/vml/functions/exp.html Pretty amazing. -- Francesc Alted

### Re: [Numpy-discussion] testing with amd libm/acml

of cores detected in the system is the default in numexpr; if you want fewer, you will need to use the set_num_threads(nthreads) function. But agreed, sometimes using too many threads can effectively be counter-productive. -- Francesc Alted

### Re: [Numpy-discussion] numexpr question

about caching it yourself. The best forum for discussing numexpr is this: https://groups.google.com/forum/?fromgroups#!forum/numexpr -- Francesc Alted

### Re: [Numpy-discussion] testing with amd libm/acml

/herumi/fmath/blob/master/fmath.hpp#L480 Hey, that's cool. I was a bit disappointed not finding this sort of work in open space. It seems that this lacks threading support, but that should be easy to implement by using OpenMP directives. -- Francesc Alted

### Re: [Numpy-discussion] testing with amd libm/acml

On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote: On 11/08/2012 06:06 PM, Francesc Alted wrote: On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote: On 11/07/2012 08:41 PM, Neal Becker wrote: Would you expect numexpr without MKL to give a significant boost? If you need higher performance than what

### Re: [Numpy-discussion] testing with amd libm/acml

On 11/8/12 7:55 PM, Dag Sverre Seljebotn wrote: On 11/08/2012 06:59 PM, Francesc Alted wrote: On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote: On 11/08/2012 06:06 PM, Francesc Alted wrote: On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote: On 11/07/2012 08:41 PM, Neal Becker wrote: Would you

### Re: [Numpy-discussion] Numpy's policy for releasing memory

too much in memory profilers to be exact, and rather focus on the big picture (i.e. is my app retaining a lot of memory for a long time? If yes, then start worrying, but not before). -- Francesc Alted

### Re: [Numpy-discussion] Crash using reshape...

In []: np.intp Out[]: numpy.int64 If you see 'numpy.int32' here then that is the problem. -- Francesc Alted
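A quick sanity check along these lines (a sketch; `struct.calcsize("P")` gives the C pointer size of the running interpreter):

```python
import struct
import numpy as np

# np.intp is the integer type wide enough to hold a pointer; on a
# 64-bit build its itemsize matches an 8-byte C pointer.
intp_size = np.dtype(np.intp).itemsize
pointer_size = struct.calcsize("P")
```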

### Re: [Numpy-discussion] Crash using reshape...

reproduce that too (using 1.6.1). Could you please file a ticket for this? Smells like a bug to me. -- Francesc Alted

### Re: [Numpy-discussion] the difference between + and np.add?

+01, ..., 4.9850e+07, 4.9900e+07, 4.9950e+07]) Again, the computations are the same, but how you manage memory is critical. -- Francesc Alted
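A small illustration of the memory point using the usual NumPy `out=` idiom (array names and sizes invented for illustration):

```python
import numpy as np

a = np.arange(5, dtype=np.float64)
b = np.ones(5)

# `a + b` allocates a fresh temporary array for the result...
c = a + b

# ...while np.add(a, b, out=a) writes into a's existing buffer,
# avoiding the temporary (this matters for large arrays).
np.add(a, b, out=a)
```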

### Re: [Numpy-discussion] the difference between + and np.add?

On 11/23/12 8:00 PM, Chris Barker - NOAA Federal wrote: On Thu, Nov 22, 2012 at 6:20 AM, Francesc Alted franc...@continuum.io wrote: As Nathaniel said, there is not a difference in terms of *what* is computed. However, the methods that you suggested actually differ on *how* they are computed

### Re: [Numpy-discussion] Conditional update of recarray field

, so this is why it works. Would it be possible to emit a warning message in the case of faulty assignments? The only solution that I can see for this is that the fancy indexing would return a view, and not a different object, but NumPy containers are not prepared for this. -- Francesc Alted

### Re: [Numpy-discussion] Conditional update of recarray field

a copy, not a view. And yes, fancy indexing returning a copy is standard for all ndarrays. Hope it is clearer now (although admittedly it is a bit strange at first sight), -- Francesc Alted
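A sketch of the copy-vs-view behavior described above (the field name and values are invented for illustration):

```python
import numpy as np

rec = np.zeros(3, dtype=[('x', np.float64)])

# Boolean (fancy) indexing returns a copy, so this assignment only
# updates a temporary array and silently leaves `rec` unchanged:
rec[rec['x'] == 0]['x'] = 1.0
unchanged = rec['x'].sum()        # still 0.0

# Selecting the field first yields a view; the boolean assignment then
# goes through __setitem__ on that view and does stick:
rec['x'][rec['x'] == 0] = 1.0
updated = rec['x'].sum()          # now 3.0
```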

### Re: [Numpy-discussion] Byte aligned arrays

data, I'd be surprised if the difference in performance were even noticeable. Can you tell us what difference in performance you are seeing between an AVX-aligned array and one that is not AVX-aligned? Just curious. -- Francesc Alted

### Re: [Numpy-discussion] Byte aligned arrays

On 12/20/12 9:53 AM, Henry Gomersall wrote: On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote: The only scenario that I see that this would create unaligned arrays is for machines having AVX. But provided that the Intel architecture is making great strides in fetching unaligned data

### Re: [Numpy-discussion] Byte aligned arrays

On 12/20/12 7:35 PM, Henry Gomersall wrote: On Thu, 2012-12-20 at 15:23 +0100, Francesc Alted wrote: On 12/20/12 9:53 AM, Henry Gomersall wrote: On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote: The only scenario that I see that this would create unaligned arrays is for machines

### Re: [Numpy-discussion] Byte aligned arrays

On 12/21/12 11:58 AM, Henry Gomersall wrote: On Fri, 2012-12-21 at 11:34 +0100, Francesc Alted wrote: Also this convolution code: https://github.com/hgomersall/SSE-convolution/blob/master/convolve.c Shows a small but repeatable speed-up (a few %) when using some aligned loads (as many as I

### Re: [Numpy-discussion] Byte aligned arrays

On 12/21/12 1:35 PM, Dag Sverre Seljebotn wrote: On 12/20/2012 03:23 PM, Francesc Alted wrote: On 12/20/12 9:53 AM, Henry Gomersall wrote: On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote: The only scenario that I see that this would create unaligned arrays is for machines having AVX

### Re: [Numpy-discussion] pip install numpy throwing a lot of output.

to install them, so compile messages are meaningful. Another question would be to reduce the amount of compile messages by default in NumPy, but I don't think this is realistic (and not even desirable). -- Francesc Alted

### Re: [Numpy-discussion] pip install numpy throwing a lot of output.

On 2/12/13 3:18 PM, Daπid wrote: On 12 February 2013 14:58, Francesc Alted franc...@continuum.io wrote: Yes, I think that's expected. Just to make sure, can you send some excerpts of the errors that you are getting? Actually the errors are at the beginning of the process, so they are out

### Re: [Numpy-discussion] GSOC 2013

takes 9 bytes to host the structure, while an `aligned=True` one will take 16 bytes. I'd rather leave the default as it is; in case performance is critical, you can always copy the unaligned field to a new (homogeneous) array. -- Francesc Alted
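The 9-vs-16-byte figures can be checked directly with `np.dtype` (a sketch using a hypothetical int8-plus-float64 struct):

```python
import numpy as np

# One int8 field followed by one float64 field:
packed = np.dtype([('a', np.int8), ('b', np.float64)])
aligned = np.dtype([('a', np.int8), ('b', np.float64)], align=True)

packed_size = packed.itemsize    # 9: the fields are butted together
aligned_size = aligned.itemsize  # 16: 'b' is padded to an 8-byte boundary
```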

### Re: [Numpy-discussion] aligned / unaligned structured dtype behavior (was: GSOC 2013)

is a bit slower than NumPy because sum() is not parallelized internally. Hmm, given that, I'm wondering if some internal copies to L1 in NumPy could help improve unaligned performance. Worth a try? -- Francesc Alted

### Re: [Numpy-discussion] aligned / unaligned structured dtype behavior

On 3/7/13 6:47 PM, Francesc Alted wrote: On 3/6/13 7:42 PM, Kurt Smith wrote: And regarding performance, doing simple timings shows a 30%-ish slowdown for unaligned operations: In [36]: %timeit packed_arr['b']**2 100 loops, best of 3: 2.48 ms per loop In [37]: %timeit aligned_arr['b']**2

### Re: [Numpy-discussion] fast numpy.fromfile skipping data chunks

to read data skipping some records (I am reading data recorded at high frequency, so basically I want to read with subsampling). [clip] You can do a fid.seek(offset) prior to np.fromfile() and then it will read from that offset. See the docstring for `file.seek()` on how to use it. -- Francesc Alted
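A short sketch of the seek-then-read pattern (file name and record counts are invented for illustration):

```python
import os
import tempfile
import numpy as np

# A file of 100 float64 records:
path = os.path.join(tempfile.mkdtemp(), "records.bin")
np.arange(100, dtype=np.float64).tofile(path)

# Seek past the first 10 records (10 * 8 bytes), then read the next 5:
with open(path, "rb") as fid:
    fid.seek(10 * np.dtype(np.float64).itemsize)
    chunk = np.fromfile(fid, dtype=np.float64, count=5)
```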

### Re: [Numpy-discussion] fast numpy.fromfile skipping data chunks

On 3/13/13 3:53 PM, Francesc Alted wrote: On 3/13/13 2:45 PM, Andrea Cimatoribus wrote: Hi everybody, I hope this has not been discussed before, I couldn't find a solution elsewhere. I need to read some binary data, and I am using numpy.fromfile to do this. Since the files are huge

### Re: [Numpy-discussion] timezones and datetime64

. -- Francesc Alted

### Re: [Numpy-discussion] timezones and datetime64

that this was not needed because timestamps+timedelta would be enough. The NEP still reflects this discussion: https://github.com/numpy/numpy/blob/master/doc/neps/datetime-proposal.rst#why-the-origin-metadata-disappeared This is just a historical note, not that we can't change that again. -- Francesc Alted

### Re: [Numpy-discussion] timezones and datetime64

On 4/4/13 8:56 PM, Chris Barker - NOAA Federal wrote: On Thu, Apr 4, 2013 at 10:54 AM, Francesc Alted franc...@continuum.io wrote: That makes a difference. This can be specially important for creating user-defined time origins: In []: np.array(int(1.5e9), dtype='datetime64[s]') + np.array(1

### [Numpy-discussion] ANN: numexpr 2.1 RC1

, suggestions, gripes, kudos, etc. you may have. Enjoy! -- Francesc Alted

### Re: [Numpy-discussion] Profiling (was GSoC : Performance parity between numpy arrays and Python scalars)

this feature extensively for optimizing parts of the Blosc compressor, and I could not be happier (to the point that, if it were not for Valgrind, I could not have figured out many interesting memory access optimizations). -- Francesc Alted

### [Numpy-discussion] ANN: python-blosc 1.1 RC1 available for testing

://groups.google.es/group/blosc Licenses Both Blosc and its Python wrapper are distributed using the MIT license. See: https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES for more details. -- Francesc Alted

### [Numpy-discussion] ANN: python-blosc 1.1 (final) released

list for Blosc at: bl...@googlegroups.com http://groups.google.es/group/blosc Licenses Both Blosc and its Python wrapper are distributed using the MIT license. See: https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES for more details. Enjoy! -- Francesc Alted

### Re: [Numpy-discussion] RAM problem during code execution - Numpya arrays

% of RAM used, and in 1-2 hours it is totally used)? Please help me, I'm totally stuck! Thanks a lot! -- Francesc Alted

### [Numpy-discussion] [ANN] numexpr 2.2 released

is hosted at Google code in: http://code.google.com/p/numexpr/ You can get the packages from PyPI as well: http://pypi.python.org/pypi/numexpr Share your experience = Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy data! -- Francesc Alted

### Re: [Numpy-discussion] -ffast-math

of). Maybe you are running in a multi-core machine now and you are seeing better speedup because of this? Also, your expressions are made of transcendental functions, so linking numexpr with MKL could accelerate computations a good deal too. -- Francesc Alted

### Re: [Numpy-discussion] Catching out-of-memory error before it happens

-- Francesc Alted

### [Numpy-discussion] ANN: numexpr 2.3 (final) released

= Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy data! -- Francesc Alted

### [Numpy-discussion] ANN: python-blosc 1.2.0 released

://github.com/ContinuumIO/python-blosc/blob/master/LICENSES for more details. -- Francesc Alted Continuum Analytics, Inc.

### [Numpy-discussion] ANN: BLZ 0.6.1 has been released

! Francesc Alted Continuum Analytics, Inc.

### Re: [Numpy-discussion] argsort speed

-- Francesc Alted

### [Numpy-discussion] ANN: numexpr 2.3.1 released

): http://pypi.python.org/pypi/numexpr Share your experience = Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy data! -- Francesc Alted

### Re: [Numpy-discussion] last call for fixes for numpy 1.8.1rc1

to add gh-4284 after some thought tomorrow. Cheers, Julian -- Francesc Alted

### Re: [Numpy-discussion] last call for fixes for numpy 1.8.1rc1

it, if that's enough. It would bump some temporary arrays of nditer from 32kb to 128kb; I think that would still be fine, but it is getting to the point where we should move them onto the heap. On 28.02.2014 12:41, Francesc Alted wrote: Hi Julian, Any chance that NPY_MAXARGS could be increased

### Re: [Numpy-discussion] last call for fixes for numpy 1.8.1rc1

I'm more worried about running out of stack space, though the limit is usually 8mb so taking 128kb for a short while should be ok. On 28.02.2014 13:32, Francesc Alted wrote: Well, what numexpr is using is basically NpyIter_AdvancedNew: https://github.com/pydata

### [Numpy-discussion] ANN: numexpr 2.4 RC1

, kudos, etc. you may have. Enjoy data! -- Francesc Alted

### [Numpy-discussion] ANN: numexpr 2.4 RC2

=== Announcing Numexpr 2.4 RC2 === Numexpr is a fast numerical expression evaluator for NumPy. With it, expressions that operate on arrays (like 3*a+4*b) are accelerated and use less memory than doing the same calculation in Python. It wears

### Re: [Numpy-discussion] PEP 465 has been accepted / volunteers needed

;-). no -- it's your high tolerance for _reading_ emails... Far too many of us have a high tolerance for writing them! Ha ha, very true! -- Francesc Alted

### [Numpy-discussion] ANN: numexpr 2.4 is out

know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy data! -- Francesc Alted

### Re: [Numpy-discussion] High-quality memory profiling for numpy in python 3.5 / volunteers needed

-- Francesc Alted

### Re: [Numpy-discussion] High-quality memory profiling for numpy in python 3.5 / volunteers needed

On 17/04/14 19:28, Julian Taylor wrote: On 17.04.2014 18:06, Francesc Alted wrote: In [4]: x_unaligned = np.zeros(shape, dtype=[('y1',np.int8),('x',np.float64),('y2',np.int8,(7,))])['x'] on arrays of this size you won't see alignment issues; you are dominated by memory bandwidth

### Re: [Numpy-discussion] High-quality memory profiling for numpy in python 3.5 / volunteers needed

On 17/04/14 21:19, Julian Taylor wrote: On 17.04.2014 20:30, Francesc Alted wrote: On 17/04/14 19:28, Julian Taylor wrote: On 17.04.2014 18:06, Francesc Alted wrote: In [4]: x_unaligned = np.zeros(shape, dtype=[('y1',np.int8),('x',np.float64),('y2',np.int8,(7,))])['x'] on arrays

### Re: [Numpy-discussion] About the npz format

-r--r-- 1 faltet users 48M 18 Apr 13:47 x-lz4.blp -rw-r--r-- 1 faltet users 49M 18 Apr 13:47 x-blosclz.blp -rw-r--r-- 1 faltet users 382M 18 Apr 13:42 x.npy But again, we are talking about a specially nice compression case. -- Francesc Alted

### Re: [Numpy-discussion] High-quality memory profiling for numpy in python 3.5 / volunteers needed

On 18/04/14 13:39, Francesc Alted wrote: So, sqrt in numpy has nearly the same speed as the one in MKL. Again, I wonder why :) So by peeking into the code I have seen that you implemented sqrt using SSE2 intrinsics. Cool! -- Francesc Alted

### Re: [Numpy-discussion] IDL vs Python parallel computing

throughput. Having said this, there are several packages that work on top of NumPy that can use multiple cores when performing numpy operations, like numexpr (https://github.com/pydata/numexpr), or Theano (http://deeplearning.net/software/theano/tutorial/multi_cores.html) -- Francesc Alted

### [Numpy-discussion] ANN: python-blosc 1.2.7 released

://groups.google.es/group/blosc Licenses Both Blosc and its Python wrapper are distributed using the MIT license. See: https://github.com/Blosc/python-blosc/blob/master/LICENSES for more details. **Enjoy data!** -- Francesc Alted

### [Numpy-discussion] [CORRECTION] python-blosc 1.2.4 released (Was: ANN: python-blosc 1.2.7 released)

Indeed it was 1.2.4 the version just released and not 1.2.7. Sorry for the typo! Francesc On 7/7/14, 8:20 PM, Francesc Alted wrote: = Announcing python-blosc 1.2.4 = What is new? This is a maintenance release, where