to
Mark Wiebe for such an important contribution!
For some benchmarks on the new virtual machine, see:
http://code.google.com/p/numexpr/wiki/NewVM
Also, Gaëtan de Menten contributed important bug fixes and code cleanup,
as well as speed enhancements. Francesc Alted contributed some fixes,
and added
that
ufuncs provide: namely reduceat, accumulate, reduce? It is entirely
possible that they are already in there but I could not figure out how
to use them. If they aren't it would be great to have them.
No, these are not implemented, but we will gladly accept contributions ;)
--
Francesc Alted
.
Enjoy!
--
Francesc Alted
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
-boun...@scipy.org]
On Behalf Of Francesc Alted [fal...@gmail.com]
Sent: 08 January 2012 12:49
To: Discussion of Numerical Python; numexpr
Subject: [Numpy-discussion] ANN: Numexpr 2.0.1 released
========================
Announcing Numexpr 2.0.1
========================
Numexpr
of the tutorial:
https://github.com/FrancescAlted/carray/blob/master/doc/tutorial.rst
Hope it helps,
-- Francesc Alted
On Feb 10, 2012, at 4:50 PM, Francesc Alted wrote:
https://github.com/FrancescAlted/carry
Hmm, this should be:
https://github.com/FrancescAlted/carray
Blame my (too) smart spell corrector.
-- Francesc Alted
On Feb 12, 2012, at 12:07 AM, Ralf Gommers wrote:
On Sat, Feb 11, 2012 at 11:06 PM, Fernando Perez fperez@gmail.com wrote:
On Sat, Feb 11, 2012 at 11:11 AM, Travis Oliphant tra...@continuum.io wrote:
I propose to give Francesc Alted commit rights to the NumPy project.
+1.
Thanks
that the indices were
integers, so this is probably the reason why it is that much faster.
This is not to say that indexing in NumPy could not be accelerated, but it
won't be trivial, IMO.
-- Francesc Alted
in the code base, but anyway, I think that would
be a nice thing to support for NumPy 2.0.
Just a thought,
-- Francesc Alted
if somebody knows about
him. If so, please tell me.
Thanks!
-- Francesc Alted
clear, I'm a Continuum guy.
-- Francesc Alted
the
capacity and dedication of a single individual can shape the world.
-- Francesc Alted
as
with lazy:
arr = A + B + C # with all of these NumPy arrays
# compute upon exiting…
Hmm, that would be cute indeed. Do you have an idea on how the code in the
with context could be passed to the Python AST compiler (à la
numexpr.evaluate(A + B + C))?
-- Francesc Alted
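For reference, today's numexpr API takes the expression as a *string* rather than hooking the AST; a minimal sketch (array names are illustrative):

```python
import numpy as np
import numexpr as ne

# Illustrative arrays; any same-shaped NumPy arrays would do.
A = np.arange(1e6)
B = np.arange(1e6)
C = np.arange(1e6)

# numexpr parses the string expression itself and evaluates it
# blockwise, avoiding the large temporaries plain NumPy would create.
result = ne.evaluate("A + B + C")

assert np.array_equal(result, A + B + C)
```

The hypothetical `with lazy:` block would have to recover an equivalent expression from the compiled bytecode or the AST of the enclosing function, which is exactly the hard part being discussed.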
this further.
See you,
-- Francesc Alted
the issue that made
David Cooke create numexpr. A more in-depth explanation of this problem
can be found at:
http://www.euroscipy.org/talk/1657
which includes some graphical explanations.
-- Francesc Alted
?
float128 128 bits, platform?
Exactly. I'd update this to read:
float96   96 bits. Only available on 32-bit (i386) platforms.
float128  128 bits. Only available on 64-bit (AMD64) platforms.
-- Francesc Alted
On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted franc...@continuum.io
wrote:
Exactly. I'd update this to read:
float96   96 bits. Only available on 32-bit (i386) platforms.
float128  128 bits. Only available on 64-bit (AMD64
On Feb 23, 2012, at 6:06 AM, Francesc Alted wrote:
On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted franc...@continuum.io
wrote:
Exactly. I'd update this to read:
float96   96 bits. Only available on 32-bit (i386) platforms
, see some speedups in a numexpr linked against MKL here:
http://code.google.com/p/numexpr/wiki/NumexprVML
See also how the native multi-threading implementation in numexpr beats MKL's
(at least for this particular example).
-- Francesc Alted
On Feb 23, 2012, at 2:19 PM, Neal Becker wrote:
Pauli Virtanen wrote:
On 23.02.2012 20:44, Francesc Alted wrote:
On Feb 23, 2012, at 1:33 PM, Neal Becker wrote:
Is MKL only used for linear algebra? Will it speed up e.g. elementwise
transcendental functions?
Yes, MKL comes with VML
to a Mac, this is good to know. Thanks!
-- Francesc Alted
interface could matter: it's good to set
up your code so it can use mmap() instead of read(), since this can
reduce overhead. read() has to copy the data from the disk into OS
memory, and then from OS memory into your process's memory; mmap()
skips the second step.
Cool. Nice trick!
-- Francesc
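A sketch of the trick in NumPy terms (file path and dtype are made up for illustration): `np.memmap` maps the file into the process's address space, so the array is backed directly by the OS page cache instead of being copied through read().

```python
import numpy as np
import tempfile, os

# Illustrative data file.
path = os.path.join(tempfile.mkdtemp(), "samples.bin")
np.arange(1000, dtype=np.float64).tofile(path)

# read() path: np.fromfile copies disk -> OS cache -> process memory.
copied = np.fromfile(path, dtype=np.float64)

# mmap() path: the array is a window onto the OS page cache, skipping
# the second copy; pages are faulted in lazily as they are accessed.
mapped = np.memmap(path, dtype=np.float64, mode="r")

assert np.array_equal(copied, mapped)
```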
, it should not be defined.
Uh, I foresee many portability problems for people using this, but perhaps it
is worth the mess.
-- Francesc Alted
-- Francesc Alted
tables with an unlimited number of rows on-disk and, by using its
integrated indexing engine (OPSI), you can perform quick lookups based on
strings (or whatever other type). Look into these examples:
http://www.pytables.org/moin/HowToUse#Selectingvalues
HTH,
-- Francesc Alted
://pytables.github.com/usersguide/optimization.html#accelerating-your-searches
for a more detailed rationale and benchmarks on big datasets.
-- Francesc Alted
hours. Could you please bisect
(http://webchick.net/node/99) and tell us which commit is the bad one?
Thanks!
-- Francesc Alted
? Have you
upgraded MKL? GCC? Installed Intel C compiler?
-- Francesc Alted
the big picture on this. But the general idea is really
appealing.
Thanks,
-- Francesc Alted
On Mar 20, 2012, at 2:29 PM, Dag Sverre Seljebotn wrote:
Francesc Alted franc...@continuum.io wrote:
On Mar 20, 2012, at 12:49 PM, mark florisson wrote:
Cython and Numba certainly overlap. However, Cython requires:
1) learning another language
2) creating an extension module
On 4/2/12 10:46 AM, William Johnston wrote:
Hello,
My email server went down.
Did anyone respond to this post?
You can check the mail archive here:
http://mail.scipy.org/pipermail/numpy-discussion
--
Francesc Alted
are considering the memory fetch time too (which is often much more
realistic).
--
Francesc Alted
On 4/10/12 9:55 AM, Henry Gomersall wrote:
On 10/04/2012 16:36, Francesc Alted wrote:
In [10]: timeit c = numpy.complex64(numpy.abs(numpy.complex128(b)))
100 loops, best of 3: 12.3 ms per loop
In [11]: timeit c = numpy.abs(b)
100 loops, best of 3: 8.45 ms per loop
in your windows box
On 4/10/12 11:43 AM, Henry Gomersall wrote:
On 10/04/2012 17:57, Francesc Alted wrote:
I'm using numexpr in the end, but this is slower than numpy.abs under linux.
Oh, you mean the windows version of abs(complex64) in numexpr is slower
than a pure numpy.abs(complex64) under linux? That's
object in PyTables [2]
and indexing the dimensions for getting much improved speed for
accessing elements in big sparse arrays. Using a table in a relational
database (indexed for dimensions) could be an option too.
[2] https://github.com/PyTables/PyTables
Hope this helps,
--
Francesc Alted
:
--> 129     raise TypeError('invalid input format')
    130
    131     try:
TypeError: invalid input format
--
Francesc Alted
.
--
Francesc Alted
-trees for sparse ops a
while ago; did you ever talk to him about those ideas?
Yup, the b-tree idea fits very well for indexing the coordinates.
Although one problem with b-trees is that they do not compress well in
general.
--
Francesc Alted
-node.org/python-autumnschool-2010/materials/starving_cpus
Of course numexpr has less overhead (and can use multiple cores) than
using plain NumPy.
--
Francesc Alted
to destination buffers. Still, one can see that using
several threads can accelerate this copy well beyond memcpy speed.
So, definitely, several cores can make your memory I/O bounded
computations go faster.
--
Francesc Alted
://gruntthepeon.free.fr/ssemath/
I'd say that NumPy could benefit a lot from integrating optimized versions
of transcendental functions (as in the link above).
Good luck!
--
Francesc Alted
://carray.pytables.org/docs/manual
Home of Blosc compressor:
http://blosc.pytables.org
User's mail list:
car...@googlegroups.com
http://groups.google.com/group/carray
Enjoy!
-- Francesc Alted
There is an official mailing list for Blosc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
-- Francesc Alted
...@googlegroups.com
http://groups.google.es/group/blosc
--
Francesc Alted
):
http://software.intel.com/sites/products/documentation/hpc/mkl/vml/functions/exp.html
Pretty amazing.
--
Francesc Alted
of cores detected in the system is the default in
numexpr; if you want fewer, you will need to use the
set_num_threads(nthreads) function. But agreed, sometimes using too
many threads can effectively be counterproductive.
--
Francesc Alted
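A minimal illustration of controlling the pool (the expression itself is arbitrary):

```python
import numpy as np
import numexpr as ne

a = np.arange(1e6)

# numexpr defaults to one thread per detected core; cap the pool
# explicitly when oversubscription hurts. set_num_threads() returns
# the previous setting, so it can be restored afterwards.
old = ne.set_num_threads(2)

result = ne.evaluate("2*a + 1")
assert np.array_equal(result, 2*a + 1)

ne.set_num_threads(old)  # restore the default
```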
about caching it yourself.
The best forum for discussing numexpr is this:
https://groups.google.com/forum/?fromgroups#!forum/numexpr
--
Francesc Alted
/herumi/fmath/blob/master/fmath.hpp#L480
Hey, that's cool. I was a bit disappointed not to find this sort of
work out in the open. It seems that this lacks threading support, but
that should be easy to implement by using OpenMP directives.
--
Francesc Alted
On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
On 11/08/2012 06:06 PM, Francesc Alted wrote:
On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
On 11/07/2012 08:41 PM, Neal Becker wrote:
Would you expect numexpr without MKL to give a significant boost?
If you need higher performance than what
On 11/8/12 7:55 PM, Dag Sverre Seljebotn wrote:
On 11/08/2012 06:59 PM, Francesc Alted wrote:
On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
On 11/08/2012 06:06 PM, Francesc Alted wrote:
On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
On 11/07/2012 08:41 PM, Neal Becker wrote:
Would you
too much in memory profilers to be too
exact and rather focus on the big picture (i.e. is my app reclaiming a
lot of memory for a large amount of time? If yes, then start worrying,
but not before).
--
Francesc Alted
In []: np.intp
Out[]: numpy.int64
If you see 'numpy.int32' here then that is the problem.
--
Francesc Alted
reproduce that too (using 1.6.1). Could you please file a
ticket for this? Smells like a bug to me.
--
Francesc Alted
+01, ...,
4.9850e+07, 4.9900e+07, 4.9950e+07])
Again, the computations are the same, but how you manage memory is critical.
--
Francesc Alted
On 11/23/12 8:00 PM, Chris Barker - NOAA Federal wrote:
On Thu, Nov 22, 2012 at 6:20 AM, Francesc Alted franc...@continuum.io wrote:
As Nathaniel said, there is not a difference in terms of *what* is
computed. However, the methods that you suggested actually differ on
*how* they are computed
, so
this is why it works.
Would it be
possible to emit a warning message in the case of faulty assignments?
The only solution that I can see for this is that the fancy indexing
would return a view, and not a different object, but NumPy containers
are not prepared for this.
--
Francesc Alted
a copy, not a view.
And yes, fancy indexing returning a copy is standard for all ndarrays.
Hope it is clearer now (although admittedly it is a bit strange at first
sight),
--
Francesc Alted
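The distinction can be seen directly in a small sketch:

```python
import numpy as np

a = np.arange(10)

# Basic slicing returns a view: writes propagate back to `a`.
view = a[2:5]
view[0] = 99
assert a[2] == 99

# Fancy indexing returns a copy: writes do NOT propagate back.
copy = a[[2, 3, 4]]
copy[0] = -1
assert a[2] == 99

# This is why assigning through a fancy-indexed *expression* is a
# silent no-op: the assignment targets a temporary copy of the data.
a[[2, 3, 4]][0] = -1
assert a[2] == 99
```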
data, I'd be surprised if
the difference in performance were even noticeable.
Can you tell us what difference in performance you are seeing between an
AVX-aligned array and one that is not AVX-aligned? Just curious.
--
Francesc Alted
On 12/20/12 9:53 AM, Henry Gomersall wrote:
On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
The only scenario that I see that this would create unaligned arrays
is
for machines having AVX. But provided that the Intel architecture is
making great strides in fetching unaligned data
On 12/20/12 7:35 PM, Henry Gomersall wrote:
On Thu, 2012-12-20 at 15:23 +0100, Francesc Alted wrote:
On 12/20/12 9:53 AM, Henry Gomersall wrote:
On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
The only scenario that I see that this would create unaligned
arrays
is
for machines
On 12/21/12 11:58 AM, Henry Gomersall wrote:
On Fri, 2012-12-21 at 11:34 +0100, Francesc Alted wrote:
Also this convolution code:
https://github.com/hgomersall/SSE-convolution/blob/master/convolve.c
Shows a small but repeatable speed-up (a few %) when using some
aligned
loads (as many as I
On 12/21/12 1:35 PM, Dag Sverre Seljebotn wrote:
On 12/20/2012 03:23 PM, Francesc Alted wrote:
On 12/20/12 9:53 AM, Henry Gomersall wrote:
On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
The only scenario that I see that this would create unaligned arrays
is
for machines having AVX
to install them, so
compile messages are meaningful. Another question would be reducing the
number of compile messages by default in NumPy, but I don't think this
is realistic (or even desirable).
--
Francesc Alted
On 2/12/13 3:18 PM, Daπid wrote:
On 12 February 2013 14:58, Francesc Alted franc...@continuum.io wrote:
Yes, I think that's expected. Just to make sure, can you send some
excerpts of the errors that you are getting?
Actually the errors are at the beginning of the process, so they are
out
takes 9 bytes to host the
structure, while an `aligned=True` one will take 16 bytes. I'd rather
leave the default as it is, and in case performance is critical, you can
always copy the unaligned field to a new (homogeneous) array.
--
Francesc Alted
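The sizes quoted can be checked directly (the field layout here is an assumed int8 + float64 struct matching the 9-vs-16-byte figures):

```python
import numpy as np

# Assumed layout: one int8 field followed by one float64 field.
packed = np.dtype([('a', np.int8), ('b', np.float64)])
aligned = np.dtype([('a', np.int8), ('b', np.float64)], align=True)

# Packed: 1 + 8 = 9 bytes. Aligned: the float64 field is padded out
# to an 8-byte boundary, giving 16 bytes total.
assert packed.itemsize == 9
assert aligned.itemsize == 16
```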
is
a bit slower than NumPy because sum() is not parallelized internally.
Hmm, given that, I'm wondering if some internal copies to L1 in NumPy
could help improve unaligned performance. Worth a try?
--
Francesc Alted
On 3/7/13 6:47 PM, Francesc Alted wrote:
On 3/6/13 7:42 PM, Kurt Smith wrote:
And regarding performance, doing simple timings shows a 30%-ish
slowdown for unaligned operations:
In [36]: %timeit packed_arr['b']**2
100 loops, best of 3: 2.48 ms per loop
In [37]: %timeit aligned_arr['b']**2
to read
data skipping some records (I am reading data recorded at high frequency, so
basically I want to read subsampled data).
[clip]
You can do a fid.seek(offset) prior to np.fromfile() and then it will
read from that offset. See the docstring for `file.seek()` on how to use it.
--
Francesc Alted
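A sketch of that pattern (file path, record layout, and subsampling step are all made up for illustration): seek to the byte offset of each wanted record, then read just that record with np.fromfile.

```python
import numpy as np
import tempfile, os

# Assumed record layout and subsampling step.
record = np.dtype([('t', np.float64), ('v', np.float64)])
step = 10  # keep every 10th record

# Write some sample records to an illustrative file.
path = os.path.join(tempfile.mkdtemp(), "highfreq.bin")
data = np.zeros(1000, dtype=record)
data['t'] = np.arange(1000)
data.tofile(path)

# Seek to the byte offset of each wanted record and read one record.
picked = []
with open(path, "rb") as fid:
    for i in range(0, 1000, step):
        fid.seek(i * record.itemsize)
        picked.append(np.fromfile(fid, dtype=record, count=1))

picked = np.concatenate(picked)
assert np.array_equal(picked['t'], np.arange(0, 1000, step))
```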
On 3/13/13 3:53 PM, Francesc Alted wrote:
On 3/13/13 2:45 PM, Andrea Cimatoribus wrote:
Hi everybody, I hope this has not been discussed before, I couldn't
find a solution elsewhere.
I need to read some binary data, and I am using numpy.fromfile to do
this. Since the files are huge
.
--
Francesc Alted
that this was not needed because timestamps+timedelta would
be enough. The NEP still reflects this discussion:
https://github.com/numpy/numpy/blob/master/doc/neps/datetime-proposal.rst#why-the-origin-metadata-disappeared
This is just a historical note, not that we can't change that again.
--
Francesc Alted
On 4/4/13 8:56 PM, Chris Barker - NOAA Federal wrote:
On Thu, Apr 4, 2013 at 10:54 AM, Francesc Alted franc...@continuum.io wrote:
That makes a difference. This can be especially important for creating
user-defined time origins:
In []: np.array(int(1.5e9), dtype='datetime64[s]') + np.array(1
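A complete version of that pattern, with illustrative numbers (not the original snippet's truncated continuation):

```python
import numpy as np

# An assumed user-defined origin: 1.5e9 seconds after the POSIX epoch.
origin = np.array(int(1.5e9), dtype='datetime64[s]')
offsets = np.arange(3, dtype='timedelta64[s]')

# Adding timedelta64 offsets to the origin yields absolute timestamps.
times = origin + offsets
assert str(times[0]) == '2017-07-14T02:40:00'
```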
, suggestions, gripes, kudos, etc. you may
have.
Enjoy!
--
Francesc Alted
this feature extensively for optimizing parts of the Blosc
compressor, and I could not be happier (to the point that, if it were
not for Valgrind, I could not have figured out many interesting memory
access optimizations).
--
Francesc Alted
://groups.google.es/group/blosc
Licenses
Both Blosc and its Python wrapper are distributed using the MIT license.
See:
https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES
for more details.
--
Francesc Alted
list for Blosc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
Licenses
Both Blosc and its Python wrapper are distributed using the MIT license.
See:
https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES
for more details.
Enjoy!
--
Francesc Alted
% of RAM used and in 1-2 hours it is totally full)?
Please help me, I'm totally stuck!
Thanks a lot!
--
Francesc Alted
is hosted at Google code in:
http://code.google.com/p/numexpr/
You can get the packages from PyPI as well:
http://pypi.python.org/pypi/numexpr
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
--
Francesc Alted
of). Maybe you are running on a multi-core machine
now and you are seeing a better speedup because of this? Also, your
expressions are made of transcendental functions, so linking numexpr
with MKL could accelerate computations a good deal too.
--
Francesc Alted
--
Francesc Alted
=
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
-- Francesc Alted
://github.com/ContinuumIO/python-blosc/blob/master/LICENSES
for more details.
--
Francesc Alted
Continuum Analytics, Inc.
--
Francesc Alted
!
Francesc Alted
Continuum Analytics, Inc.
--
Francesc Alted
):
http://pypi.python.org/pypi/numexpr
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
-- Francesc Alted
to add gh-4284 after some thought tomorrow.
Cheers,
Julian
--
Francesc Alted
it if that's
enough.
It would bump some temporary arrays of nditer from 32kb to 128kb; I
think that would still be fine, but it is getting to the point where we
should move them onto the heap.
On 28.02.2014 12:41, Francesc Alted wrote:
Hi Julian,
Any chance that NPY_MAXARGS could be increased
I'm more worried about running out of stack space, though the limit
is usually 8mb so taking 128kb for a short while should be ok.
On 28.02.2014 13:32, Francesc Alted wrote:
Well, what numexpr is using is basically NpyIter_AdvancedNew:
https://github.com/pydata
, kudos, etc. you may
have.
Enjoy data!
--
Francesc Alted
==========================
Announcing Numexpr 2.4 RC2
==========================
Numexpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like 3*a+4*b) are accelerated
and use less memory than doing the same calculation in Python.
It wears
;-).
no -- it's your high tolerance for _reading_ emails...
Far too many of us have a high tolerance for writing them!
Ha ha, very true!
--
Francesc Alted
know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
--
Francesc Alted
--
Francesc Alted
On 17/04/14 19:28, Julian Taylor wrote:
On 17.04.2014 18:06, Francesc Alted wrote:
In [4]: x_unaligned = np.zeros(shape,
dtype=[('y1',np.int8),('x',np.float64),('y2',np.int8,(7,))])['x']
on arrays of this size you won't see alignment issues; you are dominated
by memory bandwidth
On 17/04/14 21:19, Julian Taylor wrote:
On 17.04.2014 20:30, Francesc Alted wrote:
On 17/04/14 19:28, Julian Taylor wrote:
On 17.04.2014 18:06, Francesc Alted wrote:
In [4]: x_unaligned = np.zeros(shape,
dtype=[('y1',np.int8),('x',np.float64),('y2',np.int8,(7,))])['x']
on arrays
-r--r-- 1 faltet users 48M 18 abr 13:47 x-lz4.blp
-rw-r--r-- 1 faltet users 49M 18 abr 13:47 x-blosclz.blp
-rw-r--r-- 1 faltet users 382M 18 abr 13:42 x.npy
But again, we are talking about a specially nice compression case.
--
Francesc Alted
On 18/04/14 13:39, Francesc Alted wrote:
So, sqrt in numpy has about the same speed as the one in MKL.
Again, I wonder why :)
So by peeking into the code I have seen that you implemented sqrt using
SSE2 intrinsics. Cool!
--
Francesc Alted
throughput.
Having said this, there are several packages that work on top of NumPy
that can use multiple cores when performing numpy operations, like
numexpr (https://github.com/pydata/numexpr), or Theano
(http://deeplearning.net/software/theano/tutorial/multi_cores.html)
--
Francesc Alted
://groups.google.es/group/blosc
Licenses
Both Blosc and its Python wrapper are distributed using the MIT license.
See:
https://github.com/Blosc/python-blosc/blob/master/LICENSES
for more details.
**Enjoy data!**
--
Francesc Alted
Indeed, the version just released was 1.2.4, not 1.2.7. Sorry for
the typo!
Francesc
On 7/7/14, 8:20 PM, Francesc Alted wrote:
=============================
Announcing python-blosc 1.2.4
=============================
What is new?
This is a maintenance release, where