onvenient as
>
> with lazy:
> arr = A + B + C # with all of these NumPy arrays
> # compute upon exiting…
Hmm, that would be cute indeed. Do you have an idea on how the code in the
with context could be passed to the Python AST compiler (à la
numexpr.evaluate("A +
s://github.com/numpy/numpy/blob/master/doc/neps/deferred-ufunc-evaluation.rst
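Without going through the AST compiler, the deferred-evaluation idea from that NEP can be sketched with a small wrapper object that records the expression and only computes on demand. This is a hypothetical sketch: the `Lazy` class and its `.compute()` method are made up for illustration and are not a NumPy or numexpr API.

```python
import numpy as np

class Lazy:
    """Deferred-evaluation wrapper (hypothetical sketch, not a NumPy API):
    '+' builds an expression thunk instead of computing immediately."""
    def __init__(self, thunk):
        self._thunk = thunk

    @classmethod
    def wrap(cls, arr):
        return cls(lambda: arr)

    def __add__(self, other):
        # Defer: remember how to compute, compute nothing yet.
        return Lazy(lambda: self._thunk() + other._thunk())

    def compute(self):
        # A real implementation could hand the accumulated expression
        # to an engine like numexpr here instead of evaluating eagerly.
        return self._thunk()

A, B, C = (Lazy.wrap(np.arange(5)) for _ in range(3))
arr = A + B + C       # no computation happens here
print(arr.compute())  # [ 0  3  6  9 12]
```

The `with lazy:` syntax discussed above would need to capture and recompile the block's source, which this object-based sketch sidesteps.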
> Also it would be better to talk in person about this if
> possible (I'm in Berkeley now and will attend PyData and PyCon).
Nice. Most of the Continuum crew (me included) will be attending both
conferences. Mark W. will make PyCon only, but it will be a good occasion to
discuss this further.
See you,
-- Francesc Alted
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
s much faster to only transfer an element (or small block) from each
> of A, B, and D to CPU cache, then do the entire expression, then
> transfer the result back. This is easy to code in Cython/Fortran/C and
> impossible with NumPy/Python.
>
> This is why numexpr/Theano exi
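The cache-blocking idea described above can be sketched in pure NumPy. This is a toy version of what numexpr does internally; the expression `a*b + d` and the block size are arbitrary choices for illustration.

```python
import numpy as np

def blocked_eval(a, b, d, block=4096):
    # Walk the operands in cache-sized chunks: each chunk of a, b and d
    # is brought into cache once, the whole expression is evaluated on
    # it, and the result is written out -- no full-size temporaries.
    out = np.empty_like(a)
    for i in range(0, len(a), block):
        s = slice(i, i + block)
        out[s] = a[s] * b[s] + d[s]
    return out

rng = np.random.default_rng(0)
a, b, d = (rng.random(1_000_000) for _ in range(3))
result = blocked_eval(a, b, d)
assert np.allclose(result, a * b + d)
```

Plain `a*b + d` instead allocates a full-size temporary for `a*b`, which is exactly the extra memory traffic being discussed.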
; However, it is not as easily readable as the user guide (which makes
> sense !).
>
> Do the following statements mean that those types are not available on
> all platforms?
> float96 96 bits, platform?
> float128 128 bits,
On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
> On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted
> wrote:
>> Exactly. I'd update this to read:
>>
>> float96   96 bits. Only available on 32-bit (i386) platforms.
>> float128 128 bits. Only av
On Feb 23, 2012, at 6:06 AM, Francesc Alted wrote:
> On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
>
>> On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted
>> wrote:
>>> Exactly. I'd update this to read:
>>>
>>> float96   96 bits. Only
On Feb 23, 2012, at 10:26 AM, Matthew Brett wrote:
> Hi,
>
> On Thu, Feb 23, 2012 at 4:23 AM, Francesc Alted wrote:
>> On Feb 23, 2012, at 6:06 AM, Francesc Alted wrote:
>>> On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
>>>
>>>> On
ta.htm
Also, see some speedups for numexpr linked against MKL here:
http://code.google.com/p/numexpr/wiki/NumexprVML
See also how the native multi-threading implementation in numexpr beats MKL's
(at least for this particular example).
-- Fra
On Feb 23, 2012, at 2:19 PM, Neal Becker wrote:
> Pauli Virtanen wrote:
>
>> On 23.02.2012 20:44, Francesc Alted wrote:
>>> On Feb 23, 2012, at 1:33 PM, Neal Becker wrote:
>>>
>>>> Is mkl only used for linear algebra? Will it speed up e.g.
> In Mac OSX:
>
> $ purge
Now that I switched to a Mac, this is good to know. Thanks!
-- Francesc Alted
your code so it can use mmap() instead of read(), since this can
> reduce overhead. read() has to copy the data from the disk into OS
> memory, and then from OS memory into your process's memory; mmap()
> skips the second step.
Cool. Nice trick!
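A minimal sketch of the trick using `np.memmap` (the file path and data here are made up for illustration):

```python
import os
import tempfile
import numpy as np

# Create a scratch binary file to read back.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(1000, dtype=np.float64).tofile(path)

# read()/np.fromfile() copies disk -> OS page cache -> process memory;
# np.memmap() maps the pages into the process directly, skipping the
# second copy.
mm = np.memmap(path, dtype=np.float64, mode="r")
print(mm[:3])  # [0. 1. 2.]
```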
-- Francesc Alted
mpiler-dependency. The new type will only be available on platforms that
have GCC 4.6 or above. Again, using the new name for this should be fine. On
platforms/compilers not supporting the quad128 thing, it should not be defined.
Uh, I foresee many portability problems for people using this, but
> Christoph Gohle
> - --
> Max-Planck-Institut für Quantenoptik
> Abteilung Quantenvielteilchensysteme
> Hans-Kopfermann-Strasse 1
> 85748 Garching
>
> christoph.go...@mpq.mpg.de
> tel: +49 89 32905 283
> fax: +49 89 32905 313
-- Francesc Alted
tions with large
arrays, store tables with an unlimited number of rows on-disk and, by using its
integrated indexing engine (OPSI), perform quick lookups based on
strings (or any other type). Look at these examples:
http://www.pytables.org/moin/HowToUse#Selectingvalues
ray([('key500', 500)],
dtype=[('f0', 'S8'), ('f1', '
http://pytables.github.com/usersguide/optimization.html#accelerating-your-searches
for a more detailed rationale and benchmarks on big datasets.
-- Francesc Alted
lated with:
http://projects.scipy.org/numpy/ticket/993
being fixed in the last few hours. Could you please bisect
(http://webchick.net/node/99) and tell us which commit is the bad one?
Thanks!
-- Francesc Alted
l_lapack_dgetrf
So, if numpy has not changed, then something else has, right? Have you
upgraded MKL? GCC? Installed the Intel C compiler?
-- Francesc Alted
nyway, my question is, is there
> interest from at least the numba and numexpr projects (if code can be
> transformed into vector operations, it makes sense to use numexpr for
> that, I'm not sure what numba's interest is in that).
I'm definitely interested for the numexpr part.
On Mar 20, 2012, at 2:29 PM, Dag Sverre Seljebotn wrote:
> Francesc Alted wrote:
>
>> On Mar 20, 2012, at 12:49 PM, mark florisson wrote:
>>>> Cython and Numba certainly overlap. However, Cython requires:
>>>>
>>>> 1) learning another lan
On 4/2/12 10:46 AM, William Johnston wrote:
> Hello,
>
> My email server went down.
>
> Did anyone respond to this post?
You can check the mail archive here:
http://mail.scipy.org/pipermail/numpy-discussion
--
Francesc Alted
or more, you are
discarding any memory effect. However, when you run the loop only once,
you are considering the memory fetch time too (which is often much more
realistic).
--
Francesc Alted
On 4/10/12 9:55 AM, Henry Gomersall wrote:
> On 10/04/2012 16:36, Francesc Alted wrote:
>> In [10]: timeit c = numpy.complex64(numpy.abs(numpy.complex128(b)))
>> 100 loops, best of 3: 12.3 ms per loop
>>
>> In [11]: timeit c = numpy.abs(b)
>> 100 loops, best of
On 4/10/12 11:43 AM, Henry Gomersall wrote:
> On 10/04/2012 17:57, Francesc Alted wrote:
>>> I'm using numexpr in the end, but this is slower than numpy.abs under linux.
>> Oh, you mean the windows version of abs(complex64) in numexpr is slower
>> than a pure num
accessing elements in big sparse arrays. Using a table in a relational
database (indexed for dimensions) could be an option too.
[2] https://github.com/PyTables/PyTables
Hope this helps,
--
Francesc Alted
On 5/2/12 4:07 PM, Stéfan van der Walt wrote:
> Hi Francesc
>
> On Wed, May 2, 2012 at 1:53 PM, Francesc Alted wrote:
>> and add another one for the actual values of the array. For a 3-D
>> sparse array, this looks like:
>>
>> dim0 | dim1 | dim2 | value
>&
On 5/2/12 4:20 PM, Nathaniel Smith wrote:
> On Wed, May 2, 2012 at 9:53 PM, Francesc Alted wrote:
>> On 5/2/12 11:16 AM, Wolfgang Kerzendorf wrote:
>>> Hi all,
>>>
>>> I'm currently writing a code that needs three dimensional data (for the
>>&g
On 5/2/12 5:28 PM, Stéfan van der Walt wrote:
> On Wed, May 2, 2012 at 3:20 PM, Francesc Alted wrote:
>> On 5/2/12 4:07 PM, Stéfan van der Walt wrote:
>> Well, as the OP said, coo_matrix does not support dimensions larger than
>> 2, right?
> That's just an implement
y did what numexpr does.
Yeah. You basically re-discovered the blocking technique. For a more
general example of how to apply the blocking technique with NumPy, see
the section "CPU vs Memory Benchmark" in:
https://python.g-node.org/python-autumnschool-2010/materials/starving_cpus
O
st points on each of the plots means that Blosc is in compression
level 0, that is, it does not compress at all, and it basically copies
data from origin to destination buffers. Still, one can see that using
several threads can accelerate this copy well beyond me
incompatibility) on introducing the new enums in NumPy. But they could
be used for future PyTables versions (and other HDF5 wrappers), which is
a good thing indeed.
My 2 cents,
--
Francesc Alted
http://gruntthepeon.free.fr/ssemath/
I'd say that NumPy could benefit a lot from integrating optimized versions
of transcendental functions (as in the link above).
Good luck!
--
Francesc Alted
les.org/download
Manual:
http://carray.pytables.org/docs/manual
Home of Blosc compressor:
http://blosc.pytables.org
User's mail list:
car...@googlegroups.com
http://groups.google.com/group/carray
Enjoy!
-- Francesc Alted
There is an official mailing list for Blosc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
-- Francesc Alted
sc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
--
Francesc Alted
recision):
http://software.intel.com/sites/products/documentation/hpc/mkl/vml/functions/exp.html
Pretty amazing.
--
Francesc Alted
e *total* amount of cores detected in the system is the default in
numexpr; if you want fewer, you will need to use the
set_num_threads(nthreads) function. But agreed, sometimes using too
many threads can effectively be counter-productive.
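Assuming numexpr is installed, capping the thread pool looks like this (the cap of 2 and the expression are arbitrary choices for illustration):

```python
import numpy as np
import numexpr as ne

# numexpr defaults to one thread per detected core; cap the pool with
# set_num_threads(), which returns the previous setting.
prev = ne.set_num_threads(2)   # the cap of 2 is an arbitrary choice

a = np.arange(1_000_000)
res = ne.evaluate("2*a + 1")

ne.set_num_threads(prev)       # restore the previous setting
print(res[:3])  # [1 3 5]
```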
--
Francesc Alted
ression will be re-used
without problems. So you don't have to worry about caching it yourself.
The best forum for discussing numexpr is this:
https://groups.google.com/forum/?fromgroups#!forum/numexpr
--
Francesc Alted
this:
>
> https://github.com/herumi/fmath/blob/master/fmath.hpp#L480
Hey, that's cool. I was a bit disappointed not to find this sort of
work out in the open. It seems that this lacks threading support, but
that should be easy to implement by using OpenMP direc
On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
> On 11/08/2012 06:06 PM, Francesc Alted wrote:
>> On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
>>> On 11/07/2012 08:41 PM, Neal Becker wrote:
>>>> Would you expect numexpr without MKL to give a significant
On 11/8/12 7:55 PM, Dag Sverre Seljebotn wrote:
> On 11/08/2012 06:59 PM, Francesc Alted wrote:
>> On 11/8/12 6:38 PM, Dag Sverre Seljebotn wrote:
>>> On 11/08/2012 06:06 PM, Francesc Alted wrote:
>>>> On 11/8/12 1:41 PM, Dag Sverre Seljebotn wrote:
>>>>&
ad you in
some situations. So do not trust memory profilers to be too
exact and rather focus on the big picture (i.e. is my app retaining a
lot of memory for a large amount of time? If yes, then start worrying,
but not before).
--
Francesc Alted
offset
> (this is running on a 64-bit machine).
Yes, this looks like a 32-bit issue. Sometimes you can have 32-bit software
installed on 64-bit machines, so that might be your problem. What's the
equivalent of numpy.intp on your machine? Mine is:
In []: import numpy as np
In []: np.intp
O
d(s))...
Okay. I can reproduce that too (using 1.6.1). Could you please file a
ticket for this? Smells like a bug to me.
--
Francesc Alted
ser 0.04 s, sys: 0.04 s, total: 0.08 s
Wall time: 0.04 s
Out[]:
array([ 0.e+00, 5.e+00, 1.e+01, ...,
4.9850e+07, 4.9900e+07, 4.9950e+07])
Again, the computations are the same, but how you manage memory is critical.
--
Francesc Alted
On 11/23/12 8:00 PM, Chris Barker - NOAA Federal wrote:
> On Thu, Nov 22, 2012 at 6:20 AM, Francesc Alted wrote:
>> As Nathaniel said, there is not a difference in terms of *what* is
>> computed. However, the methods that you suggested actually differ on
>> *how* they are c
is is that the fancy indexing
would return a view, and not a different object, but NumPy containers
are not prepared for this.
--
Francesc Alted
d indexing operation acts over a copy, not a view.
And yes, fancy indexing returning a copy is standard for all ndarrays.
Hope it is clearer now (although admittedly it is a bit strange at first
sight),
--
Francesc Alted
d be even noticeable.
Can you tell us what difference in performance you are seeing between an
AVX-aligned array and one that is not AVX-aligned? Just curious.
--
Francesc Alted
On 12/20/12 9:53 AM, Henry Gomersall wrote:
> On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
>> The only scenario that I see that this would create unaligned arrays
>> is
>> for machines having AVX. But provided that the Intel architecture is
>> makin
On 12/20/12 7:35 PM, Henry Gomersall wrote:
> On Thu, 2012-12-20 at 15:23 +0100, Francesc Alted wrote:
>> On 12/20/12 9:53 AM, Henry Gomersall wrote:
>>> On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
>>>> The only scenario that I see that this would crea
On 12/21/12 11:58 AM, Henry Gomersall wrote:
> On Fri, 2012-12-21 at 11:34 +0100, Francesc Alted wrote:
>>> Also this convolution code:
>>> https://github.com/hgomersall/SSE-convolution/blob/master/convolve.c
>>>
>>> Shows a small but repeatable speed-
On 12/21/12 1:35 PM, Dag Sverre Seljebotn wrote:
> On 12/20/2012 03:23 PM, Francesc Alted wrote:
>> On 12/20/12 9:53 AM, Henry Gomersall wrote:
>>> On Wed, 2012-12-19 at 19:03 +0100, Francesc Alted wrote:
>>>> The only scenario that I see that this would create una
Exciting stuff. Thanks a lot to you and everybody involved in the release
for an amazing job.
Francesc
On 10/02/2013 2:25, "Ondřej Čertík" wrote:
> Hi,
>
> I'm pleased to announce the availability of the final release of
> NumPy 1.7.0.
>
> Sources and binary installers can be found at
> htt
).
Well, pip needs to compile the libraries prior to installing them, so
compile messages are meaningful. A different question would be whether to
reduce the amount of compile messages by default in NumPy, but I don't think
this is realistic (or even desirable).
--
Francesc Alted
On 2/12/13 3:18 PM, Daπid wrote:
> On 12 February 2013 14:58, Francesc Alted wrote:
>> Yes, I think that's expected. Just to make sure, can you send some
>> excerpts of the errors that you are getting?
> Actually the errors are at the beginning of the process, so they are
&
=['a', 'b'], formats=['u1', 'u8']),
>> align=True)
>>
>> In [3]: dt.itemsize
>> Out[3]: 16
> Thanks! That's what I get for not checking before posting.
>
> Consider this my vote to make `aligned=True` the default
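The itemsize difference is easy to check directly; a self-contained sketch of the packed vs. aligned layouts for the `u1`/`u8` dtype above:

```python
import numpy as np

fields = {'names': ['a', 'b'], 'formats': ['u1', 'u8']}

packed = np.dtype(fields)               # u1 then u8, back to back
aligned = np.dtype(fields, align=True)  # u8 padded to an 8-byte boundary

print(packed.itemsize)   # 9
print(aligned.itemsize)  # 16 (on typical 64-bit platforms)
```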
uate('sum(baligned)')
100 loops, best of 3: 2.16 ms per loop
In [17]: %timeit numexpr.evaluate('sum(bpacked)')
100 loops, best of 3: 2.08 ms per loop
Again, the unaligned case is (slightly) better. In this case numexpr is
a bit slower than NumPy because sum() is not para
On 3/7/13 6:47 PM, Francesc Alted wrote:
> On 3/6/13 7:42 PM, Kurt Smith wrote:
>> And regarding performance, doing simple timings shows a 30%-ish
>> slowdown for unaligned operations:
>>
>> In [36]: %timeit packed_arr['b']**2
>> 100 loops, best of
recently the
> overhead, but we can do more to lower it.
Yeah. I was mainly curious about how different packages handle
unaligned arrays.
--
Francesc Alted
age of it. Many thanks to
Mark Wiebe for such an important contribution!
For some benchmarks on the new virtual machine, see:
http://code.google.com/p/numexpr/wiki/NewVM
Also, Gaëtan de Menten contributed important bug fixes, code cleanup
as well as speed enhancements. Francesc Alted contribute
s Guide.
>
> Is there any plan to implement the reduction like enhancements that
> ufuncs provide: namely reduce_at, accumulate, reduce ? It is entirely
> possible that they are already in there but I could not figure out how
> to use them. If they aren't it would be great to have
s, etc. you may
have.
Enjoy!
--
Francesc Alted
g [numpy-discussion-boun...@scipy.org]
> On Behalf Of Francesc Alted [fal...@gmail.com]
> Sent: 08 January 2012 12:49
> To: Discussion of Numerical Python; numexpr
> Subject: [Numpy-discussion] ANN: Numexpr 2.0.1 released
>
> ==
> Announcing Numexpr 2.0.1
>
he second part of the tutorial:
https://github.com/FrancescAlted/carray/blob/master/doc/tutorial.rst
Hope it helps,
-- Francesc Alted
On Feb 10, 2012, at 4:50 PM, Francesc Alted wrote:
> https://github.com/FrancescAlted/carry
Hmm, this should be:
https://github.com/FrancescAlted/carray
Blame my (too) smart spell corrector.
-- Francesc Alted
On Feb 12, 2012, at 12:07 AM, Ralf Gommers wrote:
> On Sat, Feb 11, 2012 at 11:06 PM, Fernando Perez wrote:
> On Sat, Feb 11, 2012 at 11:11 AM, Travis Oliphant wrote:
> > I propose to give Francesc Alted commit rights to the NumPy project.
>
> +1.
Thanks for the kind invita
r Cython wrapper just assumed that the indices were
integers, so this is probably the reason why it is that much faster.
This is not to say that indexing in NumPy could not be accelerated, but it
won't be trivial, IMO.
-- Francesc Alted
arrays
are being operated?), the former would give more flexibility. I know, this
will introduce more complexity in the code base, but anyway, I think that would
be a nice thing to support for NumPy 2.0.
Just a thought,
-- Francesc Alted
somebody knows about
him. If so, please tell me.
Thanks!
-- Francesc Alted
n judge by looking at the *results*.
My two cents,
Disclaimer: As my e-mail address makes clear, I'm a Continuum guy.
-- Francesc Alted
. But I
remember this period (2005) as one of the most dramatic examples of how the
capacity and dedication of a single individual can shape the world.
-- Francesc Alted
memory, I need to read
> data skipping some records (I am reading data recorded at high frequency, so
> basically I want to read subsampling).
[clip]
You can do a fid.seek(offset) prior to np.fromfile() and then it will
read from that offset. See the docstring for `file.seek()` on how to use
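The seek-then-read pattern can be sketched like this (the file name and sizes are made up; a subsampling loop would repeat the seek/read pair):

```python
import os
import tempfile
import numpy as np

# Write 100 float64 records to a scratch binary file.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(100, dtype=np.float64).tofile(path)

itemsize = np.dtype(np.float64).itemsize
with open(path, "rb") as fid:
    fid.seek(10 * itemsize)  # skip the first 10 records
    chunk = np.fromfile(fid, dtype=np.float64, count=5)

print(chunk)  # [10. 11. 12. 13. 14.]
```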
On 3/13/13 3:53 PM, Francesc Alted wrote:
> On 3/13/13 2:45 PM, Andrea Cimatoribus wrote:
>> Hi everybody, I hope this has not been discussed before, I couldn't
>> find a solution elsewhere.
>> I need to read some binary data, and I am using numpy.fromfile to do
>>
why we decided to go with
attoseconds.
--
Francesc Alted
s discussion:
https://github.com/numpy/numpy/blob/master/doc/neps/datetime-proposal.rst#why-the-origin-metadata-disappeared
This is just a historical note, not that we can't change that again.
--
Francesc Alted
On 4/4/13 7:01 PM, Chris Barker - NOAA Federal wrote:
> Francesc Alted wrote:
>> When Ivan and me were discussing that, I remember us deciding that such
>> small units would be useful mainly for the timedelta datatype, which
>> is a relative, not an absolute, time. We did not w
On 4/4/13 8:56 PM, Chris Barker - NOAA Federal wrote:
> On Thu, Apr 4, 2013 at 10:54 AM, Francesc Alted wrote:
>
>> That makes a difference. This can be specially important for creating
>> user-defined time origins:
>>
>> In []: np.array(int(1.5e9), dtype='dat
=
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy!
--
Francesc Alted
ur computations. I
have used this feature extensively for optimizing parts of the Blosc
compressor, and I could not be happier (to the point that, if it were
not for Valgrind, I could not have figured out many interesting memory access
optimizati
://groups.google.es/group/blosc
Licenses
Both Blosc and its Python wrapper are distributed using the MIT license.
See:
https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES
for more details.
--
Francesc Alted
list for Blosc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
Licenses
Both Blosc and its Python wrapper are distributed using the MIT license.
See:
https://github.com/FrancescAlted/python-blosc/blob/master/LICENSES
for more details.
Enjoy!
--
Francesc Alted
_nuevo)/numero_experimentos)
>
> desviacion_standard = np.append (desviacion_standard,
> sum(std_dev_size_medio_intuitivo)/numero_experimentos)
>
> desviacion_standard_nuevo=np.append (desviacion_standard_nuevo,
> sum(std_dev_size_medio_nuevo)/numero_experimentos)
>
> tiempos=np.append(tiempos, time.clock()-empieza)
>
> componente_y=np.append(componente_y, sum(comp_y)/numero_experimentos)
> componente_x=np.append(componente_x, sum(comp_x)/numero_experimentos)
>
> anisotropia_macroscopica_porcentual=100*(1-(componente_y/componente_x))
>
> I tried with gc and gc.collect() and the 'del' command to delete arrays
> after use, and nothing works!
>
> What am I doing wrong? Why does the memory fill up while running (it starts
> with 10% of RAM used and in 1-2 hours it is completely full)?
>
> Please help me, I'm totally stuck!
> Thanks a lot!
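For the record, the usual cure for this pattern is to stop growing arrays with np.append() inside a loop, since every call copies the whole array. A generic sketch with made-up names (not the original code), accumulating in a Python list and converting once:

```python
import numpy as np

# np.append() reallocates and copies the whole array on every call, so
# growing an array inside a loop is O(n**2) in time and churns memory.
# Accumulate in a plain Python list and convert once at the end.
valores = []  # hypothetical accumulator, not a name from the original code
for experimento in range(1000):
    valores.append(experimento * 0.5)  # stand-in for the per-run result

tiempos = np.array(valores)  # a single allocation at the end
print(tiempos.shape)  # (1000,)
```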
--
Francesc Alted
mexpr?
=
The project is hosted at Google code in:
http://code.google.com/p/numexpr/
You can get the packages from PyPI as well:
http://pypi.python.org/pypi/numexpr
Share your experience
=
Let us know of any bugs, suggestions, gripes, kudos, etc. yo
tly as fast as weave (so I guess there were
> some performance enhancements in numexpr as well).
Err no, there have not been performance improvements in numexpr since
2.0 (that I am aware of). Maybe you are running on a multi-core machine
now and you are seeing a better speedup because of
--
Francesc Alted
pr
Share your experience
=
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
-- Francesc Alted
ee:
https://github.com/ContinuumIO/python-blosc/blob/master/LICENSES
for more details.
--
Francesc Alted
Continuum Analytics, Inc.
--
Francesc Alted
ompressor:
http://www.blosc.org
User's mail list:
blaze-...@continuum.io
Enjoy!
Francesc Alted
Continuum Analytics, Inc.
ces?
--
Francesc Alted
't check. I have
>> tried to grep it trying all possible combinations of "def ndarray",
>> "self.sort", etc. Where is it?
>>
>>
>> /David.
ges from PyPI as well (but not for RC releases):
http://pypi.python.org/pypi/numexpr
Share your experience
=
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
-- Francesc Alted
its already included in these PRs.
> I'm probably still going to add gh-4284 after some thought tomorrow.
>
> Cheers,
> Julian
tions we could change it if that's
> enough.
> It would bump some temporary arrays of nditer from 32kb to 128kb, I
> think that would still be fine, but getting to the point where we should
> move them onto the heap.
>
> On 28.02.2014 12:41, Francesc Alted wrote:
>> Hi Julia
al amount of arguments it got.
> So I'm more worried about running out of stack space, though the limit
> is usually 8mb so taking 128kb for a short while should be ok.
>
> On 28.02.2014 13:32, Francesc Alted wrote:
> > Well, what numexpr is using is basically
bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
--
Francesc Alted
===
Announcing Numexpr 2.4 RC2
===
Numexpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like "3*a+4*b") are accelerated
and use less memory than doing the same calculation in Python.
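A minimal usage example (assuming numexpr is installed; the array sizes are arbitrary choices for illustration):

```python
import numpy as np
import numexpr as ne

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Evaluated in one pass over the data, without materializing the
# full-size temporaries 3*a and 4*b that plain NumPy would create.
c = ne.evaluate("3*a + 4*b")

assert np.allclose(c, 3 * a + 4 * b)
```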
It wears mu
;-).
>
>
> no -- it's your high tolerance for _reading_ emails...
>
> Far too many of us have a high tolerance for writing them!
Ha ha, very true!
--
Francesc Alted
=
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy data!
--
Francesc Alted
--
Francesc Alted
On 17/04/14 19:28, Julian Taylor wrote:
> On 17.04.2014 18:06, Francesc Alted wrote:
>
>> In [4]: x_unaligned = np.zeros(shape,
>> dtype=[('y1',np.int8),('x',np.float64),('y2',np.int8,(7,))])['x']
> on arrays of this size you won