On Friday 29 October 2010 12:18:20, Pauli Virtanen wrote:
Fri, 29 Oct 2010 09:54:23 +0200, Francesc Alted wrote:
[clip]
My vote is +1 for deprecating ``array([scalar])`` as a scalar index
for NumPy 2.0.
I'd be -0 on this, since 1-element Numpy arrays function like scalars
in several
On Friday 29 October 2010 12:59:04, Pauli Virtanen wrote:
On Fri, 2010-10-29 at 12:48 +0200, Francesc Alted wrote:
On Friday 29 October 2010 12:18:20, Pauli Virtanen wrote:
Fri, 29 Oct 2010 09:54:23 +0200, Francesc Alted wrote:
[clip]
My vote is +1 for deprecating ``array
is unnecessarily
complicated.
So I find the current behaviour prone to introduce errors in applications,
and I wonder why exactly np.array([1]) should work as an index at all.
Would it not be better for that to raise a ``TypeError``?
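For reference, a minimal sketch of the two behaviours at stake (variable names are illustrative, not from the original thread):

```python
import numpy as np

a = np.arange(5) * 10          # array([ 0, 10, 20, 30, 40])
idx = np.array([3])            # a 1-element integer array

# Fancy indexing with a 1-element array yields a 1-element array:
print(a[idx])                  # -> array([30])

# Getting a true scalar index out of it must be explicit:
print(a[int(idx[0])])          # -> 30
```

The contested behaviour was letting such a 1-element array act implicitly as a plain scalar index.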
Thanks,
--
Francesc Alted
get the packages from PyPI as well:
http://pypi.python.org/pypi
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy!
--
Francesc Alted
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
)).itemsize
4
Cheers,
--
Francesc Alted
2010/10/2 Robert Kern robert.k...@gmail.com
On Fri, Oct 1, 2010 at 02:13, Francesc Alted fal...@pytables.org wrote:
On Thursday 30 September 2010 18:20:16, Robert Kern wrote:
On Wed, Sep 29, 2010 at 03:17, Francesc Alted fal...@pytables.org
wrote:
Hi,
I'm going to give a seminar about serialization, and I'd like to
describe the .npy format. I noticed that there is a variant of it
called .npz
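The two formats mentioned above can be sketched briefly: `.npy` stores a single array in a simple self-describing binary layout, while `.npz` is a zip archive of `.npy` members, one per named array. An in-memory sketch:

```python
import io
import numpy as np

a = np.arange(6).reshape(2, 3)

# .npy: one array per file, a simple self-describing binary format
buf = io.BytesIO()
np.save(buf, a)
buf.seek(0)
b = np.load(buf)
assert (a == b).all()

# .npz: a zip archive of .npy files, one per named array
buf = io.BytesIO()
np.savez(buf, first=a, second=a * 2)
buf.seek(0)
with np.load(buf) as npz:
    assert sorted(npz.files) == ["first", "second"]
```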
://groups.google.es/group/blosc
**Enjoy data!**
--
Francesc Alted
that this is because you don't want to lose the
possibility to memmap saved arrays, but can someone confirm this?
Thanks,
--
Francesc Alted
have). So, a matter of laziness :-)
Thanks,
--
Francesc Alted
, ctypes is very powerful indeed. Thanks!
--
Francesc Alted
it seems to
work pretty well.
--
Francesc Alted
--
Francesc Alted
to the
HDF5 overhead, probably a compressed memmap approach might be faster
yet, but much more difficult to manage). And last but not least, this
does not have the virtual memory size limitation of memmapped
solutions, which I find quite uncomfortable.
--
Francesc Alted
http://blosc.pytables.org
--
Francesc Alted
bench/concat.py carray 100 1000 3 1
problem size: (100) x 1000 = 10^9
time for concat: 1.751s
size of the final container: 409.633 MB
Exactly. This is another scenario where the carray concept can be
really useful.
--
Francesc Alted
...:     print row[:], row.nrow
...:     break  # stop the iterator at the first element that fulfills the condition
...:
(4.0,) 4   # value and row index of the first matching element
Hope that helps,
--
Francesc Alted
or more), carray would in general be
faster than a pure ndarray approach in most cases. But
indeed, benchmarking is the best way to tell.
Cheers,
--
Francesc Alted
of fun.
Cheers!
--
Francesc Alted
into this issue?
I've made a patch to solve this some time ago:
http://projects.scipy.org/numpy/ticket/993
but it has not made it into the repo yet.
--
Francesc Alted
in
NumPy/SciPy list (and the PyTables list can certainly also be used).
Luck!
--
Francesc Alted
--
Francesc Alted
the enhancements that PyTables needed (mainly support for booleans and
strided and unaligned data), it does not make sense to have different
Numexprs anymore.
Cheers,
--
Francesc Alted
2010/8/1 Christoph Gohlke cgoh...@uci.edu
Solid release as usual. Works well with the MKL.
Btw, numexpr-1.4.tar.gz is missing the win32/pthread.h file.
Mmh, not so solid ;-) Fixed. Thanks for reporting!
--
Francesc Alted
data directly. You have to convert to HDF5
first.
For further questions about this, please use the PyTables list over here:
http://lists.sourceforge.net/lists/listinfo/pytables-users
Cheers,
--
Francesc Alted
recommend to use in NumPy extensions?
--
Francesc Alted
On Tuesday 27 July 2010 15:20:47, Charles R Harris wrote:
On Tue, Jul 27, 2010 at 7:08 AM, Francesc Alted fal...@pytables.org wrote:
Hi,
I'm a bit confused about which datatype I should use when referring to NumPy
ndarray lengths. On the one hand, I'd use `size_t`, which is the canonical way
terminate if index is changed from int to size_t.
OK, I'm not going to break Python/NumPy conventions, so you convinced me: I'll
use `npy_intp` then.
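For readers on the Python side, `npy_intp` has a direct counterpart in `np.intp`; a small sketch of what the choice means in practice:

```python
import numpy as np

# npy_intp is exposed to Python as np.intp: a signed integer wide
# enough to index any array on the current platform (pointer-sized),
# which is why it is preferred over plain C int or size_t.
print(np.dtype(np.intp))        # int32 on 32-bit builds, int64 on 64-bit

a = np.empty((3, 4))
# Shapes and strides are stored internally as npy_intp values:
print(a.shape, a.strides)
```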
Thanks!
--
Francesc Alted
in interesting alternatives section:
http://shootout.alioth.debian.org/u32/performance.php?test=spectralnorm#about
I suppose that, provided that Matlab also has a JIT and supports Intel's MKL,
it could beat this mark too. Would any Matlab user accept the challenge?
--
Francesc Alted
--
Francesc Alted
On Thursday 01 July 2010 21:10:42, Francesc Alted wrote:
http://www.pytables.org/download/preliminary
Mmh, that should read:
http://www.pytables.org/download/stable
Sorry for the typo!
--
Francesc Alted
would be to implement such special functions in
terms of numexpr expressions so that the evaluation itself can be faster.
Admittedly, that would take a bit more time.
Anyway, if someone comes up with patches implementing this, I'd be glad to
commit them.
--
Francesc Alted
On Monday 28 June 2010 10:22:31, Pauli Virtanen wrote:
On Mon, 2010-06-28 at 09:48 +0200, Francesc Alted wrote:
[clip]
But again, the nice thing would be to implement such special functions
in terms of numexpr expressions so that the evaluation itself can be
faster. Admittedly
).
--
Francesc Alted
2010/6/26 Pauli Virtanen p...@iki.fi
Hi,
On Sat, 2010-06-26 at 14:24 +0200, Francesc Alted wrote:
[clip]
Yeah, you need to explicitly code the support for new functions in
numexpr.
But another possibility, more doable, would be to code the scipy.special
functions by using numexpr
In [26]: a = np.arange(10, dtype='i2')
In [27]: a.byteswap()
Out[27]: array([   0,  256,  512,  768, 1024, 1280, 1536, 1792, 2048, 2304], dtype=int16)
--
Francesc Alted
another wish into the bag ;-)
--
Francesc Alted
, I'd
say chances are that performance for the strided scenario *might* benefit from
using copy-in/copy-out. Mmh, that's worth a try...
--
Francesc Alted
]
--
Francesc Alted
Yeah, damn you! ;-)
On Wednesday 09 June 2010 10:11:33, Robert Elsner wrote:
Hah beat you to it one minute ;)
On Wednesday, 09.06.2010 at 10:08 +0200, Francesc Alted wrote:
On Wednesday 09 June 2010 10:00:50, V. Armando Solé wrote:
Well, this seems to be quite close to what I need
to be fast always makes you ending with the wrong result :-/
--
Francesc Alted
of dtype would help you:
In [2]: s = np.dtype('S3')
In [4]: s.kind
Out[4]: 'S'
In [5]: i = np.dtype('i4')
In [6]: i.kind
Out[6]: 'i'
In [7]: f = np.dtype('f8')
In [8]: f.kind
Out[8]: 'f'
--
Francesc Alted
http://pytables.org/moin/ComputingKernel).
--
Francesc Alted
is the summation. Any help is appreciated.
Both y[1:] and y[:-1] are views of the original y array, so you are not
wasting temporary space there. As I see it, the above idiom is as
memory-efficient as it can get.
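The no-copy claim above is easy to check; a small sketch (array contents are illustrative):

```python
import numpy as np

y = np.arange(8)
head, tail = y[1:], y[:-1]

# Basic slices are views: they share y's buffer, so no copy is made.
assert head.base is y and tail.base is y

# Only the result of the subtraction allocates new memory:
diff = head - tail
assert (diff == 1).all()
```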
--
Francesc Alted
, Mac OSX or other UNICES.
--
Francesc Alted
On Monday 17 May 2010 20:11:28, Keith Goodman wrote:
On Mon, May 17, 2010 at 11:06 AM, Francesc Alted fal...@pytables.org
wrote:
On Sunday 16 May 2010 21:14:34, Davide Lasagna wrote:
Hi all,
What is the fastest and lowest memory consumption way to compute this?
y = np.arange(2**24
On Friday 07 May 2010 08:18:44, Martin Raspaud wrote:
Francesc Alted wrote:
Hi Martin,
[...]
)
#
and the output for my machine:
result_array1: [4 2 4 ..., 1 3 4] 1.819
result_array2: [4 2 4 ..., 1 3 4] 0.308
which is a 6x speed-up. I suppose this should be pretty close to what you can
get with C.
--
Francesc Alted
indicate binary incompatibility
I'm using the current stable Cython 0.12.1. Is the warning above intended, or am I
doing something wrong?
Thanks,
--
Francesc Alted
, I'll have to manage with that then.
Thanks,
--
Francesc Alted
.
Thanks,
--
Francesc Alted
Index: numpy/core/setup_common.py
===================================================================
--- numpy/core/setup_common.py (revision 8300)
+++ numpy/core/setup_common.py (working copy)
@@ -243,5 +243,9 @@
if saw is not None:
raise ValueError
though: is a Fortran compiler really
necessary for compiling just NumPy? If so, why?
--
Francesc Alted
be a great thing to deliver, IMO.
Thanks,
--
Francesc Alted
On Wednesday 24 March 2010 12:00:36, David Cournapeau wrote:
On Wed, Mar 24, 2010 at 6:50 PM, Francesc Alted fal...@pytables.org wrote:
Also, I have read the draft and I cannot see references to 64-bit binary
packages. With the advent of Windows 7 and Mac OSX Snow Leopard, 64-bit
are way
stage but not for bdist? What is more, why the need for a compiler
for bdist if numpy is already built? I feel that I'm almost there, but some
piece still resists...
--
Francesc Alted
program
is already very efficient in how it handles data, so chances are that you
will still get a good speed-up. I'd be glad to hear back about your experience.
--
Francesc Alted
it,
and this part was at the top.
So my follow-up: why is this desirable/necessary? (I find it
surprising.)
IIRC, it behaved that way in Numeric.
This does not mean that this behaviour is desirable. I find it inconsistent
and misleading so +1 for fixing it.
--
Francesc Alted
of performance out of their computers. And,
although I tried to be as language-agnostic as I could, some Python
references can be seen here and there :-).
Well, sorry about this semi-OT but I could not resist :-)
--
Francesc Alted
criticism, I really appreciate it!
--
Francesc Alted
...
--
Francesc Alted
On Thursday 11 March 2010 10:36:42, Gael Varoquaux wrote:
On Thu, Mar 11, 2010 at 10:04:36AM +0100, Francesc Alted wrote:
As far as I know, memmap files (or better, the underlying OS) *use* all
available RAM for loading data until RAM is exhausted and then start to
use SWAP, so the memory
be really great for me.
We can nail the details off-list.
--
Francesc Alted
-
From: numpy-discussion-boun...@scipy.org on behalf of Francesc Alted
Sent: Thu 04-Mar-10 15:12
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
What kind of calculations are you doing with this module? Can you please
send some examples
Gael,
On Fri, Mar 05, 2010 at 10:51:12AM +0100, Gael Varoquaux wrote:
On Fri, Mar 05, 2010 at 09:53:02AM +0100, Francesc Alted wrote:
Yeah, 10% of improvement by using multi-cores is an expected figure for
memory bound problems. This is something people must know: if their
computations
On Friday 05 March 2010 14:46:00, Gael Varoquaux wrote:
On Fri, Mar 05, 2010 at 08:14:51AM -0500, Francesc Alted wrote:
FWIW, I observe very good speedups on my problems (pretty much linear
in the number of CPUs), and I have data-parallel problems on fairly
large data (~100 MB apiece
--
Francesc Alted
this for computing a polynomial over a
certain range. Here is the output (for a dual-core processor):
Serial computation...
1000 0
Time elapsed in serial computation: 3.438
333 0
334 1
333 2
Time elapsed in parallel computation: 2.271 with 3 threads
Speed-up: 1.51x
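The chunked, multi-threaded evaluation pattern behind numbers like these can be sketched as follows. This is a hedged sketch, not the original script: the polynomial, chunk bounds, and thread count are illustrative. It relies on NumPy releasing the GIL inside its ufunc loops, so plain Python threads can overlap the chunk work:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def poly(x):
    # an arbitrary example polynomial, evaluated elementwise
    return 0.25 * x**3 + 0.75 * x**2 - 1.5 * x - 2.0

x = np.linspace(-1, 1, 1_000_000)
out = np.empty_like(x)
nthreads = 3
bounds = np.linspace(0, x.size, nthreads + 1).astype(int)

def work(lo, hi):
    # each thread fills its own disjoint slice of the output
    out[lo:hi] = poly(x[lo:hi])

with ThreadPoolExecutor(nthreads) as ex:
    futures = [ex.submit(work, bounds[i], bounds[i + 1])
               for i in range(nthreads)]
    for f in futures:
        f.result()            # re-raise any worker exception

assert np.allclose(out, poly(x))
```

Because the problem is memory-bound, the speed-up saturates well below the thread count, as the 1.51x figure above illustrates.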
--
Francesc
--
Francesc Alted
the need for transposing.
--
Francesc Alted
the
ABI changes we think we will need until NumPy 3.0 (hope that David and the
other core developers can figure out a good way to do this).
To quote an old war poster, let's keep calm and carry on.
Exactly :-)
--
Francesc Alted
not prevented the 2.x series from evolving.
How does this sound?
--
Francesc Alted
whatever) in a release to allow wider testing and
adoption will almost certainly result in a release that takes much longer to
spread widely and, what is worse, generates a lot of frustration among users.
My 2 cts,
--
Francesc Alted
following this discussion with utter interest, and I also think that
the arguments that favor a stable ABI in NumPy are *very* compelling. So +1
for *not* changing the ABI in .X releases.
--
Francesc Alted
--
Francesc Alted
On Thursday 17 December 2009 15:16:29, Pierre GM wrote:
All,
* What is the most efficient way to get a np.void object from a 0d
structured ndarray ?
I normally use the `PyArray_GETITEM` C macro for general n-d structured arrays. I
suppose that this will work with 0-d arrays too.
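From the Python side, the equivalent extraction can be sketched like this (the dtype and values are illustrative): indexing a 0-d structured array with an empty tuple yields the `np.void` scalar, mirroring what `PyArray_GETITEM` does at the C level.

```python
import numpy as np

dt = np.dtype([('x', 'i4'), ('y', 'f8')])
a = np.array((1, 2.5), dtype=dt)   # a 0-d structured array
assert a.ndim == 0

# Indexing with an empty tuple extracts the scalar (np.void) element:
v = a[()]
assert isinstance(v, np.void)
assert v['x'] == 1 and v['y'] == 2.5
```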
--
Francesc
On Saturday 12 December 2009 12:59:16, Jasper van de Gronde wrote:
Francesc Alted wrote:
...
Yeah, I think taking slices here is taking quite a lot of time:
In [58]: timeit E + Xi2[P/2,:]
10 loops, best of 3: 3.95 µs per loop
In [59]: timeit E + Xi2[P/2]
10 loops, best
On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
Things seem to be worse than 1.6x slower for numpy, as Matlab
orders arrays by column, while numpy's order is by row. So, if we want to
compare like with like:
For Python 600x200:
Add a row: 0.113243 (1.132425e-05
On Monday 14 December 2009 18:20:32, Jasper van de Gronde wrote:
Francesc Alted wrote:
On Monday 14 December 2009 17:09:13, Francesc Alted wrote:
Things seem to be worse than 1.6x slower for numpy, as Matlab
orders arrays by column, while numpy's order is by row. So, if we want
difficult art.
Well, I think it is not difficult; it is just that you are perhaps
benchmarking the Python/NumPy machinery instead ;-) I'm curious whether Matlab
can do slicing much faster than NumPy. Jasper?
--
Francesc Alted
On Friday 11 December 2009 17:36:54, Bruce Southey wrote:
On 12/11/2009 10:03 AM, Francesc Alted wrote:
On Friday 11 December 2009 16:44:29, Dag Sverre Seljebotn wrote:
Jasper van de Gronde wrote:
Dag Sverre Seljebotn wrote:
Jasper van de Gronde wrote:
I've attached a test file which
On Sunday 06 December 2009 11:47:23, Francesc Alted wrote:
On Saturday 05 December 2009 11:16:55, Dag Sverre Seljebotn wrote:
In [19]: t = np.dtype('i4,f4')
In [20]: t
Out[20]: dtype([('f0', 'i4'), ('f1', 'f4')])
In [21]: hash(t)
Out[21]: -9041335829180134223
In [22
types immutable if possible, and dtype certainly
feels like it.
Yes, I think you are right, and forcing dtype to be immutable would be best.
As a bonus, an immutable dtype would render this ticket:
http://projects.scipy.org/numpy/ticket/1127
without effect.
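Why immutability matters here can be sketched briefly: dtypes are hashable and equal dtypes hash equal, so they can key dicts and sets, and that is only sound if a dtype cannot change after construction.

```python
import numpy as np

t = np.dtype('i4,f4')
u = np.dtype([('f0', 'i4'), ('f1', 'f4')])

# Equal dtypes hash equal, so they can be used as dict/set keys...
assert t == u and hash(t) == hash(u)

# ...which is only safe because dtype behaves as (and should be) immutable.
d = {t: 'record'}
assert d[u] == 'record'
```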
--
Francesc Alted
about that, because the
above seems quite useful.
--
Francesc Alted
strange - I get the same hash in both cases, but I thought
I took names into account when I implemented the hashing protocol for
dtype. Which version of numpy on which OS are you seeing this?
numpy: 1.4.0.dev7072
python: 2.6.1
--
Francesc Alted
).
Cheers,
--
Francesc Alted
Unicode values internally as UCS2.
Ah! No changes for that matter. Much better then.
--
Francesc Alted
for Python 3, right?
--
Francesc Alted
On Friday 27 November 2009 15:09:00, René Dudfield wrote:
On Fri, Nov 27, 2009 at 1:49 PM, Francesc Alted fal...@pytables.org wrote:
Correct. But, in addition, we are going to need a new 'bytes' dtype for
NumPy for Python 3, right?
I think so. However, I think S is probably closest
upgrading to Py3 easier.
I think introducing a bytes_ scalar dtype can be somewhat confusing for Python
2 users. But if the 'S' typecode is to be deprecated also for NumPy for
Python 2, then it makes perfect sense to introduce bytes_ there too.
--
Francesc Alted
process's current address space.
This is usually the same as your available virtual memory, that is, your
amount of RAM plus the amount of swap space.
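The point above can be sketched with `np.memmap` (the file name and sizes are illustrative): the mapping lives in the process's address space and the OS pages data in lazily, rather than reading the whole file into RAM up front.

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'scratch.dat')

# Create a file-backed array; pages are materialized on demand.
m = np.memmap(path, dtype='float64', mode='w+', shape=(1000,))
m[:] = np.arange(1000.0)
m.flush()

# Reopening read-only maps the same bytes without an eager read:
r = np.memmap(path, dtype='float64', mode='r', shape=(1000,))
assert r[0] == 0.0 and r[999] == 999.0
```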
--
Francesc Alted
.html#ColsClassDescr
Cheers,
--
Francesc Alted
copies in this scenario.
--
Francesc Alted
On Friday 16 October 2009 14:02:03, David Cournapeau wrote:
On Fri, Oct 16, 2009 at 8:53 PM, Pauli Virtanen pav...@iki.fi wrote:
Fri, 16 Oct 2009 12:07:10 +0200, Francesc Alted wrote:
[clip]
IMO, NumPy can be improved for unaligned data handling. For example,
Numexpr is using
to `resize()` to specify that you don't want the memory initialized.
--
Francesc Alted
speed-up from using MKL, as this operation is
bounded by memory speed.
--
Francesc Alted
) and a 64-bit
platform. I suppose you had better file a bug, then.
--
Francesc Alted
motherboards. It
could be a *bit* faster (at the expense of packing less of it), but I'd say
not as much as 4x faster (100 GB/s vs 25 GB/s for Intel i7 in sequential
access), as you are suggesting. Maybe this is GPU cache bandwidth?
--
Francesc Alted
, and that may be not
what you want to measure.
In the case of Ruben, I think what he is seeing are cache effects. Maybe if
he does a loop, he would finally see the difference come up (although this
may not be what he wants, of course ;-)
--
Francesc Alted
.
Also, I don't see the point in requiring immutable buffers. Could you
elaborate on this?
--
Francesc Alted
to see. I think I'll change my mind if someone could perform a
vector-vector multiplication (an operation that is typically memory-bound) in
double precision up to 5x faster on a GTX 280 NVIDIA card than on an Intel
i7 CPU.
--
Francesc Alted
201 - 300 of 465 matches