Alan G Isaac wrote:
On Mon, 26 Mar 2007, Colin J. Williams apparently wrote:
One would expect the iteration over A to return row
vectors, represented by (1, n) matrices.
On 3/26/07, Alan G Isaac [EMAIL PROTECTED] wrote:
This is again a simple assertion.
**Why**
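For what it's worth, the expectation can be checked directly; iterating over a matrix does yield (1, n) matrices (a quick sketch; np.matrix is deprecated in current NumPy, but the behavior stands):

import numpy as np

A = np.matrix([[1, 2], [3, 4]])
for row in A:
    print(row.shape, type(row).__name__)
# (1, 2) matrix
# (1, 2) matrix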
Charles R Harris wrote:
On 3/26/07, *Travis Oliphant* [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
On 3/26/07, *Nils Wagner* [EMAIL PROTECTED] wrote:
I've finally made the changes to fix the scalar coercion model problems
in NumPy 1.0.1
Now, scalar coercion rules only apply when involved types are of the
same basic kind.
Thus,
array([1,2,3],int8)*10
returns an int8 array
but
array([1,2,3],int8)*10.0
returns a float64 array.
If you
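A short interactive check of the rule just described (output dtypes as stated above):

import numpy as np

a = np.array([1, 2, 3], np.int8)

# Same basic kind (integer scalar with integer array): precision is kept.
print((a * 10).dtype)    # int8

# Different kind (float scalar with integer array): ordinary promotion.
print((a * 10.0).dtype)  # float64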
Perry Greenfield wrote:
Great!
On Mar 26, 2007, at 4:52 PM, Travis Oliphant wrote:
I've finally made the changes to fix the scalar coercion model
problems
in NumPy 1.0.1
Now, scalar coercion rules only apply when involved types are of the
same basic kind.
Actually, the rule
I think that might be the simplest thing, dot overrides subtypes. BTW,
here is another ambiguity
In [6]: dot(array([[1]]),ones(2))
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)
Charles R Harris wrote:
The rule "1-d is always a *row* vector" only applies when converting
to a matrix.
In this case, the dot operator does not convert to a matrix but uses
rules for operating with mixed 2-d and 1-d arrays inherited from
Numeric.
I'm
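A minimal illustration of those inherited mixed-rank rules (a sketch, not the poster's code):

import numpy as np

A = np.ones((2, 3))

# A 1-d array on the right of dot acts like a column: (2,3) . (3,) -> (2,)
print(np.dot(A, np.ones(3)).shape)  # (2,)

# A 1-d array on the left acts like a row: (2,) . (2,3) -> (3,)
print(np.dot(np.ones(2), A).shape)  # (3,)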
Jan Strube wrote:
I'm having a difficult time understanding the following behavior:
import numpy as N
# create a new array 4 rows, 3 columns
x = N.random.random((4, 3))
# elementwise multiplication
x*x
newtype = N.dtype([('x', N.float64), ('y', N.float64), ('z', N.float64)])
# interpret
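The message is cut off at the comment, but presumably the next step views the (4, 3) float array through the record dtype; a hedged sketch of that idiom (newtype's itemsize, 3 x 8 bytes, matches exactly one row):

# One three-field record per row; the result has shape (4, 1).
rec = x.view(newtype)
print(rec['x'].ravel())  # the first column of x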
Every so often the idea of new operators comes up because of the need to
do both matrix-multiplication and element-by-element multiplication.
I think this is one area where the current Python approach is not as
nice because we have a limited set of operators to work with.
One thing I wonder is
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Charles R Harris apparently wrote:
Yes, that is what I am thinking. Given that there are only the two
possibilities, row or column, choose the only one that is compatible with
the multiplying matrix. The result will not always be a column vector,
Charles R Harris wrote:
On 3/24/07, *Travis Oliphant* [EMAIL PROTECTED] wrote:
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Charles R Harris apparently wrote:
Yes, that is what I am thinking. Given that there are only the two
possibilities
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Travis Oliphant apparently wrote:
My opinion is that a 1-d array in matrix-multiplication
should always be interpreted as a row vector. Is this not
what is currently done? If not, then it is a bug in my
mind.
N.__version__
Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 04:33:53PM -0700, Travis Oliphant wrote:
I would rather opt for changing the spline fitting algorithm than for
padding with zeros.
From what I understand, the splines used in ndimage have the implicit
mirror-symmetric boundary
James Turner wrote:
By the way, ringing at sharp edges is an intrinsic feature of higher-
order spline interpolation, right? I believe this kind of interpolant
is really intended for smooth (band-limited) data. I'm not sure why
the pre-filtering makes a difference though; I don't yet understand
Charles R Harris wrote:
All three shapes are both C_CONTIGUOUS and F_CONTIGUOUS. I think
ignoring all 1's in the shape would give the right results for
otherwise contiguous arrays because in those positions the index can
only take the value 0.
I've thought about this before too. But,
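A quick illustration with a modern NumPy (which eventually adopted this relaxed interpretation):

import numpy as np

# An axis of length 1 can only ever be indexed with 0, so its stride is
# irrelevant; such arrays can carry both contiguity flags at once.
a = np.ones((1, 3, 1))
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])  # True True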
Stefan van der Walt wrote:
On Thu, Mar 22, 2007 at 02:41:52PM -0400, Anne Archibald wrote:
On 22/03/07, James Turner [EMAIL PROTECTED] wrote:
So, it's not really a bug, it's an undesired feature...
It is curable, though painful - you can pad the image out, given an
estimate of the
Bryan Cole wrote:
I'm not sure where best to post this, but I get a memory leak when using
code with both numpy and FFT (from Numeric) together:
e.g.
import numpy
import FFT
def test():
    while 1:
        data = numpy.random.random(2048)
        newdata =
Mark P. Miller wrote:
Robert: Just a thought on this topic:
Would it be possible for the Scipy folks to add a new module based
solely on your old mtrand code (pre-broadcast)? I have to say that the
mtrand code from numpy 0.9.8 has some excellent advantages over the core
python random
vinjvinj wrote:
So far my migration seems to be going well. I have one problem:
I've been using the scipy_base.insert and scipy_base.extract functions
and the behavior in numpy is not the same.
a = [0, 0, 0, 0]
mask = [0, 0, 0, 1]
c = [10]
numpy.insert(a, mask, c)
would change a so that
a =
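(The expected result is cut off above.) For reference, the old masked-insert behavior survives in NumPy as numpy.place (numpy.insert takes index positions rather than a mask); a sketch:

import numpy as np

a = np.array([0, 0, 0, 0])
mask = np.array([0, 0, 0, 1], dtype=bool)

np.place(a, mask, [10])  # write values where mask is true, in place
print(a)                 # [ 0  0  0 10]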
vinjvinj wrote:
I'm in the process of migrating from Numeric to numpy. In some of my
code I have the following:
a = zeros(num_elements, PyObject)
b = zeros(num_elements, PyObject)
PyObject -- object
-Travis
Steffen Loeck wrote:
Fernando Perez wrote:
I recently got a report of a bug triggered only on 64-bit hardware,
and on a machine (in case it's relevant) that runs python 2.5. This
is with current numpy SVN which I just rebuilt a moment ago to
triple-check:
In [3]: a =
Steven H. Rogers wrote:
Travis Oliphant wrote:
I just wanted to point people to the online version of the PEP. I'm
still looking for comments and suggestions. The current version is here:
http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/doc/pep_buffer.txt
-Travis
Hi
Mark P. Miller wrote:
I've been using Numpy arrays for some work recently. Just for fun, I
compared some representative code using Numpy arrays and an object
comprised of nested lists to represent my arrays. To my surprise, the
array of nested lists outperformed Numpy in this particular
Mark P. Miller wrote:
Oops, this seems to be a bug in your numpy version:
In [46]: array1 = numpy.zeros((10,10), int)
In [47]: array1.itemset((5,5), 9)
In [48]: array1
Out[48]:
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0,
A ticket was posted that emphasizes that the current behavior of NumPy
with regard to scalar coercion is different from numarray's behavior.
If we were pre 1.0, I would probably change the behavior to be in-line
with numarray. But, now I think it needs some discussion because we are
An update for those of you who did not get the chance to come to PyCon.
PyCon was very well attended this year and there were some excellent
discussions and presentations.
From PyCon I learned that Python 3000 is closer than I had previously
thought. What this means for me is that I am
PEP: unassigned
Title: Revising the buffer protocol
Version: $Revision: $
Last-Modified: $Date: $
Author: Travis Oliphant [EMAIL PROTECTED]
Status: Draft
Type: Standards Track
Created: 28-Aug-2006
Python-Version: 3000
Abstract
This PEP proposes re-designing the buffer API (PyBufferProcs
Charles R Harris wrote:
On 2/27/07, *Travis Oliphant* [EMAIL PROTECTED] wrote:
PEP: unassigned
Title: Revising the buffer protocol
Version: $Revision: $
Last-Modified: $Date: $
Author: Travis Oliphant [EMAIL PROTECTED]
Charles R Harris wrote:
The problem is that we aren't really specifying floating-point
standards, we are specifying float, double and long double as whatever
the compiler understands.
There are some platforms which don't follow the IEEE 754 standard.
This format
Toon Knapen wrote:
Hi all,
Is there detailed info on the installation process available?
I'm asking because in addition to installing numpy on linux-x86, I'm
also trying to install numpy on aix-power and windows-x86. So before
bombarding the ml with questions, I would like to get my hands on
Alec Mihailovs wrote:
I saw somewhere a comparison between numpy and Matlab/Octave. One of the
Matlab/Octave commands that is missing in NumPy is magic(n) for producing
n-by-n magic squares.
Yesterday I posted a magic_square module in the CheeseShop, containing this
missing magic(n) command
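A minimal sketch of such a function for odd n (the classical siamese method; not the CheeseShop module's implementation):

import numpy as np

def magic_odd(n):
    """Magic square of odd order n, built with the siamese method."""
    if n % 2 == 0:
        raise ValueError("this sketch handles odd n only")
    m = np.zeros((n, n), dtype=int)
    r, c = 0, n // 2                        # start in the middle of the top row
    for k in range(1, n * n + 1):
        m[r, c] = k
        r2, c2 = (r - 1) % n, (c + 1) % n   # move up and right, wrapping
        if m[r2, c2]:                       # occupied: drop down instead
            r2, c2 = (r + 1) % n, c
        r, c = r2, c2
    return m

print(magic_odd(3))
# [[8 1 6]
#  [3 5 7]
#  [4 9 2]]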
Christian Marquardt wrote:
As we are at it,
Andre Gosselin (the guy who wrote pycdf) also wrote an interface to HDF4
(not 5) named pyhdf. I'm using that with numpy as well (patch attached),
but I haven't tested it much - little more than just running the examples,
really (which appear to be ok).
[EMAIL PROTECTED] wrote:
I have a big problem with numpy, numarray and Numeric (all versions).
If I use the script at the bottom, I obtain these results:
var1 before function is [3 4 5]
var2 before function is 1
var1 after function must be [3 4 5] is [ 9 12 15] -- problem
var2
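The script itself is cut off, but the reported output matches the classic pitfall: an in-place operator inside a function mutates the caller's array, while rebinding a scalar does not. A sketch that reproduces the symptom (the function body is an assumption):

import numpy as np

def scale(var1, var2):
    var1 *= 3   # in-place on a mutable array: the caller sees the change
    var2 *= 3   # rebinds a local name: the caller's int is untouched
    return var1, var2

var1 = np.array([3, 4, 5])
var2 = 1
scale(var1, var2)
print(var1)  # [ 9 12 15] -- the "problem" reported above
print(var2)  # 1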
Tom Denniston wrote:
I am trying to register a custom type with numpy. When I do so, it works
and the ufuncs work, but when I invoke any ufunc twice, the second
time my python interpreter segfaults. I think I know what the
problem is. In the select_types method in ufuncobject.c in
Reggie Dugard wrote:
On Wed, 2007-02-07 at 14:36 -0700, Travis Oliphant wrote:
Sturla Molden wrote:
def __new__(cls, ...):
    ...
    (H, edges) = numpy.histogramdd(...)
    cls.__defaultedges = edges

def __array_finalize__(self, obj):
    if not hasattr(self, 'edges'):
        self.edges
Sturla Molden wrote:
Good point. I guess I thought the OP had tried that already. It turns
out it works fine, too.
The __array_finalize__ is useful if you want the attribute to be carried
around when arrays are created automatically internally (after math
operations for example).
I too
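A minimal subclass sketch of the pattern under discussion (class and attribute names are illustrative):

import numpy as np

class EdgedArray(np.ndarray):
    """ndarray subclass that carries an 'edges' attribute through operations."""

    def __new__(cls, data, edges=None):
        obj = np.asarray(data).view(cls)
        obj.edges = edges
        return obj

    def __array_finalize__(self, obj):
        # Called for views and for arrays created internally (e.g. by math
        # ops); inherit the attribute from the source array if present.
        self.edges = getattr(obj, 'edges', None)

a = EdgedArray([1.0, 2.0, 3.0], edges=[0, 1])
b = a * 2
print(b.edges)  # [0, 1] -- carried through the math operation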
Zachary Pincus wrote:
Hello folks,
I recently was trying to write code to modify an array in-place (so
as not to invalidate any references to that array) via the standard
python idiom for lists, e.g.:
a[:] = numpy.flipud(a)
Now, flipud returns a view on 'a', so assigning that to 'a[:]'
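The usual cure, sketched below: force a copy so the right-hand side no longer aliases the memory being overwritten.

import numpy as np

a = np.arange(6.0).reshape(3, 2)
a[:] = np.flipud(a).copy()  # .copy() breaks the overlap with 'a'
print(a)                    # rows in reversed order, references intact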
Louis Wicker wrote:
Dear list:
I cannot seem to figure out how to create arrays > 2 GB on a Mac Pro
(using an Intel chip and Tiger 10.4.8). I have hand-compiled both Python
2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB. I also
run out of space if I try to create 3-6 arrays of
Louis Wicker wrote:
Travis:
Yes, it does. It's the Woodcrest server chip
http://www.intel.com/business/xeon/?cid=cim:ggl%7Cxeon_us_woodcrest%7Ck6913%7Cs
which
supports 32 and 64 bit operations. For example the new Intel Fortran
compiler can grab more than 2 GB of memory (it's a beta10
Christopher Barker wrote:
Zachary Pincus wrote:
Say a function that (despite Tim's pretty
reasonable 'don't do that' warning) will return true when two arrays
have overlapping memory?
I think it would be useful, even if it's not robust. I'd still like to
know if a given two arrays
What is the attitude of this group about the ndarray growing some class
methods?
I'm thinking that we should have several. For example all the fromXXX
functions should probably be classmethods
ndarray.frombuffer
ndarray.fromfile
etc.
-Travis
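Sketched from the user's side (the classmethod spellings are the proposal, not an existing API; today these live at module level):

import numpy as np

# Existing spelling: module-level constructor functions.
a = np.frombuffer(b'\x01\x02\x03\x04', dtype=np.uint8)
print(a)  # [1 2 3 4]

# Proposed spelling (hypothetical):
#   a = np.ndarray.frombuffer(buf, dtype=np.uint8)
#   b = np.ndarray.fromfile(fileobj, dtype=np.float64)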
Sebastian Haase wrote:
Travis,
Could you explain what a possible downside of this would be!?
It seems that if you don't need to refer to a specific self object,
a class-method is what it should be - is this not always right!?
I don't understand the last point. Classmethods would get
In SVN there is a new function may_share_memory(a, b) which will return
True if the memory foot-prints of the two arrays overlap.
may_share_memory(a, flipud(a))
True
This is based on another utility function byte_bounds that returns the
byte-boundaries of any object exporting the Python
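A short usage sketch:

import numpy as np

a = np.arange(10)
print(np.may_share_memory(a, a[::2]))    # True: a view overlaps its base
print(np.may_share_memory(a, a.copy()))  # False: a copy owns fresh memory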
Keith Goodman wrote:
On 1/30/07, Travis Oliphant [EMAIL PROTECTED] wrote:
I'm trying to help out the conversion to NumPy by offering patches to
various third-party packages that have used Numeric in the past.
Does anybody here have requests for which packages they would like to
see converted
Christopher Barker wrote:
Travis Oliphant wrote:
Most of these are probably the gtk-python extension which can use
Numeric
This strikes me as an excellent argument for including an n-d array in
the Python standard lib.
It is absolutely a good argument. If Python had the array
Fernando Perez wrote:
On 1/31/07, Travis Oliphant [EMAIL PROTECTED] wrote:
Sebastian Haase wrote:
Hi!
Do numpy memmap have a way of explicitly
flushing data to disk
and/or
closing the memmap.
There is a sync method that performs the flush. To close the memmap,
delete
Robert Kern wrote:
Note that this is a hack. It won't work for the ufuncs in scipy.special, for
example.
We should look into including __module__ information in ufuncs. This is how
regular functions are pickled.
This sounds like a good idea. Would it be possible to add a dict
object to
I've added a patch to the Porting to NumPy page so that RPy uses NumPy
instead of Numeric. It would be very helpful for unification if these
package authors would accept these patches. NumPy is far enough along
in development that I don't think there is any reason for new releases
of
Sebastian Haase wrote:
Hi!
Do numpy memmap have a way of explicitly
flushing data to disk
and/or
closing the memmap.
There is a sync method that performs the flush. To close the memmap,
delete it.
More detail:
The memmap sub-class has a _mmap attribute that is the Python
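A sketch of the flush-and-close idiom described here (the filename is illustrative; the method is spelled flush() in current NumPy, sync() in the releases discussed):

import numpy as np

mm = np.memmap('scratch.dat', dtype=np.float64, mode='w+', shape=(100,))
mm[:] = 1.0

mm.flush()  # push dirty pages to disk ('sync' in older releases)
del mm      # dropping the last reference closes the underlying mmap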
James A. Bednar wrote:
Hi,
Does anyone know whether it is possible to pickle and unpickle numpy
ufuncs?
Not directly. Ufuncs are a built-in type and do not have the required
__reduce__ method needed to be pickleable. It could be added, but
hasn't been.
I can't find anything about that
I just finished porting PyGame to use NumPy. It seemed to work fine. I
ran only a few demos though, and so haven't fleshed out all the details.
Please encourage library-writers to use NumPy when possible.
-Travis
Index: src/surfarray.c
Martin Wiechert wrote:
On Friday 26 January 2007 21:03, Robert Kern wrote:
Martin Wiechert wrote:
Hi gurus,
is it (in C) safe to deallocate an array of type NPY_OBJECT, which
carries NULL pointers?
Possibly, I'm not sure without doing some more code-diving. However, I
strongly
Fernando Perez wrote:
Hi all,
I'm puzzled by this behavior a colleague ran into:
In [38]: p1=N.poly1d([1.])
In [39]: a=N.array([p1],dtype='O')
In [40]: a
Out[40]: array([], shape=(1, 0), dtype=object)
In [42]: print a
[]
Stefan correctly identified the problem. The
BBands wrote:
If I have a NumPy array like so:
[[1, 12],
[2, 13],
[3, 14],
[4, 15],
[5, 16],
[6, 15],
[7, 14]]
How can I do an inplace diff, ending up with this?
[[1, 0],
[2, 1],
[3, 1],
[4, 1],
[5, 1],
[6, -1],
[7, -1]]
Probably can be done (but it's a bit tricky because you
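One way it can be done (a sketch; the right-hand side below is evaluated into a temporary before assignment, so the overlapping slice is safe):

import numpy as np

a = np.array([[1, 12], [2, 13], [3, 14], [4, 15],
              [5, 16], [6, 15], [7, 14]])

a[1:, 1] = a[1:, 1] - a[:-1, 1]  # first differences of the second column
a[0, 1] = 0                      # no predecessor for the first row
print(a)                         # matches the desired output above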
Crider, Joseph A wrote:
I am still clueless as to why numpy.numarray will not build for numpy
1.0 and later. As I am currently not using much from either numpy or
scipy, but do want to be able to use some of the 1.0 features
(especially the order keyword for sort on record arrays), I've decided
Martin Spacek wrote:
Hello,
I just upgraded from numpy 1.0b5 to 1.0.1, and I noticed that a part of
my code that was using concatenate() was suddenly far slower. I
downgraded to 1.0, and the slowdown disappeared. Here's the code
and the profiler results for 1.0 and 1.0.1:
I have not
Neal Becker wrote:
I believe we are converging, and this is pretty much the same design as I
advocated. It is similar to boost::ublas.
I'm grateful to hear that. It is nice when ideas come from several
different corners.
Storage is one concept.
Interpretation of the storage is another
I'm concerned about the direction that this PEP seems to be going. The
original proposal was borderline too complicated IMO, and now it seems
headed in the direction of more complexity.
Well at least people are talking about what they would like to see.
But, I think we should rein in
Christopher Barker wrote:
Now, it is exposed in the concept of an array iterator. Anybody
can take advantage of it, as there is a C-API call to get an array
iterator from the array.
Is this iterator as efficient as incrementing the index for contiguous
arrays? i.e. is there any point
Timothy Hochberg wrote:
On 1/11/07, *Christopher Barker* [EMAIL PROTECTED] wrote:
[CHOP]
I'd still like to know if anyone knows how to efficiently loop through
all the elements of a non-contiguous array in C.
First, it's not that important if the
Christian Marquardt wrote:
Very nice!
one-line summary not using variable names or the function name
I would personally prefer that the first line contains the function name (but
not arguments) - that way, when you are scrolling back in your terminal,
or have a version printed out, you
Alan G Isaac wrote:
On Mon, 8 Jan 2007, Sebastian Haase apparently wrote:
Please explain again what the original decision was based
on.
I think the real questions are:
what do the numpy developers want in the future,
and what is the right path from here to there?
I remember
Stefan van der Walt wrote:
On Wed, Jan 03, 2007 at 05:12:40PM -0600, eric jones wrote:
Thanks for the update. For now, I'll try doing what I need to by
sub-classing float. But, I'm gonna miss __array_finalize__ :-).
Looks like r3493 is intended to fix this. The 'view' method
Tim Hochberg wrote:
A. M. Archibald wrote:
[SNIP]
Really it would be nice if what vectorize() returned were effectively
a ufunc, supporting all the various operations we might want from a
ufunc (albeit inefficiently). This should not be difficult, but I am
not up to writing it this evening.
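For comparison, what vectorize() already returns (a sketch): it broadcasts like a ufunc but loops in Python and lacks ufunc methods such as reduce and accumulate.

import numpy as np

def step(x, threshold):
    return 1.0 if x > threshold else 0.0

vstep = np.vectorize(step)          # ufunc-like wrapper around a scalar func
print(vstep([0.1, 0.5, 0.9], 0.4))  # [0. 1. 1.] -- broadcasts, but slowly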
purpose is to get feedback and criticisms from this community before
presenting it to the larger Python community.
-Travis
PEP: unassigned
Title: Extending the buffer protocol to include the array interface
Version: $Revision: $
Last-Modified: $Date: $
Author: Travis Oliphant [EMAIL PROTECTED]
Status
Christopher Barker wrote:
Travis,
First, thanks for doing this -- Python really needs it!
While this approach
works, it requires attribute lookups which can be expensive when
sharing many small arrays.
Ah, I do like reducing that overhead -- I know I use arrays a lot
David Cournapeau wrote:
On 12/22/06, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
I've been thinking about that a bit. One solution is to have a small
python program that takes all the pieces and writes one big build file,
I think something like that happens now.
Christopher Barker wrote:
It can be a pain to build this kind of thing on OS-X, as Apple has not
supported a Fortran compiler yet, but it can (and has) been done. In
fact, the Mac is a great target for pre-built binaries as there is only
a small variety of hardware to support, and Apple
Sven Schreiber wrote:
Robert Kern schrieb:
Pierre GM wrote:
So, to put it pointedly (if that's the right word...?):
Numpy should not get small functions from scipy - because the size of
scipy doesn't matter - because scipy's modules will be installable as
add-ons separately (and
Mark Janikas wrote:
Thanks for all the input so far. The only thing that seems odd about
the omission of probability or quantile functions in NumPy is that all
the random number generators are present in RandomArray.
A big part of the issue is that getting many of those pdfs into NumPy
would
My question is then: is there any plan to change this? If not, is
this for some reason I don't see, or is this just because of lack of
manpower?
I raised the possibility of breaking up the files before and Travis
was agreeable to the idea. It is still in the back of my
David Cournapeau wrote:
When I went back home, I started taking a close look at the numpy/core C
sources, with the help of the numpy ebook. The huge source files make it
really difficult for me to follow some things: I was wondering if there
is some rationale behind it, or if this is just a
Francesc Altet wrote:
seems to tell us that memmove/memcpy are not called at all, but
instead the DOUBLE_copyswap function. This is in fact only apparent,
because if we look at the code of DOUBLE_copyswap (found in
arraytypes.inc.src):
@TYPE@_copyswap (void *dst, void *src, int swap, void
Gennan Chen wrote:
Hi!
I have a problem with this function call under FC6 X86_64 for my own
numpy extension
printf("\n %d %d %d",
       PyArray_DIM(imgi,0), PyArray_DIM(imgi,1), PyArray_DIM(imgi,2));
it gave me
166 256 256
if I tried:
int *dim;
dim = PyArray_DIMS(imgi);
printf("\n %d %d %d",
David Cournapeau wrote:
Robert Kern wrote:
Looking at the code, it's certainly not surprising that the current
implementation of clip() is slow. It is a direct numpy C API translation of
the
following (taken from numarray, but it is the same in Numeric):
def clip(m, m_min, m_max):
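The definition is cut off above; an equivalent minimal sketch of what such a clip does (not the verbatim numarray code):

import numpy as np

def clip_sketch(m, m_min, m_max):
    # Two passes of where(): raise values to m_min, then cap them at m_max.
    m = np.where(np.less(m, m_min), m_min, m)
    return np.where(np.greater(m, m_max), m_max, m)

print(clip_sketch(np.arange(6), 1, 4))  # [1 1 2 3 4 4]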
Robert Kern wrote:
David Cournapeau wrote:
Basically, at least from those figures, both versions are pretty
similar, and not worth improving much anyway for matplotlib. There is
something funny with numpy version, though.
Looking at the code, it's certainly not surprising that
Wei wrote:
Hi,
I just got my new intel core duo laptop. So I downloaded the new
cygwin (including everything) but couldn’t get the numarray or numpy
modules installed correctly. I always got the following error. Can
some one help?
Many thanks!
Wei
python setup.py install
Using