On 3/24/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote:
I mentioned in another thread Travis started on the scipy list that I
would find it useful if there were a function like dot() that could
multiply more than just two things.
Here's a
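A minimal sketch of what such an mdot might look like, using functools.reduce; the nested-tuple grouping and the shapes below are illustrative assumptions, not the code from the original message:

```python
from functools import reduce
import numpy as np

def mdot(*args):
    """dot() over any number of operands, left to right; a nested
    tuple or list is multiplied first, so mdot(a, (b, c), d)
    computes dot(dot(a, dot(b, c)), d)."""
    arrays = [mdot(*a) if isinstance(a, (tuple, list)) else a
              for a in args]
    return reduce(np.dot, arrays)

a = np.ones((2, 3)); b = np.ones((3, 4))
c = np.ones((4, 5)); d = np.ones((5, 2))
print(mdot(a, (b, c), d).shape)  # (2, 2)
```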
On 3/24/07, Charles R Harris [EMAIL PROTECTED] wrote:
On 3/23/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 23/03/07, Charles R Harris [EMAIL PROTECTED] wrote:
Anyone,
What is the easiest way to detect in Python/C if an object is a
subclass of ndarray?
Um, how about
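The reply is truncated here; the usual Python-side answer is isinstance/issubclass, sketched below with a hypothetical subclass (on the C side the analogous check is the PyArray_Check macro):

```python
import numpy as np

class MyArray(np.ndarray):
    # minimal ndarray subclass, just for the demonstration
    pass

m = np.zeros(3).view(MyArray)

print(isinstance(m, np.ndarray))        # True: checks the instance
print(issubclass(MyArray, np.ndarray))  # True: checks the class
```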
On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote:
Nice, but how does that fare on things like mdot(a,(b,c),d) ? I'm
pretty sure it doesn't handle it.
I think an mdot that can only multiply things left to right comes up
short compared to an infix operator that can easily use parentheses to
But how about things like
a = dot(array([8]), ones([1000,1000]), array([15]))?
It will be much faster if we dot 8 x 15 first, and then apply the
result to the big array.
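The cost difference from grouping can be seen with a small sketch (shapes chosen for illustration, not taken from the message above):

```python
import numpy as np

A = np.ones((1000, 1))
B = np.ones((1, 1000))
C = np.ones((1000, 1))

# dot(dot(A, B), C) materializes a 1000x1000 intermediate
# (~10**6 multiplies, twice); dot(A, dot(B, C)) collapses
# B and C to a 1x1 first (~10**3 multiplies, twice).
slow = np.dot(np.dot(A, B), C)
fast = np.dot(A, np.dot(B, C))
print(np.allclose(slow, fast))  # True: same value, very different cost
```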
D.
Anne Archibald wrote:
On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote:
Nice, but how does that fare on things
Anne Archibald wrote:
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.
It looks like reduce(), map(), and filter() are going away for Python
3.0 since GvR believes that they are redundant and list comprehensions
and generator expressions are more
On Fri, 23 Mar 2007, Charles R Harris apparently wrote:
the following gives the wrong result:
In [15]: I = matrix(eye(2))
In [16]: I*ones(2)
Out[16]: matrix([[ 1., 1.]])
where the output should be a column vector.
Why should this output a column?
I would prefer an exception.
Add the axis
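For plain ndarrays (as opposed to the matrix class under debate), dot's handling of 1-d operands is unambiguous; a quick illustration:

```python
import numpy as np

A = np.eye(2)
v = np.ones(2)   # 1-d: neither a row nor a column

# dot() treats a 1-d operand as a column on the right and as a
# row on the left, and the result stays 1-d either way:
print(np.dot(A, v))  # [1. 1.]
print(np.dot(v, A))  # [1. 1.]
```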
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Steven H. Rogers apparently wrote:
It looks like reduce(), map(), and filter() are going away for Python
3.0 since GvR believes that they are redundant and list comprehensions
and generator expressions are more readable alternatives. lambda was on
On 3/24/07, Alan G Isaac [EMAIL PROTECTED] wrote:
On Fri, 23 Mar 2007, Charles R Harris apparently wrote:
the following gives the wrong result:
In [15]: I = matrix(eye(2))
In [16]: I*ones(2)
Out[16]: matrix([[ 1., 1.]])
where the output should be a column vector.
Why should this output a
On 3/24/07, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sat, 24 Mar 2007, Charles R Harris apparently wrote:
I think it is reasonable to raise an exception in this
case, but that is not how numpy currently works, so it is
a larger policy decision that I can't make on my own. For
the case under
On 3/24/07, Bill Spotz [EMAIL PROTECTED] wrote:
No, I don't consider native byte order. What was your solution?
I think there is only one solution:
If somebody requests INPLACE handling but provides data of the
wrong byte order, I have to throw an exception -- in SWIG terms, the
typecheck returns
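A minimal Python-level sketch of that check (the names here are illustrative; the real typecheck lives in the SWIG typemaps):

```python
import numpy as np

a = np.arange(4, dtype=np.float64)          # native byte order
swapped = a.astype(a.dtype.newbyteorder())  # opposite byte order

def require_native(arr):
    # What an INPLACE typecheck has to do: refuse non-native data,
    # since the wrapped code would read the bytes in the wrong order.
    if not arr.dtype.isnative:
        raise TypeError("in-place handling requires native byte order")
    return arr

require_native(a)        # passes
try:
    require_native(swapped)
except TypeError as e:
    print(e)
```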
Charles R Harris wrote:
On 3/24/07, *Alan G Isaac* [EMAIL PROTECTED] wrote:
On Fri, 23 Mar 2007, Charles R Harris apparently wrote:
the following gives the wrong result:
In [15]: I = matrix(eye(2))
In [16]: I*ones(2)
Out[16]:
On 3/24/07, Steven H. Rogers [EMAIL PROTECTED] wrote:
Anne Archibald wrote:
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.
It looks like reduce(), map(), and filter() are going away for Python
3.0 since GvR believes that they are redundant
On Mar 24, 2007, at 2:52 PM, Bill Baxter wrote:
On 3/24/07, Steven H. Rogers [EMAIL PROTECTED] wrote:
Anne Archibald wrote:
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.
It looks like reduce(), map(), and filter() are going away for Python
On 3/24/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote:
Nice, but how does that fare on things like mdot(a,(b,c),d) ? I'm
pretty sure it doesn't handle it.
I think an mdot that can only multiply things left to right comes up
short compared
On 3/24/07, Colin J. Williams [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
On 3/24/07, *Alan G Isaac* [EMAIL PROTECTED] wrote:
On Fri, 23 Mar 2007, Charles R Harris apparently wrote:
the following gives the wrong result:
In [15]: I =
On 3/25/07, Perry Greenfield [EMAIL PROTECTED] wrote:
On Mar 24, 2007, at 2:52 PM, Bill Baxter wrote:
On 3/24/07, Steven H. Rogers [EMAIL PROTECTED] wrote:
Anne Archibald wrote:
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.
It looks
Perry Greenfield wrote:
On Mar 24, 2007, at 2:52 PM, Bill Baxter wrote:
On 3/24/07, Steven H. Rogers [EMAIL PROTECTED] wrote:
Anne Archibald wrote:
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.
It looks like reduce(), map(), and filter() are
On Sun, 25 Mar 2007, Bill Baxter apparently wrote:
So if one just
changes the example to
reduce(lambda s, a: s * a.myattr, data, 1)
How does one write that in a simplified way using generator
expressions without calling on reduce?
Eliminating the expressiveness of ``reduce`` has in
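A concrete version of the comparison (``myattr`` and the ``Item`` class are hypothetical stand-ins from the example above, not real numpy API):

```python
from functools import reduce  # builtin in Python 2; functools in Python 3

class Item:
    def __init__(self, myattr):
        self.myattr = myattr  # the hypothetical attribute from the example

data = [Item(2), Item(3), Item(4)]

# the reduce form from the example above:
prod_reduce = reduce(lambda s, a: s * a.myattr, data, 1)

# without reduce, the rewrite is an explicit loop; a generator
# expression alone has no way to carry the running product:
prod_loop = 1
for a in data:
    prod_loop *= a.myattr

print(prod_reduce, prod_loop)  # 24 24
```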
On 3/25/07, Steven H. Rogers [EMAIL PROTECTED] wrote:
The generator expression PEP doesn't say this, but the Python 3000
planning PEP (http://www.python.org/dev/peps/pep-3100/) has map() and
filter() on the 'to-be-removed' list with a parenthetic comment that
they can stay. Removal of
Every so often the idea of new operators comes up because of the need to
do both matrix-multiplication and element-by-element multiplication.
I think this is one area where the current Python approach is not as
nice because we have a limited set of operators to work with.
One thing I wonder is
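The two products the thread keeps distinguishing, side by side:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a * b)         # element-by-element: 5, 12 / 21, 32
print(np.dot(a, b))  # matrix product:     19, 22 / 43, 50
```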
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Charles R Harris apparently wrote:
Yes, that is what I am thinking. Given that there are only the two
possibilities, row or column, choose the only one that is compatible with
the multiplying matrix. The result will not always be a column vector,
Hello folks,
Hmm, this is worrisome. There really shouldn't be ringing on
continuous-tone images like Lena -- right? (And at no step in an
image like that should gaussian filtering be necessary if you're
doing spline interpolation -- also right?)
That's hard to say. Just because it's
On Sat, 24 Mar 2007, Travis Oliphant apparently wrote:
My opinion is that a 1-d array in matrix-multiplication
should always be interpreted as a row vector. Is this not
what is currently done? If not, then it is a bug in my
mind.
N.__version__
'1.0'
I
matrix([[ 1.,  0.],
        [ 0.,  1.]])
Charles R Harris wrote:
On 3/24/07, *Travis Oliphant* [EMAIL PROTECTED] wrote:
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Charles R Harris apparently wrote:
Yes, that is what I am thinking. Given that there are only the two
possibilities,
Alan G Isaac wrote:
On Sat, 24 Mar 2007, Travis Oliphant apparently wrote:
My opinion is that a 1-d array in matrix-multiplication
should always be interpreted as a row vector. Is this not
what is currently done? If not, then it is a bug in my
mind.
N.__version__
On Sat, 24 Mar 2007, Travis Oliphant apparently wrote:
I'd be fine with an error raised on matrix multiplication
(as long as dot is not changed). In other words, I'd
like to see 1-d arrays always interpreted the same way (as
row vectors) when used in matrix multiplication.
My
On 24/03/07, Travis Oliphant [EMAIL PROTECTED] wrote:
My opinion is that a 1-d array in matrix-multiplication should always be
interpreted as a row vector. Is this not what is currently done? If
not, then it is a bug in my mind.
An alternative approach, in line with the usual usage, is
Hi,
I followed the discussion on the scipy ML, and I would advocate it as well.
I miss the dichotomy that is present in Matlab (* versus .*), and it
would be good to have a similar degree of freedom in the upcoming major
release of Python.
Matthieu
2007/3/24, Travis Oliphant [EMAIL PROTECTED]:
On 3/24/07, Travis Oliphant [EMAIL PROTECTED] wrote:
Every so often the idea of new operators comes up because of the need to
do both matrix-multiplication and element-by-element multiplication.
I think this is one area where the current Python approach is not as
nice because we have a limited
Bill Baxter wrote:
On 3/24/07, Anne Archibald [EMAIL PROTECTED] wrote:
You could do this, and for your
own code maybe it's worth it, but I think it would be confusing in the
library.
Could be. Doesn't seem so confusing to me as long as it's documented
clearly in the docstring, but YMMV.
On 3/25/07, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sun, 25 Mar 2007, Bill Baxter apparently wrote:
So if one just
changes the example to
reduce(lambda s, a: s * a.myattr, data, 1)
How does one write that in a simplified way using generator
expressions without calling on reduce?
Bill Baxter wrote:
I think it's fine for filter()/reduce()/map() to be taken out of
builtins and moved to a standard module, but it's not clear that
that's what they're going to do. That py3K web page just says remove
reduce()... done.
On Sat, Mar 24, 2007 at 01:41:21AM -0400, James Turner wrote:
That's hard to say. Just because it's mainly a continuous-tone image
doesn't necessarily mean it is well sampled everywhere. This depends
both on the subject and the camera optics. Unlike the data I usually
work with, I think
On 24/03/07, Charles R Harris [EMAIL PROTECTED] wrote:
Yes indeed, this is an old complaint. Just having an infix operator would be
an improvement:
A dot B dot C
Not that I am suggesting dot in this regard ;) In particular, it wouldn't
parse without spaces. What about division? Matlab has
On 3/25/07, Robert Kern [EMAIL PROTECTED] wrote:
Bill Baxter wrote:
I think it's fine for filter()/reduce()/map() to be taken out of
builtins and moved to a standard module, but it's not clear that
that's what they're going to do. That py3K web page just says remove
reduce()... done.
On 3/25/07, Robert Kern [EMAIL PROTECTED] wrote:
Bill Baxter wrote:
I don't know. Given our previous history with convenience functions with
different calling semantics (anyone remember rand()?), I think it
probably will confuse some people.
I'd really like to see it on a cookbook page,
On Sat, 24 Mar 2007, Anne Archibald apparently wrote:
Note that taking a vector and left-multiplying it by
a matrix is a very natural operation that won't work any
more if you treat all vectors as if they were row vectors.
Can you be more specific on this naturalness?
What is the cost of
On Sat, Mar 24, 2007 at 03:25:38PM -0700, Zachary Pincus wrote:
If Lena is converted to floating-point before the rotation is
applied, and then the intensity range is clipped to [0,255] and
converted back to uint8 before saving, everything looks fine.
Thanks, Zachary! I can confirm that.
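The wrap-around that the clipping step prevents can be shown in isolation (the overshoot values below are made up for illustration):

```python
import numpy as np

# Spline interpolation can overshoot the input range slightly.  Cast
# straight back to uint8, the overshoot wraps around mod 256 and shows
# up as speckle; clip to [0, 255] first and it does not:
overshoot = np.array([-3, 10, 260])
print(overshoot.astype(np.uint8))                   # [253  10   4]
print(np.clip(overshoot, 0, 255).astype(np.uint8))  # [  0  10 255]
```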
On 3/24/07, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sat, 24 Mar 2007, Anne Archibald apparently wrote:
Note that taking a vector and left-multiplying it by
a matrix is a very natural operation that won't work any
more if you treat all vectors as if they were row vectors.
Can you be more
Hi Zach,
Based on my reading of the two excellent Unser papers (both the one
that ndimage's spline code is based on, and the one that Travis
linked to), it seems like a major point of using splines for
interpolation is *better* behavior in the case of non-band-limited
data than the
In [10]: isscalar('hello world')
Out[10]: True
Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
Charles R Harris wrote:
In [10]: isscalar('hello world')
Out[10]: True
As far as numpy is concerned, they are.
>>> from numpy import *
>>> array('hello world')
array('hello world',
      dtype='|S11')
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless enigma
that
Charles R Harris wrote:
In [10]: isscalar('hello world')
Out[10]: True
I would say that, intrinsically, yes, strings are constructed as sequences of
other things. However, essentially every use case I have for *testing* whether
or not something is a sequence (or inversely, a scalar), I want
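One common way to write such a test (``is_sequence`` is an illustrative helper, not a numpy function):

```python
import numpy as np

print(np.isscalar('hello world'))  # True: numpy counts strings as scalars
print(np.isscalar(3.14))           # True
print(np.isscalar([1, 2, 3]))      # False

# a test that treats strings as scalars rather than sequences:
def is_sequence(obj):
    return hasattr(obj, '__len__') and not isinstance(obj, str)

print(is_sequence('hello world'), is_sequence([1, 2, 3]))  # False True
```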
Hi Stéfan,
Agreed, but the aliasing effect isn't the problem here, as it
should be visible in the input image as well.
It's a bit academic now that Zach seems to have found the answer, but
I don't think this is true. Aliasing is *present* in the input image,
but is simply manifested as
On 3/24/07, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
In [10]: isscalar('hello world')
Out[10]: True
I would say that, intrinsically, yes, strings are constructed as
sequences of other things. However, essentially every use case I have
for *testing* whether or not
On 3/24/07, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
In [10]: isscalar('hello world')
Out[10]: True
As far as numpy is concerned, they are.
>>> from numpy import *
>>> array('hello world')
array('hello world',
      dtype='|S11')
Dunno, by that reasoning objects could also