2008/6/26 Stéfan van der Walt [EMAIL PROTECTED]:
Hi all,
We are busy writing several documents that address general NumPy
topics, such as indexing, broadcasting, testing, etc. I would like
for those documents to be available inside of NumPy, so that they
could be accessed as docstrings:
2008/6/24 Bob Dowling [EMAIL PROTECTED]:
There is not supposed to be a one-to-one correspondence between the
functions in numpy and the methods on an ndarray. There is some
duplication between the two, but that is not a reason to make more
duplication.
I would make a plea for consistency, to
2008/6/23 Alan McIntyre [EMAIL PROTECTED]:
On Mon, Jun 23, 2008 at 3:21 PM, Stéfan van der Walt [EMAIL PROTECTED]
wrote:
Another alternative is to replace +SKIP with something like +IGNORE.
That way, the statement is still executed, we just don't care about
its outcome. If we skip the line
2008/6/23 Michael McNeil Forbes [EMAIL PROTECTED]:
On 23 Jun 2008, at 12:37 PM, Alan McIntyre wrote:
Ugh. That just seems like a lot of unreadable ugliness to me. If
this comment magic is the only way to make that stuff execute
properly
under doctest, I think I'd rather just skip it in
2008/6/23 Stéfan van der Walt [EMAIL PROTECTED]:
2008/6/24 Stéfan van der Walt [EMAIL PROTECTED]:
It should be fairly easy to execute the example code, just to make
sure it runs. We can always work out a scheme to test its validity
later.
Mike Hansen just explained to me that the Sage
2008/6/23 Michael Abshoff [EMAIL PROTECTED]:
Charles R Harris wrote:
On Mon, Jun 23, 2008 at 5:58 PM, Michael Abshoff
[EMAIL PROTECTED]
wrote:
Stéfan van der Walt wrote:
2008/6/24 Stéfan van der Walt [EMAIL PROTECTED]:
2008/6/21 Dag Sverre Seljebotn [EMAIL PROTECTED]:
Dag wrote:
General feedback is welcome; in particular, I need more opinions about
what syntax people would like. We seem unable to find something that we
really like; this is the current best candidate (cdef is the way you
declare types on
2008/6/21 Robert Kern [EMAIL PROTECTED]:
On Sat, Jun 21, 2008 at 17:08, Anne Archibald [EMAIL PROTECTED] wrote:
My suggestion is this: allow negative indices, accepting the cost in
tight loops. (If bounds checking is enabled, the cost will be
negligible anyway.) Provide a #pragma allowing
2008/6/18 Charles R Harris [EMAIL PROTECTED]:
On Wed, Jun 18, 2008 at 11:48 AM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/6/18 Stéfan van der Walt [EMAIL PROTECTED]:
2008/6/18 Anne Archibald [EMAIL PROTECTED]:
In [7]: x.take(x.argsort())
Out[7]: array([ 0. , 0.1, 0.2, 0.3
2008/6/18 bevan [EMAIL PROTECTED]:
Hello,
I am looking for some pointers that will hopefully get me around an issue I
have hit.
I have a timeseries of river flow and would like to carry out some analysis
on the recession periods, that is, any time the values are decreasing. I
would like
2008/6/18 Thomas J. Duck [EMAIL PROTECTED]:
I have found what I think is some strange behavior for argsort()
and take(). First, here is an example that works as expected:
x = numpy.array([1,0,3,2])
x.argsort()
array([1, 0, 3, 2])
argsort() returns the original
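The "strange" result follows from argsort returning the indices that would sort the array; a minimal sketch of the example above (variable names are mine):

```python
import numpy as np

x = np.array([1, 0, 3, 2])
i = x.argsort()                     # indices that would sort x
assert (i == [1, 0, 3, 2]).all()    # happens to equal x itself here
assert (x.take(i) == np.sort(x)).all()
```

The coincidence that x.argsort() equals x is what makes this example look circular: [1, 0, 3, 2] is a permutation that is its own inverse.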
2008/6/17 Alan McIntyre [EMAIL PROTECTED]:
On Tue, Jun 17, 2008 at 8:15 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
Uh, I assumed NumpyTestCase was public and used it. I'm presumably not
alone, so perhaps a deprecation warning would be good. What
backward-compatible class should I use
2008/6/18 Stéfan van der Walt [EMAIL PROTECTED]:
2008/6/18 Anne Archibald [EMAIL PROTECTED]:
In [7]: x.take(x.argsort())
Out[7]: array([ 0. , 0.1, 0.2, 0.3])
If you would like to think of it more mathematically, when you feed
np.argsort() an array that represents a permutation
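The mathematical view mentioned above can be made concrete: when the input represents a permutation, np.argsort() returns its inverse. A small sketch (names mine):

```python
import numpy as np

p = np.array([2, 0, 3, 1])   # a permutation of 0..3
inv = np.argsort(p)          # argsort of a permutation is its inverse
assert (p[inv] == np.arange(4)).all()
assert (inv[p] == np.arange(4)).all()
```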
2008/6/18 Stéfan van der Walt [EMAIL PROTECTED]:
2008/6/18 Anne Archibald [EMAIL PROTECTED]:
Well, probably. But more so for those that are used widely throughout
numpy itself, since many of us learn how to write code using numpy by
reading numpy source. (Yes, this means that internal
2008/6/17 Alan McIntyre [EMAIL PROTECTED]:
On Tue, Jun 17, 2008 at 9:26 AM, David Huard [EMAIL PROTECTED] wrote:
I noticed that NumpyTest and NumpyTestCase disappeared, and now I am
wondering whether these classes were part of the public interface or
reserved for internal usage?
In the
2008/6/16 Chandler Latour [EMAIL PROTECTED]:
I believe I'm bound to python.
In terms of forcing the regression through the origin, the purpose is partly
for visualization but it also should fit the data. It would not make sense
to model the data with an initial value other than 0.
Polyfit is
2008/6/10 Simon Palmer [EMAIL PROTECTED]:
Hi I have a problem which involves the creation of a large square matrix
which is zero across its diagonal and symmetrical about the diagonal i.e.
m[i,j] = m[j,i] and m[i,i] = 0. So, in fact, it is a large triangular
matrix. I was wondering whether
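One hedged sketch of the situation described above (the pairwise-distance construction and the condensed triangular storage are my illustration, not something the poster specified):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((5, 2))

# Full square form: m[i, j] == m[j, i] and m[i, i] == 0
diff = pts[:, None, :] - pts[None, :, :]
m = np.sqrt((diff ** 2).sum(axis=-1))
assert np.allclose(m, m.T)
assert np.allclose(np.diag(m), 0)

# Because the matrix is symmetric with a zero diagonal, only the
# strict upper triangle actually needs to be stored.
iu = np.triu_indices(5, k=1)
condensed = m[iu]            # n*(n-1)/2 values
assert condensed.size == 10
```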
2008/6/9 Keith Goodman [EMAIL PROTECTED]:
Does anyone have a function that converts ranks into a Gaussian?
I have an array x:
import numpy as np
x = np.random.rand(5)
I rank it:
x_ranked = x.argsort().argsort()
x_ranked
array([3, 1, 4, 2, 0])
I would like
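A hedged sketch of one way to finish the job: map the ranks through the inverse normal CDF. The (rank + 1) / (n + 1) plotting position is one common convention, not something the thread fixes:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
x = rng.random(5)
ranks = x.argsort().argsort()        # 0 = smallest, n-1 = largest
n = len(x)
quantiles = (ranks + 1) / (n + 1)    # strictly inside (0, 1)
gauss = np.array([NormalDist().inv_cdf(q) for q in quantiles])
# Order is preserved: the largest x gets the largest Gaussian score.
assert (np.argsort(gauss) == np.argsort(x)).all()
```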
2008/6/7 Keith Goodman [EMAIL PROTECTED]:
On Fri, Jun 6, 2008 at 10:46 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/6/6 Keith Goodman [EMAIL PROTECTED]:
I'd like to shift the columns of a 2d array one column to the right.
Is there a way to do that without making a copy?
This doesn't
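A hedged sketch of the in-place option (whether this counts as "no copy" is debatable: the right-hand side is copied, because relying on overlapping-slice assignment is unsafe):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
a[:, 1:] = a[:, :-1].copy()   # copy the RHS: the slices overlap
a[:, 0] = 0                   # fill the vacated first column
assert (a[0] == [0, 0, 1, 2]).all()
```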
2008/6/7 Robert Kern [EMAIL PROTECTED]:
On Sat, Jun 7, 2008 at 14:37, Ondrej Certik [EMAIL PROTECTED] wrote:
Hi,
what is the current plan with array and matrix with regard to calculating
sin(A)? I.e. elementwise vs sin(A) = Q*sin(D)*Q^T? Is the current approach
(elementwise for array and
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
So, I have three questions about this:
1) Why is mmap being called in the first place? I've written to Travis
Oliphant, and he's explained that numpy.inner does NOT directly do any
memory
mapping and shouldn't call mmap. Instead, it should just
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
Anne, thanks so much for your help. I'm still a little confused. If your
scenario about how the memory allocation is working is right, does that mean
that even if I put a lot of ram on the machine, e.g. 16GB, I still can't
request it in blocks larger
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
Try
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit python/numpy.
Chuck
Check, the answer is 4, as you got for the 32-bit. What would the
2008/6/3 Robert Kern [EMAIL PROTECTED]:
Python does not copy data when you assign something to a new variable.
Python simply points the new name to the same object. If you modify
the object using the new name, all of the other names pointing to that
object will see the changes. If you want a
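A minimal illustration of the point:

```python
import numpy as np

a = np.zeros(3)
b = a            # no data copied: b is another name for the same array
b[0] = 7
assert a[0] == 7
c = a.copy()     # an explicit copy is independent
c[1] = 5
assert a[1] == 0
```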
Hi,
I thought it might be useful to summarize the different ways to use
numpy's indexing, slice and fancy. The document so far is here:
http://www.scipy.org/Cookbook/Indexing
While writing it I ran into some puzzling issues. The first of them
is, how is Boolean indexing supposed to work when
2008/5/31 Pauli Virtanen [EMAIL PROTECTED]:
The reason for the strange behavior of slice assignment is that when the
left and right sides in a slice assignment are overlapping views of the
same array, the result is currently effectively undefined. Same is true
for ndarrays:
import numpy
a
2008/5/29 Jarrod Millman [EMAIL PROTECTED]:
On Thu, May 29, 2008 at 3:28 PM, Stéfan van der Walt [EMAIL PROTECTED]
wrote:
The NumPy documentation project has taken another leap forward! Pauli
Virtanen has, in a week of superhuman coding, produced a web
application that enhances the
2008/5/28 Charles R Harris [EMAIL PROTECTED]:
Hi All,
Currently we have:
In [2]: ones(1,dtype=int8) + ones(1,dtype=uint8)
Out[2]: array([2], dtype=int16)
In [4]: ones(1,dtype=int64) + ones(1,dtype=uint64)
Out[4]: array([2], dtype=object)
Note the increased size in the first case and the
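The first promotion can still be checked today; the second case (int64 + uint64 yielding object) was version-specific, and recent NumPy gives float64 there instead, so only the int8/uint8 case is asserted here:

```python
import numpy as np

# Neither int8 nor uint8 can represent the other's full range, so the
# result is promoted to the smallest signed type that covers both: int16.
r = np.ones(1, dtype=np.int8) + np.ones(1, dtype=np.uint8)
assert r.dtype == np.int16
assert r[0] == 2
```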
2008/5/28 Alan McIntyre [EMAIL PROTECTED]:
On Wed, May 28, 2008 at 3:34 PM, Charles R Harris wrote:
I wonder if this is something that ought to be looked at for all
functions with an out parameter? ndarray.compress also had problems
with array type mismatch (#789); I can't imagine that it's safe to
2008/5/27 Robert Kern [EMAIL PROTECTED]:
Can we make it so that dtype('c') is preserved instead of displaying
'|S1'? It does not behave the same as dtype('|S1') although it
compares equal to it.
It seems alarming to me that they should compare equal but behave
differently. Is it possible to
2008/5/24 Jarrod Millman [EMAIL PROTECTED]:
On Thu, May 22, 2008 at 11:25 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:10 AM, Charles R Harris
[EMAIL PROTECTED] wrote:
The python 2.6 buildbots are showing 5 failures that are being hidden by
valgrind.
They seem
2008/5/24 Charles R Harris [EMAIL PROTECTED]:
I take that back, I got confused looking through the output. The errors are
the same and only seem to happen when valgrind runs the tests.
Sounds like maybe valgrind is not IEEE clean:
As of version 3.0.0, Valgrind has the following limitations
2008/5/24 Jarrod Millman [EMAIL PROTECTED]:
On Fri, May 23, 2008 at 11:16 PM, David Cournapeau
[EMAIL PROTECTED] wrote:
Where do those errors appear ? I don't see them on the buildbot. Are they
2.6 specific ? If yes, I would say ignore them, because 2.6 is not
released yet, and is scheduled
2008/5/21 Vincent Schut [EMAIL PROTECTED]:
Christopher Barker wrote:
Also, if your image data is rgb, usually, that's a (width, height, 3)
array: rgbrgbrgbrgb... in memory. If you have a (3, width, height)
array, then that's rrr... Some image libs
may give you
2008/5/21 LB [EMAIL PROTECTED]:
This is really great news, and it seems very promising according to
the first pages of the Wiki that I've seen.
It's perhaps not the right place to say this, but I was wondering what
you would think about adding labels or categories to the descriptions
of
2008/5/20 Vasileios Gkinis [EMAIL PROTECTED]:
I have a question concerning nan in NumPy.
Let's say I have an array of sample measurements
a = array((2,4,nan))
in NumPy calculating the mean of the elements in array a looks like:
a = array((2,4,nan))
a
array([ 2., 4., NaN])
mean(a)
nan
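Plain mean propagates the NaN; later NumPy versions added np.nanmean to ignore it (in 2008 the masked-array route below was the usual answer):

```python
import numpy as np

a = np.array([2.0, 4.0, np.nan])
assert np.isnan(a.mean())          # NaN poisons the ordinary mean
assert np.nanmean(a) == 3.0        # nanmean skips the NaN
# The masked-array route works too:
m = np.ma.masked_invalid(a)
assert m.mean() == 3.0
```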
2008/5/20 Thomas Hrabe [EMAIL PROTECTED]:
given a 3d array
a =
numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]],[[19,20,21],[22,23,24]]])
a.shape
returns (4,2,3)
so I assume the first digit is the 3rd dimension, second is 2nd dim and
third is the first.
how is
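Axis order can be checked directly: shape[0] is the outermost axis, i.e. how many (2, 3) blocks the array holds:

```python
import numpy as np

a = np.arange(24).reshape(4, 2, 3)
assert a.shape == (4, 2, 3)
assert a[0].shape == (2, 3)   # a[i] is one of the 4 outer blocks
assert a[0, 1, 2] == 5        # row 1, column 2 of the first block
```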
2008/5/19 Orest Kozyar [EMAIL PROTECTED]:
Given a slice, such as s_[..., :-2:], is it possible to take the
complement of this slice? Specifically, s_[..., ::-2]. I have a
series of 2D arrays that I need to split into two subarrays via
slicing where the members of the second array are all the
2008/5/19 Orest Kozyar [EMAIL PROTECTED]:
If you don't mind fancy indexing, you can convert your index arrays
into boolean form:
complement = A==A
complement[idx] = False
This actually would work perfectly for my purposes. I don't really
need super-fancy indexing.
Heh. Actually fancy
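The boolean-complement trick sketched above, written with np.ones(..., dtype=bool) rather than A==A for clarity:

```python
import numpy as np

a = np.arange(10) * 10
idx = np.array([1, 4, 7])
mask = np.ones(a.shape, dtype=bool)
mask[idx] = False                 # complement of the fancy index
assert (a[idx] == [10, 40, 70]).all()
assert (a[mask] == [0, 20, 30, 50, 60, 80, 90]).all()
```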
2008/5/19 James Snyder [EMAIL PROTECTED]:
First off, I know that optimization is evil, and I should make sure
that everything works as expected prior to bothering with squeezing
out extra performance, but the situation is that this particular block
of code works, but it is about half as fast
2008/5/18 Matt Crane [EMAIL PROTECTED]:
On Sun, May 18, 2008 at 8:52 PM, Robert Kern [EMAIL PROTECTED] wrote:
Are there repeats?
No, no repeats in the first column.
I'm going to go get a cup of coffee before I forget to leave out any
potentially vital information again. It's going to be a
2008/5/17 Zoho Vignochi [EMAIL PROTECTED]:
hello:
I am writing my own version of a dot product. Simple enough this way:
def dot_r(a, b):
return sum( x*y for (x,y) in izip(a, b) )
However if both a and b are complex we need:
def dot_c(a, b):
return sum( x*y for (x,y) in
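For complex vectors the usual inner product conjugates one argument; which side is conjugated is a convention (NumPy's vdot conjugates the first). A sketch of the complex case:

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 1j])
manual = (a.conjugate() * b).sum()   # conjugate-linear in the first slot
assert np.allclose(manual, np.vdot(a, b))
```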
2008/5/17 Charles R Harris [EMAIL PROTECTED]:
On Sat, May 17, 2008 at 9:52 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Fri, 16 May 2008, Anne Archibald apparently wrote:
storing actual python objects in an array is probably not
a good idea
I have been wondering what people use object
if there would be any speed up for
this, compared to
for n in self.flat:
n.update(input_vector)
From the response, the answer seems to be no, and that I should stick with
the python loops for clarity. But also, the words of Anne Archibald make
me think that I have made a bad choice
2008/5/16 David Cournapeau [EMAIL PROTECTED]:
Sebastian Haase wrote:
Hi,
can someone comment on these timing numbers ?
http://narray.rubyforge.org/bench.html.en
Is the current numpy faster ?
It is hard to know without getting the same machine or having the
benchmark sources. But except
2008/5/16 Stuart Brorson [EMAIL PROTECTED]:
Hi --
Sorry to be a pest with corner cases, but I found another one.
In this case, if you try to take the arccos of numpy.inf in the
context of a complex array, you get a bogus return (IMO). Like this:
In [147]: R = numpy.array([1, numpy.inf])
2008/5/16 Brian Blais [EMAIL PROTECTED]:
I have a custom array, which contains custom objects (I give a stripped down
example below), and I want to loop over all of the elements of the array and
call a method of the object. I can do it like:
a=MyArray((5,5),MyObject,10)
for obj in
2008/5/15 Francesc Alted [EMAIL PROTECTED]:
I don't need to say that this procedure was not used for small or
trivial changes (that were fixed directly), but only when the issue was
important enough to deserve the attention of the mate.
I think here's the rub: when I hear patch review system
2008/5/12 Jarrod Millman [EMAIL PROTECTED]:
On Mon, May 12, 2008 at 12:05 PM, Eric Firing [EMAIL PROTECTED] wrote:
As one who pushed for the MA transition, I appreciate your suggestion.
It may have one unintended consequence, however, that may cause more
trouble than it saves: it may lead
Hi,
Suppose I have a C function,
double logsum(double a, double b);
What is needed to produce a ufunc object whose elementwise operation
is done by calling this function?
Also, is there a way to take a python function and automatically make
a ufunc out of it? (No, vectorize doesn't implement
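Wrapping a C function requires PyUFunc_FromFuncAndData at the C level, but in pure Python the closest analogue to a full ufunc - one whose reduce() actually works, unlike vectorize - is np.frompyfunc. A hedged sketch (logsum here is my illustration, not code from the thread):

```python
import math
import numpy as np

def logsum(a, b):
    """log(exp(a) + exp(b)), computed without overflow."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

ulogsum = np.frompyfunc(logsum, 2, 1)   # 2 inputs, 1 output
x = np.log(np.array([1.0, 2.0, 3.0]))
total = ulogsum.reduce(x)               # should equal log(1 + 2 + 3)
assert math.isclose(float(total), math.log(6.0))
```

frompyfunc ufuncs work on object dtype, so they are slow, but they do support reduce(), accumulate(), and outer().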
2008/5/11 Alan McIntyre [EMAIL PROTECTED]:
On Sun, May 11, 2008 at 2:04 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
Also, is there a way to take a python function and automatically make
a ufunc out of it? (No, vectorize doesn't implement reduce(),
accumulate(), reduceat(), or outer
2008/5/11 Robert Kern [EMAIL PROTECTED]:
Basically, you need 3 arrays: functions implementing the type-specific
inner loops, void* extra data to pass to these functions, and an array
of arrays containing the type signatures of the ufunc. In numpy, we
already have generic implementations of
2008/5/11 Robert Kern [EMAIL PROTECTED]:
Perhaps, but ufuncs only allow 0 or 1 for this value, currently.
That's a shame, minus infinity is the identity for maximum too.
Also, I was wrong about using PyUFunc_ff_f. Instead, use PyUFunc_ff_f_As_dd_d.
Hmm. Well, I tried implementing logsum(),
2008/5/10 Jarrod Millman [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 8:14 AM, Keith Goodman [EMAIL PROTECTED] wrote:
If these are backed out, will some kind of deprecation
warning be added for scalar indexing, as Travis suggested?
Robert's request seems in accord with this.
Shouldn't a
2008/5/10 Nathan Bell [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 3:05 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
I don't expect my opinion to prevail, but the point is that we do not
even have enough consensus to agree on a recommendation to go in the
DeprecationWarning. Alas.
Would you
2008/5/10 Timothy Hochberg [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 1:37 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/5/10 Nathan Bell [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 3:05 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
I don't expect my opinion to prevail, but the point
2008/5/9 Travis Oliphant [EMAIL PROTECTED]:
After Nathan Bell's recent complaints, I'm a bit more uncomfortable with
the matrix change to scalar indexing. It does and will break code in
possibly hard-to-track down ways. Also, Nathan has been a *huge*
contributor to the Sparse matrix in
2008/5/9 Eric Firing [EMAIL PROTECTED]:
Stefan, (and Jarrod and Pierre)
(Context for anyone new to the thread: the subject is slightly
misleading, because the bug is/was present in both oldnumeric.ma and
numpy.ma; the discussion of the fix pertains to the latter only.)
Regarding your
2008/5/9 Eric Firing [EMAIL PROTECTED]:
It seems like some strategic re-thinking may be needed in the long run,
if not immediately. There is a wide range of combinations of arguments
that will trigger invalid results, whether Inf or NaN. The only way to
trap and mask all of these is to use
2008/5/8 David Cournapeau [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 10:20 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Floating point numbers are essentially logs to base 2, i.e., integer
exponent and mantissa between 1 and 2. What does using the log buy you?
Precision, of
2008/5/8 Charles R Harris [EMAIL PROTECTED]:
What realistic probability is in the range exp(-1000) ?
Well, I ran into it while doing a maximum-likelihood fit - my early
guesses had exceedingly low probabilities, but I needed to know which
way the probabilities were increasing.
Anne
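Modern NumPy ships np.logaddexp for exactly this situation; a small sketch of why log space matters at exp(-1000) scales:

```python
import numpy as np

logp = np.array([-1000.0, -1001.0])   # log-probabilities
assert np.exp(logp).sum() == 0.0      # direct exp() underflows to zero
total = np.logaddexp.reduce(logp)     # log(p0 + p1), computed stably
assert -1000.0 < total < -999.0       # still finite and informative
```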
2008/5/8 Charles R Harris [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 10:56 AM, Robert Kern [EMAIL PROTECTED] wrote:
When you're running an optimizer over a PDF, you will be stuck in the
region of exp(-1000) for a substantial amount of time before you get
to the peak. If you don't use the
2008/5/8 Charles R Harris [EMAIL PROTECTED]:
David, what you are using is a log(log(x)) representation internally. IEEE
is *not* linear, it is logarithmic.
As Robert Kern says, yes, this is exactly what the OP and all the rest
of us want.
But it's a strange thing to say that IEEE is
2008/5/8 T J [EMAIL PROTECTED]:
On 5/8/08, Anne Archibald [EMAIL PROTECTED] wrote:
Is logarray really the way to handle it, though? it seems like you
could probably get away with providing a logsum ufunc that did the
right thing. I mean, what operations does one want to do on logarrays
Hi,
I frequently use functions like np.add.reduce and np.add.outer, but
their docstrings are totally uninformative. Would it be possible to
provide proper docstrings for these ufunc methods? They need not be
specific to np.add; just an explanation of what arguments to give (for
example) reduce()
2008/5/8 Robert Kern [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 5:28 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
Hi,
I frequently use functions like np.add.reduce and np.add.outer, but
their docstrings are totally uninformative. Would it be possible to
provide proper docstrings
2008/5/8 Robert Kern [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 9:52 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
Thanks! Done add, reduce, outer, and reduceat. What about __call__?
If anyone knows enough to explicitly request a docstring from
__call__, they already know what it does.
How
2008/5/8 Robert Kern [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 10:39 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/5/8 Robert Kern [EMAIL PROTECTED]:
If anyone knows enough to explicitly request a docstring from
__call__, they already know what it does.
How exactly are they to find out
2008/5/7 Jarrod Millman [EMAIL PROTECTED]:
I have just created the 1.1.x branch:
http://projects.scipy.org/scipy/numpy/changeset/5134
In about 24 hours I will tag the 1.1.0 release from the branch. At
this point only critical bug fixes should be applied to the branch.
The trunk is now
2008/5/7 Eric Firing [EMAIL PROTECTED]:
Charles Doutriaux wrote:
The following code works with numpy.ma but not numpy.oldnumeric.ma,
No, this is a bug in numpy.ma also; power is broken:
While it's tempting to just call power() and mask out any NaNs that
result, that's going to be a problem
2008/5/7 Pierre GM [EMAIL PROTECTED]:
All,
Yes, there is a problem with ma.power: masking negative data should be
restricted to the case of an exponent between -1. and 1. only, don't you
think ?
No, there's a problem with any fractional exponent (with even
denominator): x**(3/2) ==
2008/5/7 Jonathan Wright [EMAIL PROTECTED]:
Is there a rule against squaring away the negatives?
def not_your_normal_pow( x, y ): return exp( log( power( x, 2) ) * y / 2 )
Which still needs some work for x==0.
Well, it means (-1.)**(3.) becomes 1., which is probably not what the
user
2008/5/7 Charles R Harris [EMAIL PROTECTED]:
On Wed, May 7, 2008 at 11:44 AM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/5/7 Jarrod Millman [EMAIL PROTECTED]:
I have just created the 1.1.x branch:
http://projects.scipy.org/scipy/numpy/changeset/5134
In about 24 hours I will tag
2008/5/7 Charles R Harris [EMAIL PROTECTED]:
On Wed, May 7, 2008 at 5:31 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
Ah. Good point. I did find a bug - x[:,0] doesn't do what you'd
expect. Best not to release without backing out my change. I'm
still trying to track down what's up
2008/5/7 Charles R Harris [EMAIL PROTECTED]:
Heh, I just added some tests to 1.2 before closing ticket #707. They should
probably be merged with yours.
Seems a shame:
Ran 1000 tests in 5.329s
Such a nice round number!
Anne
2008/5/5 David Cournapeau [EMAIL PROTECTED]:
Basically, what I have in mind is, in a first step (for numpy 1.2):
- define functions to allocate on a given alignement
- make PyMemData_NEW 16 byte aligned by default (to be compatible
with SSE and co).
The problem was, and still is,
2008/5/6 Andy Cheesman [EMAIL PROTECTED]:
I was wondering if anyone could shed some light on how to distinguish an
empty array of a given shape from a zeros array of the same dimensions.
An empty array, that is, an array returned by the function empty(),
just means an uninitialized array.
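In code form: the two cannot be distinguished after the fact, because empty() makes no promise at all about its contents:

```python
import numpy as np

z = np.zeros((2, 3))
assert (z == 0).all()        # zeros() guarantees initialization
e = np.empty((2, 3))         # empty() only allocates; the contents are
e.fill(0)                    # arbitrary garbage until you write them
assert (e == 0).all()
```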
2008/5/6 Charles R Harris [EMAIL PROTECTED]:
On Tue, May 6, 2008 at 6:40 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Tue, 6 May 2008, Jarrod Millman apparently wrote:
open tickets that I would like everyone to take a brief
look at:
http://projects.scipy.org/scipy/numpy/ticket/760
2008/5/6 Eleanor [EMAIL PROTECTED]:
a = numpy.array([[1,2,6], [2,2,8], [2,1,7],[1,1,5]])
a
array([[1, 2, 6],
[2, 2, 8],
[2, 1, 7],
[1, 1, 5]])
indices = numpy.lexsort(a.T)
a.T.take(indices,axis=-1).T
array([[1, 1, 5],
[1, 2, 6],
[2, 1, 7],
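Note that lexsort treats its *last* key as the primary one; to sort rows lexicographically by column 0 first, reverse the keys. A sketch (a[order] is equivalent to the take() dance in the snippet above):

```python
import numpy as np

a = np.array([[1, 2, 6], [2, 2, 8], [2, 1, 7], [1, 1, 5]])
order = np.lexsort(a.T[::-1])   # reversed, so column 0 is the primary key
assert (a[order] == [[1, 1, 5], [1, 2, 6], [2, 1, 7], [2, 2, 8]]).all()
```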
2008/5/5 Robert Kern [EMAIL PROTECTED]:
On Mon, May 5, 2008 at 7:44 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
In numpy, we can always replace realloc by malloc/free, because we know
the size of the old block: would deprecating PyMemData_RENEW and
replacing them by
2008/5/2 Chris.Barker [EMAIL PROTECTED]:
Hi all,
I have an n x m x 3 array, and an n x m array. I want to assign the
values in the n x m array to all three of the slices in the bigger array:
A1 = np.zeros((5,4,3))
A2 = np.ones((5,4))
A1[:,:,0] = A2
A1[:,:,1] = A2
A1[:,:,2] = A2
2008/5/2 Rich Shepard [EMAIL PROTECTED]:
What will work (I call it a pi curve) is a matched pair of sigmoid curves,
the ascending curve on the left and the descending curve on the right. Using
the Boltzmann function for these I can calculate and plot each individually,
but I'm having
2008/5/2 Rich Shepard [EMAIL PROTECTED]:
On Fri, 2 May 2008, Anne Archibald wrote:
It's better not to work point-by-point, appending things, when working
with numpy. Ideally you could find a formula which just produced the right
curve, and then you'd apply it to the input vector and get
2008/4/29 Alan G Isaac [EMAIL PROTECTED]:
As I was looking at Bill's conjugate gradient posting,
I found myself wondering if there would be a payoff
to an output argument for ``numpy.outer``. (It is fairly
natural to repeatedly recreate the outer product of
the adjusted residuals, which
2008/4/30 a g [EMAIL PROTECTED]:
Hi. This is a very basic question, sorry if it's irritating. If I
didn't find the answer written already somewhere on the site, please
point me to it. That'd be great.
OK: how do i iterate over an axis other than 0?
I have a 3D array of data[year,
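Iteration over an ndarray always walks axis 0, so one sketch is to move the desired axis to the front first (np.moveaxis here; a transpose works equally well):

```python
import numpy as np

data = np.arange(24).reshape(2, 3, 4)   # e.g. [year, month, station]
for month_slice in np.moveaxis(data, 1, 0):
    assert month_slice.shape == (2, 4)  # one slice per index of axis 1
```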
2008/4/30 Christopher Barker [EMAIL PROTECTED]:
a g wrote:
OK: how do i iterate over an axis other than 0?
This ties in nicely with some of the discussion about iterating over
matrices. It has been suggested that it would be nice to have iterators
for matrices, so you could do:
for
2008/4/30 Charles R Harris [EMAIL PROTECTED]:
Some operations on stacks of small matrices are easy to get, for instance,
+,-,*,/, and matrix multiply. The last is the interesting one. If A and B
are stacks of matrices with the same number of dimensions with the matrices
stored in the last two
2008/5/1 Travis E. Oliphant [EMAIL PROTECTED]:
Stéfan van der Walt wrote:
2008/4/30 Christopher Barker [EMAIL PROTECTED]:
Stéfan van der Walt wrote:
That's the way, or just rgba_image.view(numpy.int32).
ah -- interestingly, I tried:
rgba_image.view(dtype=numpy.int32)
On 29/04/2008, Travis E. Oliphant [EMAIL PROTECTED] wrote:
I'm quite persuaded now that a[i] should return a 1-d object for
arrays. In addition to the places Chuck identified, there are at
least 2 other places where special code was written to work around the
expectation that item
On 29/04/2008, Keith Goodman [EMAIL PROTECTED] wrote:
On Tue, Apr 29, 2008 at 11:21 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Tue, 29 Apr 2008, Keith Goodman apparently wrote:
I often use x[i,:] and x[:,i] where x is a matrix and i is
a scalar. I hope this continues to return a
On 30/04/2008, eli bressert [EMAIL PROTECTED] wrote:
I'm writing a quick script to import a fits (astronomy) image that has
very low values for each pixel. Mostly on the order of 10^-9. I have
written a python script that attempts to take low values and put them
in integer format. I
Timothy Hochberg has proposed a generalization of the matrix mechanism
to support manipulating arrays of linear algebra objects. For example,
one might have an array of matrices one wants to apply to an array of
vectors, to yield an array of vectors:
In [88]: A =
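The generalization being proposed eventually arrived in NumPy as batched matmul semantics; a hedged sketch of the array-of-matrices times array-of-vectors case (shapes are my choice, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 3, 3))          # an array of 10 matrices
v = rng.random((10, 3))             # an array of 10 vectors
w = (A @ v[:, :, None])[:, :, 0]    # apply matrix i to vector i
assert w.shape == (10, 3)
assert np.allclose(w[0], A[0] @ v[0])
```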
On 30/04/2008, Keith Goodman [EMAIL PROTECTED] wrote:
On Tue, Apr 29, 2008 at 2:18 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
It is actually pretty unreasonable to hope that
A[0]
and
A[[1,2,3]]
or
A[[True,False,True]]
should return objects of the same
On 25/04/2008, Stéfan van der Walt [EMAIL PROTECTED] wrote:
2008/4/25 Alan G Isaac [EMAIL PROTECTED]:
I must have misunderstood:
I thought the agreement was to
provisionally return a 1d array for x[0],
while we hashed through the other proposals.
The agreement was:
a) That
On 24/04/2008, Rich Shepard [EMAIL PROTECTED] wrote:
Thanks to several of you I produced test code using the normal density
function, and it does not do what we need. Neither does the Gaussian
function using fwhm that I've tried. The latter comes closer, but the ends
do not reach y=0
On 25/04/2008, Charles R Harris [EMAIL PROTECTED] wrote:
On Fri, Apr 25, 2008 at 12:02 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
I think we have discovered that there is a basic conflict
between two behaviors:
x[0] == x[0,:]
vs.
x[0][0] == x[0,0]
To my
Hi,
In the upcoming release of numpy, numpy.core.ma ceases to exist. One
must use numpy.ma (for the new interface) or numpy.oldnumeric.ma (for
the old interface). This has the unfortunate effect of breaking
matplotlib (which does from numpy.core.ma import *) - I cannot even
import pylab with a
On 23/04/2008, Alan Isaac [EMAIL PROTECTED] wrote:
On Wed, 23 Apr 2008, Sebastian Haase wrote:
What used to be referred to as the 1.1 version, that can
break more stuff, to allow for a cleaner design, will now
be 1.2
So ... fixing x[0][0] for matrices should wait
until 1.2. Is that
On 23/04/2008, Zachary Pincus [EMAIL PROTECTED] wrote:
Hi,
Thanks a ton for the advice, Robert! Taking an array slice (instead of
trying to set up the strides, etc. myself) is a slick way of getting
this result indeed.
It's worth mentioning that there was some discussion of adding an
On 18/04/2008, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, Apr 18, 2008 at 3:33 PM, Joe Harrington [EMAIL PROTECTED] wrote:
For that matter, is there a reason logical operations don't work on
arrays other than booleans? What about:
The keywords and, or, and not only work on bool