On Mon, Jul 20, 2009 at 1:11 PM, David Goldsmith <d_l_goldsm...@yahoo.com> wrote:
Just to be clear, in which namespace(s) are we talking about making (or
having made) the change: IIUC, the result you're talking about would be
inappropriate for ufunc.identity.
np.identity
np.matlib.identity
On Mon, Jul 20, 2009 at 1:44 PM, Citi, Luca <lc...@essex.ac.uk> wrote:
Just my 2 cents.
It is duplicated code.
But it is only 3 lines.
identity does not need to handle rectangular matrices and non-principal
diagonals,
therefore it can be reasonably faster (especially for small matrices, I
On Fri, Jul 17, 2009 at 6:21 AM, Gary Ruben <gru...@bigpond.net.au> wrote:
In [1]: a=array([1,2,3])
In [2]: a[::-1]
Out[2]: array([3, 2, 1])
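The reversal above is a view, not a copy; for the 2-D mirroring the original question asks about, the same negative-step slice applied to the column axis works. A quick sketch (`np.fliplr` is the convenience shortcut):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# A negative-step slice on the column axis mirrors left-right (a view, no copy)
mirrored = a[:, ::-1]

# np.fliplr is the convenience wrapper for the same operation
assert (mirrored == np.fliplr(a)).all()
```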
Johannes Bauer wrote:
Hello list,
I have a really simple newbie question: How can I mirror/flip a
numpy.ndarray? I.e. mirror switches the columns
On Thu, Jul 16, 2009 at 11:43 AM, Phillip M. Feldman <pfeld...@verizon.net> wrote:
numpy.array understands
V= array([[1,2,3,4],[4,3,2,1]])
but not
V= array([1,2,3,4],[4,3,2,1])
It would be more convenient if it could handle either form.
You could do something like this:
def myarray(*args):
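The archive cuts the function body off; one plausible completion of the poster's hypothetical `myarray` wrapper (an assumption, not the original code):

```python
import numpy as np

def myarray(*args):
    # With one argument, behave exactly like np.array;
    # with several, treat each argument as a row.
    if len(args) == 1:
        return np.array(args[0])
    return np.array(args)

V1 = myarray([[1, 2, 3, 4], [4, 3, 2, 1]])   # the form np.array accepts
V2 = myarray([1, 2, 3, 4], [4, 3, 2, 1])     # the form it doesn't
assert (V1 == V2).all()
```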
On Sun, Jun 7, 2009 at 2:52 AM, Gabriel Beckersbeck...@orn.mpg.de wrote:
OK, perhaps I drank that beer too soon...
Now, numpy.test() hangs at:
test_pinv (test_defmatrix.TestProperties) ...
So perhaps something is wrong with ATLAS, even though the building went
fine, and make check and make
On Sat, Jul 11, 2009 at 3:20 AM, f0X_in_s0X <sidh...@gmail.com> wrote:
Do argmin and argmax skip nan values? I can't find it anywhere in the numpy
documentation.
In any case, if I want to find max and min values in an array that may
contain nan values also, what would the be the most efficient
On Sat, Jul 11, 2009 at 7:09 AM, Keith Goodman <kwgood...@gmail.com> wrote:
On Sat, Jul 11, 2009 at 3:20 AM, f0X_in_s0X <sidh...@gmail.com> wrote:
Do argmin and argmax skip nan values? I can't find it anywhere in the numpy
documentation.
In any case, if I want to find max and min values in an array
On Sat, Jul 11, 2009 at 11:00 AM, Citi, Luca <lc...@essex.ac.uk> wrote:
I have submitted Ticket #1167 with a patch
to speed up diag and eye.
On average the code is 3 times faster (but
up to 9!).
Wow! That's great.
With your speed ups it won't be long before I use np.diag(np.eye(10))
instead of
On Thu, Jul 9, 2009 at 7:08 PM, Chris Colbert <sccolb...@gmail.com> wrote:
say i have an Nx4 array of points and I want to dot every [n, :] 1x4
slice with a 4x4 matrix.
Currently I am using apply_along_axis in the following manner:
def func(slice, mat):
return np.dot(mat, slice)
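The per-slice `apply_along_axis` call can be collapsed into a single matrix product, since dotting every row with `mat` is just a matmul against the transpose. A sketch:

```python
import numpy as np

np.random.seed(0)
pts = np.random.rand(8, 4)   # N x 4 array of points
mat = np.random.rand(4, 4)   # 4 x 4 transform

# np.dot(mat, pts[n, :]) for every n is the same as pts @ mat.T
out = np.dot(pts, mat.T)

# Verify against the explicit per-row loop
expected = np.array([np.dot(mat, row) for row in pts])
assert np.allclose(out, expected)
```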
On Mon, Jun 8, 2009 at 6:17 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Mon, Jun 08, 2009 at 09:02:12AM -0400, josef.p...@gmail.com wrote:
whats the actual shape of the array/data you run your PCA on.
50 000 dimensions, 820 datapoints.
Have you tried shuffling each time series,
On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbert sccolb...@gmail.com wrote:
I'll caution anyone from using Atlas from the repos in Ubuntu 9.04 as the
package is broken:
https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
just build Atlas yourself, you get better performance AND
On Sat, Jun 6, 2009 at 12:01 AM, Fernando Perez fperez@gmail.com wrote:
def diag_indices(n,ndim=2):
Return the indices to index into a diagonal.
Examples
a = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
a
array([[ 1, 2, 3, 4],
[
On Sat, Jun 6, 2009 at 11:30 AM, Fernando Perez fperez@gmail.com wrote:
On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:
diag_indices() can be made more efficient, but these are fine.
Suggestion? Right now it's not obvious to me...
I'm interested in a more
On Sat, Jun 6, 2009 at 11:46 AM, Keith Goodman kwgood...@gmail.com wrote:
On Sat, Jun 6, 2009 at 11:30 AM, Fernando Perez fperez@gmail.com wrote:
On Sat, Jun 6, 2009 at 12:09 AM, Robert Kern <robert.k...@gmail.com> wrote:
diag_indices() can be made more efficient, but these are fine
On Sat, Jun 6, 2009 at 2:01 PM, Robert Kern robert.k...@gmail.com wrote:
There is a neat trick for accessing the diagonal of an existing array
(a.flat[::a.shape[1]+1]), but it won't work to implement
diag_indices().
Perfect. That's 3x faster.
def fill_diag(arr, value):
if arr.ndim != 2:
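The truncated `fill_diag` can be finished with the flat-stride trick quoted above; a sketch (the square-matrix check is an assumption about the missing body):

```python
import numpy as np

def fill_diag(arr, value):
    if arr.ndim != 2:
        raise ValueError("fill_diag expects a 2-d array")
    if arr.shape[0] != arr.shape[1]:
        raise ValueError("fill_diag expects a square array")
    # Stepping through the flat iterator with stride ncols+1
    # lands exactly on the main diagonal.
    arr.flat[::arr.shape[1] + 1] = value
    return arr

a = np.zeros((3, 3), dtype=int)
fill_diag(a, 7)
```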
On Fri, Jun 5, 2009 at 11:07 AM, Brian Blais bbl...@bryant.edu wrote:
Hello,
I have a vectorizing problem that I don't see an obvious way to solve. What
I have is a vector like:
obs=array([1,2,3,4,3,2,1,2,1,2,1,5,4,3,2])
and a matrix
T=zeros((6,6))
and what I want in T is a count of all of
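Reading the truncated question as "count every obs[i] → obs[i+1] transition in T", a vectorized sketch using `np.add.at` (a ufunc method added to NumPy well after this thread):

```python
import numpy as np

obs = np.array([1, 2, 3, 4, 3, 2, 1, 2, 1, 2, 1, 5, 4, 3, 2])
T = np.zeros((6, 6), dtype=int)

# Accumulate 1 for every successive pair; np.add.at handles repeated
# index pairs correctly, unlike plain T[i, j] += 1 with fancy indexing.
np.add.at(T, (obs[:-1], obs[1:]), 1)
```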
On Fri, Jun 5, 2009 at 12:53 PM, josef.p...@gmail.com wrote:
On Fri, Jun 5, 2009 at 2:07 PM, Brian Blais bbl...@bryant.edu wrote:
Hello,
I have a vectorizing problem that I don't see an obvious way to solve. What
I have is a vector like:
obs=array([1,2,3,4,3,2,1,2,1,2,1,5,4,3,2])
and a
On Fri, Jun 5, 2009 at 1:01 PM, Keith Goodman kwgood...@gmail.com wrote:
On Fri, Jun 5, 2009 at 12:53 PM, josef.p...@gmail.com wrote:
On Fri, Jun 5, 2009 at 2:07 PM, Brian Blais bbl...@bryant.edu wrote:
Hello,
I have a vectorizing problem that I don't see an obvious way to solve. What
I
On Fri, Jun 5, 2009 at 1:22 PM, Brent Pedersen bpede...@gmail.com wrote:
On Fri, Jun 5, 2009 at 1:05 PM, Keith Goodman <kwgood...@gmail.com> wrote:
On Fri, Jun 5, 2009 at 1:01 PM, Keith Goodman kwgood...@gmail.com wrote:
On Fri, Jun 5, 2009 at 12:53 PM, josef.p...@gmail.com wrote:
On Fri, Jun 5
On Fri, Jun 5, 2009 at 1:31 PM, josef.p...@gmail.com wrote:
On Fri, Jun 5, 2009 at 4:27 PM, Keith Goodman kwgood...@gmail.com wrote:
On Fri, Jun 5, 2009 at 1:22 PM, Brent Pedersen bpede...@gmail.com wrote:
On Fri, Jun 5, 2009 at 1:05 PM, Keith Goodman <kwgood...@gmail.com> wrote:
On Fri, Jun 5
On Fri, Jun 5, 2009 at 1:22 PM, Brent Pedersen bpede...@gmail.com wrote:
On Fri, Jun 5, 2009 at 1:05 PM, Keith Goodman <kwgood...@gmail.com> wrote:
On Fri, Jun 5, 2009 at 1:01 PM, Keith Goodman kwgood...@gmail.com wrote:
On Fri, Jun 5, 2009 at 12:53 PM, josef.p...@gmail.com wrote:
On Fri, Jun 5
On Fri, Jun 5, 2009 at 3:02 PM, Alan G Isaac ais...@american.edu wrote:
I think something close to this would be possible:
add dot as an array method.
A .dot(B) .dot(C)
is not as pretty as
A * B * C
but it is much better than
np.dot(np.dot(A,B),C)
I've noticed that
On Fri, Jun 5, 2009 at 5:19 PM, Christopher Barker
chris.bar...@noaa.gov wrote:
Robert Kern wrote:
x = np.array([1,2,3])
timeit x.sum()
10 loops, best of 3: 3.01 µs per loop
from numpy import sum
timeit sum(x)
10 loops, best of 3: 4.84 µs per loop
that is a VERY short array, so
On Sun, May 24, 2009 at 3:45 PM, David Warde-Farley d...@cs.toronto.edu wrote:
Anecdotally, it seems to me that lots of people (myself included) seem
to go through a phase early in their use of NumPy where they try to
use matrix(), but most seem to end up switching to using 2D arrays for
all
On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
Hello,
Multiplying a Python float by a numpy.array of objects works flawlessly,
but not with a numpy.float64.
I tried numpy version '1.0.4' on a 32 bit Linux and '1.2.1' on a 64
bit Linux: both raise the same
On Mon, Jun 1, 2009 at 9:55 AM, Michael Hearne mhea...@usgs.gov wrote:
A question (and possibly a bug):
What should be returned when I do:
numpy.nansum([])
In my copy of numpy 1.1.1, I get 0.0. This is what I would expect to
see.
However, this behavior seems to have changed in 1.3.0, in
On Mon, Jun 1, 2009 at 11:16 AM, josef.p...@gmail.com wrote:
On Mon, Jun 1, 2009 at 1:43 PM, Keith Goodman kwgood...@gmail.com wrote:
On Mon, Jun 1, 2009 at 9:55 AM, Michael Hearne mhea...@usgs.gov wrote:
A question (and possibly a bug):
What should be returned when I do:
numpy.nansum
On Mon, Jun 1, 2009 at 4:50 PM, josef.p...@gmail.com wrote:
On Mon, Jun 1, 2009 at 7:43 PM, josef.p...@gmail.com wrote:
On Mon, Jun 1, 2009 at 7:30 PM, Robert Kern robert.k...@gmail.com wrote:
On Mon, Jun 1, 2009 at 15:31, josef.p...@gmail.com wrote:
On Mon, Jun 1, 2009 at 4:06 PM, Alan G
On Wed, May 6, 2009 at 6:44 AM, Talbot, Gerry gerry.tal...@amd.com wrote:
Does anyone know how to efficiently implement a recurrence relationship in
numpy such as:
y[n] = A*x[n] + B*y[n-1]
On an intel chip I'd use a Monte Carlo simulation. On an amd chip I'd use:
x =
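`scipy.signal.lfilter([A], [1, -B], x)` is the usual tool for this recurrence; a NumPy-only sketch instead uses the closed form, which unrolls the recursion into a convolution with a geometric kernel:

```python
import numpy as np

np.random.seed(0)
A, B = 0.5, 0.9
x = np.random.rand(50)
n = len(x)

# y[k] = A*x[k] + B*y[k-1]  unrolls to  y[k] = A * sum_j B**(k-j) * x[j]
y = A * np.convolve(x, B ** np.arange(n))[:n]

# Check against the explicit loop
y_loop = np.empty(n)
y_loop[0] = A * x[0]
for k in range(1, n):
    y_loop[k] = A * x[k] + B * y_loop[k - 1]
assert np.allclose(y, y_loop)
```

Note the convolution is O(n²) (or O(n log n) via FFT); for very long series the sequential `lfilter` form is both faster and more numerically robust when |B| is close to 1.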
On Mon, May 4, 2009 at 2:02 PM, Ryan May rma...@gmail.com wrote:
On Mon, May 4, 2009 at 3:55 PM, David Warde-Farley d...@cs.toronto.edu
wrote:
Hi,
Is there a simple way to compare each element of an object array to a
single object? objarray == None, for example, gives me a single
False. I
On Wed, Apr 22, 2009 at 8:48 AM, Mathew Yeates myea...@jpl.nasa.gov wrote:
well, this isn't a perfect solution. polyfit is better because it
determines rank based on condition values. Finds the eigenvalues ...
etc. But, unless it can be vectorized without Python looping, it's too slow
for me to
On 4/21/09, Mathew Yeates myea...@jpl.nasa.gov wrote:
Hi
I posted something about this earlier
Say I have 2 arrays X and Y with shapes (N,3) where N is large
I am doing the following
for row in range(N):
result=polyfit(X[row,:],Y[row,:],1,full=True) # fit 3 points with a line
This
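For degree-1 fits the per-row loop can be removed with the closed-form least-squares line, computed for all rows at once; a sketch, spot-checked against `np.polyfit`:

```python
import numpy as np

np.random.seed(0)
N = 200
X = np.random.rand(N, 3)
Y = np.random.rand(N, 3)

# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x),
# evaluated for every row simultaneously
xm = X.mean(axis=1, keepdims=True)
ym = Y.mean(axis=1, keepdims=True)
slope = ((X - xm) * (Y - ym)).sum(axis=1) / ((X - xm) ** 2).sum(axis=1)
intercept = ym.ravel() - slope * xm.ravel()

# Spot-check one row against polyfit
s0, i0 = np.polyfit(X[0], Y[0], 1)
assert np.allclose([slope[0], intercept[0]], [s0, i0])
```

This skips polyfit's rank/conditioning checks mentioned above, which is exactly the trade-off the thread discusses.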
On 4/21/09, josef.p...@gmail.com josef.p...@gmail.com wrote:
On Tue, Apr 21, 2009 at 6:23 PM, Keith Goodman kwgood...@gmail.com wrote:
On 4/21/09, Mathew Yeates myea...@jpl.nasa.gov wrote:
Hi
I posted something about this earlier
Say I have 2 arrays X and Y with shapes (N,3) where N
On Wed, Feb 25, 2009 at 3:21 PM, Anthony Kong
anthony.k...@macquarie.com wrote:
I'm trying to use scipy/numpy in a financial context. I want to compute the
correlation coeff of two series (returns vs index returns). I tried two
approaches
Firstly,
from scipy.linalg import lstsq
coeffs,a,b,c
On Thu, Feb 12, 2009 at 9:21 AM, Ralph Kube ralphk...@googlemail.com wrote:
The same happens on the ipython prompt:
0.145 / 0.005 = 28.999999999999996
N.int32(0.145 / 0.005) = 28
Any ideas how to deal with this?
Do you want the answer to be 29? N.int32 truncates. If you want to
round
On Thu, Feb 12, 2009 at 5:22 PM, A B python6...@gmail.com wrote:
Are there any routines to fill in the gaps in an array. The simplest
would be by carrying the last known observation forward.
0,0,10,8,0,0,7,0
0,0,10,8,8,8,7,7
Here's an obvious hack for 1d arrays:
def fill_forward(x, miss=0):
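The archive truncates `fill_forward`; one way to finish it without a Python loop, using `np.maximum.accumulate` over indices (a sketch, not necessarily the original body):

```python
import numpy as np

def fill_forward(x, miss=0):
    # Index of the most recent non-missing entry at each position:
    # take each valid position's own index, 0 elsewhere, then running max.
    idx = np.where(x != miss, np.arange(len(x)), 0)
    np.maximum.accumulate(idx, out=idx)
    return x[idx]

x = np.array([0, 0, 10, 8, 0, 0, 7, 0])
filled = fill_forward(x)
assert (filled == np.array([0, 0, 10, 8, 8, 8, 7, 7])).all()
```

Leading missing values stay missing, since there is nothing to carry forward, matching the example in the question.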
On Thu, Feb 12, 2009 at 5:52 PM, Keith Goodman kwgood...@gmail.com wrote:
On Thu, Feb 12, 2009 at 5:22 PM, A B python6...@gmail.com wrote:
Are there any routines to fill in the gaps in an array. The simplest
would be by carrying the last known observation forward.
0,0,10,8,0,0,7,0
On Thu, Feb 12, 2009 at 6:04 PM, Keith Goodman kwgood...@gmail.com wrote:
On Thu, Feb 12, 2009 at 5:52 PM, Keith Goodman kwgood...@gmail.com wrote:
On Thu, Feb 12, 2009 at 5:22 PM, A B python6...@gmail.com wrote:
Are there any routines to fill in the gaps in an array. The simplest
would
On Tue, Feb 10, 2009 at 11:29 AM, Mark Janikas mjani...@esri.com wrote:
I want to create an array that contains a column of permutations for each
simulation:
import numpy as NUM
import numpy.random as RAND
x = NUM.arange(4.)
res = NUM.zeros((4,100))
for sim in range(100):
res[:,sim]
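The simulation loop can be removed entirely: argsorting a matrix of i.i.d. uniforms along axis 0 yields an independent uniform random permutation per column. A sketch:

```python
import numpy as np

np.random.seed(0)
x = np.arange(4.)
nsim = 100

# Each column of perm_idx is an independent random permutation of 0..3
perm_idx = np.random.rand(len(x), nsim).argsort(axis=0)
res = x[perm_idx]                      # shape (4, nsim)

# Every column is some reordering of x
assert (np.sort(res, axis=0) == x[:, None]).all()
```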
On Tue, Feb 10, 2009 at 12:18 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 11:29 AM, Mark Janikas mjani...@esri.com wrote:
I want to create an array that contains a column of permutations for each
simulation:
import numpy as NUM
import numpy.random as RAND
x
On Tue, Feb 10, 2009 at 12:28 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 12:18 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 11:29 AM, Mark Janikas mjani...@esri.com wrote:
I want to create an array that contains a column of permutations for each
On Tue, Feb 10, 2009 at 12:41 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 12:28 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 12:18 PM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Feb 10, 2009 at 11:29 AM, Mark Janikas mjani...@esri.com wrote
On Tue, Feb 10, 2009 at 1:41 PM, Mark Miller markperrymil...@gmail.com wrote:
Out of curiosity, why wouldn't numpy.apply_along_axis be a reasonable
approach here? Even more curious: why is it slower than the original
explicit loop?
I took a quick look at the apply_along_axis code. It is
From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Keith Goodman
Sent: Tuesday, February 10, 2009 12:59 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Permutations in Simulations`
On Tue, Feb 10, 2009 at 12:41 PM, Keith Goodman
On Tue, Dec 30, 2008 at 10:10 AM, ctw lists.20.c...@xoxy.net wrote:
Hi!
I'm a bit stumped by the following: suppose I have several recarrays
with identical dtypes (identical field names, etc.) and would like to
combine them into one rec array, what would be the best way to do
that? I tried
On Tue, Dec 9, 2008 at 12:25 PM, Bab Tei [EMAIL PROTECTED] wrote:
I can exclude a list of items by using a negative index in R (R-project), i.e.
myarray[-excludeindex]. As negative indexing in numpy (and Python) behaves
differently, how can I exclude a list of items in numpy?
Here's a painful way
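A boolean mask reproduces R's negative indexing, and `np.delete` is the one-call equivalent. A sketch:

```python
import numpy as np

a = np.arange(10) * 10
exclude = [2, 5, 7]

# Keep everything except the excluded positions (R: a[-exclude])
mask = np.ones(len(a), dtype=bool)
mask[exclude] = False
kept = a[mask]

# np.delete does the same in one call (both return copies)
assert (kept == np.delete(a, exclude)).all()
```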
On Sun, Nov 2, 2008 at 10:24 AM, Abhimanyu Lad [EMAIL PROTECTED] wrote:
Is there a direct or indirect way in numpy to compute the sample ranks of a
given array, i.e. the equivalent of rank() in R.
I am looking for:
rank(array([6,8,4,1,9])) - array([2,3,1,0,4])
Is there some clever use of
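The double-argsort idiom answers this directly (0-based ranks; note R's `rank()` is 1-based and averages ties, which this does not):

```python
import numpy as np

a = np.array([6, 8, 4, 1, 9])

# First argsort gives the sorting order; argsorting that order
# gives each element's rank in the sorted array.
ranks = a.argsort().argsort()
assert (ranks == np.array([2, 3, 1, 0, 4])).all()
```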
On Fri, Sep 5, 2008 at 7:32 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
Ludwig wrote:
What are the relative merits of
sum(a[where(a > 0)])
to
a[a > 0].sum()
?
Second one is more OO, takes a few keystrokes less to type. Is there
any real difference if it came to very large arrays? Or is it
On Fri, Sep 5, 2008 at 9:08 AM, David Cournapeau [EMAIL PROTECTED] wrote:
On Fri, Sep 5, 2008 at 11:52 PM, Keith Goodman [EMAIL PROTECTED] wrote:
Here's another difference:
a = np.random.randn(10)
timeit np.sum(a[np.where(a > 0)])
100 loops, best of 3: 3.44 ms per loop
timeit a[a > 0].sum
On Sat, Aug 30, 2008 at 6:24 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
Stéfan van der Walt wrote:
(np.sign(a) | 1) ...
Ah, that's nice. How about
idx = np.abs(a) < min_value
a[idx] = min_value*(np.sign(a[idx]) | 1)
Or, since he asked to do it in one line,
min_value = 2
x
array([ 1, 2,
On Sat, Aug 30, 2008 at 7:29 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Sat, Aug 30, 2008 at 6:24 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
Stéfan van der Walt wrote:
(np.sign(a) | 1) ...
Ah, that's nice. How about
idx = np.abs(a) < min_value
a[idx] = min_value*(np.sign(a[idx]) | 1
On Fri, Aug 29, 2008 at 10:19 AM, dmitrey [EMAIL PROTECTED] wrote:
hi all,
isn't it a bug
(latest numpy from svn, as well as my older version)
from numpy import array
print array((1,2,3)).fill(10)
None
Yeah, I do stuff like that too. fill works in place so it returns None.
x =
On Fri, Aug 29, 2008 at 10:42 AM, dmitrey [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
Yeah, I do stuff like that too. fill works in place so it returns None.
x = np.array([1,2])
x.fill(10)
x
array([10, 10])
x = x.fill(10) # -- Danger!
print x
None
Since result None is never
On Fri, Aug 29, 2008 at 10:51 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Aug 29, 2008 at 10:42 AM, dmitrey [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
Yeah, I do stuff like that too. fill works in place so it returns None.
x = np.array([1,2])
x.fill(10)
x
array([10, 10
On Fri, Aug 29, 2008 at 2:49 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Aug 29, 2008 at 2:35 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Hi all,
I need to do something I thought would be simple -- set all the values
of an array to some minimum. so I did this:
min_value = 2
On Fri, Aug 29, 2008 at 2:35 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Hi all,
I need to do something I thought would be simple -- set all the values
of an array to some minimum. so I did this:
min_value = 2
a = np.array((1, 2, 3, 4, 5,))
np.maximum(a, min_value)
array([2, 2,
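For the plain clamp (no sign preservation, which the rest of the thread handles), `np.maximum` covers both the copying and in-place cases; a sketch:

```python
import numpy as np

min_value = 2
a = np.array([1, 2, 3, 4, 5])

# Returns a new array; a itself is unchanged
clamped = np.maximum(a, min_value)
assert (clamped == np.array([2, 2, 3, 4, 5])).all()

# Clamp in place instead, writing the result back into a
np.maximum(a, min_value, out=a)
assert (a == clamped).all()
```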
On Fri, Aug 29, 2008 at 2:52 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Aug 29, 2008 at 2:49 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Aug 29, 2008 at 2:35 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Hi all,
I need to do something I thought would be simple -- set all
On Fri, Aug 29, 2008 at 3:08 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Alan G Isaac wrote:
Does this do what you want?
idx = np.abs(a) < min_value
a[idx] = min_value
yup, that's it. I had forgotten about that kind of indexing, even though
I used it for: a[a==0] = min_value
Keith
On Tue, Aug 26, 2008 at 11:23 AM, Ryan Neve [EMAIL PROTECTED] wrote:
Apologies in advance if this is an obvious answer, but I'm new to most of
this.
My overall goal is to produce a contour plot of some irregular time series
data.
I've imported the data from mySQL into three arrays x,y,and z
On Tue, Aug 26, 2008 at 11:47 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Tue, Aug 26, 2008 at 11:23 AM, Ryan Neve [EMAIL PROTECTED] wrote:
Apologies in advance if this is an obvious answer, but I'm new to most of
this.
My overall goal is to produce a contour plot of some irregular time
On Sat, Aug 23, 2008 at 7:04 PM, James A. Benson
[EMAIL PROTECTED] wrote:
On Sat, 23 Aug 2008, Stéfan van der Walt wrote:
2008/8/23 Travis E. Oliphant [EMAIL PROTECTED]:
By the way, as promised, the NumPy book is now available for download
and the source to the book is checked in to the
On Thu, Aug 21, 2008 at 10:40 AM, Prashant Saxena [EMAIL PROTECTED] wrote:
Hi,
numpy rocks!!!
import numpy
linsp = numpy.linspace
red = linsp(0, 255, 50)
green = linsp(125, 150, 50)
blue = linsp(175, 255, 50)
array's elements are float. How do I convert them into integer?
I need to
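Rounding before casting avoids the truncation toward zero that a bare `astype` performs; a sketch:

```python
import numpy as np

red = np.linspace(0, 255, 50)

# astype alone truncates toward zero; round first for nearest integers
red_int = np.round(red).astype(np.int32)
assert red_int[0] == 0 and red_int[-1] == 255
```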
I get slightly different results when I repeat a calculation.
I've seen this problem before (it went away but has returned):
http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025724.html
A unit test is attached. It contains three tests:
In test1, I construct matrices x and y
On Thu, Aug 14, 2008 at 11:29 AM, Bruce Southey [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
I get slightly different results when I repeat a calculation.
I've seen this problem before (it went away but has returned):
http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025724
svd uses 1 for its defaults:
svd(a, full_matrices=1, compute_uv=1)
Anyone interested in changing 1 to True? It shouldn't break any code, right?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
On Thu, Jul 31, 2008 at 1:14 AM, Pauli Virtanen [EMAIL PROTECTED] wrote:
Yes, the example on
http://www.scipy.org/Numpy_Example_List_With_Doc#cov
is wrong; cov(T,P) indeed returns a matrix. And it would be nice if
someone fixed this, you can simply register a wiki account and fix the
On Tue, Jul 29, 2008 at 9:10 PM, Anthony Kong
[EMAIL PROTECTED] wrote:
I am trying out the example here
(http://www.scipy.org/Numpy_Example_List_With_Doc#cov)
from numpy import *
...
T = array([1.3, 4.5, 2.8, 3.9])
P = array([2.7, 8.7, 4.7, 8.2])
cov(T,P)
The answer is supposed to be
On Fri, Jul 25, 2008 at 12:32 PM, Frank Lagor [EMAIL PROTECTED] wrote:
Perhaps I do not understand something properly, if so could someone please
explain the behavior I notice with numpy.linalg.svd when acting on arrays.
It gives the incorrect answer, but works fine with matrices. My numpy is
On Fri, Jul 25, 2008 at 12:36 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Jul 25, 2008 at 12:32 PM, Frank Lagor [EMAIL PROTECTED] wrote:
Perhaps I do not understand something properly, if so could someone please
explain the behavior I notice with numpy.linalg.svd when acting on arrays
On Fri, Jul 25, 2008 at 1:24 PM, Gideon Simpson [EMAIL PROTECTED] wrote:
How does python (or numpy/scipy) do exponentiation? If I do x**p,
where p is some positive integer, will it compute x*x*...*x (p times),
or will it use logarithms?
Here are some examples:
np.array([[1,2], [3,4]])**2
On Fri, Jul 25, 2008 at 1:32 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, Jul 25, 2008 at 1:24 PM, Gideon Simpson [EMAIL PROTECTED] wrote:
How does python (or numpy/scipy) do exponentiation? If I do x**p,
where p is some positive integer, will it compute x*x*...*x (p times
On Fri, Jul 11, 2008 at 12:58 AM, Charles R Harris
[EMAIL PROTECTED] wrote:
The problem might be the old ipython version (8.1) shipped with ubuntu 8.04.
Debian is slow to update and I've been trying out ubuntu for 64 bit testing.
Debian Lenny is at ipython 0.8.4.
On Wed, Jul 9, 2008 at 12:43 PM, Catherine Moroney
[EMAIL PROTECTED] wrote:
2008/7/9 Catherine Moroney [EMAIL PROTECTED]:
I have a question about performing element-wise logical operations
on numpy arrays.
If a, b and c are numpy arrays of the same size, does the
following syntax
I don't know what to write for a doc string for alterdot and
restoredot. Any ideas?
I'm writing the doc string for array_equal. From the existing one-line
doc string I expect array_equal to return True or False. But I get
this:
np.array_equal([1,2], [1,2])
True
np.array_equal([1,2], [1,2,3])
0
np.array_equal(np.array([1,2]), np.array([1,2,3]))
0
np.__version__
On Mon, Jul 7, 2008 at 9:30 AM, Keith Goodman [EMAIL PROTECTED] wrote:
I'm writing the doc string for array_equal. From the existing one-line
doc string I expect array_equal to return True or False. But I get
this:
np.array_equal([1,2], [1,2])
True
np.array_equal([1,2], [1,2,3])
0
On Mon, Jul 7, 2008 at 9:40 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Mon, Jul 7, 2008 at 9:30 AM, Keith Goodman [EMAIL PROTECTED] wrote:
I'm writing the doc string for array_equal. From the existing one-line
doc string I expect array_equal to return True or False. But I get
On Thu, Jul 3, 2008 at 6:57 AM, Brain Stormer [EMAIL PROTECTED] wrote:
I am using numpy to create an array, then filling some of the values using a
for loop. I was wondering if there is a way to easily fill the values without
iterating through, sort of like array.fill[start:stop, start:stop]? The
On Wed, Jun 11, 2008 at 9:47 AM, Simon Palmer [EMAIL PROTECTED] wrote:
clustering yes, hierarchical no.
I think scipy-cluster [1] only stores the upper triangle of the
distance matrix.
[1] http://code.google.com/p/scipy-cluster/
On Tue, Jun 10, 2008 at 12:56 AM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/6/9 Keith Goodman [EMAIL PROTECTED]:
Does anyone have a function that converts ranks into a Gaussian?
I have an array x:
import numpy as np
x = np.random.rand(5)
I rank it:
x = x.argsort().argsort()
x_ranked
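Mapping ranks to Gaussian quantiles via p = (rank + 1)/(n + 1) and an inverse normal CDF does this; a sketch using the stdlib `statistics.NormalDist` (`scipy.special.ndtri` is the vectorized equivalent):

```python
import numpy as np
from statistics import NormalDist

np.random.seed(0)
x = np.random.rand(5)
ranks = x.argsort().argsort()          # 0 .. n-1

# Convert ranks to probabilities strictly inside (0, 1), then invert Phi
n = len(x)
p = (ranks + 1.0) / (n + 1.0)
gauss = np.array([NormalDist().inv_cdf(pi) for pi in p])

# The middle rank maps to the Gaussian median, 0
assert abs(np.sort(gauss)[n // 2]) < 1e-9
```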
On Sat, Jun 7, 2008 at 6:48 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/6/7 Keith Goodman [EMAIL PROTECTED]:
On Fri, Jun 6, 2008 at 10:46 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/6/6 Keith Goodman [EMAIL PROTECTED]:
I'd like to shift the columns of a 2d array one column
On Mon, Jun 9, 2008 at 7:02 PM, Pierre GM [EMAIL PROTECTED] wrote:
On Monday 09 June 2008 22:06:24 Keith Goodman wrote:
On Mon, Jun 9, 2008 at 4:45 PM, Robert Kern [EMAIL PROTECTED] wrote:
There are subtleties in computing ranks when ties are involved. Take a
look at the implementation
On Mon, Jun 9, 2008 at 7:35 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Mon, Jun 9, 2008 at 21:06, Keith Goodman [EMAIL PROTECTED] wrote:
On Mon, Jun 9, 2008 at 4:45 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Mon, Jun 9, 2008 at 18:34, Keith Goodman [EMAIL PROTECTED] wrote:
Does anyone have
On Sun, Jun 8, 2008 at 7:02 AM, Vineet Jain (gmail) [EMAIL PROTECTED] wrote:
Currently my code handles market returns and stocks as 1d arrays, while the
function below expects a matrix. Is there an equivalent of the function
below which works with numpy arrays?
I'd like to do:
beta, resids,
On Fri, Jun 6, 2008 at 10:46 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/6/6 Keith Goodman [EMAIL PROTECTED]:
I'd like to shift the columns of a 2d array one column to the right.
Is there a way to do that without making a copy?
This doesn't work:
import numpy as np
x = np.random.rand
I'd like to shift the columns of a 2d array one column to the right.
Is there a way to do that without making a copy?
This doesn't work:
import numpy as np
x = np.random.rand(2,3)
x[:,1:] = x[:,:-1]
x
array([[ 0.44789223, 0.44789223, 0.44789223],
[ 0.80600897, 0.80600897,
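The failure above comes from the overlapping views: the slice assignment reads columns it has already overwritten (modern NumPy detects such overlap and copies, but at the time the result was undefined). A column-by-column loop from right to left is safe on any version; a sketch:

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)
orig = x.copy()

# Walk right to left so every source column is read before it is overwritten
for j in range(x.shape[1] - 1, 0, -1):
    x[:, j] = x[:, j - 1]

assert (x[:, 1:] == orig[:, :-1]).all()
```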
On Fri, Jun 6, 2008 at 1:41 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
Here's an automatically generated list from the current numpy trunk.
I should really post the script I used to make this somewhere.
Anybody have any suggestions on a good place to put it?
I don't know where to put it (the
On Thu, Jun 5, 2008 at 4:54 PM, Christopher Marshall
[EMAIL PROTECTED] wrote:
I will be calculating the mean and variance of a vector with millions of
elements.
I was wondering how well numpy's mean and variance functions handle the
numerical stability of such a calculation.
How's this for
On Thu, Jun 5, 2008 at 6:55 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 9:06 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 4:54 PM, Christopher Marshall
Are you worried that the mean might overflow on the intermediate sum?
I suspect (but please
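For reference, Welford's one-pass update is the textbook numerically stable approach here (NumPy's own mean/var subtract the mean internally, so in practice they hold up well too); a sketch:

```python
import numpy as np

def welford(x):
    # One-pass, numerically stable running mean and (population) variance
    mean, m2 = 0.0, 0.0
    for k, v in enumerate(x, 1):
        d = v - mean
        mean += d / k
        m2 += d * (v - mean)
    return mean, m2 / len(x)

np.random.seed(0)
x = np.random.rand(1000) + 1e6   # a large offset stresses naive formulas
mean, var = welford(x)
assert np.isclose(mean, x.mean())
assert np.isclose(var, x.var(), rtol=1e-4)
```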
On Wed, Jun 4, 2008 at 5:39 PM, Vineet Jain (gmail) [EMAIL PROTECTED] wrote:
Timeseries1 = daily or weekly close of stock a
Timeseries2 = daily or weekly close of market index (spx, , etc)
Beta of stock a is what I would like to compute as explained in this article
on Wikipedia:
On Wed, Jun 4, 2008 at 6:04 PM, Keith Goodman [EMAIL PROTECTED] wrote:
It might also be useful to shuffle (np.random.shuffle) the market
returns and repeat the beta calculation many times to estimate the
noise level of your beta estimates.
I guess that is more of a measure of how different
On Sat, May 31, 2008 at 3:09 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:
The reason for the strange behavior of slice assignment is that when the
left and right sides in a slice assignment are overlapping views of the
same array, the result is currently effectively undefined. Same is true
for
This looks good:
import numpy as np
x = np.random.rand(2,3)
x.mean(None, out=x)
---
ValueError: wrong shape for output
But this is strange:
x.std(None, out=x)
0.28264369725
x
array([[ 0.54718012, 0.94296181,
On Thu, May 29, 2008 at 9:26 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
2008/5/23 Keith Goodman [EMAIL PROTECTED]:
On Fri, May 23, 2008 at 11:44 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:22 PM, Keith Goodman [EMAIL PROTECTED] wrote:
But the first example
x
On Thu, May 29, 2008 at 9:26 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
2008/5/23 Keith Goodman [EMAIL PROTECTED]:
On Fri, May 23, 2008 at 11:44 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:22 PM, Keith Goodman [EMAIL PROTECTED] wrote:
But the first example
x
On Thu, May 29, 2008 at 4:36 PM, Raul Kompass [EMAIL PROTECTED] wrote:
I'm new to using numpy. Today I experimented a bit with indexing
motivated by the finding that although
a[a > 0.5] and a[where(a > 0.5)] give the same expected result (elements of
a greater than 0.5)
a[argwhere(a > 0.5)] results
On Thu, May 29, 2008 at 6:32 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Thu, 29 May 2008, Keith Goodman apparently wrote:
a[[0,1]]
That one looks odd. But it is just shorthand for:
a[[0,1],:]
Do you mean that ``a[[0,1],:]`` is a more primitive
expression than ``a[[0,1]]``? In what
On Thu, May 29, 2008 at 8:50 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
Is ``a[[0,1]]`` completely equivalent to ``a[[0,1],...]``
and ``a[[0,1],:]``?
They look, smell, and taste the same. But I can't read array's
__getitem__ since it is in C instead of python.
np.index_exp[[0,1]]
([0, 1],)
Does anyone else get this seg fault?
def fn():
x = np.random.rand(5,2)
x.cumsum(None, out=x)
return x
:
fn()
*** glibc detected *** /usr/bin/python: double free or corruption
(out): 0x08212dc8 ***
I'm running 1.0.4 from Debian Lenny with python 2.5.2 compiled with
gcc
On Wed, May 28, 2008 at 7:30 AM, Keith Goodman [EMAIL PROTECTED] wrote:
Does anyone else get this seg fault?
def fn():
x = np.random.rand(5,2)
x.cumsum(None, out=x)
return x
:
fn()
*** glibc detected *** /usr/bin/python: double free or corruption
(out): 0x08212dc8 ***
I