On Wed, May 28, 2008 at 1:06 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 3:34 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
I wonder if this is something that ought to be looked at for all
functions with an out parameter? ndarray.compress also had problems
with array
On Wed, May 28, 2008 at 1:16 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 1:06 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 3:34 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
I wonder if this is something that ought to be looked at for all
functions
On Tue, May 27, 2008 at 4:27 PM, Nathan Bell [EMAIL PROTECTED] wrote:
On Tue, May 27, 2008 at 5:39 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
I'm not so sure. I know I wouldn't expect to get a different type back
with a call to abs(). Do we really want to change that expectation just
x = np.array([1.0])
np.isnan(x)
array([False], dtype=bool)  # <- Expected
np.isnan(x,x)
array([ 0.])  # <- Surprise (to me)
The same happens with isfinite, isinf, etc.
My use case (self.x is an array):
def isnan(self):
    y = self.copy()
    np.isnan(y.x, y.x)
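The surprise above comes from `out=` reusing the float buffer, so the boolean result is cast back to floats. A hedged sketch (not from the thread) of a dtype-preserving alternative: preallocate a boolean output array instead of overwriting the float one.

```python
import numpy as np

x = np.array([1.0, np.nan])
out = np.empty(x.shape, dtype=bool)  # boolean buffer keeps the expected dtype
np.isnan(x, out=out)
# out is a bool array; x is left untouched
```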
On Sat, May 24, 2008 at 7:09 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Sat, May 24, 2008 at 7:47 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Sat, May 24, 2008 at 8:31 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Hi All,
I'm writing tests for ufuncs and turned up some oddities:
On Sat, May 24, 2008 at 7:36 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Sat, May 24, 2008 at 9:28 PM, Keith Goodman [EMAIL PROTECTED] wrote:
I think it's interesting how python and numpy bools behave differently.
x = np.array([True, True], dtype=bool)
x[0] + x[1]
True
x[0] & x[1]
True
I'm writing unit tests for a module that contains matrices. I was
surprised that these are True:
import numpy.matlib as mp
x = mp.matrix([[mp.nan]])
x.any()
True
x.all()
True
My use case is (x == y).all() where x and y are the same matrix except
that x contains one NaN. Certainly x and
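The surprise follows from truthiness: NaN is a nonzero float, so `any()`/`all()` treat it as True. For the `(x == y).all()` use case a NaN-aware comparison is needed; a minimal sketch with plain arrays (note `equal_nan` was added to `allclose` long after this thread; with 2008-era NumPy one would compare `isnan` masks by hand):

```python
import numpy as np

x = np.array([[np.nan]])
assert bool(x.any()) and bool(x.all())  # NaN is nonzero, hence truthy

# NaN-aware equality: equal where values match or both are NaN
y = np.array([[np.nan]])
same = np.allclose(x, y, equal_nan=True)
```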
On Fri, May 23, 2008 at 10:16 AM, Keith Goodman [EMAIL PROTECTED] wrote:
I'm writing unit tests for a module that contains matrices. I was
surprised that these are True:
import numpy.matlib as mp
x = mp.matrix([[mp.nan]])
x.any()
True
x.all()
True
My use case is (x == y).all() where
On Fri, May 23, 2008 at 11:44 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:22 PM, Keith Goodman [EMAIL PROTECTED] wrote:
But the first example
x = mp.matrix([[mp.nan]])
x
matrix([[ NaN]])
x.all()
True
x.any()
True
is still surprising.
On non-boolean
On Thu, May 22, 2008 at 8:59 AM, Kevin Jacobs [EMAIL PROTECTED] wrote:
After poking around for a bit, I was wondering if there was a faster method
for the following:
# Array of index values 0..n
items = numpy.array([0,3,2,1,4,2],dtype=int)
# Count the number of occurrences
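For counting occurrences of small non-negative integers, one fast approach (a sketch, not claimed to be the thread's answer) is `np.bincount`, which does it in a single call:

```python
import numpy as np

items = np.array([0, 3, 2, 1, 4, 2], dtype=int)
counts = np.bincount(items)  # counts[k] = number of occurrences of value k
```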
On Thu, May 22, 2008 at 9:08 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Thu, May 22, 2008 at 8:59 AM, Kevin Jacobs [EMAIL PROTECTED] wrote:
After poking around for a bit, I was wondering if there was a faster method
for the following:
# Array of index values 0..n
items
On Thu, May 22, 2008 at 9:15 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Thu, May 22, 2008 at 9:08 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Thu, May 22, 2008 at 8:59 AM, Kevin Jacobs [EMAIL PROTECTED] wrote:
After poking around for a bit, I was wondering
On Thu, May 22, 2008 at 9:22 AM, Robin [EMAIL PROTECTED] wrote:
On Thu, May 22, 2008 at 4:59 PM, Kevin Jacobs [EMAIL PROTECTED] wrote:
After poking around for a bit, I was wondering if there was a faster method
for the following:
# Array of index values 0..n
items =
I have a class that stores some of its data in a matrix. I can't
figure out how to do right adds with a matrix. Here's a toy example:
class Myclass(object):
    def __init__(self, x, a):
        self.x = x  # numpy matrix
        self.a = a  # some attribute, say, an integer
    def
On Wed, May 21, 2008 at 4:07 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Wed, May 21, 2008 at 5:28 PM, Keith Goodman [EMAIL PROTECTED] wrote:
I have a class that stores some of its data in a matrix. I can't
figure out how to do right adds with a matrix. Here's a toy example:
class Myclass
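The snag with right-adds is that `ndarray.__add__` runs first and tries to consume the custom object. One way out (a sketch under assumptions, not necessarily the thread's resolution) is to tell NumPy to defer, so Python falls back to the class's `__radd__`:

```python
import numpy as np

class Myclass(object):
    __array_ufunc__ = None     # make ndarray ops return NotImplemented for us
    __array_priority__ = 15.0  # legacy hook with the same intent

    def __init__(self, x, a):
        self.x = x  # 2d array standing in for the matrix
        self.a = a

    def __add__(self, other):
        return Myclass(self.x + other, self.a)

    __radd__ = __add__  # array + Myclass now routes here
```

With this in place, `np.ones((1, 1)) + Myclass(np.zeros((1, 1)), 2)` returns a Myclass rather than an array.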
On Tue, May 20, 2008 at 9:11 AM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/5/20 Vasileios Gkinis [EMAIL PROTECTED]:
I have a question concerning nan in NumPy.
Lets say i have an array of sample measurements
a = array((2,4,nan))
in NumPy calculating the mean of the elements in array a
On Tue, May 20, 2008 at 6:12 PM, David Cournapeau
[EMAIL PROTECTED] wrote:
Keith Goodman wrote:
Or
np.nansum(a) / np.isfinite(a).sum()
A nanmean would be nice to have in numpy.
nanmean, nanstd and nanmedian are available in scipy, though.
Thanks for pointing that out. Studying nanmedian
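The one-liner above generalizes, and current NumPy ships `np.nanmean` directly (added well after this thread). A sketch of both forms:

```python
import numpy as np

a = np.array([2.0, 4.0, np.nan])
mean_manual = np.nansum(a) / np.isfinite(a).sum()  # the thread's recipe
mean_builtin = np.nanmean(a)                       # modern NumPy equivalent
```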
On Fri, May 16, 2008 at 11:23 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 16, 2008 at 11:23 AM, Stuart Brorson [EMAIL PROTECTED] wrote:
In [66]: numpy.sign(numpy.nan)
Out[66]: 0.0
IMO, the output should be NaN, not zero.
You're probably right. I would like to see what other systems
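For the record, later NumPy agrees with the complaint: `sign` propagates NaN rather than returning 0. A quick check:

```python
import numpy as np

s = np.sign(np.nan)  # modern NumPy: sign(NaN) is NaN, not 0.0
```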
The most basic, and the most contentious, design decision of a new
matrix class is matrix indexing. There seems to be two camps:
1. The matrix class should be more like the array class. In particular
x[0,:] should return a 1d array or a 1d array like object that
contains the orientation (row or
On Sun, May 11, 2008 at 12:44 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Sun, May 11, 2008 at 1:01 PM, Keith Goodman [EMAIL PROTECTED] wrote:
The most basic, and the most contentious, design decision of a new
matrix class is matrix indexing. There seems to be two camps:
1. The matrix
I added a wiki page: http://scipy.org/NewMatrixSpec
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
On Sun, May 11, 2008 at 8:09 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Sun, May 11, 2008 at 12:16 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
To be specific: I do not recall any place in the NumPy Book
where this behavior is promised.
It's promised in the docstring!
A matrix is a
On Sat, May 10, 2008 at 8:08 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sat, 10 May 2008, Jarrod Millman wrote:
unless there are major objections, I am going to back out
the matrices changes in the 1.1 branch.
If these are backed out, will some kind of deprecation
warning be added for
On Sat, May 10, 2008 at 9:24 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sat, 10 May 2008, Keith Goodman apparently wrote:
Shouldn't a deprecation warning explain what the future
behavior will be?
I do not think so. I think the warning should say:
use x[0,:] instead of x[0] to return row 0
Would it break a numpy design principle to allow ix_ to take 1xn and
nx1 matrices as input?
Here's the use case I had in mind:
import numpy.matlib as mp
x = mp.asmatrix(mp.arange(9).reshape(3,3))
ridx = x.sum(1) > 3
cidx = x.sum(0) > 9
x[mp.ix_(ridx, cidx)]
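With plain arrays the use case works as-is, since `np.ix_` accepts boolean index arrays; the nx1/1xn matrix versions would need flattening to 1d first. A sketch with arrays:

```python
import numpy as np

x = np.arange(9).reshape(3, 3)
ridx = x.sum(1) > 3          # boolean row selector: [False, True, True]
cidx = x.sum(0) > 9          # boolean column selector: [False, True, True]
sub = x[np.ix_(ridx, cidx)]  # rows 1,2 crossed with columns 1,2
```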
On Sat, May 10, 2008 at 2:35 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:
Please, let's just leave the current matrix class alone. Any change
sufficient to make matrix not terrible, will break everyone's code. Instead,
the goal should be build a new matrix class (say newmatrix) where we can
The recently proposed changes to the matrix class was the final push I
needed to begin slowly porting my package from matrices to arrays. But
I'm already stuck in the first stage (all new modules must use
arrays).
Here's a toy example of iterating over columns of a matrix:
x is a nxm matrix
y is
On Fri, May 9, 2008 at 11:23 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 9, 2008 at 12:52 PM, Keith Goodman [EMAIL PROTECTED] wrote:
The recently proposed changes to the matrix class was the final push I
needed to begin slowly porting my package from matrices to arrays. But
I'm
On Fri, May 9, 2008 at 11:43 AM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 9, 2008 at 1:41 PM, Keith Goodman [EMAIL PROTECTED] wrote:
That looks good. But at the end of the function I'll have to convert
back to a 1d array if the input is 1d
np.whence_you_came_from(x)
I guess
Is there a reason why clip doesn't take out as an input? It seems to
work when I added it.
On Fri, May 9, 2008 at 4:39 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 9, 2008 at 6:27 PM, Keith Goodman [EMAIL PROTECTED] wrote:
Is there a reason why clip doesn't take out as an input?
Oversight. The out= argument was added to the .clip() method
relatively recently.
Oh. I didn't
<rant>I noticed that scipy.org uses google-analytics. Does the numpy
project need it? It is even on the numpy trac which to me gives new
meaning to trac.</rant>
I'm trying to design a labeled array class. A labeled array contains a
2d array and two lists. One list labels the rows of the array (e.g.
variable names) and another list labels the columns of the array (e.g.
dates).
You can sum (or multiply, divide, subtract, etc.) two labeled arrays
that have
On Tue, May 6, 2008 at 10:03 AM, Timothy Hochberg [EMAIL PROTECTED] wrote:
Why don't you just roll your own?
def nans(shape, dtype=float):
... a = np.empty(shape, dtype)
... a.fill(np.nan)
... return a
...
nans([3,4])
array([[ NaN,  NaN,  NaN,  NaN],
       [ NaN,  NaN,  NaN,  NaN],
       [ NaN,  NaN,  NaN,  NaN]])
What is .T? It looks like an attribute, behaves like a method, and
smells like magic. I'd like to add it to my class but don't know where
to begin.
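`.T` is a property: attribute syntax backed by a method, computed on access. Adding one to a class takes a few lines; this sketch (hypothetical class name) mirrors what ndarray does:

```python
import numpy as np

class Labeled(object):
    def __init__(self, x):
        self.x = np.asarray(x)

    @property
    def T(self):
        # runs on attribute access, like ndarray.T; no parentheses needed
        return Labeled(self.x.T)
```

Usage: `Labeled([[1, 2]]).T.x` has shape (2, 1).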
On Tue, May 6, 2008 at 12:28 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Tue, May 6, 2008 at 2:22 PM, Keith Goodman [EMAIL PROTECTED] wrote:
What is .T? It looks like an attribute, behaves like a method, and
smells like magic. I'd like to add it to my class but don't know where
to begin
I'm the click of a button away from changing the python default on my
Debian Lenny system from 2.4 to 2.5. Has anyone experienced any numpy
issues after the switch?
On Sun, May 4, 2008 at 7:40 AM, Timothy Hochberg [EMAIL PROTECTED] wrote:
If you don't need the old array after the cut, I think that you could use
the input array as the output array and then take a slice, saving a
temporary and one-quarter of your assignments (on average). Something like.
On Sun, May 4, 2008 at 8:14 AM, Hoyt Koepke [EMAIL PROTECTED] wrote:
and then update the others only if the row you're updating has that
minimum value in it. Then, when scanning for the min dist, you only
need to scan O(n) rows.
Sorry, let me clarify -- Update the entries
On Fri, May 2, 2008 at 7:25 PM, Robert Kern [EMAIL PROTECTED] wrote:
Assuming x is contiguous and you can modify x in-place:
In [1]: from numpy import *
In [2]: def dist(x):
   ...:     x += 1e10 * eye(x.shape[0])  # mask the diagonal in-place
   ...:     i, j = where(x == x.min())
   ...:     return i[0], j[0]
On Sun, May 4, 2008 at 5:59 PM, Damian R. Eads [EMAIL PROTECTED] wrote:
Hi,
Looks like a fun discussion: it's too bad for me I did not join it
earlier. My first try at scipy-cluster was completely in Python. Like you,
I also tried to find the most efficient way to transform the distance
On Fri, May 2, 2008 at 11:51 PM, Hoyt Koepke [EMAIL PROTECTED] wrote:
You know, for linkage clustering and BHC, I've found it a lot easier
to work with an intermediate 1d map of indices and never resize the
distance matrix. I then just remove one element from this map at each
iteration,
On Sat, May 3, 2008 at 5:05 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Robert Kern wrote:
I can get a ~20% improvement with the following:
In [9]: def mycut(x, i):
...: A = x[:i,:i]
...: B = x[:i,i+1:]
...: C = x[i+1:,:i]
...: D = x[i+1:,i+1:]
How can I make this function faster? It removes the i-th row and
column from an array.
def cut(x, i):
    idx = range(x.shape[0])
    idx.remove(i)
    y = x[idx,:]
    y = y[:,idx]
    return y
import numpy as np
x = np.random.rand(500,500)
timeit cut(x, 100)
100 loops, best of 3: 16.8 ms
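A compact alternative (not from the thread, and no speed claim made) is `np.delete`, which removes the row and column without hand-built index lists:

```python
import numpy as np

def cut(x, i):
    # drop row i, then column i
    return np.delete(np.delete(x, i, axis=0), i, axis=1)
```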
On Fri, May 2, 2008 at 5:38 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 6:24 PM, Keith Goodman [EMAIL PROTECTED] wrote:
How can I make this function faster? It removes the i-th row and
column from an array.
Why do you want to do that?
Single linkage clustering; x
On Fri, May 2, 2008 at 5:47 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 5:38 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 6:24 PM, Keith Goodman [EMAIL PROTECTED] wrote:
How can I make this function faster? It removes the i-th row
On Fri, May 2, 2008 at 6:05 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 7:24 PM, Keith Goodman [EMAIL PROTECTED] wrote:
How can I make this function faster? It removes the i-th row and
column from an array.
def cut(x, i):
idx = range(x.shape[0
On Fri, May 2, 2008 at 6:29 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Isn't the lengthy part finding the distance between clusters? I can think
of several ways to do that, but I think you will get a real speedup by doing
that in c or c++. I have a module made in boost python that holds
On Tue, Apr 29, 2008 at 10:11 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
indexing should should work just like ndarray indexing,
except that when it produces a 2d result then a matrix
is returned. (This means giving up the current behavior
of ``x[0,:]``.)
I often use x[i,:] and x[:,i] where x
On Tue, Apr 29, 2008 at 11:21 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Tue, 29 Apr 2008, Keith Goodman apparently wrote:
I often use x[i,:] and x[:,i] where x is a matrix and i is
a scalar. I hope this continues to return a matrix.
1. Could you give an example of the circumstances
On Tue, Apr 29, 2008 at 11:46 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Tue, Apr 29, 2008 at 11:21 AM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Tue, 29 Apr 2008, Keith Goodman apparently wrote:
I often use x[i,:] and x[:,i] where x is a matrix and i is
a scalar. I hope
On Tue, Apr 29, 2008 at 2:18 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
On 29/04/2008, Keith Goodman [EMAIL PROTECTED] wrote:
In my use i is most commonly an array (i = M.where(y.A)[0] where y is
a nx1 matrix), sometimes a list, and in ipython when debugging or
first writing the code
On Tue, Apr 29, 2008 at 2:43 PM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:
The problem is that ``x[0]`` being 2d has produced a variety
of anomalies, and the natural fix is for ``x[0]`` to be 1d.
Gael has argued strongly that she should be able to use the
following notation:
On Tue, Apr 29, 2008 at 5:18 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
On Tue, 29 Apr 2008, Keith Goodman apparently wrote:
I hope that changing x[0,:] is considered a major change since it will
break most any package based on matrices (mine). And so
I hope that such a change wouldn't
On Thu, Mar 6, 2008 at 12:36 PM, [EMAIL PROTECTED] wrote:
Proposed solution:
------------------
It's probably not the best way (noob, that's me), but this situation could
be fixed by:
1) add a fill keyword to loadtxt such that
def loadtxt(...,fill=-999):
2) add the
On Thu, Mar 6, 2008 at 1:12 PM, Sean Arms [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
On Thu, Mar 6, 2008 at 12:36 PM, [EMAIL PROTECTED] wrote:
Proposed solution:
------------------
It's probably not the best way (noob, that's me), but this situation
could
On Feb 5, 2008 10:54 AM, mark [EMAIL PROTECTED] wrote:
Is there a function to compute the matrix rank of a numpy array or
matrix?
I'm sure there's a more direct way, but numpy.linalg.lstsq returns the
rank of a matrix.
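NumPy later grew a direct routine, `np.linalg.matrix_rank`, so the lstsq detour is no longer needed:

```python
import numpy as np

a = np.array([[1.0, 2.0], [2.0, 4.0]])  # rank-deficient: row 2 = 2 * row 1
r = np.linalg.matrix_rank(a)
```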
On Feb 5, 2008 11:58 AM, [EMAIL PROTECTED] wrote:
The problem is for an array larger than 256*256 the sum is going crazy.
In [45]: numpy.arange(256*256)
Out[45]: array([0, 1, 2, ..., 65533, 65534, 65535])
In [46]: numpy.arange(256*256).sum()
Out[46]: 2147450880
In [47]:
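The "crazy" sums come from accumulating in the default integer width. The value shown, 2147450880, happens to still be exact (it is 65535*65536/2, just under 2**31), but anything larger wraps on a platform whose default int is 32-bit. Requesting a wider accumulator avoids it:

```python
import numpy as np

n = 300 * 300
total = np.arange(n).sum(dtype=np.int64)  # force a 64-bit accumulator
# total equals n*(n-1)/2 exactly, even where a 32-bit sum would overflow
```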
On Dec 31, 2007 2:43 PM, Jarrod Millman [EMAIL PROTECTED] wrote:
I just wanted to announce that we now have a NumPy/SciPy blog
aggregator thanks to Gaël Varoquaux: http://planet.scipy.org/
http://scipy.com sends me to the planet scipy page. Shouldn't it go to
http://scipy.org instead?
On Dec 26, 2007 12:22 PM, Mathew Yeates [EMAIL PROTECTED] wrote:
I have an arbitrary number of lists. I want to form all possible
combinations from all lists. So if
r1=[dog,cat]
r2=[1,2]
I want to return [[dog,1],[dog,2],[cat,1],[cat,2]]
It's obvious when the number of lists is not
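The general-N answer lives in the standard library: `itertools.product` takes any number of lists:

```python
from itertools import product

r1 = ['dog', 'cat']
r2 = [1, 2]
combos = [list(t) for t in product(r1, r2)]
# [['dog', 1], ['dog', 2], ['cat', 1], ['cat', 2]]
```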
import numpy.matlib as M
x = M.asmatrix(['a', 'b', 'c'])
x == 'a'
array([[ True, False, False]], dtype=bool) # I expected a matrix
x = M.asmatrix([1, 2, 3])
x == 1
matrix([[ True, False, False]], dtype=bool) # This looks good
M.__version__
'1.0.5.dev4445'
On Nov 13, 2007 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
On Nov 12, 2007 10:51 AM, David Cournapeau [EMAIL PROTECTED] wrote:
On Nov 13, 2007 3:37 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Nov 12, 2007 10:10 AM, Peter Creasey [EMAIL PROTECTED] wrote
On Nov 12, 2007 10:51 AM, David Cournapeau [EMAIL PROTECTED] wrote:
On Nov 13, 2007 3:37 AM, Keith Goodman [EMAIL PROTECTED] wrote:
On Nov 12, 2007 10:10 AM, Peter Creasey [EMAIL PROTECTED] wrote:
The following code calling numpy v1.0.4 fails to terminate on my machine,
which
On Nov 12, 2007 10:10 AM, Peter Creasey [EMAIL PROTECTED] wrote:
The following code calling numpy v1.0.4 fails to terminate on my machine,
which was not the case with v1.0.3.1
from numpy import arange, float64
from numpy.linalg import eig
a = arange(13*13, dtype = float64)
I noticed that
python setup.py install --prefix=/usr/local
installs a new file called numpy-1.0.5.dev4445.egg-info. It used to be
that files were only installed in
/usr/local/lib/python2.4/site-packages/numpy.
What is the file used for?
$ cat
On 8/9/07, Gary Ruben [EMAIL PROTECTED] wrote:
FWIW,
The list comprehension is faster than using map()
In [7]: %timeit map(lambda x:x[0],bounds)
1 loops, best of 3: 49.6 µs per loop
In [8]: %timeit [x[0] for x in bounds]
1 loops, best of 3: 20.8 µs per loop
zip is even faster on
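The zip variant being referred to transposes the list of tuples in one call, yielding all first elements at once (a sketch; `bounds` here is illustrative sample data):

```python
bounds = [(1, 2), (3, 4), (5, 6)]
firsts = list(zip(*bounds))[0]  # all first elements in a single pass
```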
On 8/8/07, mark [EMAIL PROTECTED] wrote:
But what if I want to multiply every value between -5 and +5 by 100.
This does NOT work:
d[ d>-5 and d<5 ] *= 100
d[(d>-5) & (d<5)] *= 100
On 8/7/07, Nils Wagner [EMAIL PROTECTED] wrote:
I have a list of integer numbers. The entries can vary between 0 and 19.
How can I count the occurrence of any number. Consider
data
[9, 6, 9, 6, 7, 9, 9, 10, 7, 9, 9, 6, 7, 9, 8, 8, 11, 9, 6, 7, 10, 9, 7, 9,
7, 8, 9, 8, 7, 9]
Is there a
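Since the values are bounded (0 to 19), `np.bincount` with `minlength` gives a fixed-width count vector even when some values never occur:

```python
import numpy as np

data = [9, 6, 9, 6, 7, 9, 9, 10, 7, 9, 9, 6, 7, 9, 8, 8, 11, 9, 6, 7,
        10, 9, 7, 9, 7, 8, 9, 8, 7, 9]
counts = np.bincount(data, minlength=20)  # counts[k] = occurrences of k
```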
On 8/7/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 8/7/07, Nils Wagner [EMAIL PROTECTED] wrote:
I have a list of integer numbers. The entries can vary between 0 and 19.
How can I count the occurrence of any number. Consider
data
[9, 6, 9, 6, 7, 9, 9, 10, 7, 9, 9, 6, 7, 9, 8, 8, 11
On 7/31/07, Eric Emsellem [EMAIL PROTECTED] wrote:
Here is an example:
start = -30.
end = 30.
npix = 31
step = (end - start) / (npix - 1.)
gridX = num.arange(start-step/2., end+step/2., step)
array([-31., -29., -27., -25., -23., -21., -19., -17., -15., -13., -11.,
-9., -7., -5.,
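Floating-point `arange` with a fractional step can gain or lose the endpoint to rounding. `np.linspace` takes the point count instead of the step and sidesteps this; a sketch of the same grid:

```python
import numpy as np

start, end, npix = -30.0, 30.0, 31
step = (end - start) / (npix - 1.0)
# npix + 1 cell edges, endpoints hit exactly
gridX = np.linspace(start - step / 2.0, end + step / 2.0, npix + 1)
```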
On 7/17/07, David Cournapeau [EMAIL PROTECTED] wrote:
I noticed that min and max already ignore Nan, which raises the
question: why are there nanmin and nanmax functions ?
Using min and max when you have NaNs is dangerous. Here's an example:
x = M.matrix([[ 1.0, 2.0, M.nan]])
x.min()
1.0
x
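The NaN-safe variants exist precisely for this; note that in current NumPy `min`/`max` propagate NaN rather than ignoring it, so the behavior observed above was never something to rely on. A contrast sketch with arrays:

```python
import numpy as np

x = np.array([1.0, 2.0, np.nan])
safe = np.nanmin(x)   # ignores the NaN -> 1.0
raw = x.min()         # NaN-contaminated in current NumPy
```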
On 7/12/07, David Cournapeau [EMAIL PROTECTED] wrote:
While profiling some code, I noticed that sum in numpy is kind of
slow once you use axis argument:
Here is a related thread:
http://projects.scipy.org/pipermail/numpy-discussion/2007-February/025903.html
On 7/11/07, Michał Szpadzik [EMAIL PROTECTED] wrote:
I would like to know how change this True/False to 1/0
Multiplying by 1 changes the True/False to 1/0. But sum gives the same
result for True/False as it does for 1/0. So maybe you don't need to
convert from True/False to 1/0?
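When an explicit 1/0 array is still wanted (say, for a dtype-sensitive consumer), `astype` does the conversion, and the sums agree either way as noted above:

```python
import numpy as np

b = np.array([True, False, True])
ints = b.astype(int)  # array([1, 0, 1])
```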
On 5/16/07, Anne Archibald [EMAIL PROTECTED] wrote:
Numpy has a max() function. It takes an array, and possibly some extra
arguments (axis and default). Unfortunately, this means that
numpy.max(-1.3,2,7)
-1.3
This can lead to surprising bugs in code that either explicitly
expects it to
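The trap is that `numpy.max`'s extra positional arguments are `axis` and `out`, not more values, so `numpy.max(-1.3, 2, 7)` silently treats 2 as an axis. For an elementwise maximum of several values, `np.maximum` (binary, chainable) is the intended tool:

```python
import numpy as np

result = np.maximum(np.maximum(-1.3, 2), 7)  # genuine max of the three values
```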
On 4/29/07, dmitrey [EMAIL PROTECTED] wrote:
now there is
MATLAB       NDArray        Matrix
size(a,n)    a.shape[n]     a.shape[n]
but it should be
size(a,n)    a.shape[n-1]   a.shape[n-1]
I made the change. But how should we change the comment?
get
On 4/25/07, Fernando Perez [EMAIL PROTECTED] wrote:
On 4/25/07, Andrew Straw [EMAIL PROTECTED] wrote:
The May/June issue of Computing in Science and Engineering
http://computer.org/cise: is out and has a Python theme. Many folks we
know and love from the community and mailing lists
On 4/23/07, Christopher Barker [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
Here's a better doc string that explains This will be a new view
object if possible; otherwise, it will return a copy.
Does this exist somewhere, or are you contributing it now?
At the moment
On 4/17/07, David Cournapeau [EMAIL PROTECTED] wrote:
Now that you mention it, I am also puzzled by this one: I can see why
you would use atlas3-sse2-dev without atlas3-base-dev (for the static
library), but not having atlas3-base-dev makes it imposible to
dynamically link to atlas libraries
On 4/18/07, Charles R Harris [EMAIL PROTECTED] wrote:
On 4/18/07, Keith Goodman [EMAIL PROTECTED] wrote:
I'd like to compile atlas so that I can take full advantage of my core
2 duo. Numpy dynamically links to the debian binary of atlas-sse that
I installed. But the atlas website says
On 4/18/07, rex [EMAIL PROTECTED] wrote:
Keith Goodman [EMAIL PROTECTED] [2007-04-18 10:49]:
I'd like to compile atlas so that I can take full advantage of my core
2 duo.
If your use is entirely non-commercial you can use Intel's MKL with
built-in optimized BLAS and LAPACK and avoid
On 4/5/07, Christopher Barker [EMAIL PROTECTED] wrote:
If only MS would supply BLAS/LAPACK.
Yeah, too bad more people don't use atlas. Then MS would embrace
atlas, extend it, and...I forget the last step.
On 4/5/07, Robert Kern [EMAIL PROTECTED] wrote:
There have been significant improvements to the Core
Duo 2 code in ATLAS releases [snip]
What kind of speed up are people seeing with Core 2 Duo aware ATLAS?
On 3/9/07, Bill Baxter [EMAIL PROTECTED] wrote:
Has enough time passed with no top level random function that we can
now put one back in?
If I recall, the former top level rand() was basically removed because
it didn't adhere to the shapes are always tuples convention.
Top-level rand
On 2/15/07, Sebastian Haase [EMAIL PROTECTED] wrote:
On 2/15/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 2/15/07, Keith Goodman [EMAIL PROTECTED] wrote:
I built a new computer: Core 2 Duo 32-bit Debian etch with numpy
1.0.2.dev3546. The repeatability test still fails. In order to make
On 1/27/07, Keith Goodman [EMAIL PROTECTED] wrote:
I get slightly different results when I repeat a calculation. In a
long simulation the differences snowball and swamp the effects I am
trying to measure.
In the attach script there are three tests.
In test1, I construct matrices x and y
On 2/14/07, Tommy Grav [EMAIL PROTECTED] wrote:
I need to fit a gaussian profile to a set of points and would like to
use scipy (or numpy) to
do the least square fitting if possible. I am however unsure if the
proper routines are
available, so I thought I would ask to get some hints to get
This eats up memory quickly on my system.
import numpy.matlib as M
def memleak():
    a = M.randn(500, 1)
    while True:
        a = a.argsort(0)
On 2/5/07, Francesc Altet [EMAIL PROTECTED] wrote:
El dl 05 de 02 del 2007 a les 08:45 -0800, en/na Keith Goodman va
escriure:
This eats up memory quickly on my system.
import numpy.matlib as M
def memleak():
    a = M.randn(500, 1)
    while True:
        a = a.argsort(0)
Yeah
On 2/4/07, Sebastian Haase [EMAIL PROTECTED] wrote:
On 2/3/07, Robert Kern [EMAIL PROTECTED] wrote:
Stephen Simmons wrote:
The question though is whether all of the inner loop's overhead is
necessary.
My counterexample using numpy.dot() suggests there's considerable scope
for
On 2/3/07, Stephen Simmons [EMAIL PROTECTED] wrote:
Does anyone know why there is an order of magnitude difference
in the speed of numpy's array.sum() function depending on the axis
of the matrix summed?
To see this, import numpy and create a big array with two rows:
import numpy
On 2/2/07, Bruce Southey [EMAIL PROTECTED] wrote:
I am curious why I do not see any mention of the compilers and
versions that were used in this thread. Having just finally managed to
get SciPY installed from scratch (but not with atlas), I could see
that using different compliers or versions
On 2/1/07, Robert Kern [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
A port to Octave of the test script works fine on the same system.
Are you sure that your Octave port uses ATLAS to do the matrix product? Could
you post your port?
Here's the port. Yes, Octave uses atlas for matrix
On 1/30/07, Travis Oliphant [EMAIL PROTECTED] wrote:
I'm trying to help out the conversion to NumPy by offering patches to
various third-party packages that have used Numeric in the past.
Does anybody here have requests for which packages they would like to
see converted to use NumPy?
On 1/31/07, BBands [EMAIL PROTECTED] wrote:
import pyodbc
import numpy as np
connection = pyodbc.connect('DSN=DSNname')
cursor = connection.cursor()
symbol = 'ibm'
request = "select to_days(Date), Close from price where symbol = '" +
symbol + "' and date > '2006-01-01'"
for row in
On 1/31/07, Tom Denniston [EMAIL PROTECTED] wrote:
i would do something like the following. I don't have your odbc
library so I mocked it up with a fake iterator called it. This
example would be for a two column result where the first is an int and
the second a string. Note it creates a
On 1/27/07, Keith Goodman [EMAIL PROTECTED] wrote:
I get slightly different results when I repeat a calculation. In a
long simulation the differences snowball and swamp the effects I am
trying to measure.
Here's a unit test for the problem. I am distributing it in hopes of
raising awareness
On 1/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
That's odd, the LSB bit of the double precision mantissa is only about
2.2e-16, so you can't *get* differences as small as 8.4e-22 without about
70 bit mantissa's. Hmmm, and extended double precision only has 63 bit
mantissa's. Are you
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
That's odd, the LSB bit of the double precision mantissa is only about
2.2e-16, so you can't *get* differences as small as 8.4e-22 without about
70 bit mantissa's. Hmmm, and extended
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
That's odd, the LSB bit of the double precision mantissa is only about
2.2e-16, so you can't *get* differences as small as 8.4e
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 1/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
That's odd, the LSB bit of the double precision mantissa is only about