2009/12/18 Wayne Watson sierra_mtnv...@sbcglobal.net:
It's starting to come back to me. I found a few old graphics books that
get into transformation matrices and such. Yes, earth centered. I ground
out some code with geometry and trig that at least gets the first point
on the path right. I
2009/12/14 Dr. Phillip M. Feldman pfeld...@verizon.net:
When I issue the command
np.lookfor('bessel')
I get the following:
Search results for 'bessel'
---
numpy.i0
Modified Bessel function of the first kind, order 0.
numpy.kaiser
Return the Kaiser window.
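As the search output shows, NumPy proper only ships the modified Bessel function i0 (plus the Kaiser window, which uses it); the fuller Bessel family lives in scipy.special. A minimal illustrative check using only NumPy:

```python
import numpy as np

# numpy itself only provides I0, the modified Bessel function of the
# first kind, order 0; jv/yv/iv/kv etc. are in scipy.special
value = np.i0(0.0)   # mathematically, I0(0) = 1
```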
2009/12/12 T J tjhn...@gmail.com:
Hi,
Suppose I have an array of shape: (n, k, k). In this case, I have n
k-by-k matrices. My goal is to compute the product of a (potentially
large) user-specified selection (with replacement) of these matrices.
For example,
x =
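The example in the message is cut off, but one straightforward way to compute such a product (a sketch, not the code from the thread) is fancy indexing plus a reduce over the first axis:

```python
import numpy as np
from functools import reduce

n, k = 5, 3
rng = np.random.default_rng(0)
x = rng.random((n, k, k))    # n k-by-k matrices
sel = [0, 2, 2, 4]           # user-specified selection, with replacement

# left-to-right matrix product x[0] @ x[2] @ x[2] @ x[4]
prod = reduce(np.dot, x[sel])
```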
2009/12/9 Dr. Phillip M. Feldman pfeld...@verizon.net:
Pauli Virtanen-3 wrote:
Nevertheless, I can't really regard dropping the imaginary part a
significant issue.
I am amazed that anyone could say this. For anyone who works with Fourier
transforms, or with electrical circuits, or with
2009/12/2 Keith Goodman kwgood...@gmail.com:
On Wed, Dec 2, 2009 at 5:09 PM, Neal Becker ndbeck...@gmail.com wrote:
David Warde-Farley wrote:
On 2-Dec-09, at 6:55 PM, Howard Chong wrote:
def myFindMaxA(myList):
    """implement finding maximum value with for loop iteration"""
    maxIndex=0
2009/12/2 David Warde-Farley d...@cs.toronto.edu:
On 2-Dec-09, at 8:09 PM, Neal Becker wrote:
Not bad, although I wonder whether a partial sort could be faster.
Probably (if the array is large) but depending on n, not if it's in
Python. Ideal problem for Cython, though.
How is Cython
2009/12/2 Keith Goodman kwgood...@gmail.com:
On Wed, Dec 2, 2009 at 5:27 PM, Anne Archibald
peridot.face...@gmail.com wrote:
2009/12/2 Keith Goodman kwgood...@gmail.com:
On Wed, Dec 2, 2009 at 5:09 PM, Neal Becker ndbeck...@gmail.com wrote:
David Warde-Farley wrote:
On 2-Dec-09, at 6:55 PM
2009/11/30 James Bergstra bergs...@iro.umontreal.ca:
Your question involves a few concepts:
- an integer vector describing the position of an element
- the logical shape (another int vector)
- the physical strides (another int vector)
Ignoring the case of negative offsets, a physical
2009/11/29 Dr. Phillip M. Feldman pfeld...@verizon.net:
All of the statistical packages that I am currently using and have used in
the past (Matlab, Minitab, R, S-plus) calculate standard deviation using the
sqrt(1/(n-1)) normalization, which gives a result that is unbiased when
sampling from
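NumPy exposes both conventions through the ddof ("delta degrees of freedom") parameter; the 1/(n-1) behavior the other packages use is ddof=1:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
biased = np.std(x)            # default ddof=0: 1/n normalization
unbiased = np.std(x, ddof=1)  # 1/(n-1): unbiased variance estimate,
                              # the Matlab/Minitab/R/S-plus convention
```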
2009/11/28 Wayne Watson sierra_mtnv...@sbcglobal.net:
Anne Archibald wrote:
2009/11/28 Wayne Watson sierra_mtnv...@sbcglobal.net:
I was only illustrating a way that I would not consider, since the
hardware has already created the pdf. I've already coded it pretty much
as you have suggested
2009/11/28 Wayne Watson sierra_mtnv...@sbcglobal.net:
I was only illustrating a way that I would not consider, since the
hardware has already created the pdf. I've already coded it pretty much
as you have suggested. As I think I mentioned above, I'm a bit
surprised numpy doesn't provide the
2009/11/27 Christopher Barker chris.bar...@noaa.gov:
The point is that I don't think we can just decide to use Unicode or
Bytes in all places where PyString was used earlier.
Agreed.
I only half agree. It seems to me that for almost all situations where
PyString was used, the right data type
2009/11/16 Christopher Barker chris.bar...@noaa.gov:
Charles R Harris wrote:
I would like some advise on the best way to add the new functions. I've
added a new package polynomial, and that package contains four new
modules: chebyshev, polynomial, polytemplate, polyutils.
This seems to
2009/11/16 Christopher Barker chris.bar...@noaa.gov:
Anne Archibald wrote:
2009/11/13 Christopher Barker chris.bar...@noaa.gov:
Wow! great -- you sounded interested, but I had no idea you'd run out
and do it! thanks! we'll check it out.
well, it turns out the Python version is unacceptably
2009/11/16 Robert Kern robert.k...@gmail.com:
On Mon, Nov 16, 2009 at 18:05, Christopher Barker chris.bar...@noaa.gov
wrote:
Charles R Harris wrote:
That's what I ended up doing. You still need to do import
numpy.polynomial to get to them, they aren't automatically imported
into the numpy
2009/11/13 Christopher Barker chris.bar...@noaa.gov:
Anne Archibald wrote:
2009/11/10 Christopher Barker chris.bar...@noaa.gov:
I have a bunch of points in 2-d space, and I need to find out which
pairs of points are within a certain distance of one-another (regular
old Euclidean norm
2009/11/12 Lou Pecora lou_boog2...@yahoo.com:
- Original Message
From: Christopher Barker chris.bar...@noaa.gov
To: Discussion of Numerical Python numpy-discussion@scipy.org
Sent: Thu, November 12, 2009 12:37:37 PM
Subject: Re: [Numpy-discussion] finding close together points.
2009/11/11 Christopher Barker chris.bar...@noaa.gov:
Anne Archibald wrote:
2009/11/10 Christopher Barker chris.bar...@noaa.gov:
I have a bunch of points in 2-d space, and I need to find out which
pairs of points are within a certain distance of one-another (regular
old Euclidean norm
2009/11/10 Christopher Barker chris.bar...@noaa.gov:
Hi all,
I have a bunch of points in 2-d space, and I need to find out which
pairs of points are within a certain distance of one-another (regular
old Euclidean norm).
This is an eminently reasonable thing to want, and KDTree should
support
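The scipy.spatial module that grew out of these threads does support this directly via query_pairs; a sketch assuming a modern SciPy:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((200, 2))       # points in 2-d space
tree = cKDTree(pts)
# all index pairs (i, j), i < j, with Euclidean distance <= 0.05
pairs = tree.query_pairs(r=0.05)
```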
2009/11/8 josef.p...@gmail.com:
On Sat, Nov 7, 2009 at 7:53 PM, David Goldsmith d.l.goldsm...@gmail.com
wrote:
Thanks, Anne.
On Sat, Nov 7, 2009 at 1:32 PM, Anne Archibald peridot.face...@gmail.com
wrote:
2009/11/7 David Goldsmith d.l.goldsm...@gmail.com:
snip
Also, my
2009/11/8 David Goldsmith d.l.goldsm...@gmail.com:
On Sat, Nov 7, 2009 at 11:59 PM, Anne Archibald peridot.face...@gmail.com
wrote:
2009/11/7 David Goldsmith d.l.goldsm...@gmail.com:
So in essence, at least as it presently functions, the shape of 'a'
*defines* what the individual choices
for your continued patience and tutelage.
DG
On Sun, Nov 8, 2009 at 5:36 AM, josef.p...@gmail.com wrote:
On Sun, Nov 8, 2009 at 5:00 AM, David Goldsmith d.l.goldsm...@gmail.com
wrote:
On Sun, Nov 8, 2009 at 12:57 AM, Anne Archibald
peridot.face...@gmail.com
wrote:
2009/11/8 David
2009/11/8 David Goldsmith d.l.goldsm...@gmail.com:
On Sun, Nov 8, 2009 at 7:40 PM, Anne Archibald peridot.face...@gmail.com
wrote:
As Josef said, this is not correct. I think the key point of confusion is
this:
Do not pass choose two arrays.
Pass it one array and a *list* of arrays
2009/11/7 Stas K stanc...@gmail.com:
Thank you, Josef
It is exactly what I want:
ar[:,None]**2 + ar**2
Do you know anything about the performance of this? In my real program ar has
~ 10k elements, and the expression for v is more complicated (it has some
trigonometric functions)
The construction
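The construction in question is a broadcast outer sum; a small illustration of what the shapes do:

```python
import numpy as np

ar = np.arange(4.0)
# ar[:, None] has shape (4, 1); broadcast against (4,) gives (4, 4)
v = ar[:, None]**2 + ar**2   # v[i, j] = ar[i]**2 + ar[j]**2
```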
2009/11/7 David Goldsmith d.l.goldsm...@gmail.com:
Hi, all! I'm working to clarify the docstring for np.choose (Stefan pointed
out to me that it is pretty unclear, and I agreed), and, now that (I'm
pretty sure that) I've figured out what it does in its full generality
(e.g., when the
2009/11/7 David Goldsmith d.l.goldsm...@gmail.com:
Thanks, Anne.
On Sat, Nov 7, 2009 at 1:32 PM, Anne Archibald peridot.face...@gmail.com
wrote:
2009/11/7 David Goldsmith d.l.goldsm...@gmail.com:
snip
Also, my experimenting suggests that the index array ('a', the first
argument
2009/11/5 David Goldsmith d.l.goldsm...@gmail.com:
On Thu, Nov 5, 2009 at 3:26 PM, David Warde-Farley d...@cs.toronto.edu
wrote:
On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
Interesting thread, which leaves me wondering two things: is it
documented
somewhere (e.g., at the IEEE site)
2009/11/5 josef.p...@gmail.com:
On Thu, Nov 5, 2009 at 10:42 PM, David Goldsmith
d.l.goldsm...@gmail.com wrote:
On Thu, Nov 5, 2009 at 3:26 PM, David Warde-Farley d...@cs.toronto.edu
wrote:
On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
Interesting thread, which leaves me wondering two
2009/11/1 Thomas Robitaille thomas.robitai...@gmail.com:
Hi,
I'm trying to generate random 64-bit integer values for integers and
floats using Numpy, within the entire range of valid values for that
type. To generate random 32-bit floats, I can use:
Others have addressed why this is giving
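For reference, the Generator API (which postdates this thread) can draw over the full int64 range directly; endpoint=True makes the upper bound inclusive so the maximum value is reachable:

```python
import numpy as np

rng = np.random.default_rng(0)
info = np.iinfo(np.int64)
# uniform over the entire representable int64 range
vals = rng.integers(info.min, info.max, size=5,
                    dtype=np.int64, endpoint=True)
```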
2009/11/1 Bill Blinn bbl...@gmail.com:
What is the best way to create a view that is composed of sections of many
different arrays?
The short answer is, you can't. Numpy arrays must be located in
contiguous blocks of memory, and the elements along any dimension must
be equally spaced. A view is
2009/10/30 Stephen Simmons m...@stevesimmons.com:
I should clarify what I meant...
Suppose I have a recarray with 50 fields and want to read just one of
those fields. PyTables/HDF will read in the compressed data for chunks
of complete rows, decompress the full 50 fields, and then give me
2009/10/20 Sebastian Walter sebastian.wal...@gmail.com:
On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
peridot.face...@gmail.com wrote:
2009/10/19 Sebastian Walter sebastian.wal...@gmail.com:
I'm all for generic (u)funcs since they might come handy for me since
I'm doing lots of operation
2009/10/20 josef.p...@gmail.com:
On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
Hi Gaël,
If you've got a 1D array/vector called a, I think the normal idiom is
np.dot(a,a)
For the more general case, I think
np.tensordot(a, a, axes=something_else)
should do it,
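The np.dot idiom in a nutshell, for a concrete vector:

```python
import numpy as np

a = np.array([3.0, 4.0])
sum_sq = np.dot(a, a)       # sum of squares of the elements
norm = np.linalg.norm(a)    # its square root, the Euclidean norm
```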
2009/10/20 josef.p...@gmail.com:
On Tue, Oct 20, 2009 at 3:09 PM, Anne Archibald
peridot.face...@gmail.com wrote:
2009/10/20 josef.p...@gmail.com:
On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben gru...@bigpond.net.au wrote:
Hi Gaël,
If you've got a 1D array/vector called a, I think the normal
2009/10/19 Sebastian Walter sebastian.wal...@gmail.com:
I'm all for generic (u)funcs since they might come handy for me since
I'm doing lots of operation on arrays of polynomials.
Just as a side note, if you don't mind my asking, what sorts of
operations do you do on arrays of polynomials? In
2009/10/17 Adam Ginsburg adam.ginsb...@colorado.edu:
My code is actually wrong but I still have the problem I've
identified that sqrt is leading to precision errors. Sorry about the
earlier mistake.
I think you'll find that numpy's sqrt is as good as it gets for double
precision. You can
2009/6/25 Mag Gam magaw...@gmail.com:
Hello.
I am very new to NumPy and Python. We are doing some research in our
Physics lab and we need to store massive amounts of data (100GB
daily). I am therefore going to use hdf5 and h5py. The problem is I
am using np.loadtxt() to create my array and
I'm not sure it's worth having a function to replace a one-liner
(column_stack followed by reshape). But if you're going to implement
this with slice assignment, you should take advantage of the
flexibility this method allows and offer the possibility of
interleaving raggedly, that is, where the
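The one-liner referred to above (column_stack followed by reshape), spelled out on a concrete pair of arrays:

```python
import numpy as np

a = np.array([1, 3, 5])
b = np.array([2, 4, 6])
# stack as columns, then flatten row-major to interleave the elements
interleaved = np.column_stack((a, b)).reshape(-1)  # [1 2 3 4 5 6]
```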
2009/6/8 Robert Kern robert.k...@gmail.com:
On Mon, Jun 8, 2009 at 17:04, David Goldsmithd_l_goldsm...@yahoo.com wrote:
I look forward to an instructive reply: the Pythonic way to do it would be
to take advantage of the facts that Numpy is pre-vectorized and uses
broadcasting, but so far I
2009/6/4 josef.p...@gmail.com:
intersect1d should throw a domain error if you give it arrays with
non-unique elements, which is not done for speed reasons
It seems to me that this is the basic source of the problem. Perhaps
this can be addressed? I realize maintaining compatibility with the
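This was in fact addressed: today's intersect1d handles duplicate elements correctly, and the speed-motivated behavior survives as the opt-in assume_unique flag:

```python
import numpy as np

a = np.array([1, 2, 2, 3])
b = np.array([2, 3, 3, 4])
common = np.intersect1d(a, b)   # duplicates are fine: [2 3]
# the fast path, when the caller guarantees uniqueness
fast = np.intersect1d(np.unique(a), np.unique(b), assume_unique=True)
```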
2009/6/4 David Paul Reichert d.p.reich...@sms.ed.ac.uk:
Hi all,
I would be glad if someone could help me with
the following issue:
From what I've read on the web it appears to me
that numpy should be about as fast as matlab. However,
when I do simple matrix multiplication, it consistently
2009/4/29 Dan Goodman dg.gm...@thesamovar.net:
Robert Kern wrote:
On Wed, Apr 29, 2009 at 16:19, Dan Goodman dg.gm...@thesamovar.net wrote:
Robert Kern wrote:
On Wed, Apr 29, 2009 at 08:03, Daniel Yarlett daniel.yarl...@gmail.com
wrote:
As you can see, Current is different in the two
2009/4/10 Ian Mallett geometr...@gmail.com:
The vectors are used to jitter each particle's initial speed, so that the
particles go in different directions instead of moving all as one. Using
the unit vector causes the particles to make the smooth parabolic shape.
The jitter vectors must then
2009/4/9 Charles R Harris charlesr.har...@gmail.com:
On Tue, Apr 7, 2009 at 12:44 PM, Dan Lenski dlen...@gmail.com wrote:
Hi all,
I often want to use some kind of dimension-reducing function (like min(),
max(), sum(), mean()) on an array without actually removing the last
dimension, so
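This wish was eventually granted as the keepdims argument (added in a later NumPy release); a sketch of the idea:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
m = a.max(axis=-1, keepdims=True)   # shape (3, 1) instead of (3,)
normalized = a - m                  # broadcasts with no manual reshape
```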
2009/3/30 João Luís Silva jsi...@fc.up.pt:
Hi,
I wrote a script to calculate the *optical* autocorrelation of an
electric field. It's like the autocorrelation, but sums the fields
instead of multiplying them. I'm calculating
I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)
You may be
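Expanding the integrand, abs(E(t)+E(t-tau))**2 = abs(E(t))**2 + abs(E(t-tau))**2 + 2*Re(E(t)*conj(E(t-tau))), so only the cross term depends on tau, and that cross term is the ordinary autocorrelation, which an FFT computes cheaply. An illustrative sketch (my own, not the script from the thread), using zero-padding to get the linear rather than circular correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# linear autocorrelation via the correlation theorem with zero-padding
F = np.fft.fft(E, 2 * len(E))
acorr = np.fft.ifft(F * np.conj(F))[:len(E)]
# acorr[0] is the total power, sum over t of |E(t)|^2
```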
2009/3/28 Geoffrey Irving irv...@naml.us:
On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern robert.k...@gmail.com wrote:
2009/3/27 Charles R Harris charlesr.har...@gmail.com:
On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.k...@gmail.com wrote:
On Fri, Mar 27, 2009 at 17:38, Bryan Cole
2009/3/5 Francesc Alted fal...@pytables.org:
On Thursday 05 March 2009, Francesc Alted wrote:
Well, I suppose that, provided that Cython could perform the for-loop
transformation, giving support for strided arrays would be relatively
trivial, and the performance would be similar than numexpr
2009/3/5 M Trumpis mtrum...@berkeley.edu:
Hi Nadav.. if you want a lower resolution 2d function with the same
field of view (or whatever term is appropriate to your case), then in
principle you can truncate your higher frequencies and do this:
sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 -
On 02/03/2009, Gideon Simpson simp...@math.toronto.edu wrote:
I recently discovered that for 8 byte floating point numbers, my
fortran compilers (gfortran 4.2 and ifort 11.0) on an OS X core 2 duo
machine believe the smallest number is 2.220507...E-308. I presume that
my C compilers have
2008/12/15 Benjamin Haynor bhay...@hotmail.com:
I was wondering if I can concatenate 3 arrays, where the result will be a
view of the original three arrays, instead of a copy of the data. For
example, suppose I write the following
import numpy as n
a = n.array([[1,2],[3,4]])
b =
2008/11/28 T J [EMAIL PROTECTED]:
import numpy as np
x = np.ones((3,0))
x
array([], shape=(3, 0), dtype=float64)
To preempt, I'm not really concerned with the answer to: Why would
anyone want to do this?
I just want to know what is happening. Especially, with
x[0,:] = 5
(which works).
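What is happening: a shape-(3, 0) array is perfectly legal, it just holds zero elements, and assigning into a zero-length slice is a silent no-op:

```python
import numpy as np

x = np.ones((3, 0))   # 3 rows, 0 columns, 0 elements in total
x[0, :] = 5           # broadcasts 5 across an empty slice: nothing happens
```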
On 05/11/2008, Charles R Harris [EMAIL PROTECTED] wrote:
On Tue, Nov 4, 2008 at 11:05 PM, T J [EMAIL PROTECTED] wrote:
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/11/5 Charles R Harris [EMAIL PROTECTED]:
Hi All,
I'm thinking of adding some
If what you are trying to do is actually ensure all data is within the
range [a,b], you may be interested to know that python's % operator
works on floating-point numbers:
In [1]: -0.1 % 1
Out[1]: 0.90000000000000002
So if you want all samples in the range (0,1) you can just do y%=1.
Anne
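The same trick works elementwise on a whole array, which is the vectorized form of the suggestion above:

```python
import numpy as np

y = np.array([-0.1, 1.5, 0.3])
y %= 1   # wraps every element into the half-open interval [0, 1)
```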
2008/10/9 David Bolme [EMAIL PROTECTED]:
I have written up basic nearest neighbor algorithm. It does a brute
force search so it will be slower than kdtrees as the number of points
gets large. It should however work well for high dimensional data. I
have also added the option for user
2008/10/12 Linda Seltzer [EMAIL PROTECTED]:
Here is an example that works for any working numpy installation:
import numpy as npy
npy.zeros((256, 256))
This suggestion from David did work so far, and removing the other import
line enabled the program to run.
However, the data types the
2008/10/10 Gael Varoquaux [EMAIL PROTECTED]:
I have been unable to vectorize the following operation::
window_size = 10
nb_windows = 819
nb_clusters = 501
restricted_series = np.random.random(size=(window_size, nb_clusters,
2008/10/7 paul taney [EMAIL PROTECTED]:
Hi,
I have this silly color filter that Stefan gave me:
def vanderwalt(image, f):
colorfilter, thanks to Stefan van der Walt
RED, GRN, BLU = 0, 1, 2
bluemask = (image[...,BLU] > f*image[...,GRN]) & \
(image[...,BLU]
2008/10/7 Stéfan van der Walt [EMAIL PROTECTED]:
The generalised ufuncs branch was made available before SciPy'08. We
solicited comments on its implementation and structuring, but received
very little feedback. Unless there are any further comments from the
community, I propose that we
2008/10/7 Christopher Barker [EMAIL PROTECTED]:
I wonder if the euclidian norm would make sense for this application:
HowFarFromBlue = np.sqrt((255-image[...,BLU])**2 +
image[...,GRN]**2 +
image[...,RED]**2)
smaller numbers would be bluest
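A runnable version of that distance-from-blue idea (my reconstruction of the fragment quoted above, with an assumed channel ordering):

```python
import numpy as np

RED, GRN, BLU = 0, 1, 2        # assumed channel layout
image = np.zeros((2, 2, 3))
image[0, 0] = (0, 0, 255)      # one pure blue pixel

# Euclidean distance from pure blue (0, 0, 255) per pixel
how_far_from_blue = np.sqrt((255 - image[..., BLU])**2
                            + image[..., GRN]**2
                            + image[..., RED]**2)
# the pure blue pixel scores 0; larger values are less blue
```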
2008/10/3 David Bolme [EMAIL PROTECTED]:
I remember reading a paper or book that stated that for data that has
been normalized correlation and Euclidean are equivalent and will
produce the same knn results. To this end I spent a couple hours this
afternoon doing the math. This document is
2008/10/2 David Bolme [EMAIL PROTECTED]:
It may be useful to have an interface that handles both cases:
similarity and dissimilarity. Often I have seen Nearest Neighbor
algorithms that look for maximum similarity instead of minimum
distance. In my field (biometrics) we often deal with very
2008/10/1 Gael Varoquaux [EMAIL PROTECTED]:
On Tue, Sep 30, 2008 at 06:10:46PM -0400, Anne Archibald wrote:
k=None in the third call to T.query seems redundant. It should be
possible do put some logics so that the call is simply
distances, indices = T.query(xs, distance_upper_bound=1.0
2008/10/1 Barry Wark [EMAIL PROTECTED]:
Thanks for taking this on. The scikits.ann has licensing issues (as
noted above), so it would be nice to have a clean-room implementation
in scipy. I am happy to port the scikits.ann API to the final API that
you choose, however, if you think that would
2008/9/30 bevan [EMAIL PROTECTED]:
Hello,
I have some XY data. I would like to generate the equations for an upper and
lower envelope that excludes a percentage of the data points.
I would like to define the slope of the envelope line (say 3) and then have my
code find the intercept that
2008/9/30 Peter [EMAIL PROTECTED]:
On Tue, Sep 30, 2008 at 5:10 AM, Christopher Barker
[EMAIL PROTECTED] wrote:
Anne Archibald wrote:
I suggest the creation of
a new submodule of scipy, scipy.spatial,
+1
Here's one to consider:
http://pypi.python.org/pypi/Rtree
and perhaps other stuff
2008/9/30 Gael Varoquaux [EMAIL PROTECTED]:
On Tue, Sep 30, 2008 at 05:31:17PM -0400, Anne Archibald wrote:
T = KDTree(data)
distances, indices = T.query(xs) # single nearest neighbor
distances, indices = T.query(xs, k=10) # ten nearest neighbors
distances, indices = T.query(xs, k=None
seem like a good
first step.
2008/9/27 Nathan Bell [EMAIL PROTECTED]:
On Sat, Sep 27, 2008 at 11:18 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
I think a kd-tree implementation would be a valuable addition to
scipy, perhaps in a submodule scipy.spatial that might eventually
contain other
2008/9/30 frank wang [EMAIL PROTECTED]:
I really do not know the difference of debug mode and the pdb debugger. To
me, it seems that pdb is the only way to debug the python code. How do the
experts of numpy/python debug their code? Is there any more efficient way to
debug the code in python
2008/9/28 Geoffrey Irving [EMAIL PROTECTED]:
Is there an efficient way to implement a nonuniform gather operation
in numpy? Specifically, I want to do something like
n,m = 100,1000
X = random.uniform(size=n)
K = random.randint(n, size=m)
Y = random.uniform(size=m)
for k,y in zip(K,Y):
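The loop body is cut off above, but the classic gather/scatter-add pattern it describes (presumably accumulating Y into X at indices K) can now be vectorized with np.add.at, which, unlike plain fancy-index assignment, accumulates over repeated indices. An illustrative sketch:

```python
import numpy as np

n, m = 100, 1000
rng = np.random.default_rng(1)
X = rng.uniform(size=n)
K = rng.integers(0, n, size=m)
Y = rng.uniform(size=m)

looped = X.copy()
for k, y in zip(K, Y):          # the presumed loop from the post
    looped[k] += y

vectorized = X.copy()
np.add.at(vectorized, K, Y)     # unbuffered: repeated indices accumulate
```

(np.add.at postdates this thread; np.bincount was the contemporary trick for the same job.)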
2008/9/27 Andrea Gavana [EMAIL PROTECTED]:
I was wondering if someone had any suggestions/references/snippets
of code on how to find the minimum distance between 2 paths in 3D.
Basically, for every path, I have I series of points (x, y, z) and I
would like to know if there is a better way,
2008/9/27 Robert Kern [EMAIL PROTECTED]:
On Sat, Sep 27, 2008 at 15:23, Anne Archibald [EMAIL PROTECTED] wrote:
2008/9/27 Andrea Gavana [EMAIL PROTECTED]:
I was wondering if someone had any suggestions/references/snippets
of code on how to find the minimum distance between 2 paths in 3D
2008/9/26 David Cournapeau [EMAIL PROTECTED]:
Charles R Harris wrote:
I'm also wondering about the sign ufunc. It should probably return nan
for nans, but -1,0,1 are the current values. We also need to decide
which end of the sorted values the nans should go to. I'm a bit
partial to the
2008/9/25 Peter Saffrey [EMAIL PROTECTED]:
I've bodged my way through my median problems (see previous postings). Now I
need to take a z-score of an array that might contain nans. At the moment, if
the array, which is 7000 elements, contains 1 nan or more, all the results
come
out as nan.
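The NaN-ignoring statistics needed here exist today as np.nanmean and np.nanstd (they arrived in NumPy well after this thread; at the time scipy.stats had equivalents). A sketch of a NaN-tolerant z-score:

```python
import numpy as np

a = np.array([1.0, 2.0, np.nan, 4.0])
# nanmean/nanstd skip the NaNs instead of propagating them,
# so only the NaN entries of z stay NaN
z = (a - np.nanmean(a)) / np.nanstd(a)
```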
2008/9/19 Eric Firing [EMAIL PROTECTED]:
Pierre GM wrote:
It seems to me that there are pragmatic reasons
why people work with NaNs for missing values,
that perhaps shd not be dismissed so quickly.
But maybe I am overlooking a simple solution.
nansomething solutions tend to be considerably
2008/9/19 David Cournapeau [EMAIL PROTECTED]:
Anne Archibald wrote:
That was in amax/amin. Pretty much every other function that does
comparisons needs to be fixed to work with nans. In some cases it's
not even clear how: where should a sort put the nans in an array?
The problem is more
2008/9/19 Pierre GM [EMAIL PROTECTED]:
On Friday 19 September 2008 03:11:05 David Cournapeau wrote:
Hm, I am always puzzled when I think about nan handling :) It always
seem there is not good answer.
Which is why we have masked arrays, of course ;)
I think the numpy attitude to nans should
2008/9/19 Paul Moore [EMAIL PROTECTED]:
Robert Kern robert.kern at gmail.com writes:
On Thu, Sep 18, 2008 at 16:55, Paul Moore pf_moore at yahoo.co.uk wrote:
I want to generate a series of random samples, to do simulations based
on them. Essentially, I want to be able to produce a SAMPLESIZE
2008/9/19 David Cournapeau [EMAIL PROTECTED]:
I guess my formulation was poor: I never use NaN as missing values
because I never use missing values, which is why I wanted the opinion of
people who use NaN in a different manner (because I don't have a good
idea on how those people would like
2008/9/18 David Cournapeau [EMAIL PROTECTED]:
Peter Saffrey wrote:
Is this the correct behavior for median with nan?
That's the expected behavior, at least :) (this is also the expected
behavior of most math packages I know, including matlab and R, so this
should not be too surprising if
2008/8/22 Catherine Moroney [EMAIL PROTECTED]:
I'm looking for a way to accomplish the following task without lots
of loops involved, which are really slowing down my code.
I have a 128x512 array which I want to break down into 2x2 squares.
Then, for each 2x2 square I want to do some simple
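This kind of blocking needs no loops at all: a reshape plus an axis swap exposes the 2x2 squares as trailing axes, and the per-square operation becomes a reduction over them. A sketch (illustrative, not the code from the thread):

```python
import numpy as np

a = np.arange(128 * 512, dtype=float).reshape(128, 512)
# (128, 512) -> (64, 2, 256, 2) -> (64, 256, 2, 2):
# blocks[i, j] is the 2x2 square at rows 2i:2i+2, cols 2j:2j+2
blocks = a.reshape(64, 2, 256, 2).swapaxes(1, 2)
block_means = blocks.mean(axis=(-1, -2))   # one value per 2x2 square
```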
2008/8/25 Daniel Lenski [EMAIL PROTECTED]:
On Mon, 25 Aug 2008 03:48:54 +, Daniel Lenski wrote:
* it's fast enough for 100,000 determinants, but it bogs due to
all the temporary arrays when I try to do 1,000,000 determinants
(=72 MB array)
I've managed to reduce the memory
2008/8/17 Robert Kern [EMAIL PROTECTED]:
I suggested that we move it to a branch for the time being so we can
play with it and come up with examples of its use. If you have
examples that you have already written, I would love to see them. I,
for one, am amenable to seeing this in 1.2.0, but
2008/8/15 Andrew Dalke [EMAIL PROTECTED]:
On Aug 15, 2008, at 6:41 PM, Andrew Dalke wrote:
I don't think it's enough. I don't like environmental
variable tricks like that. My tests suggest:
current SVN: 0.12 seconds
my patch: 0.10 seconds
removing some top-level imports: 0.09
2008/8/14 Norbert Nemec [EMAIL PROTECTED]:
Travis E. Oliphant wrote:
NAN's don't play well with comparisons because comparison with them is
undefined. See numpy.nanmin
This is not true! Each single comparison with a NaN has a well defined
outcome. The difficulty is only that certain
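Norbert's point, demonstrated: under IEEE 754, every comparison involving NaN has a defined result; the ordering comparisons and equality are all False, and only != is True:

```python
import numpy as np

nan = np.nan
# each individual comparison is well defined:
assert not (nan < 0.0)
assert not (nan >= 0.0)
assert not (nan == nan)   # NaN is not equal even to itself
assert nan != nan
```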
2008/8/12 Stéfan van der Walt [EMAIL PROTECTED]:
Hi Andrew
2008/8/12 Andrew Dalke [EMAIL PROTECTED]:
This is buggy for the case of a list containing only NaNs.
import numpy as np
np.NAN
nan
np.min([np.NAN])
nan
np.nanmin([np.NAN])
inf
Thanks for the report. This should be
2008/8/12 Joe Harrington [EMAIL PROTECTED]:
So, I endorse extending min() and all other statistical routines to
handle NaNs, possibly with a switch to turn it on if a suitably fast
algorithm cannot be found (which is competitor IDL's solution).
Certainly without a switch the default behavior
2008/8/6 Eric Firing [EMAIL PROTECTED]:
While I agree with the other posters that import * is not preferred,
if you want to use it, the solution is to use amin and amax, which are
provided precisely to avoid the conflict. Just as arange is a numpy
analog of range, amin and amax are numpy
2008/7/15 Francesc Alted [EMAIL PROTECTED]:
Maybe it is only that. But by using the term 'frequency' I tend to think
that you are expecting to have one entry (observation) in your array
for each time 'tick' since time start. OTOH, the term 'resolution'
doesn't have this implication, and only
2008/7/14 Francesc Alted [EMAIL PROTECTED]:
After pondering about the opinions about the first proposal, the idea we
are incubating is to complement the ``datetime64`` with a 'resolution'
metainfo. The ``datetime64`` will still be based on a int64 type, but
the meaning of the 'ticks' would
2008/7/11 Pierre GM [EMAIL PROTECTED]:
A final note on time scales
---
Wow, indeed. In environmental sciences (my side) and in finances (Matt's), we
very rarely have a need for that precision, thankfully...
We do, sometimes, in pulsar astronomy. But I think it's
2008/7/11 Jon Wright [EMAIL PROTECTED]:
Timezones are a heck of a problem if you want to be accurate. You are
talking about nanosecond resolutions, however, atomic clocks in orbit
apparently suffer from relativistic corrections of the order 38000
nanoseconds per day [1]. What will you do
2008/7/9 Robert Kern [EMAIL PROTECTED]:
Because that's just what a buffer= argument *is*. It is not a place
for presenting the starting pointer to exotically-strided memory. Use
__array_interface__s to describe the full range of representable
memory. See below.
Aha! Is this stuff documented
2008/7/10 Charles R Harris [EMAIL PROTECTED]:
On Thu, Jul 10, 2008 at 9:33 AM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/7/9 Robert Kern [EMAIL PROTECTED]:
Because that's just what a buffer= argument *is*. It is not a place
for presenting the starting pointer to exotically-strided
2008/7/10 Dan Lussier [EMAIL PROTECTED]:
I am relatively new to numpy and am having trouble with the speed of
a specific array based calculation that I'm trying to do.
What I'm trying to do is to calculate the total potential
energy and coordination number of each atom within a
2008/7/9 Robert Kern [EMAIL PROTECTED]:
- Which operations do the functions exactly affect?
It seems that alterdot sets the dot function slot to a BLAS
version, but what operations does this affect?
dot(), vdot(), and innerproduct() on C-contiguous arrays which are
Matrix-Matrix,
2008/7/9 Catherine Moroney [EMAIL PROTECTED]:
I have a question about performing element-wise logical operations
on numpy arrays.
If a, b and c are numpy arrays of the same size, does the
following
syntax work?
mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))
It seems to be performing correctly,
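Yes, this works: & and | are NumPy's elementwise logical operators for boolean arrays. The parentheses around each comparison are mandatory, because & and | bind more tightly than the comparison operators. A runnable illustration (my own example values):

```python
import numpy as np

a = np.array([0.5, 2.0])
b = np.array([5.0, 5.0])
c = np.array([1.0, 1.0])
# elementwise: (a > 1) AND ((b > 3) OR (c > 10))
mask = (a > 1.0) & ((b > 3.0) | (c > 10.0))
```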
2008/7/9 Charles R Harris [EMAIL PROTECTED]:
On Wed, Jul 9, 2008 at 2:34 PM, Marlin Rowley [EMAIL PROTECTED]
wrote:
Thanks Chuck, but I wasn't quit clear with my question.
You answered exactly according to what I asked, but I failed to mention
needing the dot product instead of just the
Hi,
When trying to construct an ndarray, I sometimes run into the
more-or-less mystifying error expected a single-segment buffer
object:
Out[54]: (0, 16, 8)
In [55]: A=np.zeros(2); A=A[np.newaxis,...];
np.ndarray(strides=A.strides,shape=A.shape,buffer=A,dtype=A.dtype)
2008/7/9 Robert Kern [EMAIL PROTECTED]:
Yes, the buffer interface, at least the subset that ndarray()
consumes, requires that all of the data be contiguous in memory.
array_as_buffer() checks for that using PyArray_ISONESEGMENT(), which
looks like this:
#define PyArray_ISONESEGMENT(m)
2008/7/8 Alan McIntyre [EMAIL PROTECTED]:
On Tue, Jul 8, 2008 at 1:29 PM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:
Alan McIntyre wrote:
2. The behavior of __mul__ seems odd:
What is odd about this?
It is patterned after
'a' * 3
'a' * 4
'a' * 5
for regular python strings.