2008/9/30 Gael Varoquaux <[EMAIL PROTECTED]>:
> On Tue, Sep 30, 2008 at 05:31:17PM -0400, Anne Archibald wrote:
>> T = KDTree(data)
>
>> distances, indices = T.query(xs) # single nearest neighbor
>
>> distances, indices = T.query(xs, k=10) # ten nearest neighbors
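The query API sketched above can be exercised end-to-end. A minimal sketch using the scipy.spatial.cKDTree that eventually shipped (the data and query arrays here are made up):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((100, 3))        # 100 points in 3-D
xs = rng.random((5, 3))            # 5 query points

T = cKDTree(data)
distances, indices = T.query(xs)   # single nearest neighbor per query
d10, i10 = T.query(xs, k=10)       # ten nearest neighbors per query
```

The k-nearest results come back sorted by distance, with the single-neighbor answer as the first column.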
2008/10/1 Gael Varoquaux <[EMAIL PROTECTED]>:
> On Tue, Sep 30, 2008 at 06:10:46PM -0400, Anne Archibald wrote:
>> > k=None in the third call to T.query seems redundant. It should be
>> > possible do put some logics so that the call is simply
>
>
2008/10/1 Barry Wark <[EMAIL PROTECTED]>:
> Thanks for taking this on. The scikits.ann has licensing issues (as
> noted above), so it would be nice to have a clean-room implementation
> in scipy. I am happy to port the scikits.ann API to the final API that
> you choose, however, if you think that
2008/10/2 David Bolme <[EMAIL PROTECTED]>:
> It may be useful to have an interface that handles both cases:
> similarity and dissimilarity. Often I have seen "Nearest Neighbor"
> algorithms that look for maximum similarity instead of minimum
> distance. In my field (biometrics) we often deal with
2008/10/3 David Bolme <[EMAIL PROTECTED]>:
> I remember reading a paper or book that stated that for data that has
> been normalized correlation and Euclidean are equivalent and will
> produce the same knn results. To this end I spent a couple hours this
> afternoon doing the math. This document
2008/10/7 paul taney <[EMAIL PROTECTED]>:
> Hi,
>
> I have this silly color filter that Stefan gave me:
>
>
> def vanderwalt(image, f):
>     """colorfilter, thanks to Stefan van der Walt"""
>     RED, GRN, BLU = 0, 1, 2
>     bluemask = (image[...,BLU] > f*image[...,GRN]) & \
>                (image[.
2008/10/7 Stéfan van der Walt <[EMAIL PROTECTED]>:
> The generalised ufuncs branch was made available before SciPy'08. We
> solicited comments on its implementation and structuring, but received
> very little feedback. Unless there are any further comments from the
> community, I propose that we
2008/10/7 Christopher Barker <[EMAIL PROTECTED]>:
> I wonder if the euclidian norm would make sense for this application:
>
> HowFarFromBlue = np.sqrt((255-image[...,BLU])**2 +
> image[...,GRN]**2 +
> image[...,RED]**2)
>
> smaller numbers would be
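The truncated expression above computes, per pixel, the Euclidean distance from pure blue (0, 0, 255). A self-contained sketch (the toy image is an assumption; note the cast to float to avoid uint8 overflow in the squares):

```python
import numpy as np

RED, GRN, BLU = 0, 1, 2
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[0, 0] = (0, 0, 255)            # one pure-blue pixel

img = image.astype(float)            # avoid uint8 overflow when squaring
HowFarFromBlue = np.sqrt((255 - img[..., BLU])**2 +
                         img[..., GRN]**2 +
                         img[..., RED]**2)
# smaller numbers mean "closer to pure blue"
```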
2008/10/10 Gael Varoquaux <[EMAIL PROTECTED]>:
> I have been unable to vectorize the following operation::
>
>     window_size = 10
>     nb_windows = 819
>     nb_clusters = 501
>     restricted_series = np.random.random(size=(window_size, nb_clusters,
>
2008/10/9 David Bolme <[EMAIL PROTECTED]>:
> I have written up basic nearest neighbor algorithm. It does a brute
> force search so it will be slower than kdtrees as the number of points
> gets large. It should however work well for high dimensional data. I
> have also added the option for user d
2008/10/12 Linda Seltzer <[EMAIL PROTECTED]>:
>> Here is an example that works for any working numpy installation:
>>
>> import numpy as npy
>> npy.zeros((256, 256))
> This suggestion from David did work so far, and removing the other import
> line enabled the program to run.
> However, the data ty
If what you are trying to do is actually ensure all data is within the
range [a,b], you may be interested to know that python's % operator
works on floating-point numbers:
In [1]: -0.1 % 1
Out[1]: 0.90000000000000002
So if you want all samples in the range (0,1) you can just do y%=1.
Anne
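A quick check of the wrap-into-[0, 1) idiom described above (the array values are made up):

```python
import numpy as np

wrapped = (-0.1) % 1                 # 0.9, up to floating-point rounding

y = np.array([-0.1, 0.5, 1.7, 2.0])
y %= 1                               # wrap every sample into [0, 1)
```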
2008/11/5 Charles R Harris <[EMAIL PROTECTED]>:
> Hi All,
>
> I'm thinking of adding some new ufuncs. Some possibilities are
>
> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes.
> absdiff(a,b) = abs(a - b)
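NumPy did grow exactly this ufunc as np.logaddexp (plus a base-2 variant, np.logaddexp2); it evaluates log(exp(a) + exp(b)) without overflowing:

```python
import numpy as np

a, b = 1000.0, 1000.5                       # exp() of either overflows a double
with np.errstate(over='ignore'):
    naive = np.log(np.exp(a) + np.exp(b))   # inf: exp() has overflowed
safe = np.logaddexp(a, b)                   # finite and accurate
```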
On 05/11/2008, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, Nov 4, 2008 at 11:05 PM, T J <[EMAIL PROTECTED]> wrote:
> > On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald
> >
> > <[EMAIL PROTECTED]> wrote:
> >
> > > 2
2008/11/18 Robert Young <[EMAIL PROTECTED]>:
> Is there a method in NumPy that reduces a matrix to its reduced row echelon
> form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure
> where to look.
Unfortunately, reduced row-echelon form doesn't really work when using
approx
2008/11/28 T J <[EMAIL PROTECTED]>:
> >>> import numpy as np
> >>> x = np.ones((3,0))
> >>> x
> array([], shape=(3, 0), dtype=float64)
>
> To preempt, I'm not really concerned with the answer to: Why would
> anyone want to do this?
>
> I just want to know what is happening. Especially, with
>
x[0
2008/12/15 Benjamin Haynor :
> I was wondering if I can concatenate 3 arrays, where the result will be a
> view of the original three arrays, instead of a copy of the data. For
> example, suppose I write the following
> import numpy as n
> a = n.array([[1,2],[3,4]])
> b = n.array([[5,6],[7,8]])
>
On 02/03/2009, Gideon Simpson wrote:
> I recently discovered that for 8 byte floating point numbers, my
> fortran compilers (gfortran 4.2 and ifort 11.0) on an OS X core 2 duo
> machine believe the smallest number is 2.220507...E-308. I presume that
> my C compilers have similar results.
>
> I
2009/3/5 M Trumpis :
> Hi Nadav.. if you want a lower resolution 2d function with the same
> field of view (or whatever term is appropriate to your case), then in
> principle you can truncate your higher frequencies and do this:
>
> sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 - M/2:N/2+M/2])
>
>
2009/3/5 Francesc Alted :
> A Thursday 05 March 2009, Francesc Alted escrigué:
>> Well, I suppose that, provided that Cython could perform the for-loop
>> transformation, giving support for strided arrays would be relatively
>> trivial, and the performance would be similar than numexpr in this
>> c
2009/3/28 Geoffrey Irving :
> On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern wrote:
>> 2009/3/27 Charles R Harris :
>>>
>>> On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote:
On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote:
> I have a number of arrays of shape (N,4,4). I need to p
2009/3/30 João Luís Silva :
> Hi,
>
> I wrote a script to calculate the *optical* autocorrelation of an
> electric field. It's like the autocorrelation, but sums the fields
> instead of multiplying them. I'm calculating
>
> I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)
You may be in troubl
2009/4/9 Charles R Harris :
>
>
> On Tue, Apr 7, 2009 at 12:44 PM, Dan Lenski wrote:
>>
>> Hi all,
>>
>> I often want to use some kind of dimension-reducing function (like min(),
>> max(), sum(), mean()) on an array without actually removing the last
>> dimension, so that I can then do operations
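Modern NumPy addresses this with the keepdims argument to the reducing functions, which keeps the reduced axis as length 1 so the result still broadcasts against the original array. A small sketch:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
m = a.mean(axis=-1, keepdims=True)   # shape (3, 1) rather than (3,)
centred = a - m                      # broadcasts with no manual reshape
```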
2009/4/10 Ian Mallett :
> The vectors are used to "jitter" each particle's initial speed, so that the
> particles go in different directions instead of moving all as one. Using
> the unit vector causes the particles to make the smooth parabolic shape.
> The jitter vectors must then be of a random
2009/4/29 Dan Goodman :
> Robert Kern wrote:
>> On Wed, Apr 29, 2009 at 16:19, Dan Goodman wrote:
>>> Robert Kern wrote:
On Wed, Apr 29, 2009 at 08:03, Daniel Yarlett
wrote:
> As you can see, Current is different in the two cases. Any ideas how I
> can recreate the behavio
2009/6/4 :
> intersect1d should throw a domain error if you give it arrays with
> non-unique elements, which is not done for speed reasons
It seems to me that this is the basic source of the problem. Perhaps
this can be addressed? I realize maintaining compatibility with the
current behaviour is
2009/6/4 David Paul Reichert :
> Hi all,
>
> I would be glad if someone could help me with
> the following issue:
>
> From what I've read on the web it appears to me
> that numpy should be about as fast as matlab. However,
> when I do simple matrix multiplication, it consistently
> appears to be a
2009/6/8 Robert Kern :
> On Mon, Jun 8, 2009 at 17:04, David Goldsmith wrote:
>>
>> I look forward to an instructive reply: the "Pythonic" way to do it would be
>> to take advantage of the facts that Numpy is "pre-vectorized" and uses
>> broadcasting, but so far I haven't been able to figure out
I'm not sure it's worth having a function to replace a one-liner
(column_stack followed by reshape). But if you're going to implement
this with slice assignment, you should take advantage of the
flexibility this method allows and offer the possibility of
interleaving "raggedly", that is, where the
2009/6/25 Mag Gam :
> Hello.
>
> I am very new to NumPy and Python. We are doing some research in our
> Physics lab and we need to store massive amounts of data (100GB
> daily). I therefore, am going to use hdf5 and h5py. The problem is I
> am using np.loadtxt() to create my array and create a data
2009/10/5 Christopher Barker :
> Francesc Alted wrote:
>> A Saturday 03 October 2009 10:06:12 Christopher Barker escrigué:
>>> This idea was inspired by a discussion at the SciPy conference, in which
>>> we spent a LOT of time during the numpy tutorial talking about how to
>>> accumulate values in
2009/10/17 Adam Ginsburg :
> My code is actually wrong but I still have the problem I've
> identified that sqrt is leading to precision errors. Sorry about the
> earlier mistake.
I think you'll find that numpy's sqrt is as good as it gets for double
precision. You can try using numpy's float9
2009/10/19 Sebastian Walter :
>
> I'm all for generic (u)funcs since they might come handy for me since
> I'm doing lots of operation on arrays of polynomials.
Just as a side note, if you don't mind my asking, what sorts of
operations do you do on arrays of polynomials? In a thread on
scipy-dev we
2009/10/20 Sebastian Walter :
> On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
> wrote:
>> 2009/10/19 Sebastian Walter :
>>>
>>> I'm all for generic (u)funcs since they might come handy for me since
>>> I'm doing lots of operation on arrays of pol
2009/10/20 :
> On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben wrote:
>> Hi Gaël,
>>
>> If you've got a 1D array/vector called "a", I think the normal idiom is
>>
>> np.dot(a,a)
>>
>> For the more general case, I think
>> np.tensordot(a, a, axes=something_else)
>> should do it, where you should be ab
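For the 1-D case the idiom is short enough to verify directly:

```python
import numpy as np

a = np.array([3.0, 4.0])
norm_sq = np.dot(a, a)       # squared Euclidean norm: 3**2 + 4**2
```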
2009/10/20 :
> On Tue, Oct 20, 2009 at 3:09 PM, Anne Archibald
> wrote:
>> 2009/10/20 :
>>> On Sun, Oct 18, 2009 at 6:06 AM, Gary Ruben wrote:
>>>> Hi Gaël,
>>>>
>>>> If you've got a 1D array/vector called "a", I think
2009/10/30 Stephen Simmons :
> I should clarify what I meant..
>
> Suppose I have a recarray with 50 fields and want to read just one of
> those fields. PyTables/HDF will read in the compressed data for chunks
> of complete rows, decompress the full 50 fields, and then give me back
> the data f
2009/11/1 Bill Blinn :
> What is the best way to create a view that is composed of sections of many
> different arrays?
The short answer is, you can't. Numpy arrays must be located
contiguous blocks of memory, and the elements along any dimension must
be equally spaced. A view is simply another ar
2009/11/1 Thomas Robitaille :
> Hi,
>
> I'm trying to generate random 64-bit integer values for integers and
> floats using Numpy, within the entire range of valid values for that
> type. To generate random 32-bit floats, I can use:
Others have addressed why this is giving bogus results. But it's
2009/11/5 David Goldsmith :
> On Thu, Nov 5, 2009 at 3:26 PM, David Warde-Farley
> wrote:
>>
>> On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
>>
>> > Interesting thread, which leaves me wondering two things: is it
>> > documented
>> > somewhere (e.g., at the IEEE site) precisely how many *decima
2009/11/5 :
> On Thu, Nov 5, 2009 at 10:42 PM, David Goldsmith
> wrote:
>> On Thu, Nov 5, 2009 at 3:26 PM, David Warde-Farley
>> wrote:
>>>
>>> On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
>>>
>>> > Interesting thread, which leaves me wondering two things: is it
>>> > documented
>>> > somewhe
2009/11/7 Stas K :
> Thank you, Josef
> It is exactly what I want:
>
> ar[:,None]**2 + ar**2
>
> Do you know something about performance of this? In my real program ar have
> ~ 10k elements, and expression for v more complicated (it has some
> trigonometric functions)
The construction of ar[:,Non
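The broadcasting construction quoted above builds the full table of pairwise sums in one vectorised step. A small sketch:

```python
import numpy as np

ar = np.arange(5.0)
v = ar[:, None]**2 + ar[None, :]**2   # v[i, j] == ar[i]**2 + ar[j]**2
```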
2009/11/7 David Goldsmith :
> Hi, all! I'm working to clarify the docstring for np.choose (Stefan pointed
> out to me that it is pretty unclear, and I agreed), and, now that (I'm
> pretty sure that) I've figured out what it does in its full generality
> (e.g., when the 'choices' array is greater t
2009/11/7 David Goldsmith :
> Thanks, Anne.
>
> On Sat, Nov 7, 2009 at 1:32 PM, Anne Archibald
> wrote:
>>
>> 2009/11/7 David Goldsmith :
>
>
>
>>
>> > Also, my experimenting suggests that the index array ('a', the first
>> &
2009/11/8 :
> On Sat, Nov 7, 2009 at 7:53 PM, David Goldsmith
> wrote:
>> Thanks, Anne.
>>
>> On Sat, Nov 7, 2009 at 1:32 PM, Anne Archibald
>> wrote:
>>>
>>> 2009/11/7 David Goldsmith :
>>
>>
>>
>>>
>>> >
2009/11/8 David Goldsmith :
> On Sat, Nov 7, 2009 at 11:59 PM, Anne Archibald
> wrote:
>>
>> 2009/11/7 David Goldsmith :
>> > So in essence, at least as it presently functions, the shape of 'a'
>> > *defines* what the individual choices are within
ll arrays to each other, index array at the end.*/
>
> This would appear to confirm that "co-broadcasting" is performed if
> necessary, but what does the "index array at the end" phrase mean?
>
> Thanks for your continued patience and tutelage.
>
> DG
>
2009/11/8 David Goldsmith :
> On Sun, Nov 8, 2009 at 7:40 PM, Anne Archibald
> wrote:
>>
>> As Josef said, this is not correct. I think the key point of confusion is
>> this:
>>
>> Do not pass choose two arrays.
>>
>> Pass it one array and a *lis
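The advice above, one index array plus a *list* of choice arrays, looks like this in practice (the values are made up):

```python
import numpy as np

index = np.array([0, 1, 2, 1])
choices = [np.array([10, 11, 12, 13]),   # a list of arrays, not one 2-D array
           np.array([20, 21, 22, 23]),
           np.array([30, 31, 32, 33])]
picked = np.choose(index, choices)       # element i comes from choices[index[i]][i]
```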
2009/11/10 Christopher Barker :
> Hi all,
>
> I have a bunch of points in 2-d space, and I need to find out which
> pairs of points are within a certain distance of one-another (regular
> old Euclidean norm).
This is an eminently reasonable thing to want, and KDTree should
support it. Unfortunatel
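SciPy later grew exactly this feature as the tree's query_pairs method. A minimal sketch (point set made up):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pts = rng.random((50, 2))            # points in 2-D
tree = cKDTree(pts)
pairs = tree.query_pairs(r=0.1)      # set of (i, j) index pairs with i < j
```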
2009/11/12 Lou Pecora :
>
>
> - Original Message
> From: Christopher Barker
> To: Discussion of Numerical Python
> Sent: Thu, November 12, 2009 12:37:37 PM
> Subject: Re: [Numpy-discussion] finding close together points.
>
> Lou Pecora wrote:
>> a KD tree for 2D nearest neighbor seems li
2009/11/11 Christopher Barker :
> Anne Archibald wrote:
>> 2009/11/10 Christopher Barker :
>
>>> I have a bunch of points in 2-d space, and I need to find out which
>>> pairs of points are within a certain distance of one-another (regular
>>> old Eucl
2009/11/13 Christopher Barker :
> Anne Archibald wrote:
>>>> 2009/11/10 Christopher Barker :
>>>>> I have a bunch of points in 2-d space, and I need to find out which
>>>>> pairs of points are within a certain distance of one-another (regular
>>
2009/11/16 Christopher Barker :
> Charles R Harris wrote:
>> I would like some advise on the best way to add the new functions. I've
>> added a new package polynomial, and that package contains four new
>> modules: chebyshev, polynomial, polytemplate, polyutils.
>
> This seems to belong more in sci
2009/11/16 Christopher Barker :
> Anne Archibald wrote:
>> 2009/11/13 Christopher Barker :
>>> Wow! great -- you sounded interested, but I had no idea you'd run out
>>> and do it! thanks! we'll check it out.
>
> well, it turns out the Python version i
2009/11/16 Robert Kern :
> On Mon, Nov 16, 2009 at 18:05, Christopher Barker
> wrote:
>> Charles R Harris wrote:
>>> That's what I ended up doing. You still need to do "import
>>> numpy.polynomial" to get to them, they aren't automatically imported
>>> into the numpy namespace.
>>
>> good start.
2009/11/27 Christopher Barker :
>
>> The point is that I don't think we can just decide to use Unicode or
>> Bytes in all places where PyString was used earlier.
>
> Agreed.
I only half agree. It seems to me that for almost all situations where
PyString was used, the right data type is a python3 s
2009/11/28 Wayne Watson :
>
> I was only illustrating a way that I would not consider, since the
> hardware has already created the pdf. I've already coded it pretty much
> as you have suggested. As I think I mention ed above, I'm a bit
> surprised numpy doesn't provide the code you suggest as part
2009/11/29 Dr. Phillip M. Feldman :
> All of the statistical packages that I am currently using and have used in
> the past (Matlab, Minitab, R, S-plus) calculate standard deviation using the
> sqrt(1/(n-1)) normalization, which gives a result that is unbiased when
> sampling from a normally-distr
2009/11/28 Wayne Watson :
> Anne Archibald wrote:
>> 2009/11/28 Wayne Watson :
>>
>>> I was only illustrating a way that I would not consider, since the
>>> hardware has already created the pdf. I've already coded it pretty much
>>> as you have sugg
2009/11/30 James Bergstra :
> Your question involves a few concepts:
>
> - an integer vector describing the position of an element
>
> - the logical shape (another int vector)
>
> - the physical strides (another int vector)
>
> Ignoring the case of negative offsets, a physical offset is the inner
>
2009/12/2 Keith Goodman :
> On Wed, Dec 2, 2009 at 5:09 PM, Neal Becker wrote:
>> David Warde-Farley wrote:
>>
>>> On 2-Dec-09, at 6:55 PM, Howard Chong wrote:
>>>
>>>> def myFindMaxA(myList):
>>>>     """implement finding maximum value with for loop iteration"""
>>>>     maxIndex=0
>>>>     maxVal=m
2009/12/2 David Warde-Farley :
> On 2-Dec-09, at 8:09 PM, Neal Becker wrote:
>
>> Not bad, although I wonder whether a partial sort could be faster.
>
> Probably (if the array is large) but depending on n, not if it's in
> Python. Ideal problem for Cython, though.
How is Cython support for generic
2009/12/2 Keith Goodman :
> On Wed, Dec 2, 2009 at 5:27 PM, Anne Archibald
> wrote:
>> 2009/12/2 Keith Goodman :
>>> On Wed, Dec 2, 2009 at 5:09 PM, Neal Becker wrote:
>>>> David Warde-Farley wrote:
>>>>
>>>>> On 2-Dec-09, at 6:5
2009/12/8 Robert Kern :
> On Tue, Dec 8, 2009 at 15:25, Pierre GM wrote:
>> On Dec 8, 2009, at 12:54 PM, Robert Kern wrote:
>>>
>>> As far as I can tell, the faulty global seterr() has been in place
>>> since 1.1.0, so fixing it at all should be considered a feature
>>> change. It's not likely to
2009/12/9 Dr. Phillip M. Feldman :
>
>
> Pauli Virtanen-3 wrote:
>>
>> Nevertheless, I can't really regard dropping the imaginary part a
>> significant issue.
>>
>
> I am amazed that anyone could say this. For anyone who works with Fourier
> transforms, or with electrical circuits, or with electro
2009/12/12 T J :
> Hi,
>
> Suppose I have an array of shape: (n, k, k). In this case, I have n
> k-by-k matrices. My goal is to compute the product of a (potentially
> large) user-specified selection (with replacement) of these matrices.
> For example,
>
> x = [0,1,2,1,3,3,2,1,3,2,1,5,3,2,3,5,
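One direct way to chain a selection of stacked matrices is fancy indexing followed by a reduce over np.dot (the shapes and the selection here are made up):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
mats = rng.random((6, 3, 3))         # n = 6 matrices, each k-by-k with k = 3
x = [0, 1, 2, 1, 3, 3]               # user-specified selection, with replacement
product = reduce(np.dot, mats[x])    # mats[x[0]] . mats[x[1]] . ...
```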
2009/12/14 Dr. Phillip M. Feldman :
>
> When I issue the command
>
> np.lookfor('bessel')
>
> I get the following:
>
> Search results for 'bessel'
> ---------------------------
> numpy.i0
> Modified Bessel function of the first kind, order 0.
> numpy.kaiser
> Return the Kaiser window.
> numpy
2009/12/18 Wayne Watson :
> It's starting to come back to me. I found a few old graphics books that
> get into transformation matrices and such. Yes, earth centered. I ground
> out some code with geometry and trig that at least gets the first point
> on the path right. I think I can probably apply
2009/12/21 David Goldsmith :
> On Mon, Dec 21, 2009 at 9:57 AM, Christopher Barker
> wrote:
>> Dag Sverre Seljebotn wrote:
>>> I recently got motivated to get better linear algebra for Python;
>>
>> wonderful!
>>
>>> To me that seems like the ideal way to split up code -- let NumPy/SciPy
>>> deal
2009/12/23 David Goldsmith :
> Starting a new thread for this.
>
> On Tue, Dec 22, 2009 at 7:13 PM, Anne Archibald
> wrote:
>
>> I think we have one major lacuna: vectorized linear algebra. If I have
>> to solve a whole whack of four-dimensional linear systems, right no
2009/12/23 David Warde-Farley :
> On 23-Dec-09, at 10:34 AM, Anne Archibald wrote:
>
>> The key idea would be that the "linear
>> algebra dimensions" would always be the last one(s); this is fairly
>> easy to arrange with rollaxis when it isn't already t
2010/1/19 Gael Varoquaux :
> For the google-completness of this thread, to get a speed gain, one needs
> to use the 'econ=True' flag to qr.
Be warned that in some installations (in particular some using ATLAS),
supplying econ=True can cause a segfault under certain conditions (I
think only when
2010/1/19 Charles R Harris :
>
> Note that if you apply the QR algorithm to a Vandermonde matrix with the
> columns properly ordered you can get a collection of graded orthogonal
> polynomials over a given set of points.
Or, if you want the polynomials in some other representation - by
values, or
2010/1/23 Alan G Isaac :
> Suppose x and y are conformable 2d arrays.
> I now want x to become a duplicate of y.
> I could create a new array:
> x = y.copy()
> or I could assign the values of y to x:
> x[:,:] = y
>
> As expected the latter is faster (no array creation).
> Are there better ways?
If
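Slice assignment (or np.copyto in later NumPy) overwrites x's values while keeping its identity and memory. A quick check:

```python
import numpy as np

x = np.zeros((3, 3))
y = np.arange(9.0).reshape(3, 3)

x[:, :] = y          # copy y's values into x's existing buffer
# np.copyto(x, y)    # equivalent spelling in newer NumPy
```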
2010/1/23 Alan G Isaac :
> On 1/23/2010 5:01 PM, Anne Archibald wrote:
>> If both arrays are "C contiguous", or more generally contiguous blocks
>> of memory with the same strided structure, you might get faster
>> copying by flattening them first, so that it can go
On 11 March 2010 19:30, Tom K. wrote:
>
>
>
> davefallest wrote:
>>
>> ...
>> In [3]: np.arange(1.01, 1.1, 0.01)
>> Out[3]: array([ 1.01, 1.02, 1.03, 1.04, 1.05, 1.06, 1.07, 1.08,
>> 1.09, 1.1 ])
>>
>> Why does the ... np.arange command end up including my stop value?
Don't use arange for
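The usual fix is np.linspace, which takes a sample *count* rather than a floating-point step, so the endpoint behaviour is exact:

```python
import numpy as np

# 9 evenly spaced samples, both endpoints included by construction
x = np.linspace(1.01, 1.09, 9)
```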
On 12 March 2010 13:54, gerardo.berbeglia wrote:
>
> Hello,
>
> I want to "divide" an n x n (2-dimension) numpy array matrix A by a n
> (1-dimension) array d as follows:
Look up "broadcasting" in the numpy docs. The short version is this:
operations like division act elementwise on arrays of the
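Concretely, broadcasting gives both the column-wise and the row-wise division (the shapes here are made up):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
d = np.array([1.0, 2.0, 4.0])

by_col = A / d            # divides column j by d[j]
by_row = A / d[:, None]   # divides row i by d[i]
```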
On 18 March 2010 09:57, Francesc Alted wrote:
> Hi,
>
> Konrad Hinsen has just told me that my article "Why Modern CPUs Are Starving
> and What Can Be Done About It", which has just released on the March/April
> issue of "Computing in Science and Engineering", also made into this month's
> free-ac
On 18 March 2010 13:53, Francesc Alted wrote:
> A Thursday 18 March 2010 16:26:09 Anne Archibald escrigué:
>> Speak for your own CPUs :).
>>
>> But seriously, congratulations on the wide publication of the article;
>> it's an important issue we often don't thi
On 20 March 2010 06:32, Francesc Alted wrote:
> A Friday 19 March 2010 18:13:33 Anne Archibald escrigué:
> [clip]
>> What I didn't go into in detail in the article was that there's a
>> trade-off of processing versus memory access available: we could
>> reduce th
On 20 March 2010 14:56, Dag Sverre Seljebotn
wrote:
> Pauli Virtanen wrote:
>> Anne Archibald wrote:
>>> I'm not knocking numpy; it does (almost) the best it can. (I'm not
>>> sure of the optimality of the order in which ufuncs are executed; I
>>&g
On 20 March 2010 16:18, Sebastian Haase wrote:
> On Sat, Mar 20, 2010 at 8:22 PM, Anne Archibald
> wrote:
>> On 20 March 2010 14:56, Dag Sverre Seljebotn
>> wrote:
>>> Pauli Virtanen wrote:
>>>> Anne Archibald wrote:
>>>>> I'm not
On 22 March 2010 14:42, Pauli Virtanen wrote:
> la, 2010-03-20 kello 17:36 -0400, Anne Archibald kirjoitti:
>> I was in on that discussion. My recollection of the conclusion was
>> that on the one hand they're useful, carefully applied, while on the
>> other hand
On 08/11/2007, David Cournapeau <[EMAIL PROTECTED]> wrote:
> For copy and array creation, I understand this, but for element-wise
> operations (mean, min, and max), this is not enough to explain the
> difference, no ? For example, I can understand a 50 % or 100 % time
> increase for simple operati
On 16/11/2007, Rahul Garg <[EMAIL PROTECTED]> wrote:
> It would be awesome if you guys could respond to some of the following
> questions :
> a) Can you guys tell me briefly about the kind of problems you are
> tackling with numpy and scipy?
> b) Have you ever felt that numpy/scipy was slow and ha
On 20/11/2007, Geoffrey Zhu <[EMAIL PROTECTED]> wrote:
> I have N tabulated data points { (x_i, y_i, z_i) } that describes a 3D
> surface. The surface is pretty "smooth." However, the number of data
> points is too large to be stored and manipulated efficiently. To make
> it easier to deal with, I
On 16/12/2007, Hans Meine <[EMAIL PROTECTED]> wrote:
> (*: It's similar with math.hypot, which I have got to know and appreciate
> nowadays.)
I'd like to point out that math.hypot is a nontrivial function which
is easy to get wrong:
In [6]: x=1e200; y=1e200;
In [7]: math.hypot(x,y)
Out[7]: 1.4142135623730951e+200
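The classic failure mode of the naive formula is overflow in the squares, which hypot avoids:

```python
import math

x = y = 1e200
naive = math.sqrt(x * x + y * y)   # x*x overflows to inf, so this is inf
good = math.hypot(x, y)            # finite: sqrt(2) * 1e200
```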
On 27/12/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> in my code i am trying to normalise a matrix as below
>
> mymatrix=matrix(..# items are of double type..can be negative
> values)
> numrows,numcols=mymatrix.shape
>
> for i in range(numrows):
> temp=mymatrix[i].max()
>
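Assuming the truncated loop goes on to divide each row by its maximum, the whole normalisation vectorises in one line (the divide-by-row-max reading is an assumption, since the quote is cut off):

```python
import numpy as np

mymatrix = np.array([[1.0, -2.0],
                     [3.0,  4.0]])
row_max = mymatrix.max(axis=1, keepdims=True)   # shape (2, 1)
normalised = mymatrix / row_max                 # each row divided by its own max
```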
On 28/12/2007, Christopher Barker <[EMAIL PROTECTED]> wrote:
> I like the array methods a lot -- is there any particular reason there
> is no ndarray.abs(), or has it just not been added?
Here I have to disagree with you.
Numpy provides ufuncs as general powerful tools for operating on
matrices.
On 28/12/2007, Christopher Barker <[EMAIL PROTECTED]> wrote:
> Anne Archibald wrote:
> > Numpy provides ufuncs as general powerful tools for operating on
> > matrices. More can be added relatively easily, they provide not just
> > the basic "apply" operatio
On 01/01/2008, Neal Becker <[EMAIL PROTECTED]> wrote:
> This is a c-api question.
>
> I'm trying to get iterators that are both fast and reasonably general. I
> did confirm that iterating using just the general PyArrayIterObject
> protocol is not as fast as using c-style pointers for contiguous ar
On 07/01/2008, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
> One place where Numpy differs from MatLab is the way memory is handled.
> MatLab is always generating new arrays, so for efficiency it is worth
> preallocating arrays and then filling in the parts. This is not the case in
> Numpy where
On 07/01/2008, Timothy Hochberg <[EMAIL PROTECTED]> wrote:
> I'm fairly dubious about assigning float to ints as is. First off it looks
> like a bug magnet to me due to accidentally assigning a floating point value
> to a target that one believes to be float but is in fact integer. Second,
> C-sty
On 08/01/2008, Charles R Harris <[EMAIL PROTECTED]> wrote:
> I'm starting to get interested in implementing float16 support ;) My
> tentative program goes something like this:
>
> 1) Add the operators to the scalar type. This will give sorting, basic
> printing, addition, etc.
> 2) Add conversions
On 08/01/2008, Charles R Harris <[EMAIL PROTECTED]> wrote:
> Well, at a minimum people will want to read, write, print, and promote them.
> That would at least let people work with the numbers, and since my
> understanding is that the main virtue of the format is compactness for
> storage and comm
On 30/01/2008, Scott Ransom <[EMAIL PROTECTED]> wrote:
> That works fine with arrays, scalars, or array/scalar mixes in the
> calling. I do understand that more complicated functions might
> require vectorize(), however, I wonder if sometimes it is used
> when it doesn't need to be?
It certainly
On 30/01/2008, Francesc Altet <[EMAIL PROTECTED]> wrote:
> A Wednesday 30 January 2008, Nadav Horesh escrigué:
> > In the following piece of code:
> > >>> import numpy as N
> > >>> R = N.arange(9).reshape(3,3)
> > >>> ax = [1,2]
> > >>> R
> >
> > array([[0, 1, 2],
> >[3, 4, 5],
> >[
On 03/02/2008, Damian Eads <[EMAIL PROTECTED]> wrote:
> Good day,
>
> Reversing a 1-dimensional array in numpy is simple,
>
> A = A[::-1] .
>
> However A is a new array referring to the old one and is no longer
> contiguous.
>
> While trying to reverse an array in place and keep it contig
On 05/02/2008, Chris Finley <[EMAIL PROTECTED]> wrote:
> After searching the archives, I was unable to find a good method for
> changing the stride of the correlate or convolve routines. I am doing a
> Daubechies analysis of some sample data, say data = arange(0:80). The
> coefficient array or fou
On 06/02/2008, Robert Kern <[EMAIL PROTECTED]> wrote:
> > I guess the all function doesn't know about generators?
>
> Yup. It works on arrays and things it can turn into arrays by calling the C
> API
> equivalent of numpy.asarray(). There's a ton of magic and special cases in
> asarray() in order
201 - 300 of 478 matches