On the scipy user list, this exact question appeared last month, so you can
check the answers in the archive :)
Matthieu
2007/3/2, Stephen Kelly [EMAIL PROTECTED]:
Hi,
I'm working on a project that requires interpolation, and I found this
post
Thanks, I hadn't seen it.
anyway, from some very rough benchmarks I did, the quickest and easiest way of
computing the Euclidean norm of a 1D array is:
n = sqrt(dot(x,x.conj()))
much faster than:
n = sqrt(sum(abs(x)**2))
and much much faster than:
n = scipy.linalg.norm(x)
regards,
lorenzo.
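For anyone who wants to reproduce this, a minimal timing sketch (the array size and repeat count are assumptions; the original benchmark details were not posted):

import timeit
import numpy as np
import scipy.linalg

x = np.random.rand(100000)  # assumed test array
t1 = timeit.timeit(lambda: np.sqrt(np.dot(x, x.conj())), number=100)   # BLAS-backed dot
t2 = timeit.timeit(lambda: np.sqrt((np.abs(x)**2).sum()), number=100)  # explicit sum of squares
t3 = timeit.timeit(lambda: scipy.linalg.norm(x), number=100)           # library routine
print(t1, t2, t3)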
Hi James
On Fri, Mar 09, 2007 at 08:44:34PM -0300, James Turner wrote:
Last year I wrote a program that uses the affine_transform()
function in numarray to resample and co-add datacubes with WCS
offsets in 3D. This function makes it relatively easy to handle
N-D offsets and rotations with a
On Friday 09 March 2007 18:56, Francesc Altet wrote:
On Friday 09 March 2007 18:40, Sebastian Haase wrote:
Which dtypes are supported by numexpr ?
Well, numexpr does support any dtype that is homogeneous, except 'uint64'.
This is because internally all the unsigned types are upcast
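For reference, a minimal numexpr call illustrating the dtype handling described above (the arrays and expression are illustrative, not from the thread):

import numpy as np
import numexpr as ne

a = np.arange(1e6)                   # float64 input
b = np.arange(1e6).astype('int32')   # int32; unsigned types are upcast internally
print(ne.evaluate('2*a + b'))        # per the above, uint64 inputs are not supported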
Hi Eric,
On Friday 09 March 2007 15:32, Eric Brown wrote:
Hi All,
I have a set of large arrays to which I have to apply the exp function
over and over again. I am wondering if there is any benefit to
teaching numpy how to use the vForce framework to do this function.
Does anyone
Hi Stephen,
I'd be glad to test your bincount function for histogramming purposes.
David
2007/3/14, Stephen Simmons [EMAIL PROTECTED]:
Well, there were no responses to my earlier email proposing changes to
numpy.bincount() to make it faster and more flexible. Does this mean
no one is using
David Koch wrote:
Hi,
so one thing I came across now is the following, very simple:
Matlab:
A = []
while ...
    A = [A some_scalar_value]
end
In Python, I tried:
A = empty((0,0))
while ...:
    A = concatenate((A, array([someScalarValue])), 1)
which returns an error (concatenate requires both arguments to have the
same number of dimensions, and here A is 2-d while the new value is 1-d)
On 3/14/07, Sven Schreiber [EMAIL PROTECTED] wrote:
If you want a 1d-array in the end you could try empty(0) to start with,
and then do hstack((A, your_scalar)) or something like that.
Yeah, that works - odd, I thought concatenate((a,b),0) == hstack((a,b))
Thanks
/David
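A sketch of that suggested pattern (the loop and values are illustrative):

import numpy as np

A = np.empty(0)                # 1-d, zero-length start
for value in range(10):        # stands in for the while loop
    A = np.hstack((A, value))  # promotes the scalar and appends; copies A each time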
On 3/14/07, David Koch [EMAIL PROTECTED] wrote:
On 3/14/07, Sven Schreiber [EMAIL PROTECTED] wrote:
If you want a 1d-array in the end you could try empty(0) to start with,
and then do hstack((A, your_scalar)) or something like that.
Depending on what your generating routine looks like,
On 3/14/2007 2:46 PM, Robert Cimrman wrote:
a = []
while ...:
    a.append(scalar)
a = array(a)
While it may help, growing Python lists is also an O(N) process.
One can reduce the amount of allocations by preallocating an ndarray of
a certain size (e.g. 1024 scalars), filling it up, and
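A sketch of that preallocation idea (the block size and doubling policy are assumptions):

import numpy as np

a = np.empty(1024)             # assumed initial block of scalars
n = 0                          # count of valid entries
for scalar in range(5000):     # stands in for the data source
    if n == len(a):            # buffer full: double it
        bigger = np.empty(2 * len(a))
        bigger[:n] = a
        a = bigger
    a[n] = scalar
    n += 1
a = a[:n]                      # trim to the filled portion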
On Wed, Mar 14, 2007 at 09:46:46AM -0700, Travis Oliphant wrote:
Perhaps when the new bytes type is added to Python we will have a way to
view a memory area as a bytes object and be able to make a pickle
without creating that extra copy in memory.
Perhaps this is an aspect that could be
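In the meantime, one way to sidestep the extra in-memory copy for plain numerical arrays is to write the raw buffer instead of pickling; a hedged sketch (you must record dtype and shape yourself, and the file is not endian-portable):

import numpy as np

a = np.arange(10, dtype=np.float64)
a.tofile('results.bin')        # writes the raw buffer directly, no pickle copy
b = np.fromfile('results.bin', dtype=np.float64)  # caller restores dtype/shape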
So far my migration seems to be going well. I have one problem:
I've been using the scipy_base.insert and scipy_base.extract functions
and the behavior in numpy is not the same.
a = [0, 0, 0, 0]
mask = [0, 0, 0, 1]
c = [10]
numpy.insert(a, mask, c)
would change a so that
a = [0, 0, 0, 10]
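The numpy function with the old mask semantics is numpy.place (numpy.insert does positional insertion instead, which is why the behavior differs); a minimal sketch:

import numpy as np

a = np.array([0, 0, 0, 0])
mask = np.array([0, 0, 0, 1], dtype=bool)
np.place(a, mask, [10])        # fills a where mask is true, in place
print(a)                       # -> [ 0  0  0 10]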
I'm in the process of migrating from Numeric to numpy. In some of my
code I have the following:
a = zeros(num_elements, PyObject)
b = zeros(num_elements, PyObject)
a is an array of python string objects and b is an array holding
mx.DateTime objects. What do I have to do to migrate this over to
Hi,
Please remind me what's wrong with pylab's
rand and randn!
I just learned about their existence recently; they seem quite handy
and should go directly into (the top-level of) numpy.
Functions that have the same name and do the same thing don't conflict
either ;-)
-Sebastian
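For what it's worth, the usual sticking point is the calling convention: rand and randn take the dimensions as separate arguments, unlike most numpy constructors, which take a shape tuple. A small sketch of the difference:

import numpy as np

a = np.random.rand(2, 3)       # dimensions as separate arguments
b = np.random.random((2, 3))   # shape as a tuple, like zeros() and ones()
c = np.random.randn(2, 3)      # standard normal, same calling style as rand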
vinjvinj wrote:
So far my migration seems to be going well. I have one problem:
I've been using the scipy_base.insert and scipy_base.extract functions
and the behavior in numpy is not the same.
a = [0, 0, 0, 0]
mask = [0, 0, 0, 1]
c = [10]
numpy.insert(a, mask, c)
would change a so that
a =
vinjvinj wrote:
I'm in the process of migrating from Numeric to numpy. In some of my
code I have the following:
a = zeros(num_elements, PyObject)
b = zeros(num_elements, PyObject)
PyObject -- in numpy, spell it object (the object dtype).
-Travis
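That is, a minimal sketch of the replacement (the datetime value stands in for mx.DateTime):

import numpy as np
import datetime

num_elements = 4                          # illustrative size
a = np.zeros(num_elements, dtype=object)  # each slot can hold any Python object
a[0] = 'some string'
a[1] = datetime.date(2007, 3, 14)         # stands in for an mx.DateTime value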
On 3/14/07, Sturla Molden [EMAIL PROTECTED] wrote:
On 3/14/2007 2:46 PM, Robert Cimrman wrote:
a = []
while ...
a.append( scalar )
a = array( a )
While it may help, growing Python lists is also an O(N) process.
This may just be a terminology problem, but just to be clear, appending
On Wed, 14 Mar 2007 at 09:46 -0700, Travis Oliphant wrote:
Glen W. Mabey wrote:
Hello,
After running a simulation that took 6 days to complete, my script
proceeded to attempt to write the results out to a file, pickled.
The operation failed even though there was 1G of
Hey Bill,
what are you using to communicate with the server?
May I recommend looking at Pyro!
(Python remote objects)
It would allow you to get your proxy objects.
It also handles exceptions super cleanly and easily.
I have used it for many years! It's very stable!
(If you run into problems, take
On 3/14/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
Please remind me what's wrong with pylab's
rand and randn!
I just learned about their existence recently and thought
they seem quite handy and should go directly into (the top-level of)
numpy.
Functions that have the same name and do
Is there a clever way to multiply each column of a matrix by a vector
*element-wise*? That is, the equivalent of (from some 1D v and 2D m):
r = numpy.empty_like(m)
for i in range(m.shape[-1]):
r[...,i] = v*m[...,i]
Thanks,
Alex
Miguel Oliveira, Jr. wrote:
Hello,
I've got a few codes that use arrayrange within numpy. It happens
that the new version of numpy apparently doesn't recognize
arrayrange... I've tried to change it to arange, but it doesn't
work... So, for example, the code below used to create a sine
Alexander Michael wrote:
Is there a clever way to multiply each column of a matrix by a vector
*element-wise*? That is, the equivalent of (from some 1D v and 2D m):
r = numpy.empty_like(m)
for i in range(m.shape[-1]):
r[...,i] = v*m[...,i]
numpy.multiply(m, v[:,numpy.newaxis])
If m
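A self-contained sketch of that broadcasting answer (shapes are illustrative; v needs one entry per row of m):

import numpy as np

v = np.array([1.0, 2.0, 3.0])  # one entry per row of m
m = np.ones((3, 4))
r = m * v[:, np.newaxis]       # the (3, 1) view broadcasts across all 4 columns
# equivalent to: for i in range(m.shape[-1]): r[:, i] = v * m[:, i]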
On 3/14/07, Timothy Hochberg [EMAIL PROTECTED] wrote:
On 3/14/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
Please remind me what's wrong with pylab's
rand and randn!
I just learned about their existence recently and thought
they seem quite handy and should go directly into (the
I would use something like this:
t = linspace(0, durSecs, durSecs*SRate)
Do you know the 'Numpy Example List'
http://www.scipy.org/Numpy_Example_List
Regards Eike.
PS: Ah, you did subscribe.
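Putting the two threads together, a sketch of the sine generation with arange/linspace (the parameter names and values are assumptions about the original code):

import numpy as np

SRate, durSecs, freq = 44100, 2.0, 440.0           # assumed parameters
t = np.linspace(0, durSecs, int(durSecs * SRate))  # or np.arange(...) / SRate
sine = np.sin(2 * np.pi * freq * t)                # arrayrange() is spelled arange() now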
Hi there,
I have just installed numpy-1.0.1 from source, which seemed to go
fine. However when I try to import numpy I get a segmentation
fault.
I have a 64-bit machine running RedHat Enterprise Linux and Python 2.3.4.
Any clues greatly appreciated.
Cheers,
Cory.
If I recall correctly, there's a bug in numpy 1.0.1 on Linux-x86-64
that causes this segfault. This is fixed in the latest SVN version of
numpy, so if you can grab that, it should work.
I can't find the trac ticket, but I ran into this some weeks ago.
Zach
On Mar 14, 2007, at 1:36 PM,
On Wed, 14 Mar 2007 13:02:10 +0100
Francesc Altet [EMAIL PROTECTED] wrote:
The info above is somewhat inexact. I was talking about the enhanced
numexpr version included in PyTables 2.0 (see [1]). The original version of
numexpr (see [2]) doesn't have support for int64 on 32-bit platforms and
Hi,
Now that I'm back at my old AMD Duron machine, I've made some
benchmarks just to prove that numexpr computation is not influenced by
the size of the CPU cache, but I failed miserably (and Tim was right:
numexpr efficiency does depend on CPU cache size).
Provided that the
We'd like to do what most call a zoom FFT; we are only interested
in the frequencies of, say, 6 kHz to 9 kHz with a given N, and so the
computations from DC to 6 kHz are wasted CPU time.
Can this be done without additional numpy pre-filtering computations?
If explicit filtering is needed to
Thanks, Sebastian. I'll take a look at Pyro. Hadn't heard of it.
I'm using just xmlrpclib with pickle right now.
--bb
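For comparison, the xmlrpclib-plus-pickle setup amounts to roughly this (Python 2 module names, as used in the thread; the port and function are illustrative):

import pickle
import numpy as np
import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer

def get_array():
    # wrap the pickled bytes so XML-RPC transports them safely
    return xmlrpclib.Binary(pickle.dumps(np.arange(10)))

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(get_array)
# server.serve_forever()
# client: pickle.loads(xmlrpclib.ServerProxy('http://localhost:8000').get_array().data)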
On 3/15/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hey Bill,
what are you using to communicate with the server?
May I recommend looking at Pyro!
(Python remote objects)
On 3/12/07, Travis Oliphant [EMAIL PROTECTED] wrote:
I'm not convinced that the broadcasting is causing the slow-downs.
Currently, the code has two pathways. One gets called when the inputs
are scalars, which is equivalent to the old code, and the second gets
called when broadcasting is
On 3/15/07, Bill Baxter [EMAIL PROTECTED] wrote:
Thanks, Sebastian. I'll take a look at Pyro. Hadn't heard of it.
I'm using just xmlrpclib with pickle right now.
I took a look at Pyro -- it looks nice.
The only thing I couldn't find, though, is how to decouple the wx GUI on
the server side from
On 3/14/07, Daniel Mahler [EMAIL PROTECTED] wrote:
On 3/12/07, Travis Oliphant [EMAIL PROTECTED] wrote:
I'm not convinced that the broadcasting is causing the slow-downs.
Currently, the code has two pathways. One gets called when the inputs
are scalars, which is equivalent to the old code
On 3/14/07, Ray S [EMAIL PROTECTED] wrote:
We'd like to do what most call a zoom FFT; we are only interested
in the frequencies of, say, 6 kHz to 9 kHz with a given N, and so the
computations from DC to 6 kHz are wasted CPU time.
Can this be done without additional numpy pre-filtering computations?
On 15/03/07, Ray Schumacher [EMAIL PROTECTED] wrote:
The desired band is rather narrow, as the goal is to determine the frequency of a
peak that always occurs in a narrow band of about 1 kHz around 7 kHz
2) frequency shift, {low pass}, and downsample
By this I would take it to mean, multiply by a
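A numpy-only sketch of the shift / low-pass / downsample recipe under discussion (the sample rate, decimation factor, and crude boxcar low-pass are assumptions; a real design would use a proper FIR filter before decimating):

import numpy as np

fs = 44100.0                              # assumed sample rate
n = 2 ** 16
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 7000.0 * t)        # test tone inside the 7 kHz band

f0 = 7000.0                               # centre of the band of interest
mixed = x * np.exp(-2j * np.pi * f0 * t)  # shift the band down to DC

dec = 16                                  # new Nyquist is fs/(2*dec), about 1.4 kHz
lp = mixed.reshape(-1, dec).mean(axis=1)  # crude boxcar low-pass + downsample
spec = np.fft.fft(lp)                     # same fs/n resolution, 1/16 the points
freqs = f0 + np.fft.fftfreq(len(lp), dec / fs)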
On 3/14/07, Ray Schumacher [EMAIL PROTECTED] wrote:
On 3/14/07, Charles R Harris wrote:
Sounds like you want to save CPU cycles. How much you can save will depend
on the ratio of the bandwidth to the Nyquist.
The desired band is rather narrow, as the goal is to determine the frequency of a
peak