On Sun, 8 Feb 2015, Stefan Reiterer wrote:
And as already mentioned: other matrix languages also allow it, but they warn
about its usage.
This does indeed have its merits.
numpy isn't a matrix language; its objects are arrays. Storing numbers that
you are thinking of as a vector in an array doesn't
coefficients it means
complex), though I've not read any of the Python code.
You're not doing anything wrong.
irfftn takes complex input and returns real output.
The exception is a bug which is triggered because max(axes) = len(axes).
Warren Focke
On Tue, 7 Feb 2012, Henry Gomersall wrote:
On Tue, 2012-02-07 at 11:53 -0800, Warren Focke wrote:
You're not doing anything wrong.
irfftn takes complex input and returns real output.
The exception is a bug which is triggered because max(axes) =
len(axes).
Is this a bug I should register?
On Tue, 7 Feb 2012, Henry Gomersall wrote:
On Tue, 2012-02-07 at 12:26 -0800, Warren Focke wrote:
Is this a bug I should register?
Yes.
It should work right if you replace
s[axes[-1]] = (s[axes[-1]] - 1) * 2
with
s[-1] = (a.shape[axes[-1]] - 1) * 2
but I'm not really
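The patch above touches numpy's internal shape bookkeeping. As a minimal sketch of the intended behavior (assuming a recent numpy where the bug is fixed), irfftn takes the complex half-spectrum produced by rfftn and returns a real array matching the original input:

```python
import numpy as np

# Hedged sketch: irfftn inverts rfftn, taking complex input back to real
# output. The shape (2, 3, 4) here is purely illustrative.
a = np.arange(24.0).reshape(2, 3, 4)
spec = np.fft.rfftn(a)                 # complex; last axis has 4//2 + 1 = 3 bins
back = np.fft.irfftn(spec, s=a.shape)  # passing s avoids shape ambiguity
print(back.dtype, np.allclose(back, a))
```

Passing `s=a.shape` explicitly is the safe way to round-trip, since the half-spectrum alone cannot distinguish even from odd last-axis lengths.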
The singular values are
1.1695e-01, 1.1682e-01, 6.84719250e-10
so if you try
np.linalg.pinv(a,1e-5)
array([[ 0.41097834, 3.12024106, -3.26279309],
[-3.38526587, 0.30274957, 1.89394811],
[ 2.98692033, -2.30459609, 0.28627222]])
you at least get an answer that's not
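A minimal sketch of the same idea, using an illustrative nearly singular matrix rather than the poster's data: the second argument to pinv (rcond) discards singular values below that fraction of the largest one, so the tiny third singular value no longer blows up the pseudo-inverse.

```python
import numpy as np

# Illustrative matrix (not the one from the thread): the second row is
# almost exactly twice the first, so one singular value is ~1e-9.
a = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0 + 1e-9],
              [1.0, 0.0, 1.0]])
print(np.linalg.svd(a, compute_uv=False))  # two O(1) values, one tiny

# rcond=1e-5 treats singular values below 1e-5 * largest as exactly zero
p = np.linalg.pinv(a, rcond=1e-5)
print(np.abs(p).max())                     # modest entries, no 1e9 blow-up
```

Without the cutoff, the ~1e-9 singular value would contribute a 1/1e-9 term and the "inverse" would be dominated by noise.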
M *= N[:, newaxis]
On Tue, 3 Mar 2009, Jose Borreguero wrote:
I guess there has to be an easy way for this. I have:
M.shape=(1,3)
N.shape=(1,)
I want to do this:
for i in range(1):
M[i]*=N[i]
without the explicit loop
-Jose
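A runnable sketch of the one-liner answer above: `N[:, newaxis]` gives N a trailing axis of length 1, so it broadcasts across each row of M and the explicit Python loop disappears.

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0]])  # shape (1, 3)
N = np.array([10.0])             # shape (1,)

# N[:, np.newaxis] has shape (1, 1); broadcasting stretches it across
# the columns of M, scaling row i of M by N[i] without a loop.
M *= N[:, np.newaxis]
print(M)   # [[10. 20. 30.]]
```

The same pattern works for any leading dimension, e.g. M of shape (k, 3) and N of shape (k,).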
On Tue, 9 Sep 2008, David Cournapeau wrote:
We don't use SSE and co in numpy, and I doubt the compilers (even
Intel one) are able to generate effective SSE for numpy ATM. Actually,
double and float are about the same speed for x86 (using the x87 FPU
and not the SSE units), because
On Thu, 8 May 2008, Charles R Harris wrote:
On Thu, May 8, 2008 at 10:11 AM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/5/8 Charles R Harris [EMAIL PROTECTED]:
What realistic probability is in the range exp(-1000)?
Well, I ran into it while doing a maximum-likelihood fit - my early
On Thu, 8 May 2008, Charles R Harris wrote:
On Thu, May 8, 2008 at 11:46 AM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/5/8 Charles R Harris [EMAIL PROTECTED]:
On Thu, May 8, 2008 at 10:56 AM, Robert Kern [EMAIL PROTECTED]
wrote:
When you're running an optimizer over a PDF, you will
The vectors that you used to build your covariance matrix all lay in or close
to a 3-dimensional subspace of the 4-dimensional space in which they were
represented. So one of the eigenvalues of the covariance matrix is 0, or close
to it; the matrix is singular. Condition is the ratio of the largest singular
value to the smallest.
Yes.
Your first eigenvalue is effectively 0, the values you see are just noise.
Different implementations produce different noise.
As for the signs of the eigenvector components, which direction is + or - X is
arbitrary. Different implementations follow different conventions as to which
is
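A hedged illustration of the situation described (the data here is synthetic, not the poster's): 4-D sample vectors built to lie exactly in a 3-D subspace give a singular covariance matrix, so the smallest eigenvalue is zero up to floating-point noise, and any eigenvector is only determined up to sign.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 3))
# fourth coordinate is an exact linear combination of the first three,
# so the samples live in a 3-D subspace of 4-D space
data = np.hstack([x, (x @ np.array([1.0, -2.0, 0.5]))[:, None]])

cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
print(evals[0])                      # ~0: the direction the data never explores
# v and -v satisfy the eigenvalue equation equally well; the sign you get
# is an implementation convention
print(np.allclose(cov @ evecs[:, -1], evals[-1] * evecs[:, -1]))
```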
On Thu, 15 Nov 2007, George Nurser wrote:
It looks to me like
a,b = (zeros((2,)),)*2
is equivalent to
x= zeros((2,))
a,b=(x,)*2
Correct.
If this is indeed a feature rather than a bug, is there an alternative
compact way to allocate many arrays?
a, b = [zeros((2,)) for x in range(2)]
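A quick demonstration of why the tuple-repetition version is a trap and the list comprehension is the fix: `(zeros((2,)),) * 2` repeats one object, so both names alias the same array.

```python
import numpy as np

# Tuple repetition: one array object, two names
a, b = (np.zeros((2,)),) * 2
a[0] = 1.0
print(b[0], a is b)   # b sees the write; a and b are the same object

# List comprehension: a fresh array per iteration
c, d = [np.zeros((2,)) for _ in range(2)]
c[0] = 1.0
print(d[0], c is d)   # d is untouched; independent objects
```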
http://www.oonumerics.org/blitz/
On Fri, 2 Nov 2007, Bill Baxter wrote:
Does anyone know of a C or C++ library that's similar to NumPy?
Seems like all the big C++ efforts are focused on linear algebra
rather than general purpose multidimensional arrays.
I've written a multidimensional array
On Thu, 23 Aug 2007, Christopher Barker wrote:
but that feels like a kludge. Maybe some sort of "these arrays are binary
equal" test would be useful.
But there are multiple possible NaNs, so you couldn't rely on the bits
comparing.
Maybe something with masked arrays?
w
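One hedged workaround sketch (not proposed in the thread itself): since NaN never compares equal and has many bit patterns, compare with NaN-aware helpers instead of raw equality or bitwise checks.

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])
y = np.array([1.0, np.nan, 3.0])

# Plain equality fails: nan != nan under IEEE 754
print(np.array_equal(x, y))               # False

# equal_nan=True treats NaNs in matching positions as equal
print(np.allclose(x, y, equal_nan=True))  # True
```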
On Thu, 2 Aug 2007, Lars Friedrich wrote:
What I understood is that numpy uses FFTPACK's algorithms.
Sort of. It appears to be a hand translation from F77 to C.
From www.netlib.org/fftpack (is this the right address?) I gathered that
there is a single-precision and a double-precision version
On Thu, 2 Aug 2007, Charles R Harris wrote:
On X86 machines the main virtue would be smaller and more cache friendly
arrays because double precision arithmetic is about the same speed as single
precision, sometimes even a bit faster. The PPC architecture does have
faster single than double
On Mon, 25 Jun 2007, Torgil Svensson wrote:
handled with sub-classing. At a minimum float(str(nan))==nan should
evaluate as True.
False. No NaN should ever compare equal to anything, even itself. But if
the system is 754-compliant, it won't.
isnan(float(str(nan))) == True would be nice,
On further contemplation, and hearing others' arguments, I'm changing my
vote. Make it compatible with python.
w
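A small sketch of the behavior under discussion: the string round trip works fine, but `== nan` can never be the test, because 754 NaN compares unequal even to itself; `isnan` is the right check.

```python
import math

nan = float("nan")
print(nan == nan)                     # False: NaN never compares equal
print(str(nan), float(str(nan)))      # round trip through str succeeds
print(math.isnan(float(str(nan))))    # True: this is the correct test
```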
On Tue, 24 Apr 2007, Warren Focke wrote:
On Tue, 24 Apr 2007, Timothy Hochberg wrote:
On 4/24/07, Robert Kern [EMAIL PROTECTED] wrote:
Christian Marquardt wrote
On Tue, 24 Apr 2007, Timothy Hochberg wrote:
On 4/24/07, Robert Kern [EMAIL PROTECTED] wrote:
Christian Marquardt wrote:
Restore the invariant, and follow python.
This seems to imply that once upon a time numpy/numeric/numarray followed
python here, but as far as I can recall
But even C89 required that x == (x/y)*y + (x%y), and that's not the case
here.
w
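Python's floor division keeps the invariant cited above for every sign combination; a quick check (C's truncating division, by contrast, pairs with a remainder that takes the sign of the dividend):

```python
# x == (x // y) * y + (x % y) holds in Python for all sign combinations,
# with x % y taking the sign of y.
for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    q, r = divmod(x, y)
    print(x, y, q, r, x == q * y + r)   # invariant holds in every case
```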
On Mon, 23 Apr 2007, David M. Cooke wrote:
On Apr 23, 2007, at 16:41 , Christian Marquardt wrote:
On Mon, April 23, 2007 22:29, Christian Marquardt wrote:
Actually,
it happens for normal integers as well:
On Wed, 14 Mar 2007, Charles R Harris wrote:
On 3/14/07, Ray Schumacher [EMAIL PROTECTED] wrote:
What I had been doing is a 2048 N full real_FFT with a Hann window, and
further analyzing the side lobe/bin energy (via linear interp) to try to
more precisely determine the f within the
The frequencies produced by the two recipes are not the same. But the
DFT is periodic in both frequency and time. So whether you think that the
number in bin n/2 should correspond to frequency n/2 or -n/2, it's the
same number.
w
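A numerical sketch of that periodicity claim: evaluating the DFT sum at frequency +n/2 and at -n/2 gives the identical coefficient, which is also what sits in bin n/2 of fft's output.

```python
import numpy as np

n = 8
x = np.random.default_rng(1).standard_normal(n)
X = np.fft.fft(x)
t = np.arange(n)
k = n // 2

# DFT evaluated at +n/2 and -n/2: exp(-i*pi*t) == exp(+i*pi*t) == (-1)**t,
# so the two "different" frequencies produce the same number
plus = np.sum(x * np.exp(-2j * np.pi * k * t / n))
minus = np.sum(x * np.exp(-2j * np.pi * (-k) * t / n))
print(np.allclose(plus, minus), np.allclose(plus, X[k]))
```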
On Mon, 5 Feb 2007, Hanno Klemm wrote:
Hi there,
I have
On Mon, 5 Feb 2007, Timothy Hochberg wrote:
On 2/5/07, Hanno Klemm [EMAIL PROTECTED] wrote:
[numpy.fft]
The packing of the result is standard: If A = fft(a, n), then A[0]
contains the zero-frequency term, A[1:n/2+1] contains the
positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms.
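A short sketch of that layout for n = 8, using fftfreq to label each bin (the values are cycles per sample, scaled here to integer bin frequencies):

```python
import numpy as np

n = 8
# bin frequencies in the standard fft packing: DC, positives, negatives
print(np.fft.fftfreq(n) * n)      # [ 0.  1.  2.  3. -4. -3. -2. -1.]

x = np.arange(float(n))
A = np.fft.fft(x)
print(np.isclose(A[0], x.sum()))  # True: the zero-frequency term is the sum
```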