Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-08 Thread Warren Focke
On Sun, 8 Feb 2015, Stefan Reiterer wrote:

 And as already mentioned: other matrix languages also allow it, but they warn 
 about its usage.
 This indeed has its merits.

numpy isn't a matrix language; its objects are arrays. Storing numbers that you are 
thinking of as a vector in an array doesn't turn the array into a vector. There 
are linear algebra modules; I haven't used them.
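
For reference, the kind of silent broadcasting being discussed (a minimal
illustration, not from the original thread):

import numpy as np

a = np.arange(3)             # shape (3,)
b = np.arange(3)[:, None]    # shape (3, 1)
print((a + b).shape)         # (3, 3) -- the arrays broadcast silently, no warning is raised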

w

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.fft.irfftn fails apparently unexpectedly

2012-02-07 Thread Warren Focke


On Tue, 7 Feb 2012, Henry Gomersall wrote:

 On Tue, 2012-02-07 at 01:04 +0100, Torgil Svensson wrote:
 irfftn is an optimization for real input and does not take complex
 input. You have to use numpy.fft.ifftn instead:

 hmmm, that doesn't sound right to me (though there could be some non
 obvious DFT magic that I'm missing). Indeed,

 np.irfftn(np.rfftn(a)) ~= a  # The interim array is complex

 Though the documentation is a bit vague as to what inputs are expected!

 Actually, reading the fftpack docs, it *does* seem that this is the
 correct behaviour (assuming when it says Fourier coefficients it means
 complex), though I've not read any of the Python code.


You're not doing anything wrong.
irfftn takes complex input and returns real output.
The exception is a bug which is triggered because max(axes) >= len(axes).

Warren Focke

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.fft.irfftn fails apparently unexpectedly

2012-02-07 Thread Warren Focke


On Tue, 7 Feb 2012, Henry Gomersall wrote:

 On Tue, 2012-02-07 at 11:53 -0800, Warren Focke wrote:
 You're not doing anything wrong.
 irfftn takes complex input and returns real output.
 The exception is a bug which is triggered because max(axes) >= len(axes).

 Is this a bug I should register?

Yes.

It should work right if you replace
 s[axes[-1]] = (s[axes[-1]] - 1) * 2
with
 s[-1] = (a.shape[axes[-1]] - 1) * 2
but I'm not really in a position to test it right now.
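
A minimal round-trip check that exercises the reported case (array shape and
axes are chosen for illustration only):

import numpy as np

a = np.random.rand(4, 6, 8)                    # real test data; shape is illustrative
axes = (1, 2)                                  # max(axes) >= len(axes), the case that fails
spec = np.fft.rfftn(a, axes=axes)              # complex half-spectrum along the last transformed axis
back = np.fft.irfftn(spec, s=[a.shape[ax] for ax in axes], axes=axes)
print(np.allclose(a, back))                    # True once the fix is applied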

Warren Focke

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.fft.irfftn fails apparently unexpectedly

2012-02-07 Thread Warren Focke


On Tue, 7 Feb 2012, Henry Gomersall wrote:

 On Tue, 2012-02-07 at 12:26 -0800, Warren Focke wrote:
 Is this a bug I should register?

 Yes.

 It should work right if you replace
  s[axes[-1]] = (s[axes[-1]] - 1) * 2
 with
  s[-1] = (a.shape[axes[-1]] - 1) * 2
 but I'm not really in a position to test it right now.

 I can confirm that seems to solve the problem. Still want a bug report?

No, I can file it, thanks.

w

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] matrix inversion

2011-08-10 Thread Warren Focke
The singular values are
1.1695e-01, 1.1682e-01, 6.84719250e-10,
so if you try

 np.linalg.pinv(a,1e-5)
array([[ 0.41097834,  3.12024106, -3.26279309],
[-3.38526587,  0.30274957,  1.89394811],
[ 2.98692033, -2.30459609,  0.28627222]])

you at least get an answer that's not near-random.
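
A sketch of how to inspect the singular values and pick the cutoff, using the
matrix from the original post:

import numpy as np

a = np.array([[ 0.01643777, -0.13539939,  0.11946689],
              [ 0.12479926,  0.01210898, -0.09217618],
              [-0.13050087,  0.07575163,  0.01144993]])

u, s, vt = np.linalg.svd(a)
print(s)                                # third singular value ~1e-10: numerically singular
print(np.linalg.pinv(a, rcond=1e-5))    # drop singular values below 1e-5 * max(s)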

w

On Wed, 10 Aug 2011, Nadav Horesh wrote:

 The matrix is singular, so you cannot expect a stable inverse.

   Nadav.

 
 From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
 On Behalf Of jp d [yo...@yahoo.com]
 Sent: 11 August 2011 03:50
 To: numpy-discussion@scipy.org
 Subject: [Numpy-discussion] matrix inversion

 hi,
 i am trying to invert matrices like this:
 [[ 0.01643777 -0.13539939  0.11946689]
 [ 0.12479926  0.01210898 -0.09217618]
 [-0.13050087  0.07575163  0.01144993]]

 in perl using Math::MatrixReal;
 and in various online calculators i get
 [  2.472715991745  3.680743681735 -3.831392002314 ]
 [ -4.673105249083 -5.348238625096 -5.703193038649 ]
 [  2.733966489601 -6.567940452290 -5.936617926811 ]

 using python , numpy and linalg.inv (or linalg.pinv) i get  a divergent answer
 [[  6.79611151e+07   1.01163031e+08   1.05303510e+08]
 [  1.01163057e+08   1.50585545e+08   1.56748838e+08]
 [  1.05303548e+08   1.56748831e+08   1.63164381e+08]]

 any suggestions?

 thanks
 jpd

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to multiply the rows of a matrix by a different number?

2009-03-03 Thread Warren Focke
M *= N[:, newaxis]

On Tue, 3 Mar 2009, Jose Borreguero wrote:

 I guess there has to be an easy way for this. I have:
 M.shape=(1,3)
 N.shape=(1,)

 I want to do this:
 for i in range(1):
 M[i]*=N[i]
 without the explicit loop


 -Jose
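
A small sketch of the broadcasting one-liner against the explicit loop (the
shapes here are illustrative):

import numpy as np

M = np.random.rand(10000, 3)      # illustrative shapes standing in for the poster's arrays
N = np.random.rand(10000)

M_loop = M.copy()
for i in range(M.shape[0]):       # the explicit loop from the question
    M_loop[i] *= N[i]

M *= N[:, np.newaxis]             # broadcast N down the rows instead
print(np.allclose(M, M_loop))     # True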

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Floating Point Difference between numpy and numarray

2008-09-09 Thread Warren Focke


On Tue, 9 Sep 2008, David Cournapeau wrote:

 We don't use SSE and co in numpy, and I doubt the compilers (even
 Intel one) are able to generate effective SSE for numpy ATM. Actually,
 double and float are about the same speed for x86 (using the x87 FPU
 and not the SSE units), because internally, the register is 80 bits
 wide when doing computation. The real difference is the memory
 pressure induced by double (8 bytes per items) compared to float when
 doing computation with double, and for certain operations, for a
 reason I don't understand (log, sin and co are as fast for float and
 double using the FPU, but sqrt and divide are twice faster for float,
 for example).

Some of the transcendental functions are implemented with polynomials, so
should take constant time.  Others may use iterative algorithms, which would
require fewer iterations to reach single accuracy than double.

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Log Arrays

2008-05-08 Thread Warren Focke


On Thu, 8 May 2008, Charles R Harris wrote:

 On Thu, May 8, 2008 at 10:11 AM, Anne Archibald [EMAIL PROTECTED]
 wrote:

 2008/5/8 Charles R Harris [EMAIL PROTECTED]:

 What realistic probability is in the range exp(-1000) ?

 Well, I ran into it while doing a maximum-likelihood fit - my early
 guesses had exceedingly low probabilities, but I needed to know which
 way the probabilities were increasing.


 The number of bosons in the universe is only on the order of 1e-42.
 Exp(-1000) may be convenient, but as a probability it is a delusion. The
 hypothesis "none of the above" would have a much larger prior.

You might like to think so.  Sadly, not.

If you're doing a least-squares (or any other maximum-likelihood) fit to 2000
data points, exp(-1000) is the highest probability you can reasonably hope for,
for a good fit.  Chi-square is -2*ln(P).  In the course of doing the fit, you
will evaluate many parameter sets which are bad fits, and the probability will
be much lower.
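
A quick numerical illustration (a sketch, with made-up log-probabilities):

import numpy as np

# A good fit to 2000 data points has chi-square ~ 2000, i.e. -2*ln(P) ~ 2000,
# so P ~ exp(-1000), which underflows an IEEE double:
print(np.exp(-1000.0))            # 0.0
# Working in log space keeps the information that the optimizer needs:
logp_a, logp_b = -1000.0, -1001.0
print(logp_a - logp_b)            # 1.0 -- the gradient direction survives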

This has no real effect on the current discussion, but:

The number of bosons in the universe (or any subset thereof) is not
well-defined.  It's not just a question of not knowing the number; there really
is no answer to that question (well, ok, 'mu').  It's like asking which slit
the particle went through in a double-slit interference experiment.  The
question is incorrect.  Values < 1 will never be tenable, but I suspect that
the minus sign was a typo.  The estimates I hear for the number of baryons
(protons, atoms) are ~ 1e80.

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Log Arrays

2008-05-08 Thread Warren Focke


On Thu, 8 May 2008, Charles R Harris wrote:

 On Thu, May 8, 2008 at 11:46 AM, Anne Archibald [EMAIL PROTECTED]
 wrote:

 2008/5/8 Charles R Harris [EMAIL PROTECTED]:

 On Thu, May 8, 2008 at 10:56 AM, Robert Kern [EMAIL PROTECTED]
 wrote:

 When you're running an optimizer over a PDF, you will be stuck in the
 region of exp(-1000) for a substantial amount of time before you get
 to the peak. If you don't use the log representation, you will never
 get to the peak because all of the gradient information is lost to
 floating point error. You can consult any book on computational
 statistics for many more examples. This is a long-established best
 practice in statistics.

 But IEEE is already a log representation. You aren't gaining precision,
 you
 are gaining more bits in the exponent at the expense of fewer bits in the
 mantissa, i.e., less precision. As I say, it may be convenient, but if
 cpu
 cycles matter, it isn't efficient.

 Efficiency is not the point here. IEEE floats simply cannot represent
 the difference between exp(-1000) and exp(-1001). This difference can
 matter in many contexts.

 For example, suppose I have observed a source in X-rays and I want to
 fit a blackbody spectrum. I have, say, hundreds of spectral bins, with
 a number of photons in each. For any temperature T and amplitude A, I
 can compute the distribution of photons in each bin (Poisson with mean
 depending on T and A), and obtain the probability of obtaining the
 observed number of photons in each bin. Even if a blackbody with
 temperature T and amplitude A is a perfect fit I should expect this


 This is an interesting problem, partly because I was one of the first guys
 to use synthetic spectra (NO, OH), to fit temperatures to spectral data. But
 also because it might look like the Poisson matters. But probably not, the
 curve fitting effectively aggregates data and the central limit theorem
 kicks in, so that the normal least squares approach will probably work.
 Also, if the number of photons in a bin is much more that about 10, the
 computation of the Poisson distribution probably uses a Gaussian, with
 perhaps a small adjustment for the tail. So I'm curious, did you find any
 significant difference trying both methods?

I can't address Anne's results, but I've certainly found it to make a
difference in my work, and it's pretty standard in high-energy astronomy.  The
central limit theorem does not get you out of having to use the right PDF to
compare data to model.  It does mean that, if you have enough events, the PDF
of the fitness function is fairly chi-squarish, so much of the logic developed
for least-squares fitting still applies (like for finding confidence intervals
on the fitted parameters).  Using Poisson statistics instead of a Gaussian
approximation is actually pretty easy; see Cash (Astrophysical Journal, Part 1,
vol. 228, Mar. 15, 1979, p. 939-947,
http://adsabs.harvard.edu/abs/1979ApJ...228..939C).
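
For concreteness, a minimal sketch of a Poisson (Cash-type) fit statistic; the
function name and interface below are illustrative:

import numpy as np

def cash_stat(observed, model):
    """Poisson maximum-likelihood statistic, C = 2 * sum(m - n*ln(m)), after Cash (1979).
    Terms that do not depend on the model parameters are dropped; model must be > 0."""
    observed = np.asarray(observed, dtype=float)
    model = np.asarray(model, dtype=float)
    return 2.0 * np.sum(model - observed * np.log(model))

# One would minimize cash_stat(counts, model_counts(T, A)) over (T, A),
# just as chi-square is minimized in the Gaussian approximation.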

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] finding eigenvectors etc

2008-02-20 Thread Warren Focke
The vectors that you used to build your covariance matrix all lay in or close
to a 3-dimensional subspace of the 4-dimensional space in which they were
represented.  So one of the eigenvalues of the covariance matrix is 0, or close
to it; the matrix is singular.  Condition is the ratio of the largest
eigenvalue to the smallest; large values can be troublesome.  Here it is ~1e17,
which is the dynamic range of doubles.  Which means that the value you observe
for the smallest eigenvalue is just the result of rounding errors.

w

On Wed, 20 Feb 2008, [EMAIL PROTECTED] wrote:

 Different implementations follow different conventions as to which
 is which.

 thank you for the replies ..the reason why i asked was that the most
 significant eigenvectors ( sorted according to eigenvalues) are  later
 used in calculations and then the results obtained differ  in java and
 python..so i was worried as to which one to use

 Your matrix is almost singular, is badly conditioned,

 Mathew, can you explain that..i didn't quite get it..
 dn


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] finding eigenvectors etc

2008-02-19 Thread Warren Focke
Yes.

Your first eigenvalue is effectively 0, the values you see are just noise. 
Different implementations produce different noise.

As for the signs of the eigenvector components, which direction is + or - X is 
arbitrary.  Different implementations follow different conventions as to which 
is which.  Sometimes it's just an accident.

Nothing-to-see-here-move-along-ly,
w

On Wed, 20 Feb 2008, Matthieu Brucher wrote:

 Hi,

 The results are OK, they are very close. Your matrix is almost singular, is
 badly conditioned, ... But the results are very close if you check them in
 a relative way. 3.84433376e-03 or -6.835301757686207E-4 is the same compared
 to 2.76980401e+13

 Matthieu

 2008/2/20, [EMAIL PROTECTED] [EMAIL PROTECTED]:

 hi
 i was calculating eigenvalues and eigenvectors for a covariancematrix
 using numpy

 adjfaces=matrix(adjarr)
 faces_trans=adjfaces.transpose()
 covarmat=adjfaces*faces_trans
 evalues,evect=eigh(covarmat)

 for a sample covarmat like
 [[  1.69365981e+13 , -5.44960784e+12,  -9.00346400e+12 , -2.48352625e
 +12]
 [ -5.44960784e+12,   5.08860660e+12,  -8.67539205e+11  , 1.22854045e
 +12]
 [ -9.00346400e+12,  -8.67539205e+11,   1.78184943e+13  ,-7.94749110e
 +12]
 [ -2.48352625e+12 ,  1.22854045e+12,  -7.94749110e+12 ,  9.20247690e
 +12]]

 i get these
 evalues
 [  3.84433376e-03,  4.17099934e+12 , 1.71771364e+13 ,  2.76980401e+13]

 evect
 [[ 0.5        -0.04330262  0.60041892 -0.62259297]
 [ 0.5        -0.78034307 -0.35933516  0.10928372]
 [ 0.5         0.25371931  0.3700265   0.74074753]
 [ 0.5         0.56992638 -0.6026     -0.22743827]]

 what bothers me is that for the same covarmat i get a different set of
 eigenvectors and eigenvalues when i use java library Jama's methods
 Matrix faceM = new Matrix(faces, nrfaces,length);
 Matrix faceM_transpose = faceM.transpose();
 Matrix covarM = faceM.times(faceM_transpose);
 EigenvalueDecomposition E = covarM.eig();
 double[] eigValue = diag(E.getD().getArray());
 double[][] eigVector = E.getV().getArray();

 here the eigValue=
 [-6.835301757686207E-4, 4.170999335736721E12, 1.7177136443134865E13,
 2.7698040117669414E13]

 and eigVector
 [
 [0.5, -0.04330262221379265, 0.6004189175979487, 0.6225929700052174],
 [0.5, -0.7803430730840767, -0.3593351608695496, -0.10928371540423852],
 [0.49994, 0.2537193127299541, 0.370026504572483,
 -0.7407475253159538],
 [0.49994, 0.5699263825679145, -0.602613008821,
 0.22743827071497524]
 ]

 I am quite confused by this difference in results ..the first element
 in eigValue is different and also the signs in last column of
 eigVectors are diff..can someone tell me why this happens?
 thanks
 dn




 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




 -- 
 French PhD student
 Website : http://matthieu-brucher.developpez.com/
 Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
 LinkedIn : http://www.linkedin.com/in/matthieubrucher

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array allocation using tuples gives views of same array

2007-11-15 Thread Warren Focke


On Thu, 15 Nov 2007, George Nurser wrote:

 It looks to me like
 a,b = (zeros((2,)),)*2
 is equivalent to
 x= zeros((2,))
 a,b=(x,)*2

Correct.

 If this is indeed a feature rather than a bug, is there an alternative
 compact way to allocate many arrays?

a, b = [zeros((2,)) for x in range(2)]
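
A short sketch contrasting the two spellings:

import numpy as np

a, b = (np.zeros(2),) * 2                 # one array, two names: b is the same object as a
a[0] = 1.0
print(b)                                  # [ 1.  0.] -- the "other" array changed too

c, d = [np.zeros(2) for _ in range(2)]    # two independent arrays
c[0] = 1.0
print(d)                                  # [ 0.  0.]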

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C or C++ package like NumPy?

2007-11-01 Thread Warren Focke
http://www.oonumerics.org/blitz/

On Fri, 2 Nov 2007, Bill Baxter wrote:

 Does anyone know of a C or C++ library that's similar to NumPy?
 Seems like all the big C++ efforts are focused on linear algebra
 rather than general purpose multidimensional arrays.

 I've written a multidimensional array class in the D programming
 language with an API modeled loosely after NumPy's.  It works ok but
 there are a lot of differences between a statically typed language
 like C++ or D and a dynamic one like Python.  So I was looking around
 for API inspiration from other C/C++ libraries.  But the projects I
 know about are all focused on linear algebra (like ublas and MTL) and
 don't support general N-dimensional arrays.

 Thanks for your comments.
 --bb

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] comparing arrays with NaN in them.

2007-08-23 Thread Warren Focke


On Thu, 23 Aug 2007, Christopher Barker wrote:

 but that feels like a kludge. maybe some sort of TheseArrays are binary
 equal would be useful.

But there are multiple possible NaN bit patterns, so you couldn't rely on the 
bits comparing equal.

Maybe something with masked arrays?
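
One possible NaN-aware comparison, sketched with isnan (the helper name is made
up; much newer numpy versions also grew an equal_nan option on array_equal):

import numpy as np

def arrays_match(a, b):
    """True if a and b have equal values, treating NaNs in the same slots as equal.
    (Sketch only; not a library function.)"""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.shape != b.shape:
        return False
    both_nan = np.isnan(a) & np.isnan(b)
    return bool(np.all((a == b) | both_nan))

print(np.array([np.nan]) == np.array([np.nan]))     # [False] -- plain == says NaN != NaN
print(arrays_match([1.0, np.nan], [1.0, np.nan]))   # True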

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Warren Focke


On Thu, 2 Aug 2007, Lars Friedrich wrote:

 What I understood is that numpy uses FFTPACK's algorithms.

Sort of.  It appears to be a hand translation from F77 to C.

 From www.netlib.org/fftpack (is this the right address?) I gathered that 
 there is a single-precision and a double-precision version of the 
 algorithms. How hard would it be (for example for me...) to add the 
 single-precision versions to numpy? I am not a decent C hacker, but if 
 someone tells me that this task is not *too* hard, I would start 
 looking more closely at the code...

It shouldn't be hard.  fftpack.c will make a single-precision version if 
DOUBLE is not defined at compile time.

 Would it make sense, that if one passes an array of dtype =
 numpy.float32 to the fft function, a complex64 is returned, and if one
 passes an array of dtype = numpy.float64, a complex128 is returned?

Sounds like reasonable default behavior.  Might be useful if the caller 
could override it.
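
A sketch of the kind of dtype-dispatching wrapper being discussed (the wrapper
name is made up, and numpy.fft itself still computes in double here):

import numpy as np

def fft_keep_precision(a, n=None, axis=-1):
    """Hypothetical wrapper: float32 in -> complex64 out, float64 in -> complex128 out.
    numpy.fft still does its work in double precision internally."""
    a = np.asarray(a)
    out = np.fft.fft(a, n=n, axis=axis)
    if a.dtype == np.float32:
        return out.astype(np.complex64)
    return out   # complex128 for double (and other) input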


w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Warren Focke


On Thu, 2 Aug 2007, Charles R Harris wrote:

 On X86 machines the main virtue would be smaller and more cache friendly
 arrays because double precision arithmetic is about the same speed as single
 precision, sometimes even a bit faster. The PPC architecture does have
 faster single than double precision, so there it could make a difference.

Yeah, I was wondering if I should mention that.  I think SSE has real 
single precision, if you can convince the compiler to do it that way. 
Even better if it could be vectorized with SSE.

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] annoying numpy string to float conversion behaviour

2007-06-25 Thread Warren Focke


On Mon, 25 Jun 2007, Torgil Svensson wrote:

 handled with sub-classing. At a minimum float(str(nan))==nan should
 evaluate as True.

False.  No NaN should ever compare equal to anything, even itself.  But if 
the system is 754-compliant, it won't.

isnan(float(str(nan))) == True would be nice, though.
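
A quick check of that round trip (results depend on the platform's str/float
handling of NaN):

import numpy as np

x = float('nan')
print(x == x)                       # False -- no NaN compares equal to anything, itself included
print(np.isnan(float(str(x))))      # True where str()/float() round-trip NaN cleanly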

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-25 Thread Warren Focke
On further contemplation, and hearing others' arguments, I'm changing my
vote.  Make it compatible with python.

w

On Tue, 24 Apr 2007, Warren Focke wrote:



 On Tue, 24 Apr 2007, Timothy Hochberg wrote:

  On 4/24/07, Robert Kern [EMAIL PROTECTED] wrote:
  
   Christian Marquardt wrote:
 
 
 
Restore the invariant, and follow python.
 
 
  This seems to imply that once upon a time numpy/numeric/numarray followed
  python here, but as far as I can recall that was never the case. Instead
  they followed C completely, ostensibly for performance reasons. That
  preserved the invariant in question, but not compatibility with Python.

 That's how I remember it, too.  And I think I'd prefer it be that way
 again, unless it can be demonstrated that the hit from being consistent
 with python is small.

 Note that C99 specifies behavior opposite that of python (but still
 consistent with itself).

 w


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-24 Thread Warren Focke


On Tue, 24 Apr 2007, Timothy Hochberg wrote:

 On 4/24/07, Robert Kern [EMAIL PROTECTED] wrote:
 
  Christian Marquardt wrote:



   Restore the invariant, and follow python.


 This seems to imply that once upon a time numpy/numeric/numarray followed
 python here, but as far as I can recall that was never the case. Instead
 they followed C completely, ostensibly for performance reasons. That
 preserved the invariant in question, but not compatibility with Python.

That's how I remember it, too.  And I think I'd prefer it be that way
again, unless it can be demonstrated that the hit from being consistent
with python is small.

Note that C99 specifies behavior opposite that of python (but still
consistent with itself).

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-23 Thread Warren Focke
But even C89 required that x == (x/y)*y + (x%y), and that's not the case
here.

w
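
A small sketch of the invariant, contrasting Python's floor convention with
C99-style truncation (both self-consistent, with different quotient/remainder
pairs):

x, y = -5, 100

# Python: // floors toward -inf, % takes the sign of y
q, r = x // y, x % y
print(q, r, q * y + r == x)          # -1 95 True

# C99: quotient truncated toward zero, remainder takes the sign of x
def divmod_trunc(x, y):
    q = abs(x) // abs(y)
    if (x < 0) != (y < 0):
        q = -q
    return q, x - q * y

qc, rc = divmod_trunc(x, y)
print(qc, rc, qc * y + rc == x)      # 0 -5 True -- a different convention, but the invariant holds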

On Mon, 23 Apr 2007, David M. Cooke wrote:

 On Apr 23, 2007, at 16:41 , Christian Marquardt wrote:
  On Mon, April 23, 2007 22:29, Christian Marquardt wrote:
  Actually,
 
  it happens for normal integers as well:
 
  n = np.array([-5, -100, -150])
  n // 100
 array([ 0, -1, -1])
  -5//100, -100//100, -150//100
 (-1, -1, -2)
 
  and finally:
 
  n % 100
 array([95,  0, 50])
  -5 % 100, -100 % 100, -150 % 100
 (95, 0, 50)
 
  So plain python / using long provides consistent results across //
  and %, but numpy doesn't...

 Python defines x // y as returning the floor of the division, and x %
 y has the same sign as y. However, in C89, it is implementation-
 defined (i.e., portability-pain-in-the-ass) whether the floor or ceil
 is used when the signs of x and y differ. In C99, the result should
 be truncated. From the C99 spec, sec. 6.5.5, #6:
 When integers are divided, the result of the / operator is the algebraic
 quotient with any fractional part discarded.  If the quotient a/b is
 representable, the expression (a/b)*b + a%b shall equal a.

 Numpy follows whatever the C compiler decides to use, because of
 speed-vs.-Python compatibility tradeoff.

 
Christian.
 
  On Mon, April 23, 2007 22:20, Christian Marquardt wrote:
  Dear all,
 
  this is odd:
 
  import numpy as np
  fact = 2825L * 86400L
  nn = np.array([-20905000L])
  nn
 array([-20905000], dtype=int64)
  nn[0] // fact
 0
 
  But:
 
  long(nn[0]) // fact
 -1L
 
  Is this a bug in numpy, or in python's implementation of longs? I
  would
  think both should give the same, really... (Python 2.5, numpy
  1.0.3dev3725,
  Linux, Intel compilers...)
 
  Many thanks for any ideas / advice,
 
Christian

 --
 David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
 [EMAIL PROTECTED]


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] zoom FFT with numpy?

2007-03-15 Thread Warren Focke


On Wed, 14 Mar 2007, Charles R Harris wrote:

 On 3/14/07, Ray Schumacher [EMAIL PROTECTED] wrote:
 
  What I had been doing is a 2048 N full real_FFT with a Hann window, and
  further analyzing the side lobe/bin energy (via linear interp) to try to
  more precisely determine the f within the peak's bin. (How legitimately
  valuable that is I'm not sure... IANAM)


 That's usually fine. You might want to zero fill to get more samples through
 the band. It would help if you gave the sample frequency in Hz too. Anyway,
 unless time is important, I would just zero fill by a factor of 4-8 and
 transform. You can get the same effect with a chirp-z transform, but again
 this depends on how much programming work you want to do. If you just
 have a couple of lines in the band that you want to locate you could also
 try maximum entropy methods.

Or do

1) explicit multiply by the transform matrix but only use a few rows.

but evaluate the transform matrix on a frequency grid finer than the
1/(2048*tsamp) that is actually independent.  Then fit sin(f)/f to the
power spectrum.  Either of these should give better interpolation than
linear.  I've seen this done (and pass peer review) to determine pulsar
frequencies.  I also remain unconvinced that interpolation provides a
better result, but that can be determined by analyzing fake data with a
known frequency.

If you're trying to determine the significance of the result, the fit
should somehow take into account the fact that the interpolated data
points are not real degrees of freedom.  But your power estimates are
already not independent since you've applied a Hann window.  Probably
should also fit to the line response of a Hann window rather than
sin(f)/f.  Plus something (flat?) for the noise.  You could determine the
real # of degrees of freedom by simulating the procedure many times (with
noise) and fitting a chisquare function to the distribution of fit
chisquare values that you see.  This would also give an empirical estimate
of how well you were determining the frequency, which is probably better
than mucking about with degrees of freedom, anyway.
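
A sketch of approach (1), evaluating a few rows of the transform matrix on a
finer frequency grid; every parameter below is illustrative:

import numpy as np

n = 2048
tsamp = 1.0 / 10000.0                          # hypothetical sample spacing
t = np.arange(n) * tsamp
x = np.sin(2 * np.pi * 1234.5 * t)             # fake signal at a known frequency
x = x * np.hanning(n)                          # Hann window, as in the original analysis

# Evaluate the DFT only on a fine grid of frequencies around the expected peak;
# 0.1 Hz spacing here, versus the 1/(n*tsamp) ~ 4.9 Hz of the plain FFT.
freqs = np.linspace(1200.0, 1270.0, 701)
rows = np.exp(-2j * np.pi * np.outer(freqs, t))   # a few rows of the transform matrix
power = np.abs(rows.dot(x)) ** 2

print(freqs[np.argmax(power)])                 # lands near 1234.5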

w

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT definition

2007-02-05 Thread Warren Focke
The frequencies produced by the two recipes are not the same.  But the
DFT is periodic in both frequency and time.  So whether you think that the
number in bin n/2 should correspond to frequency n/2 or -n/2, it's the
same number.
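
A quick check that frequencies n/2 and -n/2 label the same coefficient (direct
DFT evaluation, with an illustrative n):

import numpy as np

n = 8
x = np.random.rand(n)
t = np.arange(n)

def dft_coeff(k):
    """Direct DFT evaluation at (possibly negative) frequency index k."""
    return np.sum(x * np.exp(-2j * np.pi * k * t / n))

print(np.allclose(dft_coeff(n // 2), dft_coeff(-n // 2)))    # True: +n/2 and -n/2 give the same bin
print(np.allclose(dft_coeff(n // 2), np.fft.fft(x)[n // 2])) # and it matches numpy's bin n/2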

w

On Mon, 5 Feb 2007, Hanno Klemm wrote:


 Hi there,

 I have a question regarding the definitions surrounding FFTs. The help
 to numpy.fft.fft says:

  help(N.fft.fft)
 Help on function fft in module numpy.fft.fftpack:

 fft(a, n=None, axis=-1)
 fft(a, n=None, axis=-1)

 Will return the n point discrete Fourier transform of a. n
 defaults to the
 length of a. If n is larger than a, then a will be zero-padded to
 make up
 the difference. If n is smaller than a, the first n items in a will be
 used.

 The packing of the result is standard: If A = fft(a, n), then A[0]
 contains the zero-frequency term, A[1:n/2+1] contains the
 positive-frequency terms, and A[n/2+1:] contains the
 negative-frequency
 terms, in order of decreasingly negative frequency. So for an 8-point
 transform, the frequencies of the result are [ 0, 1, 2, 3, 4, -3,
 -2, -1].

 This is most efficient for n a power of two. This also stores a
 cache of
 working memory for different sizes of fft's, so you could
 theoretically
 run into memory problems if you call this too many times with too many
 different n's.

 

 However, the help to numpy.fft.helper.fftfreq says:

  help(N.fft.helper.fftfreq)
 Help on function fftfreq in module numpy.fft.helper:

 fftfreq(n, d=1.0)
 fftfreq(n, d=1.0) - f

 DFT sample frequencies

 The returned float array contains the frequency bins in
 cycles/unit (with zero at the start) given a window length n and a
 sample spacing d:

   f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even
   f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n)   if n is odd

 

 So one claims, that the packing goes from [0,1,...,n/2,-n/2+1,..,-1]
 (fft) and the other one claims the frequencies go from
 [0,1,...,n/2-1,-n/2,...-1]

 Is this inconsistent or am I missing something here?

 Hanno

 --
 Hanno Klemm
 [EMAIL PROTECTED]



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT definition

2007-02-05 Thread Warren Focke


On Mon, 5 Feb 2007, Timothy Hochberg wrote:

 On 2/5/07, Hanno Klemm [EMAIL PROTECTED] wrote:
 [numpy.fft[

 The packing of the result is standard: If A = fft(a, n), then A[0]
  contains the zero-frequency term, A[1:n/2+1] contains the
  positive-frequency terms, and A[n/2+1:] contains the
  negative-frequency
  terms, in order of decreasingly negative frequency. So for an 8-point
  transform, the frequencies of the result are [ 0, 1, 2, 3, 4, -3,
  -2, -1].


 [scipy.fft]


   f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even
f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n)   if n is odd
 
  
 
  So one claims, that the packing goes from [0,1,...,n/2,-n/2+1,..,-1]
  (fft) and the other one claims the frequencies go from
  [0,1,...,n/2-1,-n/2,...-1]
 
  Is this inconsistent or am I missing something here?


 Both, I think.

 In the even case, the frequency at n/2 is shared by both the positive and
 negative frequencies, so for that case things are consistent if not terribly clear.
 For the odd case, this is not true, and the scipy docs look correct in this
 case, while the numpy docs appear to assign an extra frequency to the
 positive branch. Of course that's not the one you were complaining about
 ;-).

Extra frequency where?

(numpy 1.0, debian sarge)
 n=9
 A=arange(n)

numpy docs:
 A[1:n/2+1]
array([1, 2, 3, 4])
 A[n/2+1:]
array([5, 6, 7, 8])

scipy docs:
 (n-1)/2
4
 -(n-1)/2
-4

Note that in the odd-n case, there is no Nyquist term.  If
F = fft(f), len(f) == 9
then F[-4] != F[4] (F[5] == F[-4] by periodicity in frequency)
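
A sketch confirming the odd-n case (illustrative n=9):

import numpy as np

n = 9
x = np.random.rand(n)
F = np.fft.fft(x)

print(np.allclose(F[4], F[5]))            # False: no shared Nyquist bin when n is odd
print(np.allclose(F[5], np.conj(F[4])))   # True: for real input, F[n-k] == conj(F[k])
print((np.fft.fftfreq(n) * n).round())    # [ 0.  1.  2.  3.  4. -4. -3. -2. -1.]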

w


 To be super pedantic, the discrete Fourier transform is periodic, so all of
 the frequencies can be regarded as positive or negative. That's not
 generally useful, since the assumptions that go into the DFT that make it
 periodic don't usually apply to the signal that you are sampling. Then again
 the results of DFTs are typically either small or silly in the vicinity of
 N//2.


 //=][=\\

 [EMAIL PROTECTED]

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion