[Numpy-discussion] runtime warning for where

2013-11-16 Thread David Pine
The program at the bottom of this message returns the following runtime warning:

python test.py
test.py:5: RuntimeWarning: invalid value encountered in divide
 return np.where(x==0., 1., np.sin(x)/x)

The function works correctly returning
x = np.array([  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.])
y = np.array([ 1.,  0.84147098,  0.45464871,  0.04704   , -0.18920062,
  -0.19178485, -0.04656925,  0.09385523,  0.12366978,  0.04579094,
  -0.05440211])

The runtime warning suggests that np.where evaluates np.sin(x)/x at all x,
including x=0, even though np.where returns the correct value of 1. when x
is 0.  This seems odd to me.  Why issue a runtime warning when nothing is
wrong?  Moreover, I don't recall NumPy issuing such warnings in earlier
versions.

import numpy as np
import matplotlib.pyplot as plt

def sinc(x):
   return np.where(x==0., 1., np.sin(x)/x)

x = np.linspace(0., 10., 11)
y = sinc(x)

plt.plot(x, y)
plt.show()
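[Editor's note: np.where does evaluate both branches for every element, which is what triggers the warning. A warning-free variant, a sketch only (the name sinc_safe is mine), computes the division just where the denominator is nonzero:]

```python
import numpy as np

def sinc_safe(x):
    # np.where evaluates both branches eagerly, so sin(x)/x is computed
    # even at x == 0.  Restrict the division to the nonzero entries instead.
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)           # value at x == 0 is 1 by definition
    nz = x != 0.0
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

x = np.linspace(0., 10., 11)
y = sinc_safe(x)                    # no RuntimeWarning
```

Alternatively, the original np.where call can be wrapped in `with np.errstate(invalid='ignore'):` to silence the warning while keeping the one-liner.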
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] polyfit with fixed points

2013-03-05 Thread David Pine
Jaime,

If you are going to work on this, you should also take a look at the recent
thread
http://mail.scipy.org/pipermail/numpy-discussion/2013-February/065649.html,
which concerns the weighting function, currently in a confused state in
polyfit.  By the way, Numerical Recipes has a nice discussion both of fixing
parameters and of weighting the data in different ways in polynomial least
squares fitting.

David
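[Editor's note: for reference, a minimal sketch of the Lagrange-multiplier approach discussed below; the function name and interface are hypothetical, not an actual numpy.polyfit API. The constrained least-squares problem is solved via its KKT system.]

```python
import numpy as np

def polyfit_fixed(x, y, deg, xf, yf):
    # Least-squares polynomial fit constrained to pass through (xf, yf):
    # minimize ||V c - y||^2 subject to Vf c = yf, via the KKT system
    #   [[2 V^T V, Vf^T], [Vf, 0]] [c; lam] = [2 V^T y; yf]
    x, y, xf, yf = map(np.asarray, (x, y, xf, yf))
    V = np.vander(x, deg + 1)          # design matrix for the data
    Vf = np.vander(xf, deg + 1)        # rows enforcing the constraints
    nc, nf = deg + 1, len(xf)
    K = np.zeros((nc + nf, nc + nf))
    K[:nc, :nc] = 2 * V.T @ V
    K[:nc, nc:] = Vf.T
    K[nc:, :nc] = Vf
    rhs = np.concatenate([2 * V.T @ y, yf])
    sol = np.linalg.solve(K, rhs)
    return sol[:nc]                    # coefficients, highest power first

# Example: quadratic fit to noisy data, forced through (0, 0)
rng = np.random.default_rng(0)
x = np.linspace(0., 1., 20)
y = 1 + 2 * x + 3 * x**2 + 0.01 * rng.standard_normal(20)
c = polyfit_fixed(x, y, 2, [0.0], [0.0])
```

The constraint is enforced exactly (to solver precision), while the remaining degrees of freedom are fit in the least-squares sense.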


On Mon, Mar 4, 2013 at 7:23 PM, Jaime Fernández del Río 
jaime.f...@gmail.com wrote:

 A couple of days back, while answering a question on Stack Overflow (
 http://stackoverflow.com/a/15196628/110026), I found myself using
 Lagrange multipliers to fit a polynomial with least squares to data, making
 sure it went through some fixed points. This time it was relatively easy,
 because some 5 years ago I came across the same problem in real life, and
 spent the better part of a week banging my head against it. Even knowing
 what you are doing, it is far from simple, and in my own experience very
 useful: I think the only time ever I have fitted a polynomial to data with
 a definite purpose, it required that some points were fixed.

 Seeing that polyfit is entirely coded in Python, it would be relatively
 straightforward to add support for fixed points. It is also something I
 feel capable of doing, and willing to do.

  * Is such an additional feature something worthy of investigating, or
 will it never find its way into numpy.polyfit?
  * Any ideas on the best syntax for the extra parameters?

 Thanks,

 Jaime

 --
 (\__/)
 ( O.o)
 (  ) This is Rabbit. Copy Rabbit into your signature and help him with his
 plans for world domination.





[Numpy-discussion] polyfit in NumPy v1.7

2013-02-27 Thread David Pine
As of NumPy v1.7, numpy.polyfit includes an option for providing weighting
to data to be fit.  It's a welcome addition, but the implementation seems a
bit non-standard, perhaps even wrong, and I wonder if someone can enlighten
me.

1. The documentation does not specify what the weighting array w is
supposed to be.  After some experimenting, I figured out that it is
1/sigma, where sigma is the standard-deviation uncertainty in the y data.
A better interface, consistent with how weighting is done in
scipy.optimize.curve_fit, would be to accept sigma directly rather than
w.  Sigma is typically what the user has at hand, and that syntax is more
transparent.
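[Editor's note: concretely, under the current interface the weights must be the reciprocal uncertainties. A sketch, assuming per-point standard deviations in `sigma`:]

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0., 1., 50)
sigma = np.full_like(x, 0.05)       # assumed per-point uncertainty in y
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(x.size)

# polyfit's w is the reciprocal uncertainty, 1/sigma, not sigma itself
coeffs = np.polyfit(x, y, 1, w=1.0 / sigma)
# scipy's curve_fit, by contrast, takes the uncertainty directly:
#   curve_fit(f, x, y, sigma=sigma)
```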

2. The definition of the covariance matrix in polyfit is peculiar (or maybe
wrong, I'm not sure).  Towards the end of the code for polyfit, the
standard covariance matrix is calculated and given the variable name
Vbase.  However, instead of returning Vbase as the covariance matrix,
polyfit returns the quantity Vbase * fac, where fac = resids / (len(x) -
order - 2.0).  fac scales as N, the number of data points, so the
covariance matrix is about N times larger than it should be for large
values of N; the increase is smaller for small N (of order 10).  Perhaps
the authors of polyfit wanted a more conservative error estimate, but the
references given in the polyfit code make no mention of it.  I think it
would be much better to return the standard definition of the covariance
matrix (see, for example, Numerical Recipes).

David Pine


Re: [Numpy-discussion] polyfit in NumPy v1.7

2013-02-27 Thread David Pine
Pauli, Josef, Chuck,

I read over the discussion on curve_fit.  I believe I now understand what
people are trying to do when they write about scaling the weighting and/or
covariance matrix.  And I agree that what polyfit does in its current form
is estimate the absolute errors in the data from the data and the fit.

Unfortunately, if you want to supply absolute uncertainty estimates,
polyfit offers no way to skip this correction.  This is a mistake.

In my opinion, polyfit and curve_fit should be organized with the following
arguments:

ydata_err  :  array_like or float or None

absolute_err  :  bool



If ydata_err is an array, then it has the same length as xdata and ydata
and gives the uncertainty (absolute or relative) of ydata.

If ydata_err is a float (a single value), it gives the uncertainty
(absolute or relative) of ydata; that is, all ydata points have the same
uncertainty.

If ydata_err is None, it is set equal to 1 internally.

If absolute_err is True, no correction is made to the covariance matrix.

If absolute_err is False, a correction is made to the covariance matrix
based on the square of the residuals and the value(s) of ydata_err, so that
it gives useful estimates of the uncertainties in the fitting parameters.
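[Editor's note: a sketch of the behaviour proposed above; the names ydata_err and absolute_err come from the post, but the implementation is purely illustrative.]

```python
import numpy as np

def fit_cov(x, y, deg, ydata_err=None, absolute_err=True):
    # Hypothetical interface: ydata_err may be an array, a single float,
    # or None (treated as 1); absolute_err controls whether the
    # covariance matrix is rescaled by the residuals.
    x, y = np.asarray(x, float), np.asarray(y, float)
    err = np.broadcast_to(
        np.asarray(1.0 if ydata_err is None else ydata_err, float), x.shape)
    V = np.vander(x, deg + 1) / err[:, None]    # weighted design matrix
    b = y / err
    c, *_ = np.linalg.lstsq(V, b, rcond=None)
    cov = np.linalg.inv(V.T @ V)                # absolute-error covariance
    if not absolute_err:
        # errors known only up to a factor: rescale so the reduced
        # chi-square of the fit equals 1
        resid = b - V @ c
        cov = cov * (resid @ resid) / (len(x) - (deg + 1))
    return c, cov

x = np.linspace(0., 1., 10)
y = 3.0 - x                                     # exact straight line
c, cov = fit_cov(x, y, 1, ydata_err=0.5)        # c is close to [-1, 3]
```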

I also have a quibble with the extra factor of 2 subtracted in the
denominator of fac (line 596).  This is highly non-standard: most of the
literature does not use this factor, and for good reason, since it leads to
complete nonsense when there are only 3 or 4 data points.  Software
supplied for general use should follow the most common standard.

Finally, the documentation needs to be improved so that it is clear.  I
would be happy to contribute to improving both the documentation and the
software.

David


On Wed, Feb 27, 2013 at 12:47 PM, Pauli Virtanen p...@iki.fi wrote:

 27.02.2013 16:40, David Pine wrote:
 [clip]
  2.  I am sorry but I don't understand your response.  The matrix Vbase
  in the code is already the covariance matrix, _before_ it is scaled by
  fac.  Scaling it by fac and returning  Vbase*fac as the covariance
  matrix is wrong, at least according to the references I know, including
  Numerical Recipes, by Press et al, Data Reduction and Error Analysis
  for the Physical Sciences by Bevington, both standard works.

 The covariance matrix is (A^T A)^-1 only if the data is weighted by its
 standard errors prior to lstsq. Polyfit estimates the standard errors
 from the fit itself, which results in the `fac` multiplication.

 This is apparently what some people expect. The way the weight
 parameters work is however confusing, as they are
 w[i]=sigma_estimate/sigma[i], rather than being absolute errors.

 Anyway, as Josef noted, it's the same problem that curve_fit in Scipy
 had and probably the same fix needs to be done here.

 --
 Pauli Virtanen

