Re: [Numpy-discussion] Adding weights to cov and corrcoef

2014-03-06 Thread David Goldsmith
Date: Thu, 06 Mar 2014 13:40:40 +0100

 From: Sebastian Berg sebast...@sipsolutions.net
 Subject: Re: [Numpy-discussion] Adding weights to cov and corrcoef
 (Sebastian Berg)
 To: numpy-discussion@scipy.org
 Message-ID: 1394109640.9122.13.camel@sebastian-t440
 Content-Type: text/plain; charset=UTF-8

 On Mi, 2014-03-05 at 10:21 -0800, David Goldsmith wrote:
  +1 for it being too baroque for NumPy--should go in SciPy (if it
  isn't already there): IMHO, NumPy should be kept as lean and mean as
  possible, embellishments are what SciPy is for.  (Again, IMO.)
 

 Well, on the other hand, scipy does not actually have a `std` function
 of its own, I think.


Oh, well, in that case forget I said anything.  (Though I think it's
interesting that no one else has chimed in: if you're the only one who
needs it (at this time), perhaps it would be best to roll your own and
then offer to pass it around. :-))

DG


 So if it is quite useful I think this may be an
 option (I don't think I ever used weights with std, so I can't argue
 strongly for inclusion myself). Unless adding new functions to
 `scipy.stats` (or just statsmodels) which implement different types of
 weights is the longer term plan, then things might bite...
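In the meantime, a weighted `std` is a two-liner on top of `np.average`. A minimal sketch, using the population (ddof=0) convention; picking the right denominator for other weight types is exactly the statistical subtlety the thread worries about, and `weighted_std` is a hypothetical name, not an existing NumPy function:

```python
import numpy as np

def weighted_std(a, weights):
    """Weighted standard deviation (population / ddof=0 convention)."""
    avg = np.average(a, weights=weights)
    var = np.average((np.asarray(a) - avg) ** 2, weights=weights)
    return np.sqrt(var)
```

With unit weights this reduces to `np.std`, and with integer weights it matches repeating each observation that many times.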

  DG
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion




 --

 Message: 5
 Date: Thu, 6 Mar 2014 13:45:36 +
 From: Nathaniel Smith n...@pobox.com
 Subject: Re: [Numpy-discussion] numpy gsoc ideas (was: numpy gsoc
 topic idea: configurable algorithm precision and vector math
 library
 integration)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 CAPJVwBm=no71WvC9Zjh7DXNaGn0jpAGmvrOyoVvmHK-MW=
 r...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8

 On Thu, Mar 6, 2014 at 5:17 AM, Sturla Molden sturla.mol...@gmail.com
 wrote:
  Nathaniel Smith n...@pobox.com wrote:
 
  3. Using Cython in the numpy core
 
  The numpy core contains tons of complicated C code implementing
  elaborate operations like indexing, casting, ufunc dispatch, etc. It
  would be really nice if we could use Cython to write some of these
  things.
 
  So the idea of having a NumPy as a pure C library in the core is
 abandoned?

 This question doesn't make sense to me so I think I must be missing
 some context.

 Nothing is abandoned: This is one email by one person on one mailing
 list suggesting a project to the explore the feasibility of something.
 And anyway, Cython is just a C code generator, similar in principle to
 (though vastly more sophisticated than) the ones we already use. It's
 not like we've ever promised our users we'll keep stable which kind of
 code generators we use internally.

   However, there is a practical problem: Cython assumes that
  each .pyx file generates a single compiled module with its own
  Cython-defined API. Numpy, however, contains a large number of .c
  files which are all compiled together into a single module, with its
  own home-brewed system for defining the public API. And we can't
  rewrite the whole thing. So for this to be viable, we would need some
  way to compile a bunch of .c *and .pyx* files together into a single
  module, and allow the .c and .pyx files to call each other.
 
  Cython takes care of that already.
 
  http://docs.cython.org/src/userguide/sharing_declarations.html#cimport
 
 
 http://docs.cython.org/src/userguide/external_C_code.html#using-cython-declarations-from-c

 Linking multiple .c and .pyx files together into a single .so/.dll is
 much more complicated than just using 'cimport'. Try it if you don't
 believe me :-).

 -n

 --
 Nathaniel J. Smith
 Postdoctoral researcher - Informatics - University of Edinburgh
 http://vorpus.org


 --

 Message: 6
 Date: Thu, 6 Mar 2014 13:59:30 +
 From: Nathaniel Smith n...@pobox.com
 Subject: Re: [Numpy-discussion] numpy gsoc ideas (was: numpy gsoc
 topic idea: configurable algorithm precision and vector math
 library
 integration)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 CAPJVwB=PmquKm5j4-oquCkLkcK8G1pipB8XLbq5=
 26izjbj...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8

 On Thu, Mar 6, 2014 at 9:11 AM, David Cournapeau courn...@gmail.com
 wrote:
 
  On Wed, Mar 5, 2014 at 9:11 PM, Nathaniel Smith n...@pobox.com wrote:
  So this project would have the following goals, depending on how
  practical this turns out to be: (1) produce a hacky proof-of-concept
  system for doing the above, (2) turn the hacky proof-of-concept into
  something actually viable for use in real life (possibly this would
  require getting changes upstream into Cython, etc.), (3) use this
  system to actually port some interesting numpy code into cython.
 
 
  Having to synchronise two projects may be hard for a GSoC, no ?

 Yeah, if someone

Re: [Numpy-discussion] Adding weights to cov and corrcoef (Sebastian Berg)

2014-03-05 Thread David Goldsmith
Date: Wed, 05 Mar 2014 17:45:47 +0100

 From: Sebastian Berg sebast...@sipsolutions.net
 Subject: [Numpy-discussion] Adding weights to cov and corrcoef
 To: numpy-discussion@scipy.org
 Message-ID: 1394037947.21356.20.camel@sebastian-t440
 Content-Type: text/plain; charset=UTF-8

 Hi all,

 in Pull Request https://github.com/numpy/numpy/pull/3864 Noel Dawe
 suggested adding new parameters to our `cov` and `corrcoef` functions to
 implement weights, which already exists for `average` (the PR still
 needs to be adapted).


Do you mean adopted?


 However, we may have missed something obvious, or maybe it is already
 getting too statistical for NumPy, or the keyword argument might be
 better `uncertainties` and `frequencies`. So comments and insights are
 very welcome :).


+1 for it being too baroque for NumPy--should go in SciPy (if it isn't
already there): IMHO, NumPy should be kept as lean and mean as possible,
embellishments are what SciPy is for.  (Again, IMO.)
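[For the record, later NumPy releases did add exactly this: `np.cov` grew
`fweights` (integer frequency weights) and `aweights` (reliability weights).
A quick sanity check of the frequency-weight semantics, which should match
simply repeating each observation:]

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 1.0, 0.0, 5.0]])
f = np.array([1, 2, 2, 1])  # integer frequency (repeat) weights

# fweights=f should behave like repeating the i-th column f[i] times
c_weighted = np.cov(x, fweights=f)
c_repeated = np.cov(np.repeat(x, f, axis=1))
```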

DG


Re: [Numpy-discussion] [JOB ANNOUNCEMENT] Software Developer permanent position

2014-02-21 Thread David Goldsmith
On Thu, Feb 20, 2014 at 10:37 PM, numpy-discussion-requ...@scipy.orgwrote:

 Date: Fri, 21 Feb 2014 07:43:17 +0100
 From: V. Armando Sol? s...@esrf.fr
 *Ref. 8173* *- Deadline for returning application forms: * *01/04/2014*


I assume that's the European date format, i.e., the due date is April 1,
2014, not Jan. 4 2014, oui?

DG


Re: [Numpy-discussion] create numerical arrays from strings

2014-02-06 Thread David Goldsmith
Date: Thu, 6 Feb 2014 08:42:38 -0800

 From: Chris Barker chris.bar...@noaa.gov
 Subject: Re: [Numpy-discussion] create numerical arrays from strings
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 
 calgmxekvnqok6wty-jbjzgaeu5ewhh1_flmsqxjsujfclex...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

 1) so use np.mat !


To elaborate on this (because I, for one, was not aware that mat supported
this API, and, significantly, the fact that it does does not appear in its
docstring):

>>> import numpy as np
>>> help(np.mat)
Help on function asmatrix in module numpy.matrixlib.defmatrix:

asmatrix(data, dtype=None)
    Interpret the input as a matrix.

    Unlike `matrix`, `asmatrix` does not make a copy if the input is already
    a matrix or an ndarray.  Equivalent to ``matrix(data, copy=False)``.

    Parameters
    ----------
    data : array_like
        Input data.

    Returns
    -------
    mat : matrix
        `data` interpreted as a matrix.

    Examples
    --------
    >>> x = np.array([[1, 2], [3, 4]])
    >>> m = np.asmatrix(x)
    >>> x[0,0] = 5
    >>> m
    matrix([[5, 2],
            [3, 4]])

However, we do have:

>>> a = np.mat('1 2;3 4')
>>> a
matrix([[1, 2],
        [3, 4]])
>>> b = np.array(a)
>>> b
array([[1, 2],
       [3, 4]])

and so, as we should expect:

>>> c = np.array(np.mat('1 2;3 4'))
>>> c
array([[1, 2],
       [3, 4]])

So the substance of the utility function Stefan suggests is one line (note
that the argument can't be named `in`, which is a reserved word in Python):

def numstr2numarr(s):
    """`s` is a MATLAB-style string containing the numerical array entries."""
    return np.array(np.mat(s))

In essence, numpy almost provides the API you're asking for.
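[For completeness, the same conversion can be done without going through the matrix subclass at all. This is a hypothetical helper, not an existing numpy function, and it assumes the simple whitespace-and-semicolon MATLAB literal form:]

```python
import numpy as np

def parse_matlab_array(s):
    """Parse a MATLAB-style literal like '1 2;3 4' into a plain ndarray.

    Rows are separated by ';', entries within a row by whitespace.
    """
    return np.array([[float(tok) for tok in row.split()]
                     for row in s.split(';')])
```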

DG


Re: [Numpy-discussion] A one-byte string dtype?

2014-01-21 Thread David Goldsmith
Am I the only one who feels that this (very important--I'm being sincere,
not sarcastic) thread has matured and specialized enough to warrant its
own home on the Wiki?

DG


Re: [Numpy-discussion] A one-byte string dtype?

2014-01-21 Thread David Goldsmith
 Date: Tue, 21 Jan 2014 17:35:26 +
 From: Nathaniel Smith n...@pobox.com
 Subject: Re: [Numpy-discussion] A one-byte string dtype?
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 CAPJVwB=+47ofYvnvN76=
 ke3xlga2+gz+qd4f0xs2uboeysg...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

 On 21 Jan 2014 17:28, David Goldsmith d.l.goldsm...@gmail.com wrote:
 
 
  Am I the only one who feels that this (very important--I'm being sincere,
 not sarcastic) thread has matured and specialized enough to warrant it's
 own home on the Wiki?

 Sounds plausible, perhaps you could write up such a page?

 -n


I can certainly get one started (but I don't think I can faithfully
summarize all this thread's current content, so I apologize in advance for
leaving that undone).

DG


Re: [Numpy-discussion] A one-byte string dtype?

2014-01-21 Thread David Goldsmith
On Tue, Jan 21, 2014 at 10:00 AM, numpy-discussion-requ...@scipy.orgwrote:

 Date: Tue, 21 Jan 2014 09:53:25 -0800
 From: David Goldsmith d.l.goldsm...@gmail.com
 Subject: Re: [Numpy-discussion] A one-byte string dtype?
 To: numpy-discussion@scipy.org
 Message-ID:
 CAFtPsZqRrDxrshBMVyS+Z=
 7altpxmrz4miujy2xebyi_fy5...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

  Date: Tue, 21 Jan 2014 17:35:26 +
  From: Nathaniel Smith n...@pobox.com
  Subject: Re: [Numpy-discussion] A one-byte string dtype?
  To: Discussion of Numerical Python numpy-discussion@scipy.org
  Message-ID:
  CAPJVwB=+47ofYvnvN76=
  ke3xlga2+gz+qd4f0xs2uboeysg...@mail.gmail.com
  Content-Type: text/plain; charset=utf-8
 
  On 21 Jan 2014 17:28, David Goldsmith d.l.goldsm...@gmail.com wrote:
  
  
   Am I the only one who feels that this (very important--I'm being
 sincere,
  not sarcastic) thread has matured and specialized enough to warrant it's
  own home on the Wiki?
 
  Sounds plausible, perhaps you could write up such a page?
 
  -n
 

 I can certainly get one started (but I don't think I can faithfully
 summarize all this thread's current content, so I apologize in advance for
 leaving that undone).

 DG


OK, I'm lost already: is there general agreement that this should jump
straight to one or more NEPs?  If not (or if there should be a Wiki page
for it additionally), should such a page become part of the NumPy Wiki at
Sourceforge or the SciPy Wiki at the scipy.org site?  If the latter, is
one's SciPy Wiki login the same as one's mailing-list subscriber
maintenance login?  I guess starting such a page is not as trivial as I had
assumed.

DG


Re: [Numpy-discussion] A one-byte string dtype?

2014-01-21 Thread David Goldsmith
Date: Tue, 21 Jan 2014 19:20:12 +

 From: Robert Kern robert.k...@gmail.com
 Subject: Re: [Numpy-discussion] A one-byte string dtype?



 The wiki is frozen. Please do not add anything to it. It plays no role in
 our current development workflow. Drafting a NEP or two and iterating on
 them would be the next step.

 --
 Robert Kern


OK, well that's definitely beyond my level of expertise.

DG


Re: [Numpy-discussion] A one-byte string dtype? (Charles R Harris)

2014-01-20 Thread David Goldsmith
On Mon, Jan 20, 2014 at 9:11 AM, numpy-discussion-requ...@scipy.org wrote:

 I think that is right. Not having an effective way to handle these common
 scientific data sets will block acceptance of Python 3. But we do need to
 figure out the best way to add this functionality.

 Chuck


Sounds like it might be time for some formal data collection, e.g., a
wiki-poll of users' use-cases.  (I know this wouldn't be exhaustive, but at
least it will provide guidance and a checklist of situations we should be
sure our solution covers.)

DG


Re: [Numpy-discussion] using loadtxt to load a text file in to a numpy array (Charles R Harris)

2014-01-15 Thread David Goldsmith
On Wed, Jan 15, 2014 at 9:52 AM, numpy-discussion-requ...@scipy.org wrote:

 Date: Wed, 15 Jan 2014 10:57:51 -0700
 From: Charles R Harris charlesr.har...@gmail.com
 Subject: Re: [Numpy-discussion] using loadtxt to load a text file in
 to a numpy array
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 
 cab6mnxjpvjbsozzy0ctk1bk+kdcudivc9krzyt1johu33bz...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 On Wed, Jan 15, 2014 at 10:27 AM, Chris Barker chris.bar...@noaa.gov
 wrote:

 There was a discussion of this long ago and UCS-4 was chosen as the numpy
 standard. There are just too many complications that arise in supporting
 both.

 Chuck


In that case, perhaps another function altogether is called for.

DG


Re: [Numpy-discussion] Quaternion type @ rosettacode.org

2014-01-03 Thread David Goldsmith
Thanks Anthony and Paul!

OlyDLG


[Numpy-discussion] Quaternion type @ rosettacode.org

2014-01-02 Thread David Goldsmith
Anyone here use/have an opinion about the Quaternion type @
rosettacode.orghttp://rosettacode.org/wiki/Simple_Quaternion_type_and_operations#Python?
Or have an opinion about it having derived the type from
collections.namedtuple?  Anyone have an open-source, numpy-based
alternative?  Ditto last question for Octonion and/or general n-basis
Grassmann (exterior) and/or Clifford Algebras?  (rosettacode appears to
have none of these).  Thanks!
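[For anyone wanting a numpy-based starting point rather than the namedtuple approach, the core operation is the Hamilton product, which is a few lines over (w, x, y, z) 4-vectors. A minimal sketch with a hypothetical function name, not a full quaternion type:]

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) 4-vectors."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```

Note the product is non-commutative: i*j = k but j*i = -k.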

David Goldsmith


Re: [Numpy-discussion] proposal: min, max of complex should give warning (Ralf Gommers)

2013-12-31 Thread David Goldsmith

 As for your proposal, it would be good to know if adding a warning would
 actually catch any bugs. For the truncation warning it caught several in
 scipy and other libs IIRC.

 Ralf


In light of this, perhaps the pertinent unit tests should be modified (even
if the warning suggestion isn't adopted, about which I'm neutral...but I'm
a little surprised that there isn't a generic way to globally turn off
specific warnings).
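[Regarding the aside about globally silencing specific warnings: Python's
standard `warnings` module does provide this, via `warnings.filterwarnings`
(global) or `warnings.catch_warnings` (scoped). A minimal sketch using the
complex-to-real truncation warning mentioned earlier in the thread:]

```python
import warnings
import numpy as np

x = np.array([1 + 2j, 3 + 4j])

with warnings.catch_warnings():
    # Ignore everything inside this block; to target one class, pass e.g.
    # category=np.exceptions.ComplexWarning (where the class lives in
    # current NumPy) to warnings.simplefilter / warnings.filterwarnings.
    warnings.simplefilter("ignore")
    real_part = x.astype(np.float64)  # normally emits a ComplexWarning
```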

DG


Re: [Numpy-discussion] getting the equivalent complex dtype from a real or int array

2013-10-29 Thread David Goldsmith
We really ought to have a special page for all of Robert's little gems!

DG

On Tue, Oct 29, 2013 at 10:00 AM, numpy-discussion-requ...@scipy.orgwrote:


 Message: 5
 Date: Tue, 29 Oct 2013 17:02:33 +
 From: Robert Kern robert.k...@gmail.com
 Subject: Re: [Numpy-discussion] getting the equivalent complex dtype
 from a real or int array
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 
 caf6fjiuyndbe1uo9j6onl1pq+ovzx-ecqkz0qe9migyqt69...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

 On Tue, Oct 29, 2013 at 4:47 PM, Henry Gomersall h...@cantab.net wrote:
 
  Is there a way to extract the size of array that would be created by
  doing 1j*array?
 
  The problem I'm having is in creating an empty array to fill with
  complex values without knowing a priori what the input data type is.
 
  For example, I have a real or int array `a`.
 
  I want to create an array `b` which can hold values from 1j*a in such a
  way that I don't need to compute those explicitly (because I only need
  parts of the array say), without upcasting (or indeed downcasting) the
  result.
 
  So if `a` was dtype 'float32`, `b` would be of dtype `complex64`. If `a`
  was `int64`, `b` would be of dtype `complex128` etc.

 Quick and dirty:

 # Get a tiny array from `a` to test the dtype of its output when multiplied
 # by a complex float. It must be an array rather than a scalar since the
 # casting rules are different for array*scalar and scalar*scalar.
 dt = (a.flat[:2] * 1j).dtype
 b = np.empty(shape, dtype=dt)

 --
 Robert Kern
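[A more direct alternative: `np.result_type` applies NumPy's promotion rules
without building a throwaway array, so the tiny-slice trick isn't needed.
`complex_like` is a hypothetical helper name:]

```python
import numpy as np

def complex_like(a):
    """Smallest complex dtype that can hold 1j * a without up/downcasting."""
    # Promote a's dtype against the narrowest complex type; float32 stays
    # complex64, float64/int64 widen to complex128, etc.
    return np.result_type(a.dtype, np.complex64)
```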



[Numpy-discussion] OT: How are SVG data converted into curves

2013-10-16 Thread David Goldsmith
Does anyone on this list know how Scalable Vector Graphics `C`, `S`, etc.
path-command data are translated into curves (i.e., pixel maps), and might
you be willing to answer some questions off-list?  Thanks!

DG

PS: I receive numpy-discussion in digest mode, so if you qualify, please
reply directly to my email.  Thanks again.


[Numpy-discussion] On Topic: Faster way to implement Bernstein polys: explicit or recursion?

2013-10-16 Thread David Goldsmith
Many thanks to Daniele Nicolodi for pointing me to the Wikipedia article
on Bézier curves.  Said article gives two formulae for the Bézier curve of
degree n: one explicit, one recursive.  Using numpy.polynomial.Polynomial
as the base class, and its evaluation method for the evaluation in each
dimension, which approach is likely to be faster for evaluation at
parameter t? Does it depend on the degree, i.e., one approach will likely
be faster for low degree while the other will likely be faster for higher
degree?  Thanks!
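Both formulae are easy to sketch for benchmarking. Here is a minimal version of each in plain NumPy (function names are invented; the explicit form sums the Bernstein basis directly, while the recursive form is De Casteljau's algorithm, O(n^2) per evaluation but generally the numerically stabler of the two):

```python
import numpy as np
from math import comb  # binomial coefficient, Python 3.8+

def bezier_explicit(pts, t):
    """Explicit Bernstein form: sum_k C(n,k) t^k (1-t)^(n-k) * P_k."""
    pts = np.asarray(pts, dtype=float)
    n = len(pts) - 1
    coeff = np.array([comb(n, k) * t**k * (1 - t)**(n - k)
                      for k in range(n + 1)])
    return coeff @ pts

def bezier_decasteljau(pts, t):
    """De Casteljau recursion: repeated linear interpolation between points."""
    p = np.asarray(pts, dtype=float).copy()
    for _ in range(len(p) - 1):
        p = (1 - t) * p[:-1] + t * p[1:]
    return p[0]
```

Timing the two with `timeit` over the degrees of interest would answer the crossover question empirically.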

DG


Re: [Numpy-discussion] Bug in numpy.correlate documentation

2013-10-09 Thread David Goldsmith
Looks like Wolfram MathWorld would favor the docstring, but the possibility
of a use-domain dependency seems plausible (after all, a similar dilemma
is observed, e.g., w/ the Fourier Transform)--I guess one discipline's
future is another discipline's past. :-)

http://mathworld.wolfram.com/Autocorrelation.html

DG

Date: Tue, 8 Oct 2013 20:10:41 +0100

 From: Richard Hattersley rhatters...@gmail.com
 Subject: Re: [Numpy-discussion] Bug in numpy.correlate documentation
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 CAP=RS9k54vtNFHy9ppG=U09oEHwB=KLV0xvwR6BfFgB3o5S=
 f...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Hi Bernard,

 Looks like you're on to something - two other people have raised this
 discrepancy before: https://github.com/numpy/numpy/issues/2588.
 Unfortunately, when it comes to resolving the discrepancy one of the
 previous comments takes the opposite view. Namely, that the docstring is
 correct and the code is wrong.

 Do different domains use different conventions here? Are there some
 references to back up one stance or another?

 But all else being equal, I'm guessing there'll be far more appetite for
 updating the documentation than the code.

 Regards,
 Richard Hattersley


 On 7 October 2013 22:09, Bernhard Spinnler bernhard.spinn...@gmx.net
 wrote:

  The numpy.correlate documentation says:
 
  correlate(a, v) = z[k] = sum_n a[n] * conj(v[n+k])
 

snip

  [so] according to the documentation, z should be
 
  z[-1] = a[1] * conj(v[0]) = 4.+0.j
  z[0]  = a[0] * conj(v[0]) + a[1] * conj(v[1]) = 2.-2.j
  z[1] = a[0] * conj(v[1]) = 0.-1.j
 
  which is the time reversed version of what correlate() calculates.
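[The discrepancy is easy to demonstrate numerically. The original post's arrays were snipped above; reconstructing them as a = [1, 2] and v = [2, 1j] reproduces the quoted z values, but that reconstruction is an assumption:]

```python
import numpy as np

# Assumed inputs (the original example's arrays were elided in the quote)
a = np.array([1.0, 2.0])
v = np.array([2.0 + 0j, 1j])

# Old-docstring formula: z[k] = sum_n a[n] * conj(v[n+k]), for k = -1, 0, 1
z = np.array([a[1] * np.conj(v[0]),                         # z[-1] = 4+0j
              a[0] * np.conj(v[0]) + a[1] * np.conj(v[1]),  # z[0]  = 2-2j
              a[0] * np.conj(v[1])])                        # z[1]  = 0-1j

full = np.correlate(a, v, mode='full')  # what the code actually computes
```

Comparing `full` against `z` shows `full` equals `z` reversed, consistent with Bernhard's observation.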



Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python (Dmitrey)

2013-10-06 Thread David Goldsmith
On Sun, Oct 6, 2013 at 10:00 AM, numpy-discussion-requ...@scipy.org wrote:

 Message: 2
 Date: Sat, 05 Oct 2013 21:36:48 +0300
 From: Dmitrey tm...@ukr.net
 Subject: Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now
 available in Python
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Cc: numpy-discussion@scipy.org
 Message-ID: 1380997576.559804301.aoyna...@frv43.ukr.net
 Content-Type: text/plain; charset=utf-8



 Seems like using the MATLAB solvers with the MCR requires my wrappers,
 contained in several files, to be compiled with the MATLAB Compiler first. I
 have no license for MATLAB, so I may have problems if I do that and
 distribute it with the OpenOpt suite code; also, binary files are
 incompatible with the BSD license.


Darn, knew it was too good to be true.


 On the other hand, IIRC slightly older MATLAB versions (I don't
 think the difference is essential) have more liberal licenses.
 As for MATLAB solvers examples, I have already mentioned them in the mail
 list, you could see them in http://openopt.org/ODE (just replace solver
 name from scipy_lsoda to ode23s or any other), http://openopt.org/NLP ,
 http://openopt.org/SNLE


Oooops, so sorry. :-o

DG


 --
 Regards, D. http://openopt.org/Dmitrey

 --



 End of NumPy-Discussion Digest, Vol 85, Issue 17
 




-- 
From A Letter From The Future in Peak Everything by Richard Heinberg:

By the time I was an older teenager, a certain...attitude was developing
among the young people...a feeling of utter contempt for anyone over a
certain age--maybe 30 or 40.  The adults had consumed so many resources,
and now there were none left for their own children...when those adults
were younger, they [were] just doing what everybody else was doing...they
figured it was normal to cut down ancient forests for...phone books, pump
every last gallon of oil to power their SUV's...[but] for...my generation
all that was just a dim memory...We [grew up] living in darkness, with
shortages of food and water, with riots in the streets, with people begging
on street corners...for us, the adults were the enemy.

Want to *really* understand what's *really* going on?  Read Peak
Everything.


Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python

2013-10-05 Thread David Goldsmith
MCR stands for MATLAB Compiler Runtime and if that's all it requires,
that's great, 'cause that's free.  Look forward to giving this a try; does
the distribution come w/ examples?

DG

Date: Sat, 05 Oct 2013 11:27:04 +0300

 From: Dmitrey tm...@ukr.net
 Subject: Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now
 available in Python
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Cc: scipy-u...@scipy.org, numpy-discussion@scipy.org
 Message-ID: 1380961403.991447284.5qt9w...@frv43.ukr.net
 Content-Type: text/plain; charset=utf-8

 It requires MATLAB or MATLAB Component Runtime? (
 http://www.mathworks.com/products/compiler/mcr/ )
 I'm not regular subscriber of the mail list thus you'd better ask openopt
 forum.

 --
 Regards, D. http://openopt.org/Dmitrey
 --- Original message ---
 From: Eric Carlson  ecarl...@eng.ua.edu 
 Date: 5 October 2013, 01:19:28

 Hello,
 Does this require a MATLAB install, or are these equivalent routines?

 Thanks,
 Eric



 --



 End of NumPy-Discussion Digest, Vol 85, Issue 16
 






Re: [Numpy-discussion] Valid algorithm for generating a 3D Wiener Process?

2013-09-25 Thread David Goldsmith
Thanks, guys.  Yeah, I realized the problem w/ the
uniform-increment-variable-direction approach this morning: physically, it
ignores the fact that the particles hitting the particle being tracked are
going to have a distribution of momentum, not all the same, just varying in
direction.  But I don't quite understand Warren's observation: the
'angles' that describe the position undergo a random walk [actually, it
would seem that they don't, since they too fail the varying-as-white-noise
test], so the particle tends to move in the same direction over short
intervals--is this just another way of saying that, since I was varying
the angles by -1, 0, or 1 unit each time, the simulation is susceptible to
unnaturally long strings of -1, 0, or 1 increments?  Thanks again,
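[For reference, the standard construction the replies allude to draws i.i.d. Gaussian increments per coordinate, with standard deviation sqrt(dt), rather than unit steps in a randomly walked direction. A minimal sketch with a hypothetical function name, using the modern `np.random.default_rng` Generator API (which postdates this thread):]

```python
import numpy as np

def wiener_3d(n_steps, dt=1.0, rng=None):
    """3-D Wiener process: each coordinate gets independent N(0, dt) increments."""
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 3))
    # Prepend the origin and accumulate the increments
    return np.vstack([np.zeros((1, 3)), np.cumsum(steps, axis=0)])
```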

DG


[Numpy-discussion] Generating a (uniformly distributed) random bit list of length N

2013-09-23 Thread David Goldsmith
Thanks, Stéfan; speed: N ~ 1e9.  Thanks again.

DG

--


 Message: 1
 Date: Sun, 22 Sep 2013 14:04:09 -0700
 From: David Goldsmith d.l.goldsm...@gmail.com
 Subject: [Numpy-discussion] Generating a (uniformly distributed)
 random bit  list of length N
 To: numpy-discussion@scipy.org
 Message-ID:
 
 caftpszqg7upjy8s04npwkn8gjvdvgoru3rpjcvln6zkrwnp...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Is np.random.randint(2, size=N) the fastest way to do this?  Thanks!

 DG

 --

 Message: 2
 Date: Mon, 23 Sep 2013 03:22:06 +0200
 From: Stéfan van der Walt ste...@sun.ac.za
 Subject: Re: [Numpy-discussion] Generating a (uniformly distributed)
 random bit list of length N
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
 
 cabdkgqkq829a_nj7xxcs5xauz-ptwcyf_zbslt+0gufnzwy...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 On 22 Sep 2013 23:04, David Goldsmith d.l.goldsm...@gmail.com wrote:
 
  Is np.random.randint(2, size=N) the fastest way to do this?  Thanks!

 Are you concerned about speed or memory use? The operation you show should
 already be quite fast. A more memory efficient approach would be to
 generate integers and use their binary representation.

 Stéfan
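[Stéfan's memory-efficiency suggestion can be sketched as follows: draw one random byte per 8 bits and unpack, so only N/8 random values are generated. The helper name is hypothetical, and `np.random.default_rng` is the modern Generator API, newer than this thread:]

```python
import numpy as np

def random_bits(n, rng=None):
    """Uniform random 0/1 array of length n, one byte drawn per 8 bits."""
    rng = np.random.default_rng() if rng is None else rng
    nbytes = (n + 7) // 8                                  # bytes needed
    raw = rng.integers(0, 256, size=nbytes, dtype=np.uint8)
    return np.unpackbits(raw)[:n]                          # trim padding bits
```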
 -- next part --
 An HTML attachment was scrubbed...
 URL:
 http://mail.scipy.org/pipermail/numpy-discussion/attachments/20130923/e4f56af4/attachment-0001.html




[Numpy-discussion] Generating a (uniformly distributed) random bit list of length N

2013-09-22 Thread David Goldsmith
Is np.random.randint(2, size=N) the fastest way to do this?  Thanks!

DG


[Numpy-discussion] Problem w/ Win installer

2012-07-16 Thread David Goldsmith
Hi, folks!  Having a problem w/ the Windows installer; first, the
back-story: I have both Python 2.7 and 3.2 installed.  When I run the
installer and click next on the first dialog, I get the message that I need
Python 2.7, which was not found in my registry.  I ran regedit and searched
for Python and get multiple hits on both Python 2.7 and 3.2.  So, precisely
which registry key has to have the value Python  2.7 for the installer to
find it? Thanks!

OlyDLG


[Numpy-discussion] [OT: MATLAB] Any way to globally make Matlab struct attributes Python-property-like

2010-09-13 Thread David Goldsmith
I.e., I'd, at minimum, like to globally replace

get(Handel, 'Property')

with

object.Property

and

set(Handel, 'Property', value)

with

object.Property = value

to an arbitrary level of composition.

(It's really getting cumbersome having to compound gets and sets all
over the place while debugging.)
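(For contrast, the Python property style I have in mind looks like this; `Handle` is a made-up class for illustration:)

```python
class Handle:
    """Sketch of attribute-style access replacing get()/set() calls."""

    def __init__(self):
        self._prop = 0

    @property
    def Property(self):          # read:  obj.Property
        return self._prop

    @Property.setter
    def Property(self, value):   # write: obj.Property = value
        self._prop = value

h = Handle()
h.Property = 3                   # instead of set(h, 'Property', 3)
```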

(Sorry for the OT post; I thought I'd get a more sympathetic response
here than on the MATLAB lists.) ;)

Thanks!

DG
-- 
In science it often happens that scientists say, 'You know that's a really
good argument; my position is mistaken,' and then they would actually change
their minds and you never hear that old view from them again. They really do
it. It doesn't happen as often as it should, because scientists are human
and change is sometimes painful. But it happens every day. I cannot recall
the last time something like that happened in politics or religion.

- Carl Sagan, 1987 CSICOP address


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 47, Issue 61

2010-08-21 Thread David Goldsmith
On Sat, Aug 21, 2010 at 10:00 AM, numpy-discussion-requ...@scipy.orgwrote:

 Date: Fri, 20 Aug 2010 14:30:58 -0500
 From: Robert Kern robert.k...@gmail.com
 Subject: Re: [Numpy-discussion] Making MATLAB and Python play nice
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:

 aanlktikrwzd0vtjisk+6xh2djbca1v1sxx_ln6g4g...@mail.gmail.com
 
 Content-Type: text/plain; charset=UTF-8

 On Fri, Aug 20, 2010 at 14:25, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  Hi!  Please forgive the re-post: I forgot to change the subject line
  and I haven't seen a response to this yet, so I'm assuming the former
  might be the cause of the latter.

 Or perhaps because the licenses are plainly visible at the links?


Ah, I see: if I can't be bothered to click on the links, no one else can be
bothered to tell me that that's all I need to do to get my question
answered.  Unfortunately, the solutions are useless to me if they're not
freely redistributable, so I have no incentive to click on the links--which
do not advertise that they answer the licensing question--'til that question
is answered - catch-22.

DG



 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
   -- Umberto Eco


 --

 Message: 4
 Date: Sat, 21 Aug 2010 00:31:03 +0100
 From: Francesc Alted fal...@pytables.org
 Subject: Re: [Numpy-discussion] [ANN] carray: an in-memory compressed
datacontainer
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:

 aanlktimuaahqvgg4xwktwtv+cw+s_ypavxtpxnxfa...@mail.gmail.com
 
 Content-Type: text/plain; charset=ISO-8859-1

 2010/8/20, Zbyszek Szmek zbys...@in.waw.pl:
  OK, I've got a case where carray really shines :|
 
  zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
  bench/concat.py numpy 80 1000 4 1
  problem size: (8e5) x 1000 = 10^8.90309
  time for concat: 4.806s
  size of the final container: 6103.516 MB
  zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
  bench/concat.py concat 80 1000 4 1
  problem size: (8e5) x 1000 = 10^8.90309
  time for concat: 3.475s
  size of the final container: 6103.516 MB
  zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
  bench/concat.py carray 80 1000 4 1
  problem size: (8e5) x 1000 = 10^8.90309
  time for concat: 1.434s
  size of the final container: 373.480 MB
 
  Size is set to NOT hit the swap. This is still the easily compressible
  arange... but still, the results are very nice.

 Wow, the results with your processor are much nicer than with my Atom
 indeed.  But yeah, I somewhat expected this because Blosc works much
 faster with recent processors, as can be seen in:

 http://blosc.pytables.org/trac/wiki/SyntheticBenchmarks

 BTW, the difference between memcpy and memmove times for this
 benchmark is almost 40% for your computer, which is really large :-/
 Hmm, something must go really wrong with memcpy in some glibc
 distributions...

 At any rate, for real data that is less compressible the advantages of
 carray will be less apparent, but at least the proof of concept seems
 to work as intended, so I'm very happy with it.  I'm also expecting
 that the combination carray/numexpr would perform faster than plain
 computations programmed in C, most specially with modern processors,
 but will see how much faster exactly.

  Of course when the swap is hit, the ratio between carray and a normal
 array
  can grow to infinity :)
 
  zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
  bench/concat.py numpy 100 1000 3 1
  problem size: (1e6) x 1000 = 10^9
  time for concat: 35.700s
  size of the final container: 7629.395 MB
  zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
  bench/concat.py carray 100 1000 3 1
  problem size: (1e6) x 1000 = 10^9
  time for concat: 1.751s
  size of the final container: 409.633 MB

 Exactly.  This is another scenario where the carray concept can be
 really useful.

 --
 Francesc Alted


 --

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 End of NumPy-Discussion Digest, Vol 47, Issue 61
 




-- 
Privacy is overrated; Identity isn't.


[Numpy-discussion] Making MATLAB and Python play nice

2010-08-20 Thread David Goldsmith
Hi!  Please forgive the re-post: I forgot to change the subject line
and I haven't seen a response to this yet, so I'm assuming the former
might be the cause of the latter.  My question follows the quoted
posts.  Thanks!

 From: Sturla Molden stu...@molden.no
 Subject: Re: [Numpy-discussion] [SciPy-Dev] Good-bye, sort of (John
Hunter)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID: 8c0b2317-2a22-4828-99e8-ac6c0f778...@molden.no
 Content-Type: text/plain;   charset=us-ascii;   format=flowed;
  delsp=yes

 There are just two sane solutions for Matlab: Either embed CPython in a
 MEX file, or use Matlab's JVM to run Jython ;)

 http://vader.cse.lehigh.edu/~perkins/pymex.html

 Sturla

 Date: Wed, 18 Aug 2010 09:33:59 +0100
 From: Robin robi...@gmail.com
 Subject: Re: [Numpy-discussion] [SciPy-Dev] Good-bye, sort of (John
Hunter)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:

 aanlkti=cwwhiug4g5p+oew7g1qakyszyu57f9dwu6...@mail.gmail.com
 
 Content-Type: text/plain; charset=ISO-8859-1
 Just thought I'd mention another one since this came up:

 http://github.com/kw/pymex
 This one works very nicely - it proxies any Python objects so you can
 use, should you want to, the Matlab IDE as a python interpreter,
 supports numpy arrays etc. Also cross-platform - I even got it to work
 with 64 bit matlab/python on windows (in a fork on github).

Thanks for the ideas; are any/all of these solutions freely/easily
redistributable?

DG


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 47, Issue 47

2010-08-19 Thread David Goldsmith
Date: Wed, 18 Aug 2010 09:20:41 +0200

 From: Sturla Molden stu...@molden.no
 Subject: Re: [Numpy-discussion] [SciPy-Dev] Good-bye, sort of (John
Hunter)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID: 8c0b2317-2a22-4828-99e8-ac6c0f778...@molden.no
 Content-Type: text/plain;   charset=us-ascii;   format=flowed;
  delsp=yes

 Den 18. aug. 2010 kl. 08.19 skrev Martin Raspaud
 martin.rasp...@smhi.se:

  Once upon a time, when my boss wanted me to use matlab, I found myself
  implementing a python interpreter in matlab...
 

 There are just two sane solutions for Matlab: Either embed CPython in a
 MEX file, or use Matlab's JVM to run Jython ;)

 http://vader.cse.lehigh.edu/~perkins/pymex.html

 Sturla



 Date: Wed, 18 Aug 2010 09:33:59 +0100
 From: Robin robi...@gmail.com
 Subject: Re: [Numpy-discussion] [SciPy-Dev] Good-bye, sort of (John
Hunter)
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:

 aanlkti=cwwhiug4g5p+oew7g1qakyszyu57f9dwu6...@mail.gmail.com
 
 Content-Type: text/plain; charset=ISO-8859-1
 Just thought I'd mention another one since this came up:

 http://github.com/kw/pymex
 This one works very nicely - it proxies any Python objects so you can
 use, should you want to, the Matlab IDE as a python interpreter,
 supports numpy arrays etc. Also cross-platform - I even got it to work
 with 64 bit matlab/python on windows (in a fork on github).


Thanks for the ideas; are any/all of these solutions freely/easily
redistributable?

DG


Re: [Numpy-discussion] [SciPy-Dev] Good-bye, sort of (John Hunter)

2010-08-13 Thread David Goldsmith
  After several years now of writing Python and now having written my first
  on-the-job 15 operational MATLAB LOC, all of which are string, cell
 array,
  and file processing, I'm ready to say: MATLAB: what a PITA! :-(

 Ahh, cell arrays, they bring back memories.  Makes you pine for a
 dictionary, no?

 JDH


Not to mention writeline, readline, string concatenation using +, English
wording of loops, list comprehension, etc., etc., etc. - if people only
knew...

DG


 --

Privacy is overrated; Identity isn't.


Re: [Numpy-discussion] endian.h change

2010-07-30 Thread David Goldsmith
I assume this is addressed to David C., correct?

DG

On Fri, Jul 30, 2010 at 10:08 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:

 Hi David,

 Commit r8541 broke building with numscons for me, does this fix look okay:

 http://github.com/rgommers/numpy/commit/1c88007ab00cf378ebe19fbe54e9e868212c73d1

 I am puzzled though why my endian.h is not picked up in the build - I have
 a good collection of those on my system, at least in all OS X SDKs. Any
 idea?

 Cheers,
 Ralf

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As interpreted
by Robert Graves)


[Numpy-discussion] Help w/ indexing, please

2010-07-27 Thread David Goldsmith
Hi!  I have a large M x K, M, K ~ 1e3 array L of indices - non-negative
integers in the range 0 to N-1 - and an N x 3 array C (a matplotlib
colormap).  I need to create an M x K x 3 array R such that R[m,k,j] =
C[L[m,k], j], j = 0,1,2.  I want to do so w/out having to loop through all
the (m,k) index pairs, but I'm having trouble wrapping my brain around how
to do it - please help.  Thanks.

DG


Re: [Numpy-discussion] Help w/ indexing, please

2010-07-27 Thread David Goldsmith
On Tue, Jul 27, 2010 at 9:32 AM, John Salvatier
jsalv...@u.washington.edu wrote:

 I am pretty sure you should be able to do

 R = C[L, :]  and get the array you want.

 Try it with a small matrix where  you know the result you want. You may
 need to transpose some axes afterwards, but I don't think you should.


Thanks, John, that works; you may be right about the transposing, but I can
work that out empirically.  Thanks again!

DG


 On Tue, Jul 27, 2010 at 9:10 AM, David Goldsmith 
  d.l.goldsm...@gmail.com wrote:

 Hi!  I have a large M x K, M, K ~ 1e3 array L of indices - non-negative
 integers in the range 0 to N-1 - and an N x 3 array C (a matplotlib
 colormap).  I need to create an M x K x 3 array R such that R[m,k,j] =
 C[L[m,k], j], j = 0,1,2.  I want to do so w/out having to loop through all
 the (m,k) index pairs, but I'm having trouble wrapping my brain around how
 to do it - please help.  Thanks.

 DG

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As interpreted
by Robert Graves)
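A minimal sketch of the fancy-indexing solution discussed above, with small
illustrative sizes and random data standing in for the original M, K ~ 1e3
problem (assumes a reasonably recent NumPy for default_rng):

```python
import numpy as np

M, K, N = 4, 5, 8                      # illustrative sizes (the post had M, K ~ 1e3)
rng = np.random.default_rng(0)
L = rng.integers(0, N, size=(M, K))    # index array, values in 0..N-1
C = rng.random((N, 3))                 # an N x 3 "colormap"

# Fancy indexing along the first axis gives R[m, k, j] == C[L[m, k], j]
# without any explicit loop over the (m, k) index pairs.
R = C[L]                               # same as C[L, :]

assert R.shape == (M, K, 3)
assert np.array_equal(R[2, 3], C[L[2, 3]])
```

Indexing with an integer array replaces the indexed axis by the shape of the
index array, which is exactly the (M, K) -> (M, K, 3) expansion asked for.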


[Numpy-discussion] Don't understand this error

2010-07-27 Thread David Goldsmith
res = np.fromfunction(make_res, (nx, ny))
  File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1538, in fromfunction
    args = indices(shape, dtype=dtype)
  File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1480, in indices
    tmp.shape = (1,)*i + (dim,)+(1,)*(N-i-1)
ValueError: total size of new array must be unchanged
Script terminated

If it's a new array, how can it already have a size that can't be changed?
What does this error really mean?

DG


Re: [Numpy-discussion] Don't understand this error

2010-07-27 Thread David Goldsmith
Thanks, that was it.

DG

On Tue, Jul 27, 2010 at 3:02 PM, Robert Kern robert.k...@gmail.com wrote:

 On Tue, Jul 27, 2010 at 16:59, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  res = np.fromfunction(make_res, (nx, ny))
    File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1538, in fromfunction
      args = indices(shape, dtype=dtype)
    File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1480, in indices
      tmp.shape = (1,)*i + (dim,)+(1,)*(N-i-1)
  ValueError: total size of new array must be unchanged
  Script terminated
 
  If it's a new array, how can it already have a size that can't be
 changed?
  What does this error really mean?

 indices() creates an array using arange() and repeat() and then
 reshapes it to the appropriate shape. You probably have bad nx or ny
 values.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
   -- Umberto Eco
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As interpreted
by Robert Graves)
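A sketch of np.fromfunction working as intended; make_res here is a
hypothetical stand-in for the original function, since (as the reply notes)
bad shape values, not the function itself, trigger the reshape error:

```python
import numpy as np

# fromfunction builds index arrays via np.indices(shape) and calls the
# function ONCE with those arrays, so it must accept array arguments.
def make_res(x, y):            # hypothetical stand-in for the original make_res
    return x + 2 * y

nx, ny = 3, 4                  # shape entries must be plain non-negative ints
res = np.fromfunction(make_res, (nx, ny))

assert res.shape == (3, 4)
assert res[2, 3] == 2 + 2 * 3  # element (m, k) is m + 2*k
```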


[Numpy-discussion] Problem using polyutils.mapdomain with fromfunction

2010-07-25 Thread David Goldsmith
Why am I being told my coefficient array is not 1-d when both coefficient
arrays--old and new--are reported to have shape (2L,):

C:\Users\Fermat>python
Python 2.6.5 (r265:79096, Mar 19 2010, 18:02:59) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.version.version
'1.4.1'
>>> from numpy.polynomial import polyutils as pu
>>> nx = 1
>>> ny = 1
>>> def whydoesntthiswork(x, y):
...     old = np.array((0,2)); print old.shape
...     new = np.array((-1,1)); print new.shape
...     X = pu.mapdomain(x, old, new)
...     return X
...
>>> result = np.fromfunction(whydoesntthiswork, (nx, ny))
(2L,)
(2L,)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1539, in fromfunction
    return function(*args,**kwargs)
  File "<stdin>", line 4, in whydoesntthiswork
  File "C:\Python26\lib\site-packages\numpy\polynomial\polyutils.py", line 280, in mapdomain
    [x] = as_series([x], trim=False)
  File "C:\Python26\lib\site-packages\numpy\polynomial\polyutils.py", line 139, in as_series
    raise ValueError("Coefficient array is not 1-d")
ValueError: Coefficient array is not 1-d

Thanks in advance,

DG


Re: [Numpy-discussion] Problem using polyutils.mapdomain with fromfunction

2010-07-25 Thread David Goldsmith
On Sun, Jul 25, 2010 at 7:08 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:


 On Sun, Jul 25, 2010 at 2:32 AM, David Goldsmith 
  d.l.goldsm...@gmail.com wrote:

 Why am I being told my coefficient array is not 1-d when both coefficient
 arrays--old and new--are reported to have shape (2L,):

 C:\Users\Fermat>python
 Python 2.6.5 (r265:79096, Mar 19 2010, 18:02:59) [MSC v.1500 64 bit (AMD64)] on win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy as np
 >>> np.version.version
 '1.4.1'
 >>> from numpy.polynomial import polyutils as pu
 >>> nx = 1
 >>> ny = 1
 >>> def whydoesntthiswork(x, y):
 ...     old = np.array((0,2)); print old.shape
 ...     new = np.array((-1,1)); print new.shape
 ...     X = pu.mapdomain(x, old, new)
 ...     return X
 ...
 >>> result = np.fromfunction(whydoesntthiswork, (nx, ny))
 (2L,)
 (2L,)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "C:\Python26\lib\site-packages\numpy\core\numeric.py", line 1539, in fromfunction
     return function(*args,**kwargs)
   File "<stdin>", line 4, in whydoesntthiswork
   File "C:\Python26\lib\site-packages\numpy\polynomial\polyutils.py", line 280, in mapdomain
     [x] = as_series([x], trim=False)
   File "C:\Python26\lib\site-packages\numpy\polynomial\polyutils.py", line 139, in as_series
     raise ValueError("Coefficient array is not 1-d")
 ValueError: Coefficient array is not 1-d

 Thanks in advance,


 Because fromfunction passes 2d arrays to  whydoesntthiswork and mapdomain
 doesn't accept 2d arrays as the first argument. That looks like an
 unnecessary restriction, open a ticket and I'll fix it up.


Thanks, Chuck.

DG


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


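A sketch of the restriction and a workaround from that era (map the raveled
array, then restore the shape); the old/new domain endpoints match the
session above, and the specific map they produce is x -> x - 1:

```python
import numpy as np
from numpy.polynomial import polyutils as pu

old = np.array([0.0, 2.0])
new = np.array([-1.0, 1.0])

# 1-d input is fine: mapdomain applies the linear map taking old onto new.
x = np.linspace(0.0, 2.0, 5)
assert np.allclose(pu.mapdomain(x, old, new), np.linspace(-1.0, 1.0, 5))

# fromfunction hands its callback 2-d index arrays; if mapdomain rejects
# them, mapping the raveled array and restoring the shape works around it.
x2 = x.reshape(1, 5)
X2 = pu.mapdomain(x2.ravel(), old, new).reshape(x2.shape)
assert X2.shape == (1, 5)
assert np.allclose(X2, x2 - 1.0)   # here the map is simply x -> x - 1
```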


Re: [Numpy-discussion] doc string linalg.solve -- works for b is matrix

2010-07-21 Thread David Goldsmith
Take it as a reminder: when reporting an error or problem, even if it
doesn't seem relevant, always provide version number. :-)

DG

On Wed, Jul 21, 2010 at 12:50 PM, Mark Bakker mark...@gmail.com wrote:

 I am using 1.3.0.
 Glad to hear it is correct in 1.4.0
 Sorry for bothering you with an old version, but I am very happy with this
 feature!
 Mark


 What version of numpy are you using?  That docstring was updated in that
 fashion about 8 mo. ago (at least in the Wiki; I'm not sure exactly when
 it
 was merged, but it does appear that way in version 1.4.0).

 DG

 I am using linalg.solve to solve a system of linear equations.  As I have
 to
  solve multiple systems with the same matrix, but different right-hand
 sides,
  I tried to make the right-hand side a matrix and that works fine.
  So the docstring should say:
 
  Solve the equation ``a x = b`` for ``x``.
 
  Parameters
  --
  a : array_like, shape (M, M)
  Input equation coefficients.
  b : array_like, shape (M,) or array_like, shape (M,N)
  N can be arbitrary size
  Equation target values.
 
  Returns
  ---
  x : array, shape (M,) or array, shape (M,N)
 
  Thanks,
 
  Mark


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As interpreted
by Robert Graves)
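The multiple-right-hand-side behavior under discussion, sketched with an
illustrative 2x2 system (the values are made up):

```python
import numpy as np

a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
# Two right-hand sides stacked as the columns of an (M, N) array.
b = np.array([[9.0, 1.0],
              [8.0, 0.0]])

x = np.linalg.solve(a, b)       # solves a x[:, i] = b[:, i] for each column

assert x.shape == (2, 2)
assert np.allclose(np.dot(a, x), b)

# A single (M,) right-hand side yields an (M,) solution, as documented.
x0 = np.linalg.solve(a, b[:, 0])
assert np.allclose(x0, x[:, 0])
```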


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-20 Thread David Goldsmith
On Thu, Jul 15, 2010 at 9:41 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 3:20 AM, Martin Raspaud martin.rasp...@smhi.se wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 David Goldsmith skrev:
 
 
  Interesting comment: it made me run down the fftpack tutorial
  http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/
  josef has alluded to in the past to see if the suggested pointer
  could point there without having to write a lot of new content.
  What I found was that although the scipy basic fft functions don't
  support it (presumably because they're basically just wrappers for
  the numpy fft functions), scipy's discrete cosine transforms support
   a norm=ortho keyword argument/value pair that enables the
  function to return the unitary versions that you describe above.
  There isn't much narrative explanation of the issue yet, but it got
  me wondering: why don't the fft functions support this?  If there
  isn't a good reason, I'll go ahead and submit an enhancement
 ticket.
 
 
  Having seen no post of a good reason, I'm going to go ahead and file
  enhancement tickets.

 Hi,

 I have worked on Fourier transforms and I think normalization is generally
 seen as a whole: fft + ifft should be the identity function, hence the
 necessity of a normalization, which is often done on the ifft.

 As one of the previous posters mentioned, sqrt(len(x)) is often seen as a
 good compromise to split the normalization equally between fft and ifft.

 In the sound community, though, the whole normalization is often done after
 the fft, so that looking at the amplitude spectrum gives the correct
 amplitude values for the different components of the sound (sinusoids).

 My guess is that normalization requirements are different for every user:
 that's why I like the no-normalization approach of FFTW, such that anyone
 does whatever he/she/it wants.


 I get the picture: in the docstring, refer people to fftw.

 DG


I can't find this fftw function in either numpy or scipy - where is it?

DG


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-15 Thread David Goldsmith
On Thu, Jul 15, 2010 at 3:20 AM, Martin Raspaud martin.rasp...@smhi.se wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 David Goldsmith skrev:
 
 
  Interesting comment: it made me run down the fftpack tutorial
  http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/
  josef has alluded to in the past to see if the suggested pointer
  could point there without having to write a lot of new content.
  What I found was that although the scipy basic fft functions don't
  support it (presumably because they're basically just wrappers for
  the numpy fft functions), scipy's discrete cosine transforms support
   a norm=ortho keyword argument/value pair that enables the
  function to return the unitary versions that you describe above.
  There isn't much narrative explanation of the issue yet, but it got
  me wondering: why don't the fft functions support this?  If there
  isn't a good reason, I'll go ahead and submit an enhancement
 ticket.
 
 
  Having seen no post of a good reason, I'm going to go ahead and file
  enhancement tickets.

 Hi,

 I have worked on Fourier transforms and I think normalization is generally
 seen as a whole: fft + ifft should be the identity function, hence the
 necessity of a normalization, which is often done on the ifft.

 As one of the previous posters mentioned, sqrt(len(x)) is often seen as a
 good compromise to split the normalization equally between fft and ifft.

 In the sound community, though, the whole normalization is often done after
 the fft, so that looking at the amplitude spectrum gives the correct
 amplitude values for the different components of the sound (sinusoids).

 My guess is that normalization requirements are different for every user:
 that's why I like the no-normalization approach of FFTW, such that anyone
 does whatever he/she/it wants.


I get the picture: in the docstring, refer people to fftw.

DG


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-14 Thread David Goldsmith
On Mon, Jul 12, 2010 at 8:26 PM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 2010/7/12 Jochen Schröder cycoma...@gmail.com

 On 13/07/10 08:04, Eric Firing wrote:
  On 07/12/2010 11:43 AM, David Goldsmith wrote:
 From the docstring:
 
  A[0] contains the zero-frequency term (the mean of the signal)
 
  And yet, consistent w/ the definition given in the docstring (and
  included w/ an earlier email), the code gives, e.g.:
 
  >>> import numpy as np
  >>> x = np.ones((16,)); x
  array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
          1.,  1.,  1.])
  >>> y = np.fft.fft(x); y
  array([ 16.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
           0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
           0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j])
 
  i.e., the zero-th term is the sum, not the mean (which, again, is
  consistent w/ the stated defining formula).
 
  So, same ol', same ol': bug in the doc (presumably) or bug in the code?
 
  Bug in the doc.  Good catch.  mean is correct for the ifft, not for
  the fft.
 
  Eric
 
 I'd say that a pointer to a discussion about normalization of ffts would
 be good here. The issue is that numpy is doing a normalization to len(x)
 for the inverse fft. However to make ffts unitary it should actually be
 that fft and ifft are normalized by sqrt(len(x)). And some fft
 implementations don't do normalizations at all (FFTW).

  Interesting comment: it made me run down the fftpack tutorial
  (http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/) josef
  has alluded to in the past to see if the suggested pointer could point
 there without having to write a lot of new content.  What I found was that
 although the scipy basic fft functions don't support it (presumably because
 they're basically just wrappers for the numpy fft functions), scipy's
  discrete cosine transforms support a norm=ortho keyword argument/value
 pair that enables the function to return the unitary versions that you
 describe above.  There isn't much narrative explanation of the issue yet,
 but it got me wondering: why don't the fft functions support this?  If there
 isn't a good reason, I'll go ahead and submit an enhancement ticket.


Having seen no post of a good reason, I'm going to go ahead and file
enhancement tickets.

DG
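Postscript for the archive: the unitary convention discussed here later
became available directly in numpy.fft via the norm="ortho" keyword (NumPy
1.10 onward). A sketch contrasting the two conventions:

```python
import numpy as np

x = np.ones(16)

# Default convention: no scaling on the forward transform, 1/n on the
# inverse, so A[0] is the SUM of the signal (16 here), not the mean.
A = np.fft.fft(x)
assert A[0] == 16.0 + 0.0j
assert np.allclose(np.fft.ifft(A), x)

# Unitary convention: both directions scaled by 1/sqrt(n), so the round
# trip still recovers x while the transform preserves energy.
B = np.fft.fft(x, norm="ortho")
assert np.isclose(B[0].real, 16.0 / np.sqrt(16.0))   # i.e. 4.0
assert np.allclose(np.fft.ifft(B, norm="ortho"), x)
```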


Re: [Numpy-discussion] Another reality check

2010-07-12 Thread David Goldsmith
Thanks, both.

On Mon, Jul 12, 2010 at 5:39 AM, Fabrice Silva si...@lma.cnrs-mrs.fr wrote:

 On Monday 12 July 2010 at 18:14 +1000, Jochen Schröder wrote:
  On 07/12/2010 12:36 PM, David Goldsmith wrote:
   On Sun, Jul 11, 2010 at 6:18 PM, David Goldsmith
    d.l.goldsm...@gmail.com wrote:
  
   In numpy.fft we find the following:
  
   Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:]
   contains the negative-frequency terms, in order of decreasingly
   negative frequency.
  
   Just want to confirm that decreasingly negative frequency means
   ..., A[n-2] = A_(-2), A[n-1] = A_(-1), as implied by our definition
   (attached).
  
   DG
   And while I have your attention :-)
  
   For an odd number of input points, A[(n-1)/2] contains the largest
   positive frequency, while A[(n+1)/2] contains the largest [in absolute
   value] negative frequency.  Are these not also termed Nyquist
   frequencies?  If not, would it be incorrect to characterize them as
 the
   largest realizable frequencies (in the sense that the data contain no
   information about any higher frequencies)?
  
   DG
  
  I would find the term the largest realizable frequency quite
  confusing. Realizing is a too ambiguous term IMO. It's the largest
  possible frequency contained in the array, so Nyquist frequency would be
  correct IMO.

 Denoting Fs the sampling frequency (Fs/2 the Nyquist frequency):

 For even n
 A[n/2-1] stores frequency Fs/2-Fs/n, i.e. Nyquist frequency less a small
 quantity.
 A[n/2] stores frequency Fs/2, i.e. exactly Nyquist frequency.
 A[n/2+1] stores frequency -Fs/2+Fs/n, i.e. Nyquist frequency less a
 small quantity, for negative frequencies.

 For odd n
 A[(n-1)/2] stores frequency Fs/2-Fs/(2n) and A[(n+1)/2] the opposite
 negative frequency. But please pay attention that it does not compute
 the content at the exact Nyquist frequency! That justify the careful
 'largest realizable frequency'.

 Note that the equation for the inverse DFT should state for m=0...n-1
 and not for n=0...n-1...


Yeah, I already caught that, thanks!

How 'bout I just use Fabrice's formula?  It's explicit and thus, IMO,
clear.

DG
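Fabrice's bookkeeping can be verified with np.fft.fftfreq, which reports bin
frequencies in cycles per sample (i.e. Fs = 1):

```python
import numpy as np

even = np.fft.fftfreq(8)    # n even
odd = np.fft.fftfreq(7)     # n odd

# Even n: A[n/2] sits exactly at the Nyquist frequency Fs/2
# (fftfreq reports it with a negative sign).
assert even[4] == -0.5

# Odd n: the extreme bins are Fs/2 - Fs/(2n) and its negative; the exact
# Nyquist frequency is never sampled, as Fabrice notes.
assert np.isclose(odd[3], 0.5 - 1.0 / (2 * 7))
assert np.isclose(odd[4], -(0.5 - 1.0 / (2 * 7)))
```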


[Numpy-discussion] numpy.fft, yet again

2010-07-12 Thread David Goldsmith
From the docstring:

A[0] contains the zero-frequency term (the mean of the signal)

And yet, consistent w/ the definition given in the docstring (and included
w/ an earlier email), the code gives, e.g.:

>>> import numpy as np
>>> x = np.ones((16,)); x
array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
        1.,  1.,  1.])
>>> y = np.fft.fft(x); y
array([ 16.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
         0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
         0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j])

i.e., the zero-th term is the sum, not the mean (which, again, is consistent
w/ the stated defining formula).

So, same ol', same ol': bug in the doc (presumably) or bug in the code?

DG


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-12 Thread David Goldsmith
On Mon, Jul 12, 2010 at 3:04 PM, Eric Firing efir...@hawaii.edu wrote:

 On 07/12/2010 11:43 AM, David Goldsmith wrote:
   From the docstring:
 
  A[0] contains the zero-frequency term (the mean of the signal)
 
  And yet, consistent w/ the definition given in the docstring (and
  included w/ an earlier email), the code gives, e.g.:
 
   >>> import numpy as np
   >>> x = np.ones((16,)); x
   array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
           1.,  1.,  1.])
   >>> y = np.fft.fft(x); y
   array([ 16.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
            0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
            0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j])
 
  i.e., the zero-th term is the sum, not the mean (which, again, is
  consistent w/ the stated defining formula).
 
  So, same ol', same ol': bug in the doc (presumably) or bug in the code?

 Bug in the doc.  Good catch.


Thanks.  (In case you hadn't noticed, I'm detail-oriented to a fault.) :-/

DG

 mean is correct for the ifft, not for
 the fft.


 Eric

 
  DG
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As interpreted
by Robert Graves)


[Numpy-discussion] Here's what I've done to numpy.fft

2010-07-12 Thread David Goldsmith
In light of my various questions and the responses thereto, here's what I've
done (but not yet committed) to numpy.fft.

There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc.  In this implementation, the DFT is defined
as

.. math::
   A_k =  \sum_{m=0}^{n-1} a_m \exp\left\{-2\pi i{mk \over n}\right\}
   \qquad k = 0,\ldots,n-1

where `n` is the number of input points.  In general, the DFT is defined
for complex inputs and outputs, and a single-frequency component at linear
frequency :math:`f` is represented by a complex exponential
:math:`a_m = \exp\{2\pi i\,f m\Delta t\}`, where
:math:`\Delta t` is the *sampling interval*.

Note that, due to the periodicity of the exponential function, formally
:math:`A_{n-1} = A_{-1}, A_{n-2} = A_{-2}`, etc.  That said, the values in
the result are in the so-called standard order: if ``A = fft(a,n)``,
then ``A[0]`` contains the zero-frequency term (the sum of the data),
which is always purely real for real inputs.  Then ``A[1:n/2]`` contains
the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency (in the sense described above) terms, from least (most
negative) to largest (closest to zero).  In particular, for `n` even,
``A[n/2]`` represents both the positive and the negative Nyquist
frequencies, and is also purely real for real input.  For `n` odd,
``A[(n-1)/2]`` contains the largest positive frequency, while
``A[(n+1)/2]`` contains the largest (in absolute value) negative
frequency.  In both cases, i.e., `n` even or odd, ``A[n-1]`` contains the
negative frequency closest to zero.
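As an illustrative aside (not part of the proposed docstring text), the
standard order described above can be seen directly from ``np.fft.fftfreq``,
which returns the bin frequencies in exactly this layout:

```python
import numpy as np

# Even n: A[0] is the zero-frequency bin, A[1:n/2] the positive
# frequencies, A[n/2] the (shared) Nyquist bin, then the negative
# frequencies from most negative up toward zero.
print(np.fft.fftfreq(8))
# values: 0, 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125

# Odd n: A[(n-1)/2] holds the largest positive frequency and
# A[(n+1)/2] the most negative one.
print(np.fft.fftfreq(7))
```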

Feedback welcome.

DG


Re: [Numpy-discussion] Here's what I've done to numpy.fft

2010-07-12 Thread David Goldsmith
On Mon, Jul 12, 2010 at 6:33 PM, Travis Oliphant oliph...@enthought.com wrote:


 On Jul 12, 2010, at 5:47 PM, David Goldsmith wrote:

  In light of my various questions and the responses thereto, here's what
 I've done (but not yet committed) to numpy.fft.
 
  There are many ways to define the DFT, varying in the sign of the
  exponent, normalization, etc.  In this implementation, the DFT is defined
  as
 
  .. math::
 A_k =  \sum_{m=0}^{n-1} a_m \exp\left\{-2\pi i{mk \over n}\right\}
 \qquad k = 0,\ldots,n-1
 
  where `n` is the number of input points.  In general, the DFT is defined
  for complex inputs and outputs, and a single-frequency component at
 linear
  frequency :math:`f` is represented by a complex exponential
  :math:`a_m = \exp\{2\pi i\,f m\Delta t\}`, where
  :math:`\Delta t` is the *sampling interval*.

 This sounds very good, but I would not mix discussions of sampling interval
 with the DFT except as an example use case.

 The FFT is an implementation of the DFT, and the DFT is self-contained for
 discrete signals without any discussion of continuous-time frequency or
 sampling interval.   Many applications of the FFT, however, use sampled
 continuous-time signals.

 So, use a_m = \exp\(2\pi j m k\) to describe the single-frequency case.
 If you want to say that k = f\Delta t for a sampled-continuous time signal,
 then that would be fine, but there are plenty of discrete signals that don't
 have any relation to continuous time where an FFT still makes sense.


This is an interesting comment, as the delta t, sampling interval, and
sampling-of-a-continuous-time-signal context are all inherited from the
original docstring (I made an effort to clarify the existing content while
still preserving it as much as possible).  If others agree that a more
general presentation is preferable, the docstring *may* require a more
extensive edit (I'll have to go back and re-read the other sections with an
eye specifically to that issue).


 
  Note that, due to the periodicity of the exponential function, formally
  :math:`A_{n-1} = A_{-1}, A_{n-2} = A_{-2}`, etc.  That said, the values
 in
  the result are in the so-called standard order: if ``A = fft(a,n)``,
  then ``A[0]`` contains the zero-frequency term (the sum of the data),
  which is always purely real for real inputs.  Then ``A[1:n/2]`` contains
  the positive-frequency terms, and ``A[n/2+1:]`` contains the
  negative-frequency (in the sense described above) terms, from least (most
  negative) to largest (closest to zero).  In particular, for `n` even,
  ``A[n/2]`` represents both the positive and the negative Nyquist
  frequencies, and is also purely real for real input.  For `n` odd,
  ``A[(n-1)/2]`` contains the largest positive frequency, while
  ``A[(n+1)/2]`` contains the largest (in absolute value) negative
  frequency.  In both cases, i.e., `n` even or odd, ``A[n-1]`` contains the
  negative frequency closest to zero.
 
  Feedback welcome.

 I would remove That said,  near the beginning of the paragraph.


Too colloquial? ;-)  NP.

Thanks for the great docs.


Thank you for the encouraging words, and for taking the time and interest.

DG


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-12 Thread David Goldsmith
2010/7/12 Jochen Schröder cycoma...@gmail.com

 On 13/07/10 08:04, Eric Firing wrote:
  On 07/12/2010 11:43 AM, David Goldsmith wrote:
 From the docstring:
 
  A[0] contains the zero-frequency term (the mean of the signal)
 
  And yet, consistent w/ the definition given in the docstring (and
  included w/ an earlier email), the code gives, e.g.:
 
  >>> import numpy as np
  >>> x = np.ones((16,)); x
  array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
        1.,  1.,  1.])
  >>> y = np.fft.fft(x); y
  array([ 16.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
 0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j,
 0.+0.j,   0.+0.j,   0.+0.j,   0.+0.j])
 
  i.e., the zero-th term is the sum, not the mean (which, again, is
  consistent w/ the stated defining formula).
 
  So, same ol', same ol': bug in the doc (presumably) or bug in the code?
 
  Bug in the doc.  Good catch.  mean is correct for the ifft, not for
  the fft.
 
  Eric
 
 I'd say that a pointer to a discussion about normalization of ffts would
 be good here. The issue is that numpy is doing a normalization to len(x)
 for the inverse fft. However to make ffts unitary it should actually be
 that fft and ifft are normalized by sqrt(len(x)). And some fft
 implementations don't do normalizations at all (FFTW).

Interesting comment: it made me run down the fftpack tutorial
(http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/) Josef
has alluded to in the past to see if the suggested pointer could point
there without having to write a lot of new content.  What I found was that
although the scipy basic fft functions don't support it (presumably because
they're basically just wrappers for the numpy fft functions), scipy's
discrete cosine transforms support a norm='ortho' keyword argument/value
pair that enables the functions to return the unitary versions you
describe above.  There isn't much narrative explanation of the issue yet,
but it got me wondering: why don't the fft functions support this?  If there
isn't a good reason, I'll go ahead and submit an enhancement ticket.

DG
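For what it's worth, numpy.fft did eventually grow exactly this option (the
``norm="ortho"`` keyword); a quick sketch of the two conventions discussed
above, assuming a NumPy recent enough (>= 1.10) to accept ``norm``:

```python
import numpy as np

x = np.random.rand(16)

# Default convention: the forward transform is unnormalized and the
# inverse divides by n, so the round trip recovers x.
A = np.fft.fft(x)
assert np.allclose(np.fft.ifft(A), x)

# Unitary convention: both directions carry a 1/sqrt(n) factor, so the
# transform preserves energy (Parseval) without any extra scaling.
B = np.fft.fft(x, norm="ortho")
assert np.allclose(np.sum(np.abs(B) ** 2), np.sum(np.abs(x) ** 2))
```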

 Cheers
 Jochen

 
  DG
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion






Re: [Numpy-discussion] Here's what I've done to numpy.fft

2010-07-12 Thread David Goldsmith
2010/7/12 Jochen Schröder cycoma...@gmail.com

 On 13/07/10 08:47, David Goldsmith wrote:
  In light of my various questions and the responses thereto, here's what
  I've done (but not yet committed) to numpy.fft.
 
  There are many ways to define the DFT, varying in the sign of the
  exponent, normalization, etc.  In this implementation, the DFT is defined
  as
 
  .. math::
  A_k =  \sum_{m=0}^{n-1} a_m \exp\left\{-2\pi i{mk \over n}\right\}
  \qquad k = 0,\ldots,n-1
 
  where `n` is the number of input points.  In general, the DFT is defined
  for complex inputs and outputs, and a single-frequency component at
 linear
  frequency :math:`f` is represented by a complex exponential
  :math:`a_m = \exp\{2\pi i\,f m\Delta t\}`, where
  :math:`\Delta t` is the *sampling interval*.
 
  Note that, due to the periodicity of the exponential function, formally
  :math:`A_{n-1} = A_{-1}, A_{n-2} = A_{-2}`, etc.  That said, the values
 in
  the result are in the so-called standard order: if ``A = fft(a,n)``,
  then ``A[0]`` contains the zero-frequency term (the sum of the data),
  which is always purely real for real inputs.  Then ``A[1:n/2]`` contains
  the positive-frequency terms, and ``A[n/2+1:]`` contains the
  negative-frequency (in the sense described above) terms, from least (most
  negative) to largest (closest to zero).  In particular, for `n` even,
  ``A[n/2]`` represents both the positive and the negative Nyquist
  frequencies, and is also purely real for real input.  For `n` odd,
  ``A[(n-1)/2]`` contains the largest positive frequency, while
  ``A[(n+1)/2]`` contains the largest (in absolute value) negative
  frequency.  In both cases, i.e., `n` even or odd, ``A[n-1]`` contains the
  negative frequency closest to zero.
 
  Feedback welcome.
 
  DG
 
 Hi David,

 great work. I agree with Travis leave the sampling out. This make things
 more confusing. I'd also suggest pointing to fftshift for converting the
 standard order to order min frequency to max frequency


Thanks, Jochen.  Such a pointer was/is already in the original docstring; I
found nothing unclear about it, so I didn't modify it, so I didn't include
it in my post; indeed, the complete docstring is much longer - I only posted
that portion to which I made significant changes.  (To which I should
probably add: I haven't picked over the rest of the docstring w/ nearly
the same degree of care as that portion I did post, primarily because my
main motivation in doing what I did was for consistency w/ what I'm
borrowing from the docstring for the much more succinct narrative portion
I'm adding to the docstring for scipy.fftpack.basic.  In other words, though
numpy.fft's docstring was at "Needs review" status going into this, it
should probably be put back to "Being written.")

DG
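The pointer Jochen mentions refers to ``np.fft.fftshift``; a minimal sketch
of what it does to the standard order:

```python
import numpy as np

# fftshift reorders "standard order" output so frequencies run
# monotonically from most negative to most positive.
f = np.fft.fftfreq(8)        # standard order: 0, positives, negatives
print(np.fft.fftshift(f))    # -0.5 up through 0.375, strictly increasing
```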


 Cheers
 Jochen
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion






[Numpy-discussion] Another reality check

2010-07-11 Thread David Goldsmith
In numpy.fft we find the following:

Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains
the negative-frequency terms, in order of decreasingly negative frequency.


Just want to confirm that "decreasingly negative frequency" means ...,
A[n-2] = A_(-2), A[n-1] = A_(-1), as implied by our definition (attached).

DG
attachment: DFTdef.PNG


Re: [Numpy-discussion] Another reality check

2010-07-11 Thread David Goldsmith
On Sun, Jul 11, 2010 at 6:18 PM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 In numpy.fft we find the following:

 Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains
 the negative-frequency terms, in order of decreasingly negative
 frequency.

 Just want to confirm that decreasingly negative frequency means ...,
 A[n-2] = A_(-2), A[n-1] = A_(-1), as implied by our definition (attached).

 DG


And while I have your attention :-)

"For an odd number of input points, A[(n-1)/2] contains the largest positive
frequency, while A[(n+1)/2] contains the largest [in absolute value]
negative frequency."  Are these not also termed "Nyquist frequencies"?  If
not, would it be incorrect to characterize them as the largest realizable
frequencies (in the sense that the data contain no information about any
higher frequencies)?

DG
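A small numerical check of the question above, as a sketch: for odd `n` the
Nyquist frequency itself (0.5 cycles/sample) never appears among the sample
frequencies, which is why "largest realizable frequencies" is arguably the
better characterization:

```python
import numpy as np

n = 9  # odd number of points
f = np.fft.fftfreq(n)
# The extreme bins sit at +-(n-1)/(2n) cycles/sample, strictly inside
# the +-0.5 Nyquist limit.
print(f.max(), f.min())  # 4/9 and -4/9
assert f.max() < 0.5 and f.min() > -0.5
```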


[Numpy-discussion] Fwd: effect of shape=None (the default) in format.open_memmap

2010-07-08 Thread David Goldsmith
No reply?

-- Forwarded message --
From: David Goldsmith d.l.goldsm...@gmail.com
Date: Tue, Jul 6, 2010 at 7:03 PM
Subject: effect of shape=None (the default) in format.open_memmap
To: numpy-discussion@scipy.org


Hi, I'm trying to wrap my brain around the effect of leaving shape=None (the
default) in format.open_memmap.  First, I get that it's only even seen if
the file is opened in write mode.  Then, write_array_header_1_0 is called
with dict d as second parameter, w/, as near as I can see, d['shape'] still
= None.  write_array_header_1_0 is a little opaque to me, but as near as I
can tell, shape = None is then written as is to the file's header.  Here's
where things get a little worrisome/confusing.  Looking ahead, the next
function in the source is read_array_header_1_0, in which we see the
following comment: ...The keys are strings 'shape' : tuple of int...  Then
later in the code we see:

    # Sanity-check the values.
    if (not isinstance(d['shape'], tuple) or
            not numpy.all([isinstance(x, (int, long)) for x in d['shape']])):
        msg = "shape is not valid: %r"
        raise ValueError(msg % (d['shape'],))

Unless I'm missing something, if shape=None, this ValueError will be raised,
correct?  So it appears as if the default value for shape in the original
function, open_memmap, will produce a header that would ultimately result in
a defective file, at least as far as read_array_header_1_0 is concerned.

A) Am I missing something (e.g., a numpy-wide default substitution for shape
if it happens to equal None) that results in this conclusion being
incorrect?

B) If I am correct, feature or bug?

DG





[Numpy-discussion] finfo.eps v. finfo.epsneg

2010-07-06 Thread David Goldsmith
>>> np.finfo('float64').eps  # returns a scalar
2.2204460492503131e-16
>>> np.finfo('float64').epsneg  # returns an array
array(1.1102230246251565e-16)

Bug or feature?

DG
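For reference, the two quantities are well defined either way (the 0-d array
was a wart that was later fixed; in current NumPy both come back as
scalars). A sketch of what they mean for IEEE doubles:

```python
import numpy as np

fi = np.finfo("float64")
# eps is the gap between 1.0 and the next representable double (2**-52);
# epsneg is the gap just below 1.0 (2**-53), half as large because the
# exponent drops by one beneath 1.0.
assert float(fi.eps) == 2.0 ** -52
assert float(fi.epsneg) == 2.0 ** -53
assert 1.0 + float(fi.eps) > 1.0
assert 1.0 - float(fi.epsneg) < 1.0
```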


Re: [Numpy-discussion] Ticket #1223...

2010-07-01 Thread David Goldsmith
On Thu, Jul 1, 2010 at 9:11 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Thu, Jul 1, 2010 at 8:40 AM, Bruce Southey bsout...@gmail.com wrote:

  On 06/29/2010 11:38 PM, David Goldsmith wrote:

 On Tue, Jun 29, 2010 at 8:16 PM, Bruce Southey bsout...@gmail.com wrote:

 On Tue, Jun 29, 2010 at 6:03 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  On Tue, Jun 29, 2010 at 3:56 PM, josef.p...@gmail.com wrote:
 
  On Tue, Jun 29, 2010 at 6:37 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
   ...concerns the behavior of numpy.random.multivariate_normal; if
 that's
   of
   interest to you, I urge you to take a look at the comments (esp.
 mine
   :-) );
   otherwise, please ignore the noise.  Thanks!
 
  You should add the link to the ticket, so it's faster for everyone to
  check what you are talking about.
 
  Josef
 
  Ooops!  Yes I should; here it is:
 
  http://projects.scipy.org/numpy/ticket/1223
  Sorry, and thanks, Josef.
 
  DG
 
   ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
  As I recall, there is no requirement for the variance/covariance of
 the normal distribution to be positive definite.


 No, not positive definite, positive *semi*-definite: yes, the variance may
 be zero (the cov may have zero-valued eigenvalues), but the claim (and I
 actually am neutral about it, in that I wanted to reference the claim in
 the docstring and was told that doing so was unnecessary, the implication
 being that this is a well-known fact), is that, in essence (in 1-D) the
 variance can't be negative, which seems clear enough.  I don't see you
 disputing that, and so I'm uncertain as to how you feel about the proposal
 to weakly enforce symmetry and positive *semi*-definiteness.  (Now, if you
 dispute that even requiring positive *semi*-definiteness is desirable,
 you'll have to debate that w/ some of the others, because I'm taking their
 word for it that indefiniteness is unphysical.)

 DG

 From http://en.wikipedia.org/wiki/Multivariate_normal_distribution
 The covariance matrix is allowed to be singular (in which case the
 corresponding distribution has no density).

 So you must be able to draw random numbers from such a distribution.
 Obviously what those numbers really mean is another matter (I presume
 the dependent variables should be a linear function of the independent
 variables) but the user *must* know since they entered it. Since the
 function works the docstring Notes comment must be wrong.

 Imposing any restriction means that this is no longer a multivariate
 normal random number generator. If anything, you can only raise a
 warning about possible non-positive definiteness but even that will
 vary depending how it is measured and on the precision being used.


 Bruce
  ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 --
 Mathematician: noun, someone who disavows certainty when their uncertainty
 set is non-empty, even if that set has measure zero.

 Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
 lies, prevents mankind from committing a general suicide.  (As interpreted
 by Robert Graves)



  As you (and the theory) say, a variance should not be negative - yeah
 right :-) In practice that is not exactly true because estimation procedures
 like equating observed with expected sum of squares do lead to negative
 estimates. However, that is really a failure of the model, data and
 algorithm.

 I think the issue is really how numpy should handle input when that input
 is theoretically invalid.


 I think the svd version could be used if a check is added for the
 decomposition. That is, if cov = u*d*v, then dot(u,v) ~= identity. The
 Cholesky decomposition will be faster than the svd for large arrays, but
 that might not matter much for the common case.

 snip

 Chuck


Well, I'm not sure whether consensus can be reached given what we have so
far, so I'll just rest on my laurels (i.e., my proposed compromise
solution); just let me know if the docstring needs to be changed (and how).

DG
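For concreteness, the "weak enforcement" under discussion might be sketched
like this (``check_cov`` is a hypothetical helper for illustration, not an
actual NumPy function, and the svd/Cholesky details Chuck mentions are
elided):

```python
import numpy as np

def check_cov(cov, tol=1e-8):
    """Hypothetical helper: weakly verify that a covariance matrix is
    symmetric and positive semi-definite (eigenvalues >= -tol)."""
    cov = np.asarray(cov, dtype=float)
    if not np.allclose(cov, cov.T):
        raise ValueError("covariance matrix is not symmetric")
    if np.linalg.eigvalsh(cov).min() < -tol:
        raise ValueError("covariance matrix is not positive semi-definite")

check_cov([[1.0, 0.5], [0.5, 1.0]])   # ordinary PSD matrix: passes
check_cov([[1.0, 1.0], [1.0, 1.0]])   # singular but PSD: also passes
try:
    check_cov([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues 3 and -1: rejected
except ValueError as exc:
    print(exc)
```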


[Numpy-discussion] where support full broadcasting, right?

2010-07-01 Thread David Goldsmith
Hi.  The docstring (in the wiki) for where states:

x, y : array_like, optional
    Values from which to choose. *x* and *y* need to
    have the same shape as *condition*.

But:

>>> x = np.eye(2)
>>> np.where(x,2,3)
array([[2, 3],
   [3, 2]])

So apparently where supports broadcasting of scalars at least; does it
provide full broadcasting support?

Thanks!

DG
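Empirically the answer appears to be yes: full broadcasting is supported
(the three arguments are broadcast against each other), so the docstring's
"same shape" requirement is too strict. A sketch with three different
shapes:

```python
import numpy as np

# condition, x, and y are broadcast against each other by the
# ordinary broadcasting rules.
cond = np.array([[True], [False]])  # shape (2, 1)
x = np.array([10, 20, 30])          # shape (3,)
y = -1                              # scalar
print(np.where(cond, x, y))
# row 0 (condition True) takes x; row 1 takes y:
# [[10 20 30]
#  [-1 -1 -1]]
```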


[Numpy-discussion] Ticket #1223...

2010-06-29 Thread David Goldsmith
...concerns the behavior of numpy.random.multivariate_normal; if that's of
interest to you, I urge you to take a look at the comments (esp. mine :-) );
otherwise, please ignore the noise.  Thanks!

DG


Re: [Numpy-discussion] Ticket #1223...

2010-06-29 Thread David Goldsmith
On Tue, Jun 29, 2010 at 3:56 PM, josef.p...@gmail.com wrote:

 On Tue, Jun 29, 2010 at 6:37 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  ...concerns the behavior of numpy.random.multivariate_normal; if that's
 of
  interest to you, I urge you to take a look at the comments (esp. mine :-)
 );
  otherwise, please ignore the noise.  Thanks!

 You should add the link to the ticket, so it's faster for everyone to
 check what you are talking about.

 Josef


Ooops!  Yes I should; here it is:

http://projects.scipy.org/numpy/ticket/1223
Sorry, and thanks, Josef.

DG


[Numpy-discussion] numpy.all docstring reality check

2010-06-29 Thread David Goldsmith
Hi, folks.  Under Parameters, the docstring for numpy.core.fromnumeric.all
says:

out : ndarray, optional
    Alternative output array in which to place the result. It must have
    the same shape as the expected output and *the type is preserved*.
    [emphasis added]

I assume this is a copy-and-paste-from-another-docstring typo (shouldn't it
be "(possibly ndarray of) bool"?), but I just wanted to double check.

DG



Re: [Numpy-discussion] numpy.all docstring reality check

2010-06-29 Thread David Goldsmith
OK, now I understand: dtype(out) is preserved, whatever that happens to be,
not dtype(a) (which is what I thought it meant) - I better clarify.  Thanks!

DG

On Tue, Jun 29, 2010 at 7:28 PM, Skipper Seabold jsseab...@gmail.com wrote:

 On Tue, Jun 29, 2010 at 8:50 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  Hi, folks.  Under Parameters, the docstring for
 numpy.core.fromnumeric.all
  says:
 
  out : ndarray, optionalAlternative output array in which to place the
  result. It must have the same shape as the expected output and the type
 is
  preserved. [emphasis added].I assume this is a
  copy-and-paste-from-another-docstring typo (shouldn't it be (possibly
  ndarray of) bool), but I just wanted to double check.
 

 Looks right to me though there is no

 In [255]: a = np.ones(10)

 In [256]: b = np.empty(1,dtype=int)

 In [257]: np.core.fromnumeric.all(a,out=b)
 Out[257]: array([1])

 In [258]: b.dtype
 Out[258]: dtype('int64')

 In [259]: b = np.empty(1,dtype=bool)

 In [260]: np.core.fromnumeric.all(a,out=b)
 Out[260]: array([ True], dtype=bool)

 In [261]: b.dtype
 Out[261]: dtype('bool')

 In [262]: b = np.empty(1)

 In [263]: np.core.fromnumeric.all(a,out=b)
 Out[263]: array([ 1.])

 In [264]: b.dtype
 Out[264]: dtype('float64')

 In [265]: a2 =
 np.column_stack((np.ones(10),np.ones(10),np.random.randint(0,2,10)))

 In [266]: b = np.empty(3,dtype=int)

 In [267]: np.core.fromnumeric.all(a2,axis=0,out=b)
 Out[267]: array([1, 1, 0])

 In [268]: b.dtype
 Out[268]: dtype('int64')

 This is interesting

 In [300]: b = np.ones(3,dtype='a3')

 In [301]: np.core.fromnumeric.all(a2,axis=0,out=b)
 Out[301]:
 array(['Tru', 'Tru', 'Fal'],
  dtype='|S3')

 Skipper
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion






Re: [Numpy-discussion] Ticket #1223...

2010-06-29 Thread David Goldsmith
On Tue, Jun 29, 2010 at 8:16 PM, Bruce Southey bsout...@gmail.com wrote:

 On Tue, Jun 29, 2010 at 6:03 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  On Tue, Jun 29, 2010 at 3:56 PM, josef.p...@gmail.com wrote:
 
  On Tue, Jun 29, 2010 at 6:37 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
   ...concerns the behavior of numpy.random.multivariate_normal; if
 that's
   of
   interest to you, I urge you to take a look at the comments (esp. mine
   :-) );
   otherwise, please ignore the noise.  Thanks!
 
  You should add the link to the ticket, so it's faster for everyone to
  check what you are talking about.
 
  Josef
 
  Ooops!  Yes I should; here it is:
 
  http://projects.scipy.org/numpy/ticket/1223
  Sorry, and thanks, Josef.
 
  DG
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 As I recall, there is no requirement for the variance/covariance of
 the normal distribution to be positive definite.


No, not positive definite, positive *semi*-definite: yes, the variance may
be zero (the cov may have zero-valued eigenvalues), but the claim (and I
actually am neutral about it, in that I wanted to reference the claim in
the docstring and was told that doing so was unnecessary, the implication
being that this is a well-known fact), is that, in essence (in 1-D) the
variance can't be negative, which seems clear enough.  I don't see you
disputing that, and so I'm uncertain as to how you feel about the proposal
to weakly enforce symmetry and positive *semi*-definiteness.  (Now, if you
dispute that even requiring positive *semi*-definiteness is desirable,
you'll have to debate that w/ some of the others, because I'm taking their
word for it that indefiniteness is unphysical.)

DG

From http://en.wikipedia.org/wiki/Multivariate_normal_distribution:
"The covariance matrix is allowed to be singular (in which case the
corresponding distribution has no density)."

So you must be able to draw random numbers from such a distribution.
Obviously what those numbers really mean is another matter (I presume
the dependent variables should be a linear function of the independent
variables) but the user *must* know since they entered it.  Since the
function works, the docstring Notes comment must be wrong.

Imposing any restriction means that this is no longer a multivariate
normal random number generator. If anything, you can only raise a
warning about possible non-positive definiteness but even that will
vary depending how it is measured and on the precision being used.


Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion





Re: [Numpy-discussion] Strange behavior of np.sinc

2010-06-27 Thread David Goldsmith
On Sat, Jun 26, 2010 at 10:00 PM, David Goldsmith
d.l.goldsm...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 9:39 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 23:33, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  Hi!  The docstring for numpy.lib.function_base.sinc indicates that the
  parameter has to be an ndarray, and that it will return the limiting
 value 1
  for sinc(0).  Checking to see if it should actually say array_like, I
 found
  the following (Python 2.6):
 
  >>> np.sinc(np.array((0,0.5)))
  array([ 1.,  0.63661977])
  >>> np.sinc((0,0.5))
  array([NaN,  0.63661977])
  >>> np.sinc([0,0.5])
  array([NaN,  0.63661977])
  >>> np.version.version
  '1.4.1'
 
  So, it doesn't choke on non-array sequences, and appears to return
 values
  consistent w/ array input, except at 0.  Bug in code (failure at 0 if in
 a
  sequence) and in the doc (ndarray should be array_like)?

 Bug in both code and docs. There should be an x = np.asanyarray(x)
 before the rest of the code.


 Thanks Robert; I'll file a ticket and fix the docstring.

 DG


All done (patched this morning and Pauli's checked it in already - nice when
they're easy).

DG
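For the record, the one-line fix amounts to coercing the input first; a
sketch of the repaired function (mirroring the approach of the era's
implementation, not necessarily the exact production code):

```python
import numpy as np

def sinc(x):
    # The fix: coerce sequences to arrays before any comparison/indexing.
    x = np.asanyarray(x)
    # Replace exact zeros with a tiny value so sin(y)/y evaluates to the
    # limiting value 1.0 there instead of 0/0 = NaN.
    y = np.pi * np.where(x == 0, 1.0e-20, x)
    return np.sin(y) / y

print(sinc([0, 0.5]))  # lists now behave like arrays: 1.0 and 2/pi
```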



Re: [Numpy-discussion] arr.copy(order='F') doesn't agree with docstring: what is intended behavior?

2010-06-27 Thread David Goldsmith
On Sun, Jun 27, 2010 at 10:38 AM, Kurt Smith kwmsm...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 7:34 PM, Warren Weckesser
 warren.weckes...@enthought.com wrote:
  Kurt Smith wrote:
  I'd really like arr.copy(order='F') to work -- is it supposed to as
  its docstring says, or is it supposed to raise a TypeError as it does
  now?
 
 
  It works for me if I don't use the keyword.  That is,
 
b = a.copy('F')

 Great!  At least the functionality is there.

 
  But I get the same error if I use order='F', so there is a either a bug
  in the docstring or a bug in the code.

 I certainly hope it's a docstring bug and not otherwise.

 Any pointers on submitting documentation bugs?

 Kurt


Same as filing a code bug: file a ticket at projects.scipy.org/numpy.  But
the policy is to document desired behavior, not actual behavior (if the code
isn't behaving as advertised but it should, obviously that's a code bug), so
you can do one of two things: a) wait 'til someone replies here clarifying
which it is, or b) file a ticket which describes the inconsistency and let
the issue be worked out over there (preferred IMO 'cause it gets the ticket
filed while the issue is fresh in your mind, and any discussion of what kind
of bug it is gets recorded as part of the ticket history).  Thanks for
reporting/filing!

DG
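For the record, a sketch contrasting the two spellings (in NumPy of that
era only the positional form worked; current NumPy accepts the keyword as
the docstring promises, so this was indeed a code bug):

```python
import numpy as np

a = np.arange(10).reshape(5, 2)  # C-ordered by construction

# The positional spelling worked even when the keyword form raised
# TypeError in older NumPy.
b = a.copy("F")
assert b.flags["F_CONTIGUOUS"] and not b.flags["C_CONTIGUOUS"]

# Current NumPy accepts the keyword spelling as documented.
c = a.copy(order="F")
assert c.flags["F_CONTIGUOUS"]
assert np.array_equal(a, c)  # same values, different memory layout
```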



 
  Warren
 
 
  This is on numpy 1.4
 
 
  import numpy as np
  a = np.arange(10).reshape(5,2)
  a
 
  array([[0, 1],
 [2, 3],
 [4, 5],
 [6, 7],
 [8, 9]])
 
  print a.copy.__doc__
 
  a.copy(order='C')
 
  Return a copy of the array.
 
  Parameters
  --
  order : {'C', 'F', 'A'}, optional
  By default, the result is stored in C-contiguous (row-major)
 order in
  memory.  If `order` is `F`, the result has 'Fortran'
 (column-major)
  order.  If order is 'A' ('Any'), then the result has the same
 order
  as the input.
 
  Examples
  
   x = np.array([[1,2,3],[4,5,6]], order='F')
 
   y = x.copy()
 
   x.fill(0)
 
   x
  array([[0, 0, 0],
 [0, 0, 0]])
 
   y
  array([[1, 2, 3],
 [4, 5, 6]])
 
   y.flags['C_CONTIGUOUS']
  True
 
  a.copy(order='C')
 
  Traceback (most recent call last):
File stdin, line 1, in module
  TypeError: copy() takes no keyword arguments
 
  a.copy(order='F')
 
  Traceback (most recent call last):
File stdin, line 1, in module
  TypeError: copy() takes no keyword arguments
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion






Re: [Numpy-discussion] numpy.random.poisson docs missing Returns

2010-06-27 Thread David Goldsmith
On Sun, Jun 27, 2010 at 3:44 AM, Pauli Virtanen p...@iki.fi wrote:

 Sat, 26 Jun 2010 17:37:22 -0700, David Goldsmith wrote:
  On Sat, Jun 26, 2010 at 3:22 PM, josef.p...@gmail.com wrote:
 [clip]
  Is there a chance that some changes got lost?
 
  (Almost) anything's possible... :-(

 There's practically no chance of edits getting lost.


But there is a chance of edits not being saved (due to operator error, e.g.,
inadvertently clicking cancel): happened to me just yesterday while making
edits to gumbel; reminded me -the hard way- of another reason to make
extensive edits on one's own machine, then cut/paste them into the edit
window in the Wiki when done.


 There's a chance of
 them being hidden if things are moved around in the source code, causing
 duplicate work, but that's not the case here.

  Well, here's what happened in the particular case of numpy's pareto:
 
  The promotion to Needs review took place - interestingly - 2008-06-26
  (yes, two years ago today), despite the lack of a Returns section; the
  initial check-in of HOWTO_DOCUMENT.txt - which does specify that a
  Returns section be included (when applicable) - was one week before,
  2008-06-19. So, it's not that surprising that this slipped through the
  cracks.
 
  Pauli (or anyone): is there a way to search the Wiki, e.g., using a
  SQL-like query, for docstrings that saw a change in status before a
  date, or between two dates?


Thanks Pauli.  Anyway, I figured out another way: I'm using the stats page,
and checking anything that was Needs review or better before 2008-06-19
and up to a month after - after that, we'll just have to trust that the
review process will detect it.  FWIW, the only docs I've found so far w/
that particular error are the ones that Vincent found -you did say you found
three, right Vincent?- but I've found other problems w/ other docstrings
(some of which have since advanced past the state they were in back then,
i.e., the errors went undetected even though someone has ostensibly reviewed
them.)

DG



 No. The review status is not versioned, so the necessary information is
 not there. The only chance would be to search for docstrings that haven't
 been edited after a certain date.

 --
 Pauli Virtanen

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Documentation error in numpy.random.logseries

2010-06-26 Thread David Goldsmith
On Sat, Jun 26, 2010 at 1:41 PM, Vincent Davis vinc...@vincentdavis.netwrote:

 numpy.random.logseries(p, size=None)

 but the parameters section,
 Parameters:
 loc : float
 scale : float > 0.
 size : {tuple, int}
 Output shape. If the given shape is, e.g., (m, n, k), then m * n * k
 samples are drawn.

 Notice that p != loc, and what about scale?

 I'll file a ticket unless I am missing something.
 Which should it be, loc or p?
 What about scale?


The source is opaque (to me; Cython?), so unless you can decipher it, test
the actual behavior and document that. My guess is that p is short for
"parameters" and is intended to be a two-element array_like containing both
the loc and scale parameters - but that's the way it should be documented,
not with some unprecedented reference to loc and scale when the signature
specifies p. As I said, though, check the actual behavior first.

There is no numpy-dev list right? Should this list be used or the
 scipy-dev list


That's a good Q: this is definitely a bug in the doc (loc and scale
shouldn't be documented as such when they're not explicitly in the function
signature), in which case scipy-dev is the proper place to post, but if it
turns out to be a bug in the code also, then this is the proper place, since
all numpy devs are subscribed here, and numpy users should know about
potential bugs (numpy devs will correct me if I'm wrong).

DG
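In the same spirit of testing actual behavior, a quick empirical check (assuming the signature quoted above, `numpy.random.logseries(p, size=None)`, where p is the single shape parameter of the log-series distribution, 0 < p < 1):

```python
import numpy as np

np.random.seed(42)
# logseries takes a single shape parameter p in (0, 1), not loc/scale;
# draws are positive integers from the logarithmic series distribution.
samples = np.random.logseries(0.5, size=10000)

print(samples.min() >= 1)                         # support is {1, 2, 3, ...}
print(np.issubdtype(samples.dtype, np.integer))   # integer-valued draws
```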

 Vincent
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.poisson docs missing Returns

2010-06-26 Thread David Goldsmith
Something is systematically wrong if there are this many problems in the
numpy.stats docstrings: numpy is supposed to be (was) almost completely
ready for review; please focus on scipy unless/until the reason why there
are now so many problems in numpy.stats can be determined (I suspect the
numpy.stats code has been made to call the scipy.stats.distributions module,
and all those docstrings have been marked Unimportant - meaning do not
edit - either permanently, in the case of the instances, or temporarily in
the case of the base classes from which the instances are created).

Bottom line: if it doesn't start w/ scipy, leave it alone (for now).

DG

On Sat, Jun 26, 2010 at 2:40 PM, Vincent Davis vinc...@vincentdavis.netwrote:

 numpy.random.poisson docs missing Returns

 http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.poisson.html#numpy.random.poisson
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.poisson docs missing Returns

2010-06-26 Thread David Goldsmith
On Sat, Jun 26, 2010 at 3:03 PM, josef.p...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 5:56 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  Something is systematically wrong if there are this many problems in the
  numpy.stats docstrings: numpy is supposed to be (was) almost completely
  ready for review; please focus on scipy unless/until the reason why there
  are now so many problems in numpy.stats can be determined (I suspect the
  numpy.stats code has been made to call the scipy.stats.distributions
 module,
  and all those docstrings have been marked Unimportant - meaning do not
  edit - either permanently, in the case of the instances, or temporarily
 in
  the case of the base classes from which the instances are created).
 
  Bottom line: if it doesn't start w/ scipy, leave it alone (for now).

 It's missing in several functions and incorrect docstrings have to be
 corrected. Look at the log of e.g. pareto in the editor, the returns
 have never been added, unless you find any missing revisions that are
 not in the doc editor.

 Josef


OK, I see it was promoted to Needs review very early in the first Marathon
- before the Standard had been finalized?  God help us: how many other numpy
docstrings are improperly at Needs review because of this?  Scheisse,
numpy may not be as close to Ready For Review as we thought...

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.poisson docs missing Returns

2010-06-26 Thread David Goldsmith
On Sat, Jun 26, 2010 at 3:28 PM, Vincent Davis vinc...@vincentdavis.netwrote:

 On Sat, Jun 26, 2010 at 4:22 PM,  josef.p...@gmail.com wrote:
  On Sat, Jun 26, 2010 at 6:11 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
  On Sat, Jun 26, 2010 at 3:03 PM, josef.p...@gmail.com wrote:
 
  On Sat, Jun 26, 2010 at 5:56 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
   Something is systematically wrong if there are this many problems in
 the
   numpy.stats docstrings: numpy is supposed to be (was) almost
 completely
   ready for review; please focus on scipy unless/until the reason why
   there
   are now so many problems in numpy.stats can be determined (I suspect
 the
   numpy.stats code has been made to call the scipy.stats.distributions
   module,
   and all those docstrings have been marked Unimportant - meaning do
 not
   edit - either permanently, in the case of the instances, or
 temporarily
   in
   the case of the base classes from which the instances are created).
  
   Bottom line: if it doesn't start w/ scipy, leave it alone (for now).
 
  It's missing in several functions and incorrect docstrings have to be
  corrected. Look at the log of e.g. pareto in the editor, the returns
  have never been added, unless you find any missing revisions that are
  not in the doc editor.
 
  Josef
 
  OK, I see it was promoted to Needs review very early in the first
 Marathon
  - before the Standard had been finalized?  God help us: how many other
 numpy
  docstrings are improperly at Needs review because of this?  Scheisse,
  numpy may not be as close to Ready For Review as we thought...
 
  Is there a chance that some changes got lost?
 
  I thought I had edited random.pareto to note that it is actually Lomax
  or Pareto II. But I'm not completely sure I actually did it, and not
  just intended to do it. I don't see any record in the doc editor, so
  maybe I never did edit it.

 Also several are missing examples, but this is easy (copy/paste) with
 the tests I just added.
 Vincent

I'm busy right now, but in a little bit I'll check when the Standard was
finalized and demote - until they can be thoroughly checked for Standard
compliance - to Being Written everything promoted to Needs review prior
to that time.

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.poisson docs missing Returns

2010-06-26 Thread David Goldsmith
On Sat, Jun 26, 2010 at 3:22 PM, josef.p...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 6:11 PM, David Goldsmith
 d.l.goldsm...@gmail.com wrote:
  On Sat, Jun 26, 2010 at 3:03 PM, josef.p...@gmail.com wrote:
 
  On Sat, Jun 26, 2010 at 5:56 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
   Something is systematically wrong if there are this many problems in
 the
   numpy.stats docstrings: numpy is supposed to be (was) almost
 completely
   ready for review; please focus on scipy unless/until the reason why
   there
   are now so many problems in numpy.stats can be determined (I suspect
 the
   numpy.stats code has been made to call the scipy.stats.distributions
   module,
   and all those docstrings have been marked Unimportant - meaning do
 not
   edit - either permanently, in the case of the instances, or
 temporarily
   in
   the case of the base classes from which the instances are created).
  
   Bottom line: if it doesn't start w/ scipy, leave it alone (for now).
 
  It's missing in several functions and incorrect docstrings have to be
  corrected. Look at the log of e.g. pareto in the editor, the returns
  have never been added, unless you find any missing revisions that are
  not in the doc editor.
 
  Josef
 
  OK, I see it was promoted to Needs review very early in the first
 Marathon
  - before the Standard had been finalized?  God help us: how many other
 numpy
  docstrings are improperly at Needs review because of this?  Scheisse,
  numpy may not be as close to Ready For Review as we thought...

 Is there a chance that some changes got lost?


(Almost) anything's possible... :-(

Well, here's what happened in the particular case of numpy's pareto:

The promotion to Needs review took place - interestingly - 2008-06-26
(yes, two years ago today), despite the lack of a Returns section; the
initial check-in of HOWTO_DOCUMENT.txt - which does specify that a Returns
section be included (when applicable) - was one week before, 2008-06-19.
So, it's not that surprising that this slipped through the cracks.

Pauli (or anyone): is there a way to search the Wiki, e.g., using a SQL-like
query, for docstrings that saw a change in status before a date, or between
two dates?

Thanks!

DG
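Regarding Josef's point quoted below (that numpy's pareto is actually Lomax, i.e. Pareto II): a quick numerical check of that claim, under the assumption that adding 1 to Lomax draws yields classical Pareto with x_m = 1, whose mean is a/(a-1):

```python
import numpy as np

np.random.seed(0)
a = 3.0
draws = np.random.pareto(a, size=200000)

# If draws follow Lomax(a) (support starting at 0), then draws + 1
# should follow classical Pareto(a, x_m=1), with mean a/(a-1) = 1.5.
mean_shifted = (draws + 1.0).mean()
print(mean_shifted)  # ~ 1.5
```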


 I thought I had edited random.pareto to note that it is actually Lomax
 or Pareto II. But I'm not completely sure I actually did it, and not
 just intended to do it. I don't see any record in the doc editor, so
 maybe I never did edit it.

 Josef


 
  DG
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Strange behavior of np.sinc

2010-06-26 Thread David Goldsmith
Hi!  The docstring for numpy.lib.function_base.sinc indicates that the
parameter has to be an ndarray, and that it will return the limiting value 1
for sinc(0).  Checking to see if it should actually say array_like, I found
the following (Python 2.6):

>>> np.sinc(np.array((0,0.5)))
array([ 1.,  0.63661977])
>>> np.sinc((0,0.5))
array([NaN,  0.63661977])
>>> np.sinc([0,0.5])
array([NaN,  0.63661977])
>>> np.version.version
'1.4.1'

So, it doesn't choke on non-array sequences, and appears to return values
consistent w/ array input, except at 0.  Bug in code (failure at 0 if in a
sequence) and in the doc (ndarray should be array_like)?

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Strange behavior of np.sinc

2010-06-26 Thread David Goldsmith
On Sat, Jun 26, 2010 at 9:39 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sat, Jun 26, 2010 at 23:33, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  Hi!  The docstring for numpy.lib.function_base.sinc indicates that the
  parameter has to be an ndarray, and that it will return the limiting
 value 1
  for sinc(0).  Checking to see if it should actually say array_like, I
 found
  the following (Python 2.6):
 
   >>> np.sinc(np.array((0,0.5)))
   array([ 1.,  0.63661977])
   >>> np.sinc((0,0.5))
   array([NaN,  0.63661977])
   >>> np.sinc([0,0.5])
   array([NaN,  0.63661977])
   >>> np.version.version
   '1.4.1'
 
  So, it doesn't choke on non-array sequences, and appears to return values
  consistent w/ array input, except at 0.  Bug in code (failure at 0 if in
 a
  sequence) and in the doc (ndarray should be array_like)?

 Bug in both code and docs. There should be an x = np.asanyarray(x)
 before the rest of the code.


Thanks Robert; I'll file a ticket and fix the docstring.

DG
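A minimal sketch of what Robert's suggested fix amounts to - a hypothetical wrapper, not the actual NumPy source - coercing the input before the rest of the computation so sequences behave like arrays at 0:

```python
import numpy as np

def sinc_fixed(x):
    """Sketch of sinc with the suggested coercion added up front."""
    x = np.asanyarray(x)
    # Dodge the removable singularity at 0 by substituting a tiny value,
    # so sin(y)/y evaluates to ~1 there instead of NaN.
    y = np.pi * np.where(x == 0, 1.0e-20, x)
    return np.sin(y) / y

# Lists and tuples now give the same result as ndarray input:
print(sinc_fixed([0, 0.5]))  # ~ [1.0, 0.63661977]
```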
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy.linalg.eig oddity

2010-06-23 Thread David Goldsmith
Is it not possible to update your versions to see if that solves the
problem?

DG

On Wed, Jun 23, 2010 at 11:25 AM, Salim, Fadhley (CA-CIB) 
fadhley.sa...@ca-cib.com wrote:

 I've been investigating a truly bizarre bug related to the use of
 numpy.linalg.eig.

 I have two classes which both use numpy.linalg.eig. These classes are
 used at very different times and are not connected in any way other than
 the fact that they both share this particular dependancy.

 I have found that whichever class is called second will produce a
 slightly different answer if numpy.linalg.eig is used sometime earlier.
 I've eliminated all other variables besides the call to eig(). This
 seems completely implausible, and yet I have the data.

 As far as I am aware, eig() is wholly stateless and therefore using it
 should not affect any subsequent calls to the function, right?

 Numpy==1.2.1, Scipy==0.7.0

 I've checked the bug-trac for this function and can find no references
 to bugs which cause it to hold-state, even in the somewhat out of date
 version of numpy. Can somebody let me know if there's something that I'm
 missing.


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy.linalg.eig oddity

2010-06-23 Thread David Goldsmith
On Wed, Jun 23, 2010 at 2:17 PM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Jun 23, 2010 at 13:25, Salim, Fadhley (CA-CIB)
 fadhley.sa...@ca-cib.com wrote:
  I've been investigating a truly bizarre bug related to the use of
  numpy.linalg.eig.
 
  I have two classes which both use numpy.linalg.eig. These classes are
  used at very different times and are not connected in any way other than
  the fact that they both share this particular dependancy.
 
  I have found that whichever class is called second will produce a
  slightly different answer if numpy.linalg.eig is used sometime earlier.
  I've eliminated all other variables besides the call to eig(). This
  seems completely implausible, and yet I have the data.
 
  As far as I am aware, eig() is wholly stateless and therefore using it
  should not affect any subsequent calls to the function, right?
 
  Numpy==1.2.1, Scipy==0.7.0
 
  I've checked the bug-trac for this function and can find no references
  to bugs which cause it to hold-state, even in the somewhat out of date
  version of numpy. Can somebody let me know if there's something that I'm
  missing.

 I don't think we've seen anything like that before. If you can come up
 with a small, self-contained script that demonstrates the problem, we
 will take a look at it.

 --
 Robert Kern


Of course providing what Robert requests would be optimal, but at the very
least/in the mean time, if you can provide the data you say you have, that
might also be helpful.

DG
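Along the lines of what Robert requests, a minimal self-contained check one could start from - it calls eig twice on the same matrix with an unrelated eig call in between, mimicking the reported scenario; if eig is stateless, the results should agree exactly:

```python
import numpy as np

m = np.array([[2.0, 1.0],
              [1.0, 3.0]])

w1, v1 = np.linalg.eig(m)

# An unrelated eig call in between, as in the reported scenario.
np.linalg.eig(np.arange(9.0).reshape(3, 3))

w2, v2 = np.linalg.eig(m)

print(np.allclose(w1, w2) and np.allclose(v1, v2))  # True if stateless
```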
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] SciPy docs marathon: a little more info

2010-06-22 Thread David Goldsmith
On Mon, Jun 14, 2010 at 2:05 AM, David Goldsmith d.l.goldsm...@gmail.comwrote:

 Hi, all!  The scipy doc marathon has gotten off to a very slow start this
 summer.  We are producing less than 1000 words a week, perhaps because
 many universities are still finishing up spring classes.  So, this is
 a second appeal to everyone to pitch in and help get scipy documented
 so that it's easy to learn how to use it.  Because some of the
 packages are quite specialized, we need both regular contributors to
 write lots of pages, and some people experienced in using each module
 (and the mathematics behind the software) to make sure we don't water
 it down or make it wrong in the process.  If you can help, please, now is
 the
 time to step forward.  Thanks!

 On behalf of Joe and myself,

 David Goldsmith
 Olympia, WA


OK, a few people have come forward.  Let me enumerate the categories that
still have no declared volunteer writer-editors (all categories are in
need of leaders):

Max. Entropy, Misc., Image Manip. (Milestone 6)
Signal processing (Milestone 8)
Sparse Matrices (Milestone 9)
Spatial Algorithms, Special funcs. (Milestone 10)
C/C++ Integration (Milestone 13)

As for the rest, only Interpolation (Milestone 3) has more than one person
(but I'm one of the two), and I'm the only person on four others.

So, hopefully, knowing specifically which areas are in dire need will
inspire people skilled in those areas to sign up.  Thanks for your time and
help,

DG

PS: For your convenience, here's the link to the scipy Milestones page:
http://docs.scipy.org/scipy/Milestones/.  (Note that the Milestones link at
the top of each Wiki page links, incorrectly in the case of the SciPy pages,
to the NumPy Milestones page, which we are not actively working on in this
Marathon; this is a known, reported bug in the Wiki program.)
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] SciPy docs marathon: a little more info

2010-06-18 Thread David Goldsmith
On Mon, Jun 14, 2010 at 2:05 AM, David Goldsmith d.l.goldsm...@gmail.comwrote:

 Hi, all!  The scipy doc marathon has gotten off to a very slow start this
 summer.  We are producing less than 1000 words a week, perhaps because
 many universities are still finishing up spring classes.  So, this is
 a second appeal to everyone to pitch in and help get scipy documented
 so that it's easy to learn how to use it.  Because some of the
 packages are quite specialized, we need both regular contributors to
 write lots of pages, and some people experienced in using each module
 (and the mathematics behind the software) to make sure we don't water
 it down or make it wrong in the process.  If you can help, please, now is
 the
 time to step forward.  Thanks!

 On behalf of Joe and myself,

 David Goldsmith
 Olympia, WA


(Apparently this didn't go through the first time.)

OK, a few people have come forward - thanks!

Let me enumerate the categories that still have no declared volunteer
writer-editors (all categories are in need of leaders):

Max. Entropy, Misc., Image Manip. (Milestone 6)
Signal processing (Milestone 8)
Sparse Matrices (Milestone 9)
Spatial Algorithms, Special funcs. (Milestone 10)
C/C++ Integration (Milestone 13)

As for the rest, only Interpolation (Milestone 3) has more than one person
(but I'm one of the two), and I'm the only person on four others.

So, hopefully, knowing specifically which areas are in dire need will
inspire people skilled in those areas to sign up.  Thanks for your time and
help,

DG

PS: For your convenience, here's the link to the scipy Milestones page:
http://docs.scipy.org/scipy/Milestones/.  (Note that the Milestones link at
the top of each Wiki page links, incorrectly in the case of the SciPy pages,
to the NumPy Milestones page, which we are not actively working on in this
Marathon; this is a known, reported bug in the Wiki program.)
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] SciPy docs marathon

2010-06-15 Thread David Goldsmith
Hi, all!  The scipy doc marathon has gotten off to a very slow start this
summer.  We are producing less than 1000 words a week, perhaps because
many universities are still finishing up spring classes.  So, this is
a second appeal to everyone to pitch in and help get scipy documented
so that it's easy to learn how to use it.  Because some of the
packages are quite specialized, we need both regular contributors to
write lots of pages, and some people experienced in using each module
(and the mathematics behind the software) to make sure we don't water
it down or make it wrong in the process.  If you can help, please, now is
the
time to step forward.  Thanks!

On behalf of Joe and myself,

David Goldsmith
Olympia, WA
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread David Goldsmith
Is this not what core.numeric.tensordot
(http://docs.scipy.org/numpy/docs/numpy.core.numeric.tensordot/) does?

DG

On Sun, Jun 13, 2010 at 12:37 PM, Friedrich Romstedt 
friedrichromst...@gmail.com wrote:

 2010/6/13 Alan Bromborsky abro...@verizon.net:
  I am writing symbolic tensor package for general relativity.  In making
  symbolic tensors concrete
  I generate numpy arrays stuffed with sympy functions and symbols.

 That sound's interesting.

  The
  operations are tensor product
  (numpy.multiply.outer), permutation of indices (swapaxes),  partial and
  covariant (both vector operators that
  increase array dimensions by one) differentiation, and contraction.

 I would like to know more precisely what this differentiations do, and
 how it comes that they add an index to the tensor.

  I think I need to do the contraction last
  to make sure everything comes out correctly.  Thus in many cases I would
  be performing multiple contractions
  on the tensor resulting from all the other operations.

 Hm, ok, so I guess I shall give my 1 cent now.

 Ok.

# First attempt (FYI, failed):

# The general procedure is, to extract a multi-dimensional diagonal
 array.
# The sum \sum_{ij = 0}^{M} \sum_{kl = 0}^{N} is actually the sum over a
# 2D array with indices I \equiv i \equiv j and K \equiv k \equiv
 l.  Meaning:
# \sum_{(I, K) = (0, 0)}^{(M, N)}.
# Thus, if we extract the indices with 2D arrays [[0], [1], ...,
 [N - 1]] for I and
# [[0, 1, ..., M - 1]] on the other side for K, then numpy's
 broadcasting
# mechanism will broadcast them to the same shape, yielding (N, M)
 arrays.
# Then finally we sum over this X last dimensions when there were X
# contractions, and we're done.

# Hmmm, when looking closer at the problem, it seems that this isn't
# adequate.  Because we would have to insert open slices, but cannot
# create them outside of the [] operator ...

# So now follows second attemt:

 def contract(arr, *contractions):
"""*CONTRACTIONS is e.g.:
(0, 1), (2, 3)
meaning two contractions, one of 0 & 1, and one of 2 & 3,
but also:
(0, 1, 2),
is allowed, meaning contract 0 & 1 & 2."""
# First, we check if we can contract using the *contractions* given ...

for contraction in contractions:
# Extract the dimensions used.
dimensions = numpy.asarray(arr.shape)[list(contraction)]

# Check if they are all the same.
dimensionsdiff = dimensions - dimensions[0]
if numpy.abs(dimensionsdiff).sum() != 0:
raise ValueError('Contracted indices must be of same
 dimension.')

# So now, we can contract.
#
# First, pull the contracted dimensions all to the front ...

# The names of the indices.
names = range(arr.ndim)

# Pull all of the contractions.
names_pulled = []
for contraction in contractions:
names_pulled = names_pulled + list(contraction)
# Remove the pulled index names from the pool:
for used_index in contraction:
# Some more sanity check
if used_index not in names:
raise ValueError('Each index can only be used in one
 contraction.')
names.remove(used_index)

# Concatenate the pulled indices and the left-over indices.
names_final = names_pulled + names

# Perform the swap.
arr = arr.transpose(names_final)

# Perform the contractions ...

for contraction in contractions:
# The respective indices are now, since we pulled them, the
 frontmost indices:
ncontraction = len(contraction)
# The index array:
# shape[0] = shape[1] = ... = shape[ncontraction - 1]
I = numpy.arange(0, arr.shape[0])
# Perform contraction:
index = [I] * ncontraction
arr = arr[tuple(index)].sum(axis=0)

# If I made no mistake, we should be done now.
return arr

 Ok, it didn't get much shorter than Pauli's solution, so you decide ...

  One question to
  ask would be considering that I am stuffing
  the arrays with symbolic objects and all the operations on the objects
  would be done using the sympy modules,
  would using numpy operations to perform the contractions really save any
  time over just doing the contraction in
  python code with a numpy array.

 I don't know anything about sympy.  I think there's some typo around:
 I guess you mean creating some /sympy/ array and doing the operations
 using that instead of using a numpy array having sympy dtype=object
 content?

 Friedrich
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





Re: [Numpy-discussion] C vs. Fortran order -- misleading documentation?

2010-06-09 Thread David Goldsmith
On Wed, Jun 9, 2010 at 9:00 AM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Jun 10, 2010 at 12:09 AM, Benjamin Root ben.r...@ou.edu wrote:
  I think that arrays are just syntax on pointers is indeed the key
  reason for how C works here. Since a[b] really means *(a + b) (which is
  why 5[a] and a[5] are the same), I don't see how to do it differently.
 
  Holy crap!  You can do that in C?!

 Yes:

 #include <stdio.h>

 int main()
 {
float a[2] = {1.0, 2.0};

printf("%f %f %f\n", a[1], *(a+1), 1[a]);
 }


This is all _very_ educational (and I mean that sincerely), but can we
please get back to the topic at hand ( :-) ).  A specific proposal is on the
table: we remove discussion of the whole C/Fortran ordering issue from
basics.indexing.rst and promote it to a more advanced document TBD.

DG
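To restate the C/Fortran-ordering point in NumPy terms rather than C pointer arithmetic - a small illustration using strides (the dtype is pinned to int64 so the byte counts below are platform-independent):

```python
import numpy as np

a = np.arange(6, dtype=np.int64).reshape(2, 3)  # C order
f = np.asfortranarray(a)                        # same values, Fortran order

# In C order the *last* index is the fastest-varying in memory, so the
# stride along axis 1 is one 8-byte element; in Fortran order it is the
# *first* index, so axis 0 has the 8-byte stride.
print(a.strides)  # (24, 8)
print(f.strides)  # (8, 16)

# Flattening a in memory order shows a[0, 1] sits right after a[0, 0]:
print(a.ravel(order='K')[1] == a[0, 1])  # True
```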
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C vs. Fortran order -- misleading documentation?

2010-06-08 Thread David Goldsmith
On Mon, Jun 7, 2010 at 4:52 AM, Pavel Bazant maxpla...@seznam.cz wrote:

 Correct me if I am wrong, but the paragraph

 Note to those used to IDL or Fortran memory order as it relates to
 indexing. Numpy uses C-order indexing. That means that the last index
 usually (see xxx for exceptions) represents the most rapidly changing memory
 location, unlike Fortran or IDL, where the first index represents the most
 rapidly changing location in memory. This difference represents a great
 potential for confusion.

 in

 http://docs.scipy.org/doc/numpy/user/basics.indexing.html

 is quite misleading, as C-order means that the last index changes rapidly,
 not the
 memory location.

 Pavel


Sounds correct (your criticism, that is) but I'm no expert, so I'm going to
wait another 12 hours or so - to give others a chance to chime in - before
correcting it.

DG


Re: [Numpy-discussion] Is there really a moderator that reviews posts?

2010-06-08 Thread David Goldsmith
On Tue, Jun 8, 2010 at 12:10 AM, Sebastian Haase seb.ha...@gmail.com wrote:

 I don't want to complain 
 But what is wrong with a limit of 40kB ? There are enough places where
 one could upload larger files for everyone interested...


Not everyone knows about 'em, though - can you list some here, please?
Thanks!

DG



 My 2 cents,

 Sebastian Haase

 PS: what is the limit now set to ?



 On Mon, Jun 7, 2010 at 11:24 PM, Vincent Davis vinc...@vincentdavis.net
 wrote:
  On Mon, Jun 7, 2010 at 3:04 PM, PostMaster postmas...@enthought.com
 wrote:
  On Mon, Jun 7, 2010 at 14:32,  josef.p...@gmail.com wrote:
  On Mon, Jun 7, 2010 at 3:14 PM, Vincent Davis 
 vinc...@vincentdavis.net wrote:
  I just tried a post and got this. Should I repost without the long
  section of the terminal output I pasted in ?
 
  Your mail to 'NumPy-Discussion' with the subject
 
Installing numpy from source on py 3.1.2, osx
 
  Is being held until the list moderator can review it for approval.
 
  The reason it is being held:
 
Message body is too big: 72148 bytes with a limit of 40 KB
 
  Either the message will get posted to the list, or you will receive
  notification of the moderator's decision.  If you would like to cancel
  this posting, please visit the following URL:
 
  There is no moderator, and, I think, messages that are held because
  they are too large are in permanent limbo. The only way to get through
  is with smaller messages, maybe paste it somewhere.
 
  Not only is there a moderator,
  the moderator is also subscribed to the list,
  and that moderator is ...
  ...
  ... {oh, i see, it's a drum-roll.}
  ...
  ... {seriously?}
  ...
  ... {oh, yeah, great use of everyone's time.}
  ...
  ... {great. i said drum-roll, and now i want sushi.}
  ...
  me!
 
  I also bumped up the archaically small message size limit a bit.
 
  Yay!
 
  --
  Aaron
 
  Well thanks Aaron for keeping track of all our email it must be tough
  to keep it all straight. :)
 
  But really thanks
  Vincent
 






Re: [Numpy-discussion] C vs. Fortran order -- misleading documentation?

2010-06-08 Thread David Goldsmith
On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz wrote:


   Correct me if I am wrong, but the paragraph
  
   Note to those used to IDL or Fortran memory order as it relates to
   indexing. Numpy uses C-order indexing. That means that the last index
   usually (see xxx for exceptions) represents the most rapidly changing
 memory
   location, unlike Fortran or IDL, where the first index represents the
 most
   rapidly changing location in memory. This difference represents a great
   potential for confusion.
  
   in
  
   http://docs.scipy.org/doc/numpy/user/basics.indexing.html
  
   is quite misleading, as C-order means that the last index changes
 rapidly,
   not the
   memory location.
  
  
  Any index can change rapidly, depending on whether it is in an inner loop or
  not. The important distinction between C and Fortran order is how indices
  translate to memory locations. The documentation seems correct to me,
  although it might make more sense to say the last index addresses a
  contiguous range of memory. Of course, with modern processors, actual
  physical memory can be mapped all over the place.
 
  Chuck

 To me, saying that the last index represents the most rapidly changing
 memory
 location means that if I change the last index, the memory location changes
 a lot, which is not true for C-order. So for C-order, suppose one scans
 the memory
 linearly (the desired scenario),  it is the last *index* that changes most
 rapidly.

 The inverted picture looks like this: For C-order,  changing the first
 index
 leads to the most rapid jump in *memory*.

 Still have the feeling the doc is very misleading at this important issue.

 Pavel


The distinction between your two perspectives is that one is using for-loop
traversal of indices, the other is using pointer-increment traversal of
memory; from each of your perspectives, your conclusions are correct, but
my inclination is that the pointer-increment traversal of memory perspective
is closer to the spirit of the docstring, no?
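The pointer-increment picture is easy to check directly in NumPy: the strides attribute reports how many bytes a unit step in each index moves through memory. A minimal sketch (the byte counts assume 8-byte int64 elements):

```python
import numpy as np

# The same 2x3 values laid out in C (row-major) and Fortran (column-major) order.
a_c = np.arange(6, dtype=np.int64).reshape(2, 3)
a_f = np.asfortranarray(a_c)

print(a_c.strides)  # (24, 8): the last index walks adjacent memory
print(a_f.strides)  # (8, 16): the first index walks adjacent memory
```

So in C order a unit step in the *last* index is the small 8-byte jump, while in Fortran order it is a unit step in the *first* index.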

DG


Re: [Numpy-discussion] Is there really a moderator that reviews posts?

2010-06-08 Thread David Goldsmith
On Tue, Jun 8, 2010 at 8:43 AM, John Hunter jdh2...@gmail.com wrote:

 On Tue, Jun 8, 2010 at 10:33 AM, Sebastian Haase seb.ha...@gmail.com
 wrote:
  On Tue, Jun 8, 2010 at 5:23 PM, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  On Tue, Jun 8, 2010 at 12:10 AM, Sebastian Haase seb.ha...@gmail.com
  wrote:
 
  I don't want to complain 
  But what is wrong with a limit of 40kB ? There are enough places where
  one could upload larger files for everyone interested...
 
  Not everyone knows about 'em, though - can you list some here, please.
  Thanks!
 
  I like http://drop.io - easy to use - up to 100MB
  also see http://yousendit.com (have not used it)
  for longish text there is http://pastebin.com (have not used it)



 For archival purposes (eg future googlers) having the data/images
 on-list is preferable to off-list,


Excellent point!

DG


 since drop.io and friends have less
 persistence in my experience.  I agree there should be limits and the
 threshold should be fairly small, but the occasional small image or
 dataset shouldn't be too onerous.  I admin the mpl list and upped the
 default to 200K, but we get more legitimate image submissions than
 most lists

 JDH






Re: [Numpy-discussion] C vs. Fortran order -- misleading documentation?

2010-06-08 Thread David Goldsmith
On Tue, Jun 8, 2010 at 12:05 PM, Anne Archibald
aarch...@physics.mcgill.ca wrote:

 On 8 June 2010 14:16, Eric Firing efir...@hawaii.edu wrote:
  On 06/08/2010 05:50 AM, Charles R Harris wrote:
 
 
  On Tue, Jun 8, 2010 at 9:39 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:
 
  On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz wrote:
 
 
 Correct me if I am wrong, but the paragraph

 Note to those used to IDL or Fortran memory order as it
  relates to
 indexing. Numpy uses C-order indexing. That means that the
  last index
 usually (see xxx for exceptions) represents the most
  rapidly changing memory
 location, unlike Fortran or IDL, where the first index
  represents the most
 rapidly changing location in memory. This difference
  represents a great
 potential for confusion.

 in

 http://docs.scipy.org/doc/numpy/user/basics.indexing.html

 is quite misleading, as C-order means that the last index
  changes rapidly,
 not the
 memory location.


    Any index can change rapidly, depending on whether it is in an
  inner loop or
not. The important distinction between C and Fortran order is
  how indices
translate to memory locations. The documentation seems
  correct to me,
although it might make more sense to say the last index
  addresses a
contiguous range of memory. Of course, with modern
  processors, actual
physical memory can be mapped all over the place.
   
Chuck
 
  To me, saying that the last index represents the most rapidly
  changing memory
  location means that if I change the last index, the memory
  location changes
  a lot, which is not true for C-order. So for C-order, suppose
  one scans the memory
  linearly (the desired scenario),  it is the last *index* that
  changes most rapidly.
 
  The inverted picture looks like this: For C-order,  changing the
  first index
  leads to the most rapid jump in *memory*.
 
  Still have the feeling the doc is very misleading at this
  important issue.
 
  Pavel
 
 
  The distinction between your two perspectives is that one is using
  for-loop traversal of indices, the other is using pointer-increment
  traversal of memory; from each of your perspectives, your
  conclusions are correct, but my inclination is that the
  pointer-increment traversal of memory perspective is closer to the
  spirit of the docstring, no?
 
 
  I think the confusion is in most rapidly changing memory location,
  which is kind of ambiguous because a change in the indices is always a
  change in memory location if one hasn't used index tricks and such. So
  from a time perspective it means nothing, while from a memory
  perspective the largest address changes come from the leftmost indices.
 
  Exactly.  Rate of change with respect to what, or as you do what?
 
  I suggest something like the following wording, if you don't mind the
  verbosity as a means of conjuring up an image (although putting in
  diagrams would make it even clearer--undoubtedly there are already good
  illustrations somewhere on the web):
 
  
 
  Note to those used to Matlab, IDL, or Fortran memory order as it relates
  to indexing. Numpy uses C-order indexing by default, although a numpy
  array can be designated as using Fortran order. [With C-order,
  sequential memory locations are accessed by incrementing the last
  index.]  For a two-dimensional array, think of it as a table.  With
  C-order indexing the table is stored as a series of rows, so that one is
  reading from left to right, incrementing the column (last) index, and
  jumping ahead in memory to the next row by incrementing the row (first)
  index. With Fortran order, the table is stored as a series of columns,
  so one reads memory sequentially from top to bottom, incrementing the
  first index, and jumps ahead in memory to the next column by
  incrementing the last index.
 
  One more difference to be aware of: numpy, like python and C, uses
  zero-based indexing; Matlab, [IDL???], and Fortran start from one.
 
  -
 
  If you want to keep it short, the key wording is in the sentence in
  brackets, and you can chop out the table illustration.
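The row/column reading picture proposed above can be sketched in a couple of lines of NumPy, raveling the same 2x3 "table" in both orders:

```python
import numpy as np

t = np.arange(6).reshape(2, 3)  # the "table" [[0, 1, 2], [3, 4, 5]]
print(t.ravel(order='C'))       # row by row:       [0 1 2 3 4 5]
print(t.ravel(order='F'))       # column by column: [0 3 1 4 2 5]
```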

 I'd just like to point out a few warnings to keep in mind while
 rewriting this section:

 Numpy arrays can have any configuration of memory strides, including
 some that are zero; C and Fortran contiguous arrays are simply those
 that have special arrangements
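The zero-stride case mentioned here can be demonstrated with NumPy's stride tricks; a sketch (a view built this way aliases the same memory repeatedly, so it should be treated as read-only):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(3)
# Zero stride along axis 0: all four "rows" are the same three elements.
b = as_strided(a, shape=(4, 3), strides=(0, a.strides[0]))
print(b.strides[0])            # 0
print(np.shares_memory(a, b))  # True
```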

Re: [Numpy-discussion] Introduction to Scott, Jason, and (possibly) others from Enthought

2010-05-31 Thread David Goldsmith
On Mon, May 31, 2010 at 3:54 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:



 On Mon, May 31, 2010 at 8:23 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Sun, May 30, 2010 at 5:53 PM, Ralf Gommers 
 ralf.gomm...@googlemail.com wrote:



 On Mon, May 31, 2010 at 2:06 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:


 Hey, I thought that was your job ;) Maybe the best thing is to start by
 making a branch for the removal and we can argue about who gets the short
 end of the stick later...

 Manage the release yes, do the heavy lifting not necessarily:) OK, I'll
 make the branch tonight.

 If the removal just involves the same changes that were made for 1.4.1
 then I can do it. But I'm not familiar with this code and if it's more work,
 I probably won't have time for it between the scipy 0.8.0 release and my
 'real job', like you call it.


 I think it may be a bit trickier because Travis made more changes. I think
 the relevant commits are r8113..r8115 and 8107..8108. After removing those
 we still need to remove the same stuff as we did for the 1.4.1 release. We
 will need some way of testing if the removal was successful.


 That still looks like it's not an insane amount of work.

 We probably want to make sure there is a documentation update also, maybe
 before making the branch.

 I checked and there's not too much to merge, most of the changes (
 http://docs.scipy.org/numpy/patch/) don't apply cleanly or at all. The
 latter because they're docs for constants, lists, etc.

 The biggest chunk of recent changes is for the polynomial and chebyshev
 docs, can you give your opinion on those Charles? The OK to Apply is set to
 True for all of them, but I'm not sure who did that. The patch generation
 won't work for many of those docs, so if you could check if any docs should
 be merged manually right now that would be useful.

 @ David G: there are some conflicts in docs you recently edited,
 http://docs.scipy.org/numpy/merge/. Would you mind resolving those?


Nothing to resolve: those docstrings are generated via a template and should
not be modified via the wiki at all - any differences between what's in the
wiki and svn are inadvertent and can be ignored.

DG



 Thanks,
 Ralf








Re: [Numpy-discussion] Finding Star Images on a Photo (Video chip) Plate?

2010-05-28 Thread David Goldsmith
On Fri, May 28, 2010 at 8:31 PM, Anne Archibald
aarch...@physics.mcgill.ca wrote:

 On 28 May 2010 23:59, Wayne Watson sierra_mtnv...@sbcglobal.net wrote:
  That opened a few avenues. After reading this, I went on a merry search
 with
  Google. I hit upon one interesting book, Handbook of CCD astronomy (Steve
 B.
  Howell), that discusses PSFs. A Amazon Look Inside suggests this is
 mostly
  about h/w. I tried to figure out how to reach the scipy mail list, but,
 as
  once a year ago, couldn't figure out the newsgroup GMANE connection. This
  search recalled to mind my Handbook of Astro Image  Processing by Berry
 and
  Burnell. It has a few pages on the PSF. In the ref section for that
  material(PSFs) there's another ref to Steve Howell that may be of use:
 Astro
  CCD Observing and Reduction Techniques, ASP, Pacific Conf. Series, vol.
 23,
  1992. There are further Berry and Burnell refs that may be applicable.

 Ah, sorry, I've been at an astro conference all week, I should have
 expanded that acronym. PSF is short for Point Spread Function; the
 idea is that with an optically good telescope, a point source anywhere
 in the field of view produces a blob of characteristic shape (often
 roughly a two-dimensional Gaussian) in your detector. The shape and
 size of this blob is set by your optics (including diffraction) and
 the atmospheric seeing. A star, being intrinsically a point source,
 produces a brighter or less bright version of this blob centered on
 the star's true position. To accurately measure the star's position
 (and often brightness) one usually fits a model blob to the noisy blob
 coming from the star of interest.

 I should note that this requires you to have more pixels than you
 need, so that even a point source is spread over many pixels;
 without this it's impossible to get subpixel positioning (among other
 things). Older consumer digital cameras often lacked this, since it
 was difficult to put enough pixels on a CCD, but fortunately megapixel
 mania has helpfully ensured that no matter how sharp the focus, every
 feature in your image is smeared over many pixels.
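The model-blob fit described above can be sketched as a least-squares fit of a two-dimensional Gaussian with scipy.optimize.curve_fit; the synthetic frame, noise level, and starting guess here are all illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    # Circular 2-D Gaussian blob, flattened for curve_fit.
    x, y = xy
    r2 = (x - x0)**2 + (y - y0)**2
    return (amp * np.exp(-r2 / (2 * sigma**2)) + offset).ravel()

# A noisy blob centered at a subpixel position (synthetic data).
y, x = np.mgrid[0:15, 0:15]
clean = gauss2d((x, y), 50.0, 7.3, 6.8, 1.6, 3.0).reshape(15, 15)
data = clean + np.random.default_rng(1).normal(0, 0.5, clean.shape)

p0 = (data.max(), 7.0, 7.0, 2.0, 0.0)  # rough starting guess
popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
print(popt[1], popt[2])  # recovered center, close to (7.3, 6.8)
```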

  I probed IRAF, SciPy, and Python, but it looks like a steep learning
 curve.
  The SciPy tutorial page looks like overkill. They have what looks like
 very
  large tutorials. Perhaps daunting. I did a quick shot at pyraf, a
 tutorial
  page, but note it has a prereq of IRAF. Another daunting path.

 Wait, you think SciPy has too many tutorials? Or that they're too
 detailed? Just pick a short, easy, or sketchy one then. Here's one
 that's all three:

  >>> import scipy.stats
  >>> scipy.stats.norm.cdf(3)
  0.9986501019683699

 That's the value of the CDF of a standard normal at three sigma, i.e.,
 one minus the false positive probability for a one-sided three sigma
 detection.

  Well, maybe a DIY approach will do the trick for me.

 I haven't used IRAF yet (though I have data sets waiting), and I do
 understand the urge to write your own code rather than understanding
 someone else's, but let me point out that reliably extracting source
 parameters from astronomical images is *hard* and requires cleverness,
 attention to countless special cases, troubleshooting, and experience.
 But it's an old problem, and astronomers have taken all of the needed
 things listed above and built them into IRAF. Do consider using it.

 Anne


Plus, if you're in the field of astronomy, knowing py/IRAF will be a *big*
gold star on your resume. :-)

DG

 On 5/28/2010 5:41 PM, Anne Archibald wrote:
 
  On 28 May 2010 21:09, Charles R Harris charlesr.har...@gmail.com
 wrote:
 
 
  On Fri, May 28, 2010 at 5:45 PM, Wayne Watson 
 sierra_mtnv...@sbcglobal.net
  wrote:
 
 
  Suppose I have a 640x480 pixel video chip and would like to find star
  images on it, possible planets and the moon. A possibility of noise
  exits, or bright pixels. Is there a known method for finding the
  centroids of these astro objects?
 
 
 
  You can threshold the image and then cluster the pixels in objects. I've
  done this on occasion using my own software, but I think there might be
  something in scipy/ndimage that does the same. Someone here will know.
 
 
  There are sort of two passes here - the first is to find all the
  stars, and the second is to fine down their positions, ideally to less
  than a pixel. For the former, thresholding and clumping is probably
  the way to go.
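A minimal sketch of that threshold-and-clump pass with scipy.ndimage (the frame, threshold, and minimum clump size below are made up for illustration):

```python
import numpy as np
from scipy import ndimage

# Synthetic 64x64 frame: two Gaussian "stars" on a noisy background.
y, x = np.mgrid[0:64, 0:64]
img = 100 * np.exp(-((x - 20)**2 + (y - 30)**2) / (2 * 2.0**2))
img += 60 * np.exp(-((x - 45)**2 + (y - 12)**2) / (2 * 2.0**2))
img += np.random.default_rng(0).normal(0, 1, img.shape)

mask = img > 10                      # threshold
labels, n = ndimage.label(mask)      # clump connected bright pixels
# Reject tiny clumps (noise / hot pixels), then centroid what's left.
sizes = ndimage.sum(mask, labels, range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 5]
print(ndimage.center_of_mass(img, labels, keep))
# two centroids, near (row, col) = (30, 20) and (12, 45)
```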
 
  For the latter I think a standard approach is PSF fitting - that is,
  you fit (say) a two-dimensional Gaussian to the pixels near your star.
  You'll fit for at least central (subpixel) position, probably radius,
  and maybe eccentricity and orientation. You might even fit for a more
  sophisticated PSF (doughnuts are natural for Schmidt-Cassegrain
  telescopes, or the diffraction pattern of your spider). Any spot whose
  best-fit PSF is just one pixel wide is noise or a cosmic ray hit or a
  hotpixel; any spot whose best-fit PSF is huge is a detector splodge or
  a planet or galaxy.
 
  All 

Re: [Numpy-discussion] Extending documentation to c code

2010-05-27 Thread David Goldsmith
On Thu, May 27, 2010 at 9:18 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:


 On Wed, May 26, 2010 at 8:14 AM, Pauli Virtanen p...@iki.fi wrote:

 Wed, 26 May 2010 07:15:08 -0600, Charles R Harris wrote:
  On Wed, May 26, 2010 at 2:59 AM, Pauli Virtanen p...@iki.fi wrote:
 
  Wed, 26 May 2010 06:57:27 +0900, David Cournapeau wrote: [clip:
  doxygen]
   It is yet another format to use inside C sources (I don't think
   doxygen supports rest), and I would rather have something that is
   similar, ideally integrated into sphinx. It also generates rather
   ugly doc by default,
 
  Anyway, we can probably nevertheless just agree on a readable
  plain-text/ rst format, and then just use doxygen to generate the docs,
  as a band-aid.
 
  http://github.com/pv/numpycdoc
 
  Neat. I didn't quite see the how how you connected the rst documentation
  and doxygen.

 I didn't :)

 But I just did: doing this it was actually a 10 min job since Doxygen
 accepts HTML -- now it parses the comments as RST and renders it properly
 as HTML in the Doxygen output. Of course getting links etc. to work would
 require more effort, but that's left as an exercise for someone else to
 finish.


 Why don't you go ahead and merge this. If someone wants to substitute
 something else for doxygen at some point, then that is still open, meanwhile
 we can get started on writing some cdocs. In particular, it would be nice if
 the folks doing the code refactoring also documented any new functions.


Thanks for being a voice for change! :-)


 We can also put together a numpycdoc standard to go with it. I think your
 idea of combining the standard numpy doc format with the usual c code
 comment style is the way to go.


And certainly at this early stage something is better than nothing.

DG


 Chuck








Re: [Numpy-discussion] Introduction to Scott, Jason, and (possibly) others from Enthought

2010-05-26 Thread David Goldsmith
On Tue, May 25, 2010 at 9:22 PM, Travis Oliphant oliph...@enthought.com wrote:


 On May 25, 2010, at 4:49 PM, David Goldsmith wrote:

 Travis: do you already have a place on the NumPy Development Wiki
 (http://wiki.numpy.org/) where you're (b)logging your design decisions?
 Seems like a good way for
 concerned parties to monitor your choices in more or less real time and thus
 provide comment in a timely fashion.


 This is a great idea of course and we will definitely post progess there.



Thanks; specific URL please, when available; plus, prominently feature (a
link to) the location on the Development Wiki home page, at the very least
(i.e., if not also on the NumPy home page).


 So far, the code has been reviewed,


I.e., the existing code, yes?


 and several functions identified for re-factoring.


Please enumerate on the Wiki Refactoring Log (name tentative - I don't
care what we call it, just so long as it exists, its purpose is clear, and
we all know where it is).

This is taking place in a github branch of numpy called numpy refactor.


This = the actual creation/modification of code, yes?

DG


 -Travis


 DG

 On Tue, May 25, 2010 at 2:19 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, May 25, 2010 at 2:54 PM, Travis Oliphant 
  oliph...@enthought.com wrote:


 On May 25, 2010, at 2:50 PM, Charles R Harris wrote:



 On Tue, May 25, 2010 at 1:37 PM, Travis Oliphant oliph...@enthought.com
  wrote:


 Hi everyone,

 There has been some talk about re-factoring NumPy to separate out the
 Python C-API layer and make NumPy closer to a C-library.   I know
 there are a few different ideas about what this means, and also that
 people are very busy.  I also know there is a NumPy 2.0 release that
 is in the works.

 I'm excited to let everyone know that we (at Enthought) have been able
 to find resources (about 3 man months) to work on this re-factoring
 project and Scott and Jason (both very experienced C and Python
 programmers) are actively pursuing it.My hope is that NumPy 2.0
 will contain this re-factoring (which should be finished just after
 SciPy 2010 --- where I'm going to organize a Sprint on NumPy which
 will include at least date-time improvements and re-factoring work).

 While we have specific goals for the re-factoring, we want this
 activity to be fully integrated with the NumPy community and Scott and
 Jason want to interact with the community as much as feasible as they
 suggest re-factoring changes (though they both have more experience
 with phone-conversations to resolve concerns than email chains and so
 some patience from everybody will be appreciated).

 Because Jason and Scott are new to this mailing list (but not new to
 NumPy),  I wanted to introduce them so they would feel more
 comfortable posting questions and people would have some context as to
 what they were trying to do.

 Scott and Jason are both very proficient and skilled programmers and I
 have full confidence in their abilities.   That said, we very much
 want the input of as many people as possible as we pursue the goal of
 grouping together more tightly the Python C-API interface layer to
 NumPy.

 I will be involved in some of the discussions, but am currently on a
 different project which has tight schedules and so I will only be able
 to provide limited mailing-list visibility.


 I think 2.0 would be a bit early for this. Is there any reason it
 couldn't be done in 2.1? What is the planned policy with regards to the
 visible interface for extensions? It would also be nice to have a rough idea
 of how the resulting code would be layered, i.e., what is the design for
 this re-factoring. Simply having a design would be a major step forward.


 The problem with doing it in 2.1 is that this re-factoring will require
 extensions to be re-built.   The visible interface to extensions will not
 change, but there will likely be ABI incompatibility.It seems prudent to
 do this in NumPy 2.0.   Perhaps we can also put in place the ABI-protecting
 indirection approaches that David C. was suggesting earlier.

 Some aspects of the design are still being fleshed out, but the basic
 idea is to separate out a core library that is as independent of the Python
 C-API as possible.There will likely be at least some dependency on the
 Python C-API (reference counting and error handling and possibly others)
 which any interface would have to provide in a very simple Python.h --
 equivalent, for example.

 Our purpose is to allow NumPy to be integrated with other languages or
 other frameworks systems without explicitly relying on CPython.There are
 a lot of questions as to how this will work, and so much of that is being
 worked out.   Part of the reason for this mail is to help ensure that as
 much of this discussion as possible takes place in public.


 Sounds good, but what if it doesn't get finished in a few months? I think
 we should get 2.0.0 out pronto, ideally it would already have been

Re: [Numpy-discussion] numpy and the Google App Engine

2010-05-26 Thread David Goldsmith
On Wed, May 26, 2010 at 10:37 AM, Christopher Hanley chan...@stsci.edu wrote:

 On Wed, May 26, 2010 at 12:49 PM, Dag Sverre Seljebotn
 da...@student.matnat.uio.no wrote:
  Christopher Hanley wrote:
  Greetings,
 
  Google provides a product called App Engine.  The description from
  their site follows,
 
  Google App Engine enables you to build and host web apps on the same
  systems that power Google applications.
  App Engine offers fast development and deployment; simple
  administration, with no need to worry about hardware,
  patches or backups; and effortless scalability. 
 
  You can deploy applications written in either Python or JAVA.  There
  are free and paid versions of the service.
 
  The Google App Engine would appear to be a powerful source of CPU
  cycles for scientific computing.  Unfortunately this is currently not
  the case because numpy is not one of the supported libraries.  The
  Python App Engine allows only the installation of user supplied pure
  Python code.
 
  I have recently returned from attending the Google I/O conference in
  San Francisco.  While there I inquired into the possibility of getting
  numpy added.  The basic response was that there doesn't appear to be
  much interest from the community given the amount of work it would
  take to vet and add numpy.
 
  Something to keep in mind: It's rather trivial to write code to
  intentionally crash the Python interpreter using pure Python code and
  NumPy (or overwrite data in it, run custom assembly code...in short,
  NumPy is a big gaping security hole in this context). This obviously
  can't go on in the AppEngine. So this probably involves a considerable
  amount of work in the NumPy source code base as well, it's not simply
  about verifying.
 

 Agreed.  Perhaps the recently discussed rework of the C internals will
 better allow a security audit of numpy.


My guess is that when the fur begins to fly, submitted tickets will
receive more attention, i.e., if you really want to see this done...file a
ticket.  (IMO, it's *never* wasted effort to do this: the worst that can
happen is that some - recorded - person will close it as will not do, and
if for some unforeseeable reason they're unwilling to include an explanation
as to why, well, you'll know where they live, so to speak.)

DG


 At that point perhaps the
 numpy community could more easily work with Google to fix security
 problems.


  --
  Dag Sverre
 



 --
 Christopher Hanley
 Senior Systems Software Engineer
 Space Telescope Science Institute
 3700 San Martin Drive
 Baltimore MD, 21218
 (410) 338-4338






Re: [Numpy-discussion] Introduction to Scott, Jason, and (possibly) others from Enthought

2010-05-25 Thread David Goldsmith
Travis: do you already have a place on the NumPy Development Wiki
(http://wiki.numpy.org/) where you're (b)logging your design
decisions?  Seems like a good way for
concerned parties to monitor your choices in more or less real time and thus
provide comment in a timely fashion.

DG

On Tue, May 25, 2010 at 2:19 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Tue, May 25, 2010 at 2:54 PM, Travis Oliphant 
  oliph...@enthought.com wrote:


 On May 25, 2010, at 2:50 PM, Charles R Harris wrote:



 On Tue, May 25, 2010 at 1:37 PM, Travis Oliphant 
 oliph...@enthought.com wrote:


 Hi everyone,

 There has been some talk about re-factoring NumPy to separate out the
 Python C-API layer and make NumPy closer to a C-library.   I know
 there are a few different ideas about what this means, and also that
 people are very busy.  I also know there is a NumPy 2.0 release that
 is in the works.

 I'm excited to let everyone know that we (at Enthought) have been able
 to find resources (about 3 man months) to work on this re-factoring
 project and Scott and Jason (both very experienced C and Python
 programmers) are actively pursuing it.My hope is that NumPy 2.0
 will contain this re-factoring (which should be finished just after
 SciPy 2010 --- where I'm going to organize a Sprint on NumPy which
 will include at least date-time improvements and re-factoring work).

 While we have specific goals for the re-factoring, we want this
 activity to be fully integrated with the NumPy community and Scott and
 Jason want to interact with the community as much as feasible as they
 suggest re-factoring changes (though they both have more experience
 with phone-conversations to resolve concerns than email chains and so
 some patience from everybody will be appreciated).

 Because Jason and Scott are new to this mailing list (but not new to
 NumPy),  I wanted to introduce them so they would feel more
 comfortable posting questions and people would have some context as to
 what they were trying to do.

 Scott and Jason are both very proficient and skilled programmers and I
 have full confidence in their abilities.   That said, we very much
 want the input of as many people as possible as we pursue the goal of
 grouping together more tightly the Python C-API interface layer to
 NumPy.

 I will be involved in some of the discussions, but am currently on a
 different project which has tight schedules and so I will only be able
 to provide limited mailing-list visibility.


 I think 2.0 would be a bit early for this. Is there any reason it couldn't
 be done in 2.1? What is the planned policy with regards to the visible
 interface for extensions? It would also be nice to have a rough idea of how
 the resulting code would be layered, i.e., what is the design for this
 re-factoring. Simply having a design would be a major step forward.


 The problem with doing it in 2.1 is that this re-factoring will require
 extensions to be re-built.   The visible interface to extensions will not
 change, but there will likely be ABI incompatibility.It seems prudent to
 do this in NumPy 2.0.   Perhaps we can also put in place the ABI-protecting
 indirection approaches that David C. was suggesting earlier.

 Some aspects of the design are still being fleshed out, but the basic idea
 is to separate out a core library that is as independent of the Python C-API
 as possible.There will likely be at least some dependency on the Python
 C-API (reference counting and error handling and possibly others) which any
 interface would have to provide in a very simple Python.h -- equivalent, for
 example.

 Our purpose is to allow NumPy to be integrated with other languages or
 other frameworks systems without explicitly relying on CPython.There are
 a lot of questions as to how this will work, and so much of that is being
 worked out.   Part of the reason for this mail is to help ensure that as
 much of this discussion as possible takes place in public.


 Sounds good, but what if it doesn't get finished in a few months? I think
 we should get 2.0.0 out pronto, ideally it would already have been released.
 I think a major refactoring like this proposal should get the 3.0.0 label.
 Admittedly that makes keeping a refactored branch current with fixes going
 into the trunk a hassle, but perhaps that can be worked around somewhat by
 clearly labeling what files will be touched in the refactoring and possibly
 rearranging the content of the existing files. This requires a game plan and
 a clear idea of the goal. Put simply, I think the proposed schedule is too
 ambitious and needs to be fleshed out.  This refactoring isn't going to be
 as straightforward as the python3k port because a lot of design decisions
 need to be made along the way.

 Chuck






Re: [Numpy-discussion] Extending documentation to c code

2010-05-24 Thread David Goldsmith
On Mon, May 24, 2010 at 11:01 AM, Charles R Harris 
charlesr.har...@gmail.com wrote:

 Hi All,

 I'm wondering if we could extend the current documentation format to the c
 source code. The string blocks would be implemented something like

 /**NpyDoc
 The Answer.

 Answer the Ultimate Question of Life, the Universe, and Everything.

 Parameters
 --
 We don't need no stinkin' parameters.

 Notes
 -
 The run time of this routine may be excessive.

 
 */
 int
 answer_ultimate_question(void)
 {
 return 42;
 }

 and the source scanned to generate the usual documentation. Thoughts?

 Chuck


IMO it would be necessary to make such doc have the same status w.r.t. the
Wiki as the Python source; how much tweaking of pydocweb would that require?
(Pauli is already over-committed in that regard; Joe, Perry, and I are
taking steps to try to alleviate this, but nothing is close to materializing
yet.)  I know that as far as Joe and I are concerned, getting pydocweb to
support a dual review process is a much higher, longer-standing priority.

Also, quoting from the docstring standard: "An optional section for
examples...while optional, this section is very strongly encouraged."
(Personally, I think this section should be required, not optional, for
functions and methods which require their own docstrings.)  But requiring
docwriters to supply working (i.e., compilable, linkable, runnable) C code
examples (which would appear to be necessary, because the coders appear to be
loath to provide their docstrings with examples) might be asking too much
(since we try to keep the doc-writing effort open to persons at least
comfortable w/ Python, though not necessarily w/ C).

Unless and until these concerns can be realistically and successfully
addressed, I'm a strong -1.

DG
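A minimal sketch of the kind of scanner Chuck's proposal implies, assuming the `/**NpyDoc ... */` marker from his example (the marker, regex, and `extract_npydocs` helper are all hypothetical, not an existing tool):

```python
import re

# Hypothetical scanner for the proposed /**NpyDoc ... */ blocks: pair each
# documentation block with the name of the function that follows it.
NPYDOC_RE = re.compile(
    r"/\*\*NpyDoc\s*(?P<doc>.*?)\*/\s*"    # the documentation body
    r"(?:\w+[\s\*]+)?(?P<name>\w+)\s*\(",  # optional return type, then name
    re.DOTALL,
)

def extract_npydocs(source):
    """Return (function_name, doc_text) pairs found in C source text."""
    return [(m.group("name"), m.group("doc").strip())
            for m in NPYDOC_RE.finditer(source)]

c_source = """
/**NpyDoc
The Answer.

Answer the Ultimate Question of Life, the Universe, and Everything.
*/
int
answer_ultimate_question(void)
{
    return 42;
}
"""

for name, doc in extract_npydocs(c_source):
    print(name)                 # answer_ultimate_question
    print(doc.splitlines()[0])  # The Answer.
```

A real tool would want a proper C parser (as David C. notes below), but a regex pass like this is enough to feed the existing docstring machinery.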


Re: [Numpy-discussion] Extending documentation to c code

2010-05-24 Thread David Goldsmith
On Mon, May 24, 2010 at 4:59 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Mon, May 24, 2010 at 2:11 PM, David Goldsmith 
 d.l.goldsm...@gmail.com wrote:

 On Mon, May 24, 2010 at 11:01 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Hi All,

 I'm wondering if we could extend the current documentation format to the
 c source code. The string blocks would be implemented something like

 /**NpyDoc
 The Answer.

 Answer the Ultimate Question of Life, the Universe, and Everything.

 Parameters
 --
 We don't need no stinkin' parameters.

 Notes
 -
 The run time of this routine may be excessive.

 
 */
 int
 answer_ultimate_question(void)
 {
 return 42;
 }

 and the source scanned to generate the usual documentation. Thoughts?

 Chuck


 IMO it would be necessary to make such doc have the same status w.r.t. the
 Wiki as the Python source; how much tweaking of pydocweb would that require
 (Pauli is already over-committed in that regard; Joe, Perry, and I are
 taking steps to try to alleviate this, but nothing is close to materializing
 yet).  I know that as far as Joe and I are concerned, getting pydocweb to
 support a dual review process is a much higher, longer-standing priority.

 Also, quoting from the docstring standard: An optional section for
 examples...while optional, this section is very strongly encouraged.
 (Personally, I think this section should be required, not optional, for
 functions, and methods which require their own docstrings.)  But requiring
 docwriters to supply working (i.e., compilable, linkable, runable) c code
 examples (which would appear to be necessary because the coders appear to be
 loath to provide their docstrings with examples) might be asking too much
 (since we try to keep the doc writing effort open to persons at least
 comfortable w/ Python, though not necessarily w/ c).

 Unless and until these concerns can be realistically and successfully
 addressed, I'm a strong -1.


 I'm not interested in having this part of the standard user documentation
 since the  c functions are mostly invisible to the user. What I want is
 documentation for maintainers/developers of the c code. The c code is
 essentially undocumented and that makes it difficult to work with,
 especially for new people. At one time in the past I suggested using doxygen
 but that didn't seem to arouse much interest. I've also tried generating a
 call graph but only managed to crash the system... Anyway, it needs to be
 done at some point and I'm looking for suggestions.

 Chuck

Not checking in un- or poorly-documented new code would be a good start.

DG


Re: [Numpy-discussion] Extending documentation to c code

2010-05-24 Thread David Goldsmith
On Mon, May 24, 2010 at 8:06 PM, David Cournapeau courn...@gmail.com wrote:

 On Tue, May 25, 2010 at 3:01 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:
  Hi All,
 
  I'm wondering if we could extend the current documentation format to the
 c
  source code. The string blocks would be implemented something like
 
  /**NpyDoc
  The Answer.
 
  Answer the Ultimate Question of Life, the Universe, and Everything.
 
  Parameters
  --
  We don't need no stinkin' parameters.
 
  Notes
  -
  The run time of this routine may be excessive.
 
  
  */
  int
  answer_ultimate_question(void)
  {
  return 42;
  }
 
  and the source scanned to generate the usual documentation. Thoughts?

 I have thought about this for quite some time, but it is not easy.
 Docstrings are useful because of cross references, etc... and
 documentation for compiled code should contain signature extraction.
 For those reasons, I think a doc tool would need to parse C, which
 makes the problem that much harder.

 Last time I looked, synopsis was interesting, but it does not seem to
 have caught up. Synopsis was interesting because it was modular,
 scriptable in python, and supported rest as a markup language within C
 code. OTOH, I hope that clang will change the game here - it gives a
 modular, robust C (and soon C++) parser, and having a documentation
 tool written from that is just a question of time I think.

 Maybe as a first step, something that could extract function signature
 would be enough, and writing this should not take too much time
 (Sebastien B wrote something which could be a start, to autogenerate
 cython code from header:http://bitbucket.org/binet/cylon).

 David


This does sound promising/a good first step.  But it doesn't really answer
Charles' question about a standard (which would be useful to have to help
guide doc editor design).  My proposal is that we start w/ what we have -
the standard for our Python code - and figure out what makes sense to keep,
add, change, and throw out.  If we don't yet have an SEP process, perhaps
this need could serve as a first test case; obviously, if we already do have
an SEP, then we should follow that.

DG


Re: [Numpy-discussion] Runtime error in numpy.polyfit

2010-05-19 Thread David Goldsmith
Charles H.: is this happening because he's calling the old version of
polyfit?

William: try using numpy.polynomial.polyfit instead, see if that works.

DG

On Wed, May 19, 2010 at 11:03 AM, William Carithers wccarith...@lbl.gov wrote:

 I'm trying to do a simple 2nd degree polynomial fit to two arrays of 5
 entries. I get a runtime error:
 RuntimeError: more argument specifiers than keyword list entries (remaining
 format:'|:calc_lwork.gelss')  in the lstsq module inside numpy.polyfit.

 Here's the code snippet:
 def findPeak(self, ydex, xdex):
     # take a vertical slice
     vslice = []
     for i in range(-1, 10, 1):
         vslice.append(arcImage[ydex+i][xdex])
     vdex = n.where(vslice == max(vslice))
     ymax = ydex - 1 + vdex[0][0]
     # approximate gaussian fit by parabolic fit to logs
     yvalues = n.array([ymax-2, ymax-1, ymax, ymax+1, ymax+2])
     svalues = n.array([arcImage[ymax-2][xdex], arcImage[ymax-1][xdex],
                        arcImage[ymax][xdex], arcImage[ymax+1][xdex],
                        arcImage[ymax+2][xdex]])
     avalues = n.log(svalues)
     ypoly = n.polyfit(yvalues, avalues, 2)

 And the traceback:
   File "/Users/williamcarithers/BOSS/src/calibrationModel.py", line 345, in findPeak
     ypoly = n.polyfit(yvalues, avalues, 2)
   File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/lib/polynomial.py", line 503, in polyfit
     c, resids, rank, s = _lstsq(v, y, rcond)
   File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/lib/polynomial.py", line 46, in _lstsq
     return lstsq(X, y, rcond)
   File "/Library/Python/2.6/site-packages/scipy-0.7.1-py2.6-macosx-10.6-universal.egg/scipy/linalg/basic.py", line 545, in lstsq
     lwork = calc_lwork.gelss(gelss.prefix, m, n, nrhs)[1]
 RuntimeError: more argument specifiers than keyword list entries (remaining
 format:'|:calc_lwork.gelss')

 This is such a simple application of polyfit and the error occurs in the
 guts of lstsq, so I'm completely stumped. Any help would be greatly
 appreciated.

 Thanks,
 Bill Carithers






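For reference, the log-parabola peak estimate in Bill's findPeak can be sketched on synthetic data (the Gaussian profile, `center`, and `width` below are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for the arcImage slice: a Gaussian line profile whose
# true sub-pixel center we try to recover.
center, width = 866.3, 2.0
y = np.arange(860.0, 873.0)
profile = 100.0 * np.exp(-0.5 * ((y - center) / width) ** 2)

# As in findPeak: take the five samples around the discrete maximum and fit
# a parabola to their logs (the log of a Gaussian is exactly quadratic).
imax = int(np.argmax(profile))
yv = y[imax - 2: imax + 3].astype(np.float64)  # explicit cast avoids dtype surprises
av = np.log(profile[imax - 2: imax + 3])
c2, c1, _c0 = np.polyfit(yv, av, 2)

# The parabola's vertex gives the sub-pixel peak position.
peak = -c1 / (2.0 * c2)
print(round(peak, 3))  # recovers the true center, 866.3
```

Casting the abscissae to float64 before calling polyfit also sidesteps the dtype questions raised later in this thread.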


Re: [Numpy-discussion] Runtime error in numpy.polyfit

2010-05-19 Thread David Goldsmith
The polynomial module definitely postdates 1.2.1; I echo Josef's
recommendation that you update if possible.

On Wed, May 19, 2010 at 1:24 PM, William Carithers wccarith...@lbl.gov wrote:

 Hi Josef,
  I didn't know numpy will use the scipy version of linalg for this.


Right, that's what told me he must be using an old (and to-be-deprecated)
version of polyfit; IIRC, the new polynomial module is all-numpy, right
Charles?

DG
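With a current NumPy, both interfaces exist side by side, and the numpy.polynomial package never touches scipy.linalg; a quick sketch (values chosen so the quadratic fit is exact):

```python
import numpy as np

# Legacy np.polyfit vs. the pure-NumPy np.polynomial package.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x**2 - 3.0 * x + 1.0

legacy = np.polyfit(x, y, 2)               # highest-degree coefficient first
p = np.polynomial.Polynomial.fit(x, y, 2)  # fitted on a mapped domain

# convert() maps the fitted series back to the standard domain;
# its coefficients run lowest degree first.
print(np.round(legacy, 6))
print(np.round(p.convert().coef, 6))
```

Note the coefficient-ordering difference between the two interfaces; it is a common source of confusion when migrating.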


Re: [Numpy-discussion] Runtime error in numpy.polyfit

2010-05-19 Thread David Goldsmith
On Wed, May 19, 2010 at 3:50 PM, William Carithers wccarith...@lbl.gov wrote:

 Hi David and Josef,

 OK, I updated to numpy-1.4.1 and scipy-0.7.2 and this problem went away.
 Thanks for your help.

 BTW, trying to upgrade using the .dmg files from Sourceforge didn't work.
 It
 kept saying that it needed System Python 2.6 even though Python 2.6 is
 already installed. In fact, it was packaged with the OSX 10.6 upgrade. I
 had
 to download the tarballs and install from source.


Yeah, Chris Barker typically recommends that Mac users get the dmgs from
here:

http://wiki.python.org/moin/MacPython/Packages

but they don't appear to have anything for Python 2.6 yet. :-(

Chris, any ideas?

DG


 Cheers,
 Bill


 On 5/19/10 1:35 PM, josef.p...@gmail.com josef.p...@gmail.com wrote:

  On Wed, May 19, 2010 at 4:24 PM, William Carithers wccarith...@lbl.gov
  wrote:
  Hi Josef,
 
  I did the same test, namely opening a new window and plugging in the
  printout values by hand and polyfit worked just fine. Here's the
 terminal
  output:
 
  import numpy as n
  y = n.array([ 864.,  865.,  866.,  867.,  868.])
  a = n.array([ 5.24860191,  6.0217514 ,  6.11434555,  6.09198856,
  5.73753977])
 
  here you dropped the  ,dtype=np.float32)  from your previous numbers
 
  ypoly = n.polyfit(y,a,2)
  ypoly
  array([ -1.69296264e-01,   2.93325941e+02,  -1.27049334e+05])
 
  I wonder if the step of printing plus cut and paste is doing some kind
 of
  implicit type conversion. Maybe the original problem has to do with data
  types? In the original code arcImage is integer data so the avalues
 array is
  constructed from
   avalues = n.log(n.array([...list of integers...]))
 
  Should I be doing some kind of casting first?
 
  That's what I thought when I saw your dtype=np.float32
  but from your repr print it looks like the y array is float64, and
  only the second is non-standard
 
  You could try to cast inside your function to float (float64)
 
  However, I think this is only a short-term solution, my guess is that
  your exception is a symptom for more serious/pervasive problems.
 
  Also, I don't know why in your example (if I interpret it correctly)
  np.log results in float32
 
  np.log(np.array([5,2],int)).dtype
  dtype('float64')
 
  Josef
 
 
  Thanks,
  Bill
 
 
 
 
 
  On 5/19/10 1:09 PM, josef.p...@gmail.com josef.p...@gmail.com
 wrote:
 
  On Wed, May 19, 2010 at 3:51 PM, William Carithers 
 wccarith...@lbl.gov
  wrote:
  Thanks David and Josef. Replies interspersed below.
 
 
  On 5/19/10 12:24 PM, josef.p...@gmail.com josef.p...@gmail.com
 wrote:
 
  On Wed, May 19, 2010 at 3:18 PM, David Goldsmith
  d.l.goldsm...@gmail.com wrote:
  Charles H.: is this happening because he's calling the old version
 of
  polyfit?
 
  William: try using numpy.polynomial.polyfit instead, see if that
 works.
 
  It says  ypoly = n.polynomial.polyfit(yvalues, avalues, 2)
  AttributeError: 'module' object has no attribute 'polynomial'
 
  Is this because I'm using a relatively old (numpy-1.2.1) version?
 
  DG
 
  On Wed, May 19, 2010 at 11:03 AM, William Carithers 
 wccarith...@lbl.gov
  wrote:
 
  I'm trying to do a simple 2nd degree polynomial fit to two arrays
 of 5
  entries. I get a runtime error:
  RuntimeError: more argument specifiers than keyword list entries
  (remaining
  format:'|:calc_lwork.gelss')  in the lstsq module inside
 numpy.polyfit.
 
  Here's the code snippet:
  def findPeak(self, ydex, xdex):
 # take a vertical slice
 vslice = []
 for i in range(-1,10,1) :
 vslice.append(arcImage[ydex+i][xdex])
 vdex = n.where(vslice == max(vslice))
 ymax = ydex -1 + vdex[0][0]
 # approximate gaussian fit by parabolic fit to logs
 yvalues = n.array([ymax-2, ymax-1, ymax, ymax+1, ymax+2])
 
 
 
  svalues=n.array([arcImage[ymax-2][xdex],arcImage[ymax-1][xdex],arcImage[ymax][xdex],arcImage[ymax+1][xdex], arcImage[ymax+2][xdex]])
 avalues = n.log(svalues)
 ypoly = n.polyfit(yvalues, avalues, 2)
 
  And the traceback:
  File /Users/williamcarithers/BOSS/src/calibrationModel.py, line 345, in findPeak
 ypoly = n.polyfit(yvalues, avalues, 2)
   File
 
 
  /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/lib/polynomial.py, line 503, in polyfit
 c, resids, rank, s = _lstsq(v, y, rcond)
   File
 
 
  /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/lib/polynomial.py, line 46, in _lstsq
 return lstsq(X, y, rcond)
   File
 
 
  /Library/Python/2.6/site-packages/scipy-0.7.1-py2.6-macosx-10.6-universal.egg/scipy/linalg/basic.py, line 545, in lstsq
 lwork = calc_lwork.gelss(gelss.prefix,m,n,nrhs)[1]
  RuntimeError: more argument specifiers than keyword list entries
  (remaining
  format:'|:calc_lwork.gelss')
 
  This is such a simple application of polyfit and the error occurs
 in the
  guts of lstsq, so I'm completely stumped. Any help would

Re: [Numpy-discussion] pareto docstring

2010-05-11 Thread David Goldsmith
On Tue, May 11, 2010 at 12:23 AM, T J tjhn...@gmail.com wrote:

 On Mon, May 10, 2010 at 8:37 PM,  josef.p...@gmail.com wrote:
 
  I went googling and found a new interpretation
 
  numpy.random.pareto is actually the Lomax distribution also known as
 Pareto 2,
  Pareto (II) or Pareto Second Kind distribution
 

 Great!

 
  So, from this it looks like numpy.random does not have a Pareto
  distribution, only Lomax, and the confusion came maybe because
  somewhere in the history the (II) (second kind) got dropped in the
  explanations.
 
  and actually it is in scipy.stats.distributions, but without rvs
 
  # LOMAX (Pareto of the second kind.)
  #  Special case of Pareto of the first kind (location=-1.0)
 

 I understand the point with this last comment, but I think it can be
 confusing in that the Pareto (of the first kind) has no location
 parameter and people might think you are referring to the Generalized
 Pareto distribution.  I think its much clearer to say:

  # Special case of the Pareto of the first kind, but shifted to the
  left by 1:  x --> x + 1

 
 
   2) Modify numpy/random/mtrand/distributions.c in the following way:
 
  double rk_pareto(rk_state *state, double a)
  {
      /* return exp(rk_standard_exponential(state)/a) - 1; */
      return pow(rk_double(state), -1.0 / a);  /* i.e. 1/U^(1/a) */
  }
 
  I'm not an expert on random number generator, but using the uniform
 distribution
  as in
 
 http://en.wikipedia.org/wiki/Pareto_distribution#Generating_a_random_sample_from_Pareto_distribution
  and your Devroye reference seems better than one based on the relationship to
  the exponential distribution
 
 http://en.wikipedia.org/wiki/Pareto_distribution#Relation_to_the_exponential_distribution
 
 

 Correct.  The exp relationship was for the existing implementation
 (which corresponds to the Lomax).  I commented that line out and just
 used 1/U^(1/a).


  I think without changing the source we can rewrite the docstring that
  this is Lomax (or
  Pareto of the Second Kind), so that at least the documentation is less
  misleading.
 
  But I find calling it Pareto very confusing, and I'm not the only one
 anymore,
  (and I don't know if anyone has used it assuming it is classical Pareto),
  so my preferred solution would be
 
  * rename numpy.random.pareto to numpy.random.lomax
  * and create a real (classical, first kind) pareto distribution (even
  though it's just
   adding or subtracting 1, ones we know it)
 

 I personally have used numpy.random.pareto thinking it was the Pareto
 distribution of the first kind---which led to this post in the first
 place.  So, I'm in strong agreement.  While doing this, perhaps we
 should increase functionality and allow users the ability to specify
 the scale of the distribution (instead of just the shape)?

 I can make a ticket for this and give a stab at creating the necessary
 patch.


 
  What's the backwards compatibility policy with very confusing names in
 numpy?
 

 It seems reasonable that we might have to follow the deprecation
 route, but I'd be happier with a faster fix.

 1.5
  - Provide numpy.random.lomax.  Make numpy.random.pareto raise a
 DeprecationWarning and then call lomax.
 2.0 (if there is no 1.6)
  - Make numpy.random.pareto behave as Pareto distribution of 1st kind.

 Immediately though, we can modify the docstring that is currently in
 there to make the situation clear, instructing users how they can
 generate samples from the standard Pareto distribution.  This is the
 first patch I'll submit.  Perhaps it is better to only change the
 docstring and then save all changes in functionality for 2.0.
 Deferring to others on this one...


Elsewhere in the mailing list, it has been stated that our policy is to
document desired/intended behavior when such differs from actual (current)
behavior.  This can be done in advance of a code fix implementing the
desired behavior, but we have discouraged (to the point of saying "don't do
it") documenting current behavior when it is known that this should (and
presumably will) be changed.

DG
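A short sketch of the distinction under discussion, using a modern NumPy (seed and sample size are arbitrary choices): np.random.pareto draws from the Lomax (Pareto II) distribution on [0, inf); adding 1 recovers the classical Pareto on [1, inf), which is also what the inverse-CDF draw 1/U^(1/a) produces.

```python
import numpy as np

# Sketch of the Pareto/Lomax distinction, using the modern Generator API.
rng = np.random.default_rng(0)
a, n = 3.0, 200_000

lomax = rng.pareto(a, n)   # support [0, inf): Lomax / Pareto II
classical = 1.0 + lomax    # shifted: classical Pareto, support [1, inf)
inverse_cdf = 1.0 / (1.0 - rng.uniform(size=n)) ** (1.0 / a)  # direct draw

# For a > 1 the classical Pareto (x_m = 1) has mean a/(a - 1) = 1.5 here;
# both constructions should agree closely.
print(classical.min() >= 1.0, inverse_cdf.min() >= 1.0)
print(round(classical.mean(), 2), round(inverse_cdf.mean(), 2))
```

This is exactly Josef's `rvs_pareto = 1 + numpy.random.pareto(a, size)` workaround, checked against the direct inverse-CDF draw.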








Re: [Numpy-discussion] pareto docstring

2010-05-10 Thread David Goldsmith
On Mon, May 10, 2010 at 11:14 AM, T J tjhn...@gmail.com wrote:

 On Sun, May 9, 2010 at 4:49 AM,  josef.p...@gmail.com wrote:
 
  I think this is the same point, I was trying to make last year.
 
  Instead of renormalizing, my conclusion was the following,
  (copied from the mailinglist August last year)
 
  
  my conclusion:
  -
  What numpy.random.pareto actually produces, are random numbers from a
  pareto distribution with lower bound m=1, but location parameter
  loc=-1, that shifts the distribution to the left.
 
  To actually get useful  random numbers (that are correct in the usual
  usage http://en.wikipedia.org/wiki/Pareto_distribution), we need to
  add 1 to them.
  stats.distributions doesn't use mtrand.pareto
 
  rvs_pareto = 1 + numpy.random.pareto(a, size)
 
  
 
  I still have to work though the math of your argument, but maybe we
  can come to an agreement how the docstrings (or the function) should
  be changed, and what numpy.random.pareto really means.
 
  Josef
  (grateful, that there are another set of eyes on this)
 
 


 Yes, I think my renormalizing statement is incorrect as it is really
 just sampling from a different pdf altogether.  See the following image:

 http://www.dumpt.com/img/viewer.php?file=q9tfk7ehxsw865vn067c.png

 It plots histograms of the various implementations against the pdfs.
 Summarizing:

 The NumPy implementation is based on (Devroye p. 262).  The pdf listed
 there is:

a / (1+x)^(a+1)

 This differs from the standard Pareto pdf:

a / x^(a+1)

 It also differs from the pdf of the generalized Pareto distribution,
 with scale=1 and location=0:

(1 + a x)^(-1/a - 1)

 And it also differs from the pdf of the generalized Pareto
 distribution with scale=1 and location=-1  or location=1.

 random.paretovariate and scipy.stats.pareto sample from the standard
 Pareto, and this is the desired behavior, IMO.  Its true that 1 +
 np.random.pareto provides the fix, but I think we're better off
 changing the underlying implementation.  Devroye has a more recent
 paper:

  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.8760

 which states the Pareto distribution in the standard way.  So I think
 it is safe to make this change.  Backwards compatibility might be the
 only argument for not making this change.  So here is my proposal:

  1) Remove every mention of the generalized Pareto distribution from
 the docstring.  As far as I can see, the generalized Pareto
 distribution does not reduce to the standard Pareto at all.  We can
 still mention scipy.stats.distributions.genpareto and
 scipy.stats.distributions.pareto.  The former is something different
 and the latter will (now) be equivalent to the NumPy function.

  2) Modify numpy/random/mtrand/distributions.c in the following way:

 double rk_pareto(rk_state *state, double a)
 {
     /* return exp(rk_standard_exponential(state)/a) - 1; */
     return pow(rk_double(state), -1.0 / a);  /* i.e. 1/U^(1/a) */
 }

 Does this sound good?
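For reference, the inverse-CDF identity behind the proposed rk_pareto can be written as:

```latex
% Classical Pareto with x_m = 1: CDF F(x) = 1 - x^{-a} for x >= 1.
% With U uniform on (0, 1), solving U = F(X) gives X = (1 - U)^{-1/a};
% since 1 - U is also uniform, X = U^{-1/a} works as well.
\[
  X = U^{-1/a} \sim \operatorname{Pareto}(a), \qquad
  Y = X - 1 \ \text{has density}\ \frac{a}{(1+y)^{a+1}},\ y \ge 0,
\]
% i.e. Y is Lomax -- the distribution the current implementation samples.
```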


Whatever the community decides, don't forget to please go through the formal
procedure of submitting a bug ticket so all of this is recorded in the
right way in the right place.  Thanks!

DG


Re: [Numpy-discussion] Adding an ndarray.dot method

2010-05-04 Thread David Goldsmith
On Thu, Apr 29, 2010 at 12:30 PM, Pauli Virtanen p...@iki.fi wrote:

 Wed, 28 Apr 2010 14:12:07 -0400, Alan G Isaac wrote:
 [clip]
  Here is a related ticket that proposes a more explicit alternative:
  adding a ``dot`` method to ndarray.
  http://projects.scipy.org/numpy/ticket/1456

 I kind of like this idea. Simple, obvious, and leads
 to clear code:

a.dot(b).dot(c)

 or in another multiplication order,

a.dot(b.dot(c))

 And here's an implementation:


 http://github.com/pv/numpy-work/commit/414429ce0bb0c4b7e780c4078c5ff71c113050b6

 I think I'm going to apply this, unless someone complains,


I have a big one: NO DOCSTRING!!!  We're just perpetuating the errors of the
past, people!  Very discouraging!

DG
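For what it's worth, a short sketch of what the proposed method buys (shapes chosen for illustration):

```python
import numpy as np

# The proposed ndarray.dot method reads left to right; both spellings
# compute the same chained product.
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
c = np.arange(8.0).reshape(4, 2)

method_chain = a.dot(b).dot(c)          # ticket's proposed spelling
nested_calls = np.dot(np.dot(a, b), c)  # equivalent nested form

print(np.array_equal(method_chain, nested_calls))  # True
print(method_chain.shape)                          # (2, 2)
```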


 as I
 don't see any downsides (except maybe adding one more to the
 huge list of methods ndarray already has).

 Cheers,
 Pauli







Re: [Numpy-discussion] Math Library

2010-04-05 Thread David Goldsmith
On Mon, Apr 5, 2010 at 8:40 AM, Charles R Harris
charlesr.har...@gmail.com wrote:

 Hi All,

 David Cournapeau has mentioned that he would like to have a numpy math
 library that would supply missing functions and I'm wondering how we should
 organise the source code. Should we put a mathlib directory in
 numpy/core/src? Inside that directory would be functions for
 single/double/extended/quad precision. Should they be in separate
 directories? What about complex versions? I'm thinking that a good start
 would be to borrow the msun functions for doubles. We should also make a
 list of what functions would go into the library and what interface the
 complex functions present.

 Thoughts?


For starters: are you talking about things like Airy, Bessel, Gamma, stuff like that?

DG


 Chuck







Re: [Numpy-discussion] Math Library

2010-04-05 Thread David Goldsmith
On Mon, Apr 5, 2010 at 9:50 AM, Charles R Harris
charlesr.har...@gmail.com wrote:

 On Mon, Apr 5, 2010 at 10:43 AM, Robert Kern robert.k...@gmail.com wrote:

 On Mon, Apr 5, 2010 at 11:11, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Mon, Apr 5, 2010 at 10:00 AM, Robert Kern robert.k...@gmail.com
 wrote:
 
  On Mon, Apr 5, 2010 at 10:56, Charles R Harris
  charlesr.har...@gmail.com wrote:
  
  
   On Mon, Apr 5, 2010 at 9:43 AM, Robert Kern robert.k...@gmail.com
   wrote:
  
   On Mon, Apr 5, 2010 at 10:40, Charles R Harris
   charlesr.har...@gmail.com wrote:
Hi All,
   
David Cournapeau has mentioned that he would like to have a numpy
math
library that would supply missing functions and I'm wondering how
 we
should
organise the source code. Should we put a mathlib directory in
numpy/core/src?
  
   David already did this: numpy/core/src/npymath/
  
  
   Yeah, but there isn't much low level stuff there and I don't want to
   toss a
   lot of real numerical code into it.
 
  Who cares? I don't.
 
  I care. I want the code to be organized.

 Then do it when there is code and we can see what needs to be organized.


 I am writing code and I want to decide up front where to put it. I know
 where you stand, so you need say no more. I'm waiting to see if other folks
 have an opinion.

 Chuck


Will you be using it right away?  If so, organize it locally however you
think it'll work best, work w/ it a little while, and see if you guessed
right or find yourself wanting to reorganize; then provide it to us w/ the
benefit of your experience. :-)

DG


Re: [Numpy-discussion] Asymmetry in Chebyshev.deriv v. Chebyshev.integ

2010-04-02 Thread David Goldsmith
On Thu, Apr 1, 2010 at 6:42 PM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 >>> np.version.version
 '1.4.0'
 >>> c = np.polynomial.chebyshev.Chebyshev(1)
 >>> c.deriv(1.0)
 Chebyshev([ 0.], [-1.,  1.])
 >>> c.integ(1.0)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<string>", line 441, in integ
   File "C:\Python26\lib\site-packages\numpy\polynomial\chebyshev.py", line 739, in chebint
     k = list(k) + [0]*(m - len(k))
 TypeError: can't multiply sequence by non-int of type 'float'
 >>> c.integ(1)
 Chebyshev([ 0.,  1.], [-1.,  1.])

 i.e., deriv accepts int_like input but integ doesn't.

 Given the way I just embarrassed myself on the scipy-dev list :-(, I'm
 confirming this is a bug before I file a ticket.


Also:

>>> c.deriv(0)
Chebyshev([ 1.], [-1.,  1.])
>>> c.integ(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 441, in integ
  File "C:\Python26\lib\site-packages\numpy\polynomial\chebyshev.py", line 729, in chebint
    raise ValueError, "The order of integration must be positive"
ValueError: The order of integration must be positive

i.e., deriv supports zero-order differentiation, but integ doesn't support
zero-order integration (though I acknowledge that this may be a feature, not
a bug).
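For illustration, here is a minimal sketch of how an order argument could be normalized so that deriv and integ treat "int_like" input the same way. This is a hypothetical helper, not the actual numpy code; the name `as_int_order` and its exact rules are assumptions:

```python
import numpy as np

def as_int_order(m):
    """Hypothetical normalizer: accept ints, integral floats, and
    one-element sequences/arrays; return a non-negative Python int."""
    arr = np.squeeze(np.asarray(m))  # unwrap one-element lists/arrays
    if arr.ndim != 0:
        raise ValueError("order must be a scalar or one-element sequence")
    val = arr.item()
    if float(val) != int(val):
        # reject genuinely fractional orders like 1.5
        raise TypeError("order must be integral, got %r" % (m,))
    n = int(val)
    if n < 0:
        raise ValueError("order must be non-negative")
    return n
```

With something like this applied in both chebder and chebint, `c.deriv(1.0)`, `c.integ(1.0)`, and `c.integ([1])` would all behave consistently.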



Re: [Numpy-discussion] Asymmetry in Chebyshev.deriv v. Chebyshev.integ

2010-04-02 Thread David Goldsmith
On Fri, Apr 2, 2010 at 10:42 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:


 On Thu, Apr 1, 2010 at 7:42 PM, David Goldsmith 
  d.l.goldsm...@gmail.com wrote:

 >>> np.version.version
 '1.4.0'
 >>> c = np.polynomial.chebyshev.Chebyshev(1)
 >>> c.deriv(1.0)
 Chebyshev([ 0.], [-1.,  1.])
 >>> c.integ(1.0)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<string>", line 441, in integ
   File "C:\Python26\lib\site-packages\numpy\polynomial\chebyshev.py", line 739, in chebint
     k = list(k) + [0]*(m - len(k))
 TypeError: can't multiply sequence by non-int of type 'float'
 >>> c.integ(1)
 Chebyshev([ 0.,  1.], [-1.,  1.])


 I don't think it should accept a float when an integer is needed. That
 said, either I should raise a more informative error or folks should
 convince me that floats are reasonable input for the number of
 integrations.

 Chuck


My only concern is API consistency: if you want to restrict the integ input
to int dtypes, that's fine, but then why allow non-int dtypes in deriv?
(deriv, BTW, accepts much more than just int_like floats: it works with
one-element lists containing an int_like float, similar numpy arrays, even
zero-order Polynomial objects; I didn't check tuples, but given all that,
I'd be surprised if it rejected them.)  Superficially this is a pretty big
API discrepancy; ultimately, of course, it doesn't matter, but I'd like to
know where we want to land so I can make sure the docstrings correctly
document the desired behavior.
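The consistency being asked for can also be stated as a round-trip property: integrating m times (with zero integration constants) and then differentiating m times should recover the original series. Assuming the modern numpy.polynomial.Chebyshev API, a quick check looks like:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Build a small Chebyshev series and round-trip it through integ/deriv.
c = Chebyshev([1.0, 2.0, 3.0])
roundtrip = c.integ(2).deriv(2)  # default integration constants are zero

# The recovered coefficients should match the originals.
assert np.allclose(roundtrip.coef, c.coef)
```

A property like this is one natural way to pin down the deriv/integ contract in tests before settling the docstrings.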

DG


Re: [Numpy-discussion] Asymmetry in Chebyshev.deriv v. Chebyshev.integ

2010-04-02 Thread David Goldsmith
On Fri, Apr 2, 2010 at 10:46 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 On Fri, Apr 2, 2010 at 11:27 AM, David Goldsmith 
  d.l.goldsm...@gmail.com wrote:

 Also:

 >>> c.deriv(0)
 Chebyshev([ 1.], [-1.,  1.])
 >>> c.integ(0)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<string>", line 441, in integ
   File "C:\Python26\lib\site-packages\numpy\polynomial\chebyshev.py", line 729, in chebint
     raise ValueError, "The order of integration must be positive"
 ValueError: The order of integration must be positive

 i.e., deriv supports zero-order differentiation, but integ doesn't support
 zero-order integration (though I acknowledge that this may be a feature, not
 a bug).


 It was inherited. I have no qualms about letting it integrate zero times if
 folks think it should go that way. I think the reason derivatives allowed 0
 for the number of derivations was for classroom instruction.


Again, my only concern is API consistency, yada, yada, yada.

DG


[Numpy-discussion] Asymmetry in Chebyshev.deriv v. Chebyshev.integ

2010-04-01 Thread David Goldsmith
>>> np.version.version
'1.4.0'
>>> c = np.polynomial.chebyshev.Chebyshev(1)
>>> c.deriv(1.0)
Chebyshev([ 0.], [-1.,  1.])
>>> c.integ(1.0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 441, in integ
  File "C:\Python26\lib\site-packages\numpy\polynomial\chebyshev.py", line 739, in chebint
    k = list(k) + [0]*(m - len(k))
TypeError: can't multiply sequence by non-int of type 'float'
>>> c.integ(1)
Chebyshev([ 0.,  1.], [-1.,  1.])

i.e., deriv accepts int_like input but integ doesn't.

Given the way I just embarrassed myself on the scipy-dev list :-(, I'm
confirming this is a bug before I file a ticket.

DG

