Re: [Numpy-discussion] Create a n-D grid; meshgrid alternative

2015-05-12 Thread Stefan Otte
Hello,

indeed I was looking for the cartesian product.

I timed the two stackoverflow answers and the winner is not quite as clear:

n_elements:10  cartesian  0.00427 cartesian2  0.00172
n_elements:   100  cartesian  0.02758 cartesian2  0.01044
n_elements:  1000  cartesian  0.97628 cartesian2  1.12145
n_elements:  5000  cartesian 17.14133 cartesian2 31.12241

(This is for two arrays as parameters: np.linspace(0, 1, n_elements).)
cartesian2 seems to be slower for bigger inputs.
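
For readers following along, here is a straightforward reference implementation
of the cartesian product in terms of `np.meshgrid` (a sketch for checking
results against; the tuned variants being timed above are faster):

```python
import numpy as np

def cartesian_ref(arrays):
    """Reference cartesian product: one row per combination.
    Simple but memory-hungry; useful for verifying faster variants."""
    grids = np.meshgrid(*arrays, indexing="ij")
    return np.column_stack([g.ravel() for g in grids])

X = cartesian_ref([np.array([0, 1]), np.array([10, 20])])
# rows: (0, 10), (0, 20), (1, 10), (1, 20)
```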

I'd really appreciate it if this were part of numpy. Should I create a pull
request?

Regarding combinations and permutations: it could be convenient to have those
as well.


Cheers,
 Stefan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] python is cool

2015-05-12 Thread Neal Becker
In order to make sure all my random number generators have good 
independence, it is good practice to use a single shared instance (because 
it is already known to have good properties).  A less desirable alternative 
is to use RNGs seeded with different starting states - in that case the 
independence properties are not generally known.

So I have some fairly deeply nested data structures (classes) that somewhere 
contain a reference to a RandomState object.

I need to be able to clone these data structures, producing new independent 
copies, but I want the RandomState part to be the shared, singleton rs 
object.

In python, no problem:

---
from numpy.random import RandomState

class shared_random_state(RandomState):
    def __init__(self, seed=None):
        RandomState.__init__(self, seed)

    def __deepcopy__(self, memo):
        return self
---

Now I can copy.deepcopy the data structures, but the RandomState part is 
shared.  I just use

rs = shared_random_state(0)

and provide this rs to all my other objects.  Pretty nice!
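
To make the behavior concrete, a minimal end-to-end sketch (the container
class `Holder` is made up for the example):

```python
import copy
from numpy.random import RandomState

class shared_random_state(RandomState):
    def __deepcopy__(self, memo):
        # never copy: always hand back the shared singleton
        return self

class Holder:  # hypothetical nested structure holding a reference to the RNG
    def __init__(self, rng):
        self.rng = rng
        self.data = [1, 2, 3]

rs = shared_random_state(0)
a = Holder(rs)
b = copy.deepcopy(a)

assert b.data is not a.data  # the rest of the structure is a genuine copy
assert b.rng is a.rng        # ...but the RandomState stays shared
```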

-- 
Those who fail to understand recursion are doomed to repeat it



[Numpy-discussion] [JOB] Work full time on Project Jupyter/IPython

2015-05-12 Thread Brian Granger
Hi all,

I wanted to let the community know that we are currently hiring 3 full-time
software engineers to work on Project Jupyter/IPython. These positions will
be in my group at Cal Poly in San Luis Obispo, CA. We are looking for
frontend and backend software engineers with lots of Python/JavaScript
experience and a passion for open source software. The details can be found
here:

https://www.calpolycorporationjobs.org/postings/736

This is an unusual opportunity in a couple of respects:

* These positions will allow you to work on open source software full time
- not as an X% side project (a.k.a. weekends and evenings).
* These are fully benefited positions (CA state retirement, health care,
etc.)
* You will get to work and live in San Luis Obispo, one of the nicest
places on earth. We are minutes from the beach, have perfect year-round
weather and are close to both the Bay Area and So Cal.

I am more than willing to talk to anyone who is interested in these positions.

Cheers,

Brian

-- 
Brian E. Granger
Cal Poly State University, San Luis Obispo
@ellisonbg on Twitter and GitHub
bgran...@calpoly.edu and elliso...@gmail.com


Re: [Numpy-discussion] python is cool

2015-05-12 Thread Neal Becker
Roland Schulz wrote:

 Hi,
 
 I think the best way to solve this issue is to not use state at all. It is
 fast, reproducible even in parallel (if wanted), and doesn't suffer from
 the sharing issue. It would be nice if numpy provided such a stateless RNG
 as implemented in Random123: www.deshawresearch.com/resources_random123.html
 
 Roland

That is interesting.  I think np.random needs to be refactored so that it can 
accept a pluggable RNG - then we could switch the underlying generator.



Re: [Numpy-discussion] python is cool

2015-05-12 Thread Roland Schulz
Hi,

I think the best way to solve this issue is to not use state at all. It is
fast, reproducible even in parallel (if wanted), and doesn't suffer from
the sharing issue. It would be nice if numpy provided such a stateless RNG
as implemented in Random123: www.deshawresearch.com/resources_random123.html
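
(For readers unfamiliar with the idea: a counter-based generator maps a
(key, counter) pair directly to a random value, so no mutable state needs to
be stored or shared. Below is a toy hash-based sketch of the concept only --
not Random123's actual Philox/Threefry algorithms, and not suitable for
serious work:)

```python
import hashlib
import struct

def stateless_uniform(key, counter):
    """Toy counter-based RNG: hash (key, counter) into a float in [0, 1)."""
    digest = hashlib.sha256(struct.pack("<QQ", key, counter)).digest()
    (bits,) = struct.unpack("<Q", digest[:8])
    return bits / 2.0**64

# The same (key, counter) always yields the same draw, so results are
# reproducible in parallel without any shared state:
assert stateless_uniform(42, 7) == stateless_uniform(42, 7)
```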

Roland

On Tue, May 12, 2015 at 2:18 PM, Neal Becker ndbeck...@gmail.com wrote:

 In order to make sure all my random number generators have good
 independence, it is good practice to use a single shared instance
 (because
 it is already known to have good properties).  A less desirable alternative
 is to use RNGs seeded with different starting states - in that case the
 independence properties are not generally known.

 So I have some fairly deeply nested data structures (classes) that
 somewhere
 contain a reference to a RandomState object.

 I need to be able to clone these data structures, producing new independent
 copies, but I want the RandomState part to be the shared, singleton rs
 object.

 In python, no problem:

 ---
 from numpy.random import RandomState

 class shared_random_state(RandomState):
     def __init__(self, rs):
         RandomState.__init__(self, rs)

     def __deepcopy__(self, memo):
         return self
 ---

 Now I can copy.deepcopy the data structures, but the randomstate part is
 shared.  I just use

 rs = shared_random_state (random.RandomState(0))

 and provide this rs to all my other objects.  Pretty nice!

 --
 Those who fail to understand recursion are doomed to repeat it





-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [Numpy-discussion] Integral Equation Solver

2015-05-12 Thread Charles R Harris
On Tue, May 12, 2015 at 12:41 PM, Pierson, Oliver C o...@gatech.edu wrote:

  Hi All,

  A while back I wrote some code to solve Volterra integral equations
 (integral equations where one of the integration bounds is a variable).
 The code is available on GitHub (https://github.com/oliverpierson/volterra).
 Just curious whether there'd be any interest in adding this to Numpy?  I
 still have some work to do on the code.  However, before I invest too much
 time, I wanted to get a feel for the interest in this functionality.


Could be useful. The best place for something like this would be scipy
(scipy-...@scipy.org).

Chuck


[Numpy-discussion] Integral Equation Solver

2015-05-12 Thread Pierson, Oliver C
Hi All,

  A while back I wrote some code to solve Volterra integral equations 
(integral equations where one of the integration bounds is a variable).  The 
code is available on GitHub (https://github.com/oliverpierson/volterra).  Just 
curious whether there'd be any interest in adding this to Numpy?  I still have 
some work to do on the code.  However, before I invest too much time, I wanted 
to get a feel for the interest in this functionality.
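
(For context, a second-kind Volterra equation u(t) = f(t) + int_0^t K(t, s) u(s) ds
can be stepped forward on a time grid. Below is a minimal trapezoidal-rule
sketch of the general idea only, not the code from the repository:)

```python
import numpy as np

def volterra2(f, K, t):
    """Solve u(t) = f(t) + int_0^t K(t, s) u(s) ds on a uniform grid t
    using the trapezoidal rule (Volterra equation of the second kind)."""
    n = len(t)
    h = t[1] - t[0]              # assumes a uniform grid
    u = np.empty(n)
    u[0] = f(t[0])
    for i in range(1, n):
        # trapezoid over [t0, t_i]; the unknown u[i] appears on both sides,
        # so solve for it explicitly
        acc = 0.5 * K(t[i], t[0]) * u[0]
        for j in range(1, i):
            acc += K(t[i], t[j]) * u[j]
        u[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# Sanity check: u(t) = 1 + int_0^t u(s) ds has the exact solution u(t) = e^t
t = np.linspace(0, 1, 101)
u = volterra2(lambda s: 1.0, lambda ti, s: 1.0, t)
assert abs(u[-1] - np.e) < 1e-3
```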


Please let me know if you have any questions.


Thanks,

Oliver




Re: [Numpy-discussion] Create a n-D grid; meshgrid alternative

2015-05-12 Thread Johannes Kulick
I'm totally in favor of the 'gridspace(linspaces)' version, as you probably end
up wanting to create grids of things other than linspaces (e.g. a logspace
grid, or a grid of random points, etc.).

It should be called something different though. Maybe 'cartesian(arrays)'?

Best,
Johannes

Quoting Stefan Otte (2015-05-10 16:05:02)
 I just drafted different versions of the `gridspace` function:
 https://tmp23.tmpnb.org/user/1waoqQ8PJBJ7/notebooks/2015-05%20gridspace.ipynb
 
 
 Best regards,
  Stefan
 
 
 
 On Sun, May 10, 2015 at 1:40 PM, Stefan Otte stefan.o...@gmail.com wrote:
  Hey,
 
  quite often I want to evaluate a function on a grid in a n-D space.
  What I end up doing (and what I really dislike) looks something like this:
 
x = np.linspace(0, 5, 20)
M1, M2 = np.meshgrid(x, x)
X = np.column_stack([M1.flatten(), M2.flatten()])
X.shape  # (400, 2)
 
fancy_function(X)
 
  I don't think I ever used `meshgrid` in any other way.
  Is there a better way to create such a grid space?
 
  I wrote myself a little helper function:
 
def gridspace(linspaces):
    return np.column_stack([space.flatten()
                            for space in np.meshgrid(*linspaces)])
 
  But maybe something like this should be part of numpy?
 
 
  Best,
   Stefan
 

-- 
Question: What is the weird attachment to all my emails?
Answer:   http://en.wikipedia.org/wiki/Digital_signature


[Numpy-discussion] ANN: Scipy 0.16.0 beta 1 release

2015-05-12 Thread Ralf Gommers
Hi all,

I'm pleased to announce the availability of the first beta release of Scipy
0.16.0. Please try this beta and report any issues on the Github issue
tracker or on the scipy-dev mailing list.

This first beta is a source-only release; binary installers will follow
(probably next week). Source tarballs and the full release notes can be
found at https://sourceforge.net/projects/scipy/files/scipy/0.16.0b1/. Part
of the release notes copied below.

Thanks to everyone who contributed to this release!

Ralf



==
SciPy 0.16.0 Release Notes
==

.. note:: Scipy 0.16.0 is not released yet!

SciPy 0.16.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation.  There have been a number of deprecations and
API changes in this release, which are documented below.  All users
are encouraged to upgrade to this release, as there are a large number
of bug-fixes and optimizations.  Moreover, our development attention
will now shift to bug-fix releases on the 0.15.x branch, and on adding
new features on the master branch.

This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or greater.

Highlights of this release include:

- A Cython API for BLAS/LAPACK in `scipy.linalg`
- A new benchmark suite.  It's now straightforward to add new benchmarks, and
  they're routinely included with performance enhancement PRs.
- Support for the second order sections (SOS) format in `scipy.signal`.


New features


Benchmark suite
---

The benchmark suite has switched to using `Airspeed Velocity
<http://spacetelescope.github.io/asv/>`__ for benchmarking. You can
run the suite locally via ``python runtests.py --bench``. For more
details, see ``benchmarks/README.rst``.

`scipy.linalg` improvements
---

A full set of Cython wrappers for BLAS and LAPACK has been added in the
modules `scipy.linalg.cython_blas` and `scipy.linalg.cython_lapack`.
In Cython, these wrappers can now be cimported from their corresponding
modules and used without linking directly against BLAS or LAPACK.

The functions `scipy.linalg.qr_delete`, `scipy.linalg.qr_insert` and
`scipy.linalg.qr_update` for updating QR decompositions were added.

The function `scipy.linalg.solve_circulant` solves a linear system with
a circulant coefficient matrix.

The function `scipy.linalg.invpascal` computes the inverse of a Pascal
matrix.

The function `scipy.linalg.solve_toeplitz`, a Levinson-Durbin Toeplitz solver,
was added.

A wrapper for the potentially useful LAPACK function ``*lasd4`` was added.
It computes the square root of the i-th updated eigenvalue of a positive
symmetric rank-one modification to a positive diagonal matrix.  See its
LAPACK documentation and unit tests for more info.

Two extra wrappers for LAPACK least-squares solvers were added, namely
``*gelsd`` and ``*gelsy``.

Wrappers for the LAPACK ``*lange`` functions, which calculate various matrix
norms, were added.

Wrappers for ``*gtsv`` and ``*ptsv``, which solve ``A*X = B`` for tri-diagonal
matrix ``A``, were added.

`scipy.signal` improvements
---

Support for second order sections (SOS) as a format for IIR filters
was added.  The new functions are:

* `scipy.signal.sosfilt`
* `scipy.signal.sosfilt_zi`
* `scipy.signal.sos2tf`
* `scipy.signal.sos2zpk`
* `scipy.signal.tf2sos`
* `scipy.signal.zpk2sos`

Additionally, the filter design functions `iirdesign`, `iirfilter`, `butter`,
`cheby1`, `cheby2`, `ellip`, and `bessel` can return the filter in the SOS
format.
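
As a quick illustration of the new format (a usage sketch against the API
described above):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Design a 6th-order Butterworth low-pass directly in SOS form; cascaded
# second-order sections are numerically better behaved than a flat (b, a)
# transfer function for higher-order filters.
sos = butter(6, 0.125, output="sos")   # cutoff at 0.125 of Nyquist

x = np.random.randn(1000)
y = sosfilt(sos, x)                    # filter the signal section by section
assert y.shape == x.shape
```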

The function `scipy.signal.place_poles`, which provides two methods to place
poles for linear systems, was added.

The option to use Gustafsson's method for choosing the initial conditions
of the forward and backward passes was added to `scipy.signal.filtfilt`.

New classes ``TransferFunction``, ``StateSpace`` and ``ZerosPolesGain`` were
added.  These classes are now returned when instantiating `scipy.signal.lti`.
Conversion between those classes can now be done explicitly.

An exponential (Poisson) window was added as `scipy.signal.exponential`, and a
Tukey window was added as `scipy.signal.tukey`.

The function for computing digital filter group delay was added as
`scipy.signal.group_delay`.

The functionality for spectral analysis and spectral density estimation has
been significantly improved: `scipy.signal.welch` became ~8x faster and the
functions `scipy.signal.spectrogram`, `scipy.signal.coherence` and
`scipy.signal.csd` (cross-spectral density) were added.

`scipy.signal.lsim` was rewritten - all known issues are fixed, so this
function can now be used instead of ``lsim2``; ``lsim`` is orders of magnitude
faster than ``lsim2`` in most cases.

`scipy.sparse` improvements
---

The function `scipy.sparse.norm`, which computes sparse matrix norms, was
added.

The function `scipy.sparse.random`, which allows 

Re: [Numpy-discussion] Create a n-D grid; meshgrid alternative

2015-05-12 Thread Jaime Fernández del Río
On Tue, May 12, 2015 at 1:17 AM, Stefan Otte stefan.o...@gmail.com wrote:

 Hello,

 indeed I was looking for the cartesian product.

 I timed the two stackoverflow answers and the winner is not quite as clear:

 n_elements:10  cartesian  0.00427 cartesian2  0.00172
 n_elements:   100  cartesian  0.02758 cartesian2  0.01044
 n_elements:  1000  cartesian  0.97628 cartesian2  1.12145
 n_elements:  5000  cartesian 17.14133 cartesian2 31.12241

 (This is for two arrays as parameters: np.linspace(0, 1, n_elements).)
 cartesian2 seems to be slower for bigger inputs.


On my system, the following variation on Pauli's answer is 2-4x faster than
his for your test cases:

def cartesian4(arrays, out=None):
    arrays = [np.asarray(x).ravel() for x in arrays]
    dtype = np.result_type(*arrays)

    n = np.prod([arr.size for arr in arrays])
    if out is None:
        out = np.empty((len(arrays), n), dtype=dtype)
    else:
        out = out.T

    for j, arr in enumerate(arrays):
        n //= arr.size
        out.shape = (len(arrays), -1, arr.size, n)
        out[j] = arr[np.newaxis, :, np.newaxis]
    out.shape = (len(arrays), -1)

    return out.T
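
(A quick sanity check of the output ordering; the function is restated so the
snippet runs standalone, with floor division so it also behaves on Python 3:)

```python
import numpy as np

def cartesian4(arrays, out=None):
    arrays = [np.asarray(x).ravel() for x in arrays]
    dtype = np.result_type(*arrays)

    n = np.prod([arr.size for arr in arrays])
    if out is None:
        out = np.empty((len(arrays), n), dtype=dtype)
    else:
        out = out.T

    for j, arr in enumerate(arrays):
        n //= arr.size  # floor division keeps n an integer on Python 3
        out.shape = (len(arrays), -1, arr.size, n)
        out[j] = arr[np.newaxis, :, np.newaxis]
    out.shape = (len(arrays), -1)

    return out.T

X = cartesian4([np.array([0, 1]), np.array([10, 20, 30])])
# rows: (0, 10), (0, 20), (0, 30), (1, 10), (1, 20), (1, 30)
assert X.shape == (6, 2)
```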


 I'd really appreciate it if this were part of numpy. Should I create a pull
 request?


There hasn't been any opposition, quite the contrary, so yes, I would go
ahead and create that PR. I somehow feel this belongs with the set
operations, rather than with the indexing ones. Other thoughts?

Also for consideration: should it work on flattened arrays? or should we
give it an axis argument, and then broadcast on the rest, a la
generalized ufunc?

Jaime

-- 
(\__/)
( O.o)
(  ) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.


Re: [Numpy-discussion] Create a n-D grid; meshgrid alternative

2015-05-12 Thread Stefan Otte
Hey,

here is an ipython notebook with benchmarks of all implementations (scroll
to the bottom for plots):

https://github.com/sotte/ipynb_snippets/blob/master/2015-05%20gridspace%20-%20cartesian.ipynb

Overall, Jaime's version is the fastest.

On Tue, May 12, 2015 at 2:01 PM Jaime Fernández del Río 
jaime.f...@gmail.com wrote:

 On Tue, May 12, 2015 at 1:17 AM, Stefan Otte stefan.o...@gmail.com
 wrote:

 Hello,

 indeed I was looking for the cartesian product.

 I timed the two stackoverflow answers and the winner is not quite as
 clear:

 n_elements:10  cartesian  0.00427 cartesian2  0.00172
 n_elements:   100  cartesian  0.02758 cartesian2  0.01044
 n_elements:  1000  cartesian  0.97628 cartesian2  1.12145
 n_elements:  5000  cartesian 17.14133 cartesian2 31.12241

  (This is for two arrays as parameters: np.linspace(0, 1, n_elements).)
  cartesian2 seems to be slower for bigger inputs.


 On my system, the following variation on Pauli's answer is 2-4x faster
 than his for your test cases:

  def cartesian4(arrays, out=None):
      arrays = [np.asarray(x).ravel() for x in arrays]
      dtype = np.result_type(*arrays)

      n = np.prod([arr.size for arr in arrays])
      if out is None:
          out = np.empty((len(arrays), n), dtype=dtype)
      else:
          out = out.T

      for j, arr in enumerate(arrays):
          n //= arr.size
          out.shape = (len(arrays), -1, arr.size, n)
          out[j] = arr[np.newaxis, :, np.newaxis]
      out.shape = (len(arrays), -1)

      return out.T


  I'd really appreciate it if this were part of numpy. Should I create a
  pull request?


  There hasn't been any opposition, quite the contrary, so yes, I would go
  ahead and create that PR. I somehow feel this belongs with the set
  operations, rather than with the indexing ones. Other thoughts?

 Also for consideration: should it work on flattened arrays? or should we
 give it an axis argument, and then broadcast on the rest, a la
 generalized ufunc?

 Jaime

 --
 (\__/)
 ( O.o)
 (  ) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
 de dominación mundial.



Re: [Numpy-discussion] Bug in np.nonzero / Should index returning functions return ndarray subclasses?

2015-05-12 Thread Marten van Kerkwijk
Agreed that indexing functions should return bare `ndarray`. Note that in
Jaime's PR one can override it anyway by defining __nonzero__.  -- Marten

On Sat, May 9, 2015 at 9:53 PM, Stephan Hoyer sho...@gmail.com wrote:

  With regard to np.where -- shouldn't where be a ufunc, so subclasses or
 other array-likes can control its behavior with __numpy_ufunc__?

 As for the other indexing functions, I don't have a strong opinion about
 how they should handle subclasses. But it is certainly tricky to attempt to
 handle arbitrary subclasses. I would agree that the least error-prone thing
 to do is usually to return base ndarrays. Better to force subclasses to
 override methods explicitly.
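
(To make the current behavior concrete -- `MyArray` here is a made-up minimal
subclass for the demonstration:)

```python
import numpy as np

class MyArray(np.ndarray):
    """Minimal ndarray subclass, defined only for this demonstration."""
    pass

a = np.array([0, 1, 0, 2]).view(MyArray)
idx = np.nonzero(a)

# The index arrays come back as base ndarray, not MyArray
assert all(type(i) is np.ndarray for i in idx)
assert list(idx[0]) == [1, 3]
```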


