[Numpy-discussion] ENH: compute many inner products quickly

2016-05-28 Thread Scott Sievert
I recently ran into an application where I had to compute many inner products quickly (roughly 50k inner products in less than a second). I wanted a vector of inner products over the 50k vectors, or `[x1.T @ A @ x1, …, xn.T @ A @ xn]` with A.shape = (1k, 1k). My first instinct was to look for a
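This batch of quadratic forms can be computed in one call with `np.einsum`, contracting both vector axes at once instead of looping. A minimal sketch (array names and sizes are illustrative, not from the post; a smaller dimension is used here so it runs instantly):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50_000, 100          # the post uses d = 1000; smaller here for a quick demo
A = rng.standard_normal((d, d))
X = rng.standard_normal((n, d))   # one vector x_i per row

# Vectorized: out[i] = sum_j sum_k X[i,j] * A[j,k] * X[i,k] = x_i.T @ A @ x_i
out = np.einsum('ij,jk,ik->i', X, A, X)

# Equivalent two-step form, often faster since it uses a single matmul:
out2 = (X @ A * X).sum(axis=1)

# Spot-check against the naive loop on a few rows
slow = np.array([x @ A @ x for x in X[:10]])
assert np.allclose(out[:10], slow)
assert np.allclose(out, out2)
```

The two-step `(X @ A * X).sum(axis=1)` form lets BLAS handle the large matrix product, which is usually the faster route for this shape.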

[Numpy-discussion] NumPy 1.11 docs

2016-05-28 Thread Stephan Hoyer
These are still missing from the SciPy.org page, several months after the release. What do we need to do to keep these updated? Is there someone at Enthought we should ping? Or do we really just need to transition to different infrastructure?

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-28 Thread Sebastian Berg
On Fri, 2016-05-27 at 22:51 +0200, Lion Krischer wrote:
> Hi all,
>
> I was told to take this to the mailing list. Relevant pull request:
> https://github.com/numpy/numpy/pull/7686
>
> NumPy's FFT implementation caches some form of execution plan for each
> encountered input data length.
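The idea in the pull request is to replace an unbounded per-length cache with one that evicts old entries. A minimal sketch of a bounded LRU cache built on `collections.OrderedDict` (this is illustrative only, not the implementation in PR 7686; the class name and `maxsize` parameter are assumptions):

```python
from collections import OrderedDict

class BoundedLRUCache:
    """Bounded LRU cache: once `maxsize` entries are stored, inserting a
    new key evicts the least-recently-used one."""

    def __init__(self, maxsize=16):
        self.maxsize = maxsize
        self._data = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        # Re-insert on access so the key becomes most recently used.
        value = self._data.pop(key)
        self._data[key] = value
        return value

    def put(self, key, value):
        if key in self._data:
            self._data.pop(key)
        elif len(self._data) >= self.maxsize:
            self._data.popitem(last=False)   # evict least recently used
        self._data[key] = value

    def __contains__(self, key):
        return key in self._data

cache = BoundedLRUCache(maxsize=2)
cache.put(8, "plan for n=8")
cache.put(16, "plan for n=16")
cache.get(8)                      # touch 8, so 16 is now least recently used
cache.put(32, "plan for n=32")    # evicts 16
assert 8 in cache and 32 in cache and 16 not in cache
```

For caching a plain function, `functools.lru_cache(maxsize=...)` provides the same bounded-LRU behavior out of the box; a hand-rolled structure like the one above is only needed when keys and values are managed explicitly, as in a per-length plan cache.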