Re: [Numpy-discussion] NumPy 1.11 docs

2016-05-30 Thread Stephan Hoyer
Awesome, thanks Ralf!
On Sun, May 29, 2016 at 1:13 AM Ralf Gommers  wrote:

> On Sun, May 29, 2016 at 4:35 AM, Stephan Hoyer  wrote:
>
>> These are still missing from the SciPy.org page, several months after the
>> release.
>>
>
> Thanks Stephan, that needs fixing.
>
>
>>
>> What do we need to do to keep these updated?
>>
>
>
> https://github.com/numpy/numpy/blob/master/doc/HOWTO_RELEASE.rst.txt#update-docsscipyorg
>
>
>> Is there someone at Enthought we should ping? Or do we really just need
>> to transition to different infrastructure?
>>
>
> No, we just need to not forget :) The release manager normally does this,
> or pings someone else to do it. At the moment Pauli, Julian, Evgeni, and I
> have access to the server. I'll fix it up today.
>
> Ralf
>


Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer


On 30/05/16 10:07, Joseph Martinot-Lagarde wrote:
> Marten van Kerkwijk <...@gmail.com> writes:
> 
>> I did a few simple timing tests (see comment in PR), which suggest it is
>> hardly worth having the cache. Indeed, if one really worries about speed,
>> one should probably use pyFFTW (scipy.fft is a bit faster too, but at least
>> for me the way real FFT values are stored is just too inconvenient). So, my
>> suggestion would be to do away with the cache altogether.


I added a slightly more comprehensive benchmark to the PR. Please have a
look. It tests the total time for 100 FFTs with and without the cache. With
the cache it is over 30 percent faster, which in my opinion is well worth
it, as repeated FFTs of the same size are a very common use case.
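
Not the PR's actual benchmark, but a rough sketch of the kind of timing
meant here, using plain numpy.fft:

    import timeit
    import numpy as np

    x = np.random.rand(4096)
    np.fft.fft(x)  # first call pays the per-size setup cost (fills any cache)

    # 100 repeated FFTs of one size; with a cache the setup is done once,
    # without it the setup cost recurs on every call.
    t = timeit.timeit(lambda: np.fft.fft(x), number=100)
    print("100 same-size FFTs: %.4f s" % t)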

Also, many people will not have enough knowledge to use FFTW or some other
FFT implementation.
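
For what it's worth, pyFFTW ships a nearly drop-in interface, though it
still has to be installed and tuned separately. A sketch, assuming pyFFTW
is available:

    import numpy as np
    import pyfftw.interfaces.cache
    import pyfftw.interfaces.numpy_fft

    pyfftw.interfaces.cache.enable()  # keep FFTW "plans" alive between calls
    x = np.random.rand(4096)
    y = pyfftw.interfaces.numpy_fft.fft(x)  # same signature as numpy.fft.fft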


Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Lion Krischer

> You can backport the pure Python version of lru_cache for Python 2 (or
> vendor the backport done here:
> https://pypi.python.org/pypi/backports.functools_lru_cache/).
> The advantage is that lru_cache is C-accelerated in Python 3.5 and
> upwards...

That's a pretty big backport. Speed also does not matter for this
particular use case: the time for the actual FFT will dominate by far.
Furthermore, the lru_cache decorator can only limit the cache size by item
count, not by memory footprint as the proposed solution does. I think the
downsides outweigh the advantage of being able to use functionality from
the stdlib.
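
For illustration, a minimal sketch of the memory-bounded idea (a
hypothetical class, not the code in the PR; uses OrderedDict.move_to_end,
so Python 3):

    import collections

    class MemoryBoundedLRUCache:
        """Evicts least-recently-used arrays once a byte budget is exceeded."""

        def __init__(self, max_bytes=100 * 1024 * 1024):
            self.max_bytes = max_bytes
            self._data = collections.OrderedDict()

        def put(self, key, array):
            self._data[key] = array
            self._data.move_to_end(key)  # mark as most recently used
            # Evict the oldest entries until total memory fits the budget.
            while sum(a.nbytes for a in self._data.values()) > self.max_bytes:
                self._data.popitem(last=False)

        def get(self, key):
            array = self._data[key]
            self._data.move_to_end(key)  # a hit refreshes recency
            return array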



Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Joseph Martinot-Lagarde
Marten van Kerkwijk <...@gmail.com> writes:

> I did a few simple timing tests (see comment in PR), which suggest it is
> hardly worth having the cache. Indeed, if one really worries about speed,
> one should probably use pyFFTW (scipy.fft is a bit faster too, but at least
> for me the way real FFT values are stored is just too inconvenient). So, my
> suggestion would be to do away with the cache altogether.

The problem with FFTW is that its license is more restrictive (GPL), and
because of this it may not be suitable everywhere numpy.fft is.



Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Antoine Pitrou
On Sat, 28 May 2016 20:19:27 +0200
Sebastian Berg  wrote:
> 
> The added complexity is a bit annoying, I must admit. On Python 3,
> functools.lru_cache could be another option, but only there.

You can backport the pure Python version of lru_cache for Python 2 (or
vendor the backport done here:
https://pypi.python.org/pypi/backports.functools_lru_cache/).
The advantage is that lru_cache is C-accelerated in Python 3.5 and
upwards...
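
A minimal sketch of that route (the function name, maxsize, and the
twiddle-factor stand-in below are hypothetical, not NumPy's actual
internals):

    try:
        from functools import lru_cache  # Python 3
    except ImportError:
        from backports.functools_lru_cache import lru_cache  # PyPI backport

    import numpy as np

    @lru_cache(maxsize=16)  # bounds the number of cached sizes, not memory
    def _twiddle_factors(n):
        # Hypothetical stand-in for fftpack's per-size setup work.
        return np.exp(-2j * np.pi * np.arange(n) / n)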

Regards

Antoine.

