On Fri, Feb 23, 2018 at 2:15 AM, ast <n...@gmail.com> wrote:
> Le 22/02/2018 à 13:03, bartc a écrit :
>> On 22/02/2018 10:59, Steven D'Aprano wrote:
>> While an interesting article on speed-up techniques, that seems to miss
>> the point of benchmarks.
>> On the fib(20) test, it suggests using this to get a 30,000 times
>> speed-up:
>>
>> from functools import lru_cache as cache
>>
>> @cache(maxsize=None)
>> def fib_cache(n):
>>     if n < 2:
>>         return n
>>     return fib_cache(n-1) + fib_cache(n-2)
> It's meaningless to test the execution time of a function
> with a cache decorator over 1,000,000 loops.
> The first execution, ok, you get something meaningful,
> but for the other 999,999 executions the result is already in
> the cache, so you just measure the time to read the result
> from a dictionary and output it.
> On my computer:
>>>> setup = """\
> from functools import lru_cache as cache
>
> @cache(maxsize=None)
> def fib(n):
>     if n < 2: return n
>     return fib(n-1) + fib(n-2)
> """
>>>> from timeit import timeit
>>>> timeit("fib(20)", setup=setup, number=1)
>>>> timeit("fib(20)", setup=setup, number=100)
> so 100 loops or 1 loop provides similar results
> as expected !
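That quoted point can be checked directly with cache_info(): after the
first call, every later call is answered straight from the cache (a
minimal sketch, using the same memoized fib as in the quoted example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # memoized recursive Fibonacci, as in the quoted example
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(20)                  # first call actually computes and fills the cache
first = fib.cache_info()
fib(20)                  # second call is a single dictionary lookup
second = fib.cache_info()

print(first)
print(second)            # one extra hit, no extra misses
```

So every timeit iteration after the first is measuring only the lookup,
which is exactly why the totals barely change with the loop count.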
The solution would be to flush the cache in the core code. Here's a tweak:
from timeit import timeit

setup = """
from functools import lru_cache as cache

def fib(n):
    if n < 2: return n
    return fib(n-1) + fib(n-2)
"""

# this is what we effectively do inside the loop:
# fib = cache(maxsize=None)(fib); fib(20)
for count in 1, 10, 100, 1000:
    print(count, timeit("cache(maxsize=None)(fib)(20)", setup=setup,
                        number=count))
You could improve performance by keeping the same function object
and just flushing the cache with fib.cache_clear(), but I have no idea
how you'd translate that into other languages. Constructing a cache at
run-time should be safe.
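The cache_clear() idea might look like this (a sketch, not the poster's
actual code): keep the decorated function, but wipe its memo table
before each timed call so every measurement starts cold while the
function object is reused.

```python
from functools import lru_cache
from timeit import timeit

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_cold(n):
    # wipe the memo table so this measurement starts from scratch
    fib.cache_clear()
    return fib(n)

for count in 1, 10, 100:
    # total time now grows roughly linearly with the loop count,
    # instead of being dominated by the first (uncached) call
    print(count, timeit(lambda: fib_cold(20), number=count))
```

Within a single fib(n) call the recursion still benefits from the cache,
so this measures the memoized algorithm itself rather than repeated
dictionary lookups.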