On 22/02/2018 at 13:03, bartc wrote:
On 22/02/2018 10:59, Steven D'Aprano wrote:
https://www.ibm.com/developerworks/community/blogs/jfp/entry/Python_Meets_Julia_Micro_Performance?lang=en

While it's an interesting article on speed-up techniques, it seems to miss the point of benchmarks.

On the fib(20) test, it suggests using this to get a 30,000 times speed-up:

     from functools import lru_cache as cache

     @cache(maxsize=None)
     def fib_cache(n):
         if n < 2:
             return n
         return fib_cache(n-1) + fib_cache(n-2)
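
In effect, the decorator just wraps fib_cache in a dictionary keyed by the
argument. A minimal hand-rolled sketch of the same idea (not functools'
actual implementation, which also handles keyword arguments and thread
safety):

def memoize(f):
    results = {}               # argument -> previously computed value
    def wrapper(n):
        if n not in results:
            results[n] = f(n)  # computed once per distinct argument
        return results[n]      # every later call is a single dict read
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n-1) + fib(n-2)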


It's meaningless to test the execution time of a function
with a cache decorator over 1,000,000 loops (timeit's default).

The first execution, OK, gives you something meaningful,
but for the other 999,999 executions the result is already in
the cache, so you are just measuring the time to read the result
from a dictionary and return it.
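
If you really want every loop to pay the computation cost, one option (a
sketch; timings are machine-dependent) is to empty the memo table inside
the timed statement, using the cache_clear() method that lru_cache attaches
to the wrapped function:

from functools import lru_cache
from timeit import timeit

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

# With a cold cache each loop redoes the memoized recursion
# (about 2n calls for fib(n)) instead of reading one dict entry.
print(timeit("fib.cache_clear(); fib(20)",
             globals=globals(), number=100) / 100)

Note this still measures the memoized recursion from a cold start, not the
naive exponential one that the article's speed-up figure compares against.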

On my computer:

>>> setup = """\
from functools import lru_cache as cache
@cache(maxsize=None)
def fib(n):
    if n < 2: return n
    return fib(n-1) + fib(n-2)
"""
>>> from timeit import timeit

>>> timeit("fib(20)", setup=setup, number=1)
0.00010329007704967808

>>> timeit("fib(20)", setup=setup, number=100)
0.0001489834564836201

so 100 loops and 1 loop give similar total times, as expected:
after the first call, every loop is just a cache lookup!
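
For contrast, a sketch of the same measurement on the plain, undecorated
recursion, which is the baseline the article's 30,000x speed-up is measured
against (expect something on the order of milliseconds per call rather than
microseconds):

setup_plain = """\
def fib(n):
    if n < 2: return n
    return fib(n-1) + fib(n-2)
"""
from timeit import timeit
# Without memoization fib(20) makes 21891 recursive calls, so the
# per-call time dwarfs both cached figures above.
print(timeit("fib(20)", setup=setup_plain, number=100) / 100)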
