New submission from Eugene Toder <elto...@gmail.com>:

It's convenient to use @lru_cache on functions with no arguments to delay doing some work until the first time it is needed. Since @lru_cache is implemented in C, it is already faster than manually caching in a closure variable. However, it can be made even faster and more memory-efficient by not using the dict at all and caching just the one result that the function returns. Here are my timing results.

Before my changes:

$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
5000000 loops, best of 5: 42.2 nsec per loop
$ ./python -m timeit -s "import functools; f = functools.lru_cache(None)(lambda: 1)" "f()"
5000000 loops, best of 5: 38.9 nsec per loop

After my changes:

$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
10000000 loops, best of 5: 22.6 nsec per loop

So we get an improvement of about 80% compared to the default maxsize and about 70% compared to maxsize=None.

----------
components: Library (Lib)
messages: 384883
nosy: eltoder, serhiy.storchaka, vstinner
priority: normal
severity: normal
status: open
title: optimize lru_cache for functions with no arguments
type: performance
versions: Python 3.10

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42903>
_______________________________________
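
For context, the usage pattern the submission describes can be sketched as below: decorating a zero-argument function with @lru_cache so that expensive work runs only on the first call and the single result is reused afterwards. The function name and the work it does here are made up for illustration; only the lru_cache usage itself comes from the report.

```python
import functools

@functools.lru_cache(maxsize=None)
def get_settings():
    # Hypothetical expensive one-time work; with no arguments,
    # lru_cache stores exactly one entry, so every later call
    # returns this same object without recomputing it.
    return {"retries": 3, "timeout": 30}

first = get_settings()   # does the work
second = get_settings()  # served from the cache
assert first is second   # the identical cached object is returned
```

This is the case the proposed optimization targets: since the argument tuple is always empty, the internal dict lookup can in principle be skipped and the one cached result returned directly.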
It's convenient to use @lru_cache on functions with no arguments to delay doing some work until the first time it is needed. Since @lru_cache is implemented in C, it is already faster than manually caching in a closure variable. However, it can be made even faster and more memory efficient by not using the dict at all and caching just the one result that the function returns. Here are my timing results. Before my changes: $ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()" 5000000 loops, best of 5: 42.2 nsec per loop $ ./python -m timeit -s "import functools; f = functools.lru_cache(None)(lambda: 1)" "f()" 5000000 loops, best of 5: 38.9 nsec per loop After my changes: $ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()" 10000000 loops, best of 5: 22.6 nsec per loop So we get improvement of about 80% compared to the default maxsize and about 70% compared to maxsize=None. ---------- components: Library (Lib) messages: 384883 nosy: eltoder, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: optimize lru_cache for functions with no arguments type: performance versions: Python 3.10 _______________________________________ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue42903> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com