Marek Otahal added the comment:
Hi David,
> How is (1) different from:
> @lru_cache(maxsize=1000)
> def foo_long(self, arg1...)
As I mentioned, this is for use in a library that is called by end users. They
can call functions and pass parameters, but they do not edit the library's code,
so it is up to me (the lib dev) to prepare the cache decorator. E.g.:
from functools import lru_cache

class MyLib:
    @lru_cache(maxsize=1000)
    def foo_long(self, arg1, **kwds):
        pass

# user code (would import MyLib from the library)
i = MyLib()
i.foo_long(1337)
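To make the limitation concrete (a minimal sketch of my own, not from the issue
itself): the user can inspect or clear the cache through the wrapper's
attributes, but there is no supported way to change maxsize once the decorator
has been applied:

# user code, continuing the example above
print(i.foo_long.cache_info())   # maxsize=1000, hard-coded by the lib dev
i.foo_long.cache_clear()         # clears entries, but maxsize stays 1000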
> As for computing it at runtime: if you need to compute it, you can compute it
> and *then* define the decorator wrapped function.
Ditto as above: at runtime, no new decorator definitions should be needed for a
library. There is also a speed penalty: I'd have to wrap the wrapper in 1 or 2
more nested functions, which adds call overhead, and caching is exactly where we
care about speed. I especially mention this because I've noticed the ongoing
effort to use a C implementation of lru_cache here.
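For reference, here is a rough sketch of the kind of workaround I mean (my own
illustration; _compute_maxsize() is a hypothetical helper): re-applying
lru_cache per instance in __init__, which lets maxsize be computed at runtime
but routes every call through an extra layer of wrapping:

from functools import lru_cache

class MyLib:
    def __init__(self):
        # compute the cache size at runtime, then wrap the bound method;
        # calls now go instance attribute -> lru_cache wrapper -> bound method
        maxsize = self._compute_maxsize()          # hypothetical helper
        self.foo_long = lru_cache(maxsize=maxsize)(self._foo_long_impl)

    def _compute_maxsize(self):
        return 1000  # placeholder for a real runtime computation

    def _foo_long_impl(self, arg1, **kwds):
        pass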
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue24969>
_______________________________________