On 4/8/06, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
> >Even with the cache I put in? The hairy algorithm doesn't get invoked
> >more than once per actual signature (type tuple).
>
> You've measured it now, but yes, my own measurements of similar techniques
> in RuleDispatch showed the same effect, and as with your measurement, it is
> the Python code to build the tuple, rather than the tuple allocation
> itself, that produces the delays. The current version of RuleDispatch goes
> to some lengths to avoid creating any temporary data structures (if
> possible) for this reason.
Check out the accelerated version in time_overloading.py in the svn
sandbox/overloading/. It's mostly faster than the manual version!

Anyway, you seem to be confirming that the cost is in the speed with which
we can do a cache lookup. I'm not worried about that now.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
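
[A minimal sketch of the type-tuple cache idea under discussion, assuming
dispatch purely on the classes of the positional arguments. The names
(overloaded, register, _find_impl) are illustrative only; they are not the
API of time_overloading.py or RuleDispatch, and ambiguity between competing
signatures is not handled here.]

def overloaded(default):
    registry = {}   # exact type tuple -> implementation
    cache = {}      # memoizes the (potentially hairy) MRO-based lookup

    def register(*types):
        def decorator(func):
            registry[types] = func
            return func
        return decorator

    def _find_impl(argtypes):
        # Slow path: search for the most specific matching signature.
        # Runs at most once per distinct type tuple thanks to the cache.
        best = None
        for sig, func in registry.items():
            if len(sig) == len(argtypes) and all(
                issubclass(a, s) for a, s in zip(argtypes, sig)
            ):
                if best is None or all(
                    issubclass(s, b) for s, b in zip(sig, best[0])
                ):
                    best = (sig, func)
        return best[1] if best else default

    def wrapper(*args):
        # Fast path: building this tuple is the per-call cost the thread
        # is talking about; the dict lookup itself is cheap.
        argtypes = tuple(type(arg) for arg in args)
        try:
            impl = cache[argtypes]
        except KeyError:
            impl = cache[argtypes] = _find_impl(argtypes)
        return impl(*args)

    wrapper.register = register
    return wrapper


@overloaded
def describe(*args):
    return "no match"

@describe.register(int, int)
def _(a, b):
    return "two ints"

@describe.register(object, object)
def _(a, b):
    return "two objects"


if __name__ == "__main__":
    print(describe(1, 2))        # "two ints" (exact match, cached after first call)
    print(describe(1.0, "x"))    # "two objects" (found via subclass check)
    print(describe("just one"))  # "no match" (falls back to the default)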
