Changes by Serhiy Storchaka storch...@gmail.com:
--
resolution:  -> fixed
stage: test needed -> committed/rejected
status: open -> closed
Roundup Robot added the comment:
New changeset 6951d7b8d3ad by Serhiy Storchaka in branch '3.2':
Issue #16389: Fixed an issue number in previous commit.
http://hg.python.org/cpython/rev/6951d7b8d3ad
New changeset 7b737011d822 by Serhiy Storchaka in branch '3.3':
Issue #16389: Fixed an issue …
Serhiy Storchaka added the comment:
Raymond, my patch actually reverts to the 3.1 logic; lru_cache has been used
since 3.2. There are no additional re cache tests in 3.2 or 3.1.
--
Richard Oudkerk added the comment:
> Which does give me a thought - perhaps lru_cache in 3.4 could accept a
> key argument that is called as key(*args, **kwds) to derive the cache
> key? (That would be a separate issue, of course.)
Agreed. I suggested the same in an earlier post.
--
Nick Coghlan added the comment:
Raymond's plan sounds good to me.
We may also want to tweak the 3.3 lru_cache docs to note the trade-offs
involved in using it. Perhaps something like:
As a general-purpose cache, lru_cache needs to be quite pessimistic in
deriving non-conflicting keys from …
Raymond Hettinger added the comment:
Serhiy, please go ahead and apply your patch. Be sure to restore the re cache
tests that existed in Py3.2 as well.
Thank you.
--
assignee: rhettinger -> serhiy.storchaka
priority: normal -> high
stage: needs patch -> test needed
Raymond Hettinger added the comment:
A few thoughts:
* The LRU cache was originally intended for I/O-bound calls, not for tight,
frequently called, computationally bound calls like re.compile.
* The Py3.3 version of lru_cache() favors size optimizations (i.e. it uses only
one dictionary instead of the …
Raymond Hettinger added the comment:
Until the lru_cache can be sped up significantly, I recommend just accepting
Serhiy's patch to go back to the 3.2 logic in the regex module.
In the meantime, I'll continue to work on improving the speed of _make_key().
--
Changes by Brian Kearns bdkea...@gmail.com:
--
nosy: +brian.kearns
Terry J. Reedy added the comment:
Since switching from a simple custom cache to the generalized lru cache caused
a major slowdown, I think the change should be reverted. A dict plus either
occasional clearing or a circular queue with a first-in, first-out discipline
is quite sufficient. There is no …
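A minimal sketch of the FIFO-bounded cache Terry describes, assuming a
decorator interface; fifo_cache and all other names below are illustrative,
not taken from any patch on this issue:

    from collections import OrderedDict
    import functools

    def fifo_cache(maxsize=500):
        # Bounded dict with first-in, first-out eviction: a hit is a single
        # dict lookup, and there is no per-hit bookkeeping at all.
        def decorator(func):
            cache = OrderedDict()
            @functools.wraps(func)
            def wrapper(*args):
                try:
                    return cache[args]
                except KeyError:
                    result = func(*args)
                    cache[args] = result
                    if len(cache) > maxsize:
                        cache.popitem(last=False)  # discard the oldest entry
                    return result
            return wrapper
        return decorator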
Ezio Melotti added the comment:
For 3.4, #14373 might solve the issue.
--
Changes by Andrew Svetlov andrew.svet...@gmail.com:
--
nosy: +asvetlov
Serhiy Storchaka added the comment:
> Maybe lru_cache() should have a key argument so you can specify a specialized
> key function.
It would be interesting to look at the microbenchmarking results.
--
Changes by Jesús Cea Avión j...@jcea.es:
--
nosy: +jcea
Serhiy Storchaka added the comment:
> I think the lru_cache should be kept if possible (i.e. I'm -0.5 on your
> patch).
This patch is only meant to show the upper bound we should aim for. I tried
to optimize lru_cache(), but got only 15%. I'm afraid that serious
optimization is impossible …
Richard Oudkerk added the comment:
Maybe lru_cache() should have a key argument so you can specify a specialized
key function. So you might have:

    def _compile_key(args, kwds, typed):
        return args

    @functools.lru_cache(maxsize=500, key=_compile_key)
    def _compile(pattern, ...
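No such key parameter exists in functools; as a rough sketch of the idea, it
can be emulated on top of the existing lru_cache by hashing a small wrapper
object on the derived key alone. Everything here (lru_cache_with_key,
_HashedCall, the example key function) is hypothetical, not an existing API:

    import functools, re

    class _HashedCall:
        # Hashing and equality use only the derived key, so lru_cache
        # keeps at most one entry per key value.
        __slots__ = ('key', 'args', 'kwds')
        def __init__(self, key, args, kwds):
            self.key, self.args, self.kwds = key, args, kwds
        def __hash__(self):
            return hash(self.key)
        def __eq__(self, other):
            return self.key == other.key

    def lru_cache_with_key(maxsize=128, key=None):
        def decorator(func):
            @functools.lru_cache(maxsize=maxsize)
            def cached(call):
                return func(*call.args, **call.kwds)
            @functools.wraps(func)
            def wrapper(*args, **kwds):
                return cached(_HashedCall(key(*args, **kwds), args, kwds))
            return wrapper
        return decorator

    @lru_cache_with_key(maxsize=500, key=lambda pattern, flags=0: (pattern, flags))
    def compile_pattern(pattern, flags=0):
        return re.compile(pattern, flags)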
Ezio Melotti added the comment:
Attached a proof of concept that removes the caching for re.compile, as
suggested in msg174599.
--
Added file: http://bugs.python.org/file27895/issue16389.diff
Serhiy Storchaka added the comment:
Ezio, I agree with you, but I think this should be a separate issue.
--
Ezio Melotti added the comment:
I think the lru_cache should be kept if possible (i.e. I'm -0.5 on your patch).
If this results in a slowdown (as the mako_v2 benchmark indicates), then there
are two options:
1) optimize lru_cache;
2) avoid using it for regular expressions compiled with …
Serhiy Storchaka added the comment:
Here is a patch which reverts to the 3.1 implementation (and adds some
optimization).
Microbenchmark:
$ ./python -m timeit -s "import re" "re._compile('', 0)"
Results:
3.1: 1.45 usec per loop
3.2: 4.45 usec per loop
3.3: 9.91 usec per loop
3.4 patched: 0.89 usec per loop
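For context, a simplified sketch of the 3.1-style scheme such a revert returns
to (the real re module also special-cases already-compiled patterns; this is a
condensed approximation, not the actual patch): hits cost one dict lookup, and
the whole cache is dropped when it fills.

    import sre_compile

    _MAXCACHE = 500
    _cache = {}

    def _compile(pattern, flags):
        # hit: a single dict lookup, no LRU bookkeeping
        try:
            return _cache[type(pattern), pattern, flags]
        except KeyError:
            pass
        p = sre_compile.compile(pattern, flags)
        if len(_cache) >= _MAXCACHE:
            _cache.clear()        # crude eviction: empty the cache entirely
        _cache[type(pattern), pattern, flags] = p
        return p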
mike bayer added the comment:
In response to Ezio: I poked around the source here, since I've never been sure
whether re.compile() caches its result or not. It seems to be the case in 2.7
and 3.2 as well; 2.7 uses a local caching scheme and 3.2 uses
functools.lru_cache, yet we don't see as much of …
Nick Coghlan added the comment:
Now that Brett has a substantial portion of the benchmark suite running on
Py3k, we should see a bit more progress on the PyPy-inspired speed.python.org
project (which should make it much easier to catch this kind of regression
before it hits a production release).
Serhiy Storchaka added the comment:
This is not only a 3.3 regression; it is also a 3.2 regression. 3.1, 3.2 and
3.3 have different caching implementations.
Microbenchmark:
$ ./python -m timeit -s "import re" "re.match('', '')"
Results:
3.1: 2.61 usec per loop
3.2: 5.77 usec per loop
3.3: 11.8 usec per loop
New submission from Philip Jenvey:
#9396 replaced a few caches in the stdlib with lru_cache; this made the mako_v2
benchmark on Python 3 almost 3x slower than on 2.7.
The benchmark results are good now that Mako was changed to cache the compiled
re itself, but the problem still stands that lru_cache seems …
Changes by Ezio Melotti ezio.melo...@gmail.com:
--
keywords: +3.3regression
nosy: +ezio.melotti
Changes by Antoine Pitrou pit...@free.fr:
--
stage:  -> needs patch
versions: +Python 3.4
Antoine Pitrou added the comment:
lru_cache() seems to use a complicated make_key() function, which is invoked on
each cache hit. The LRU logic is probably on the slow side too, compared to
hand-coded logic that favours lookup cost over insertion/eviction cost.
--
Brett Cannon added the comment:
It would be interesting to know what speed difference would occur if the
statistics gathering were optional and turned off.
As for _make_key(), I wonder if ``(args, tuple(sorted(kwd.items())))`` as a key
would be any faster, as a tuple's hash is derived from its contents.
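A quick way to test that hunch, as a sketch; the argument values are invented
to resemble the re._compile('', 0) call and are not from this issue:

    import timeit

    setup = "args = ('', 0); kwds = {}"
    # cost of building the suggested key shape on every cache hit
    print(timeit.timeit("(args, tuple(sorted(kwds.items())))", setup=setup))
    # baseline: reusing the args tuple directly as the key
    print(timeit.timeit("args", setup=setup))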
Brett Cannon added the comment:
Ditching the statistics only sped up regex_compile by 2%.
--
Antoine Pitrou added the comment:
> Ditching the statistics only sped up regex_compile by 2%.
Does explicit compiling even go through the cache?
Regardless, the issue here is with the performance of cache hits, not cache
misses. By construction, you cache something which is costly to compute,
so the …
Brett Cannon added the comment:
re.compile() calls _compile(), which has the lru_cache decorator, so it will
trigger it. But you make a good point, Antoine, that it's the hit overhead here
that we care about, as long as misses don't get worse; the calculation of what
is to be cached should …
Changes by Barry A. Warsaw ba...@python.org:
--
nosy: +barry
Nick Coghlan added the comment:
Did you try moving the existing single-argument fast path to before the main if
statement in _make_key? That is:

    if not kwds and len(args) == 1:
        key = args[0]
        key_type = type(key)
        if key_type in fasttypes:
            if typed:
                ...
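For reference, a simplified sketch of what that reordering could look like.
This condenses the real functools._make_key (which also wraps the slow-path
key in a _HashedSeq); the hoisted condition and names are approximations, not
Nick's actual code:

    _kwd_mark = (object(),)   # sentinel separating positional and keyword parts
    _fasttypes = {int, str, frozenset, type(None)}

    def _make_key(args, kwds, typed):
        # Fast path first: a single positional argument of a cheap-to-hash
        # type, no keywords, no per-type keying requested.
        if not kwds and not typed and len(args) == 1 and type(args[0]) in _fasttypes:
            return args[0]
        key = args
        if kwds:
            key += _kwd_mark + tuple(sorted(kwds.items()))
        if typed:
            key += tuple(type(v) for v in args)
            if kwds:
                key += tuple(type(v) for v in kwds.values())
        return key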