New submission from INADA Naoki: All lookdict* functions are implemented like this pseudocode:
```
lookup()
if not collision:
    return result
while True:
    perturb_shift()
    lookup()
    if not collision:
        return result
```

This patch changes it to:

```
while True:
    lookup()
    if not collision:
        return result
    perturb_shift()
```

It removes 100 lines of code. Good. But how about performance?

When this patch is applied to 4a534c45bbf6:

```
$ ../python.patched -m perf compare_to default.json patched2.json -G --min-speed=2
Slower (4):
- xml_etree_generate: 271 ms +- 6 ms -> 283 ms +- 9 ms: 1.04x slower (+4%)
- nqueens: 263 ms +- 4 ms -> 272 ms +- 3 ms: 1.04x slower (+4%)
- scimark_monte_carlo: 272 ms +- 10 ms -> 280 ms +- 14 ms: 1.03x slower (+3%)
- scimark_lu: 435 ms +- 23 ms -> 446 ms +- 32 ms: 1.03x slower (+3%)

Faster (7):
- call_method: 15.2 ms +- 0.2 ms -> 14.7 ms +- 0.4 ms: 1.04x faster (-4%)
- call_simple: 14.4 ms +- 0.2 ms -> 13.9 ms +- 0.3 ms: 1.04x faster (-4%)
- xml_etree_iterparse: 227 ms +- 9 ms -> 219 ms +- 7 ms: 1.04x faster (-3%)
- scimark_sor: 527 ms +- 10 ms -> 510 ms +- 11 ms: 1.03x faster (-3%)
- call_method_slots: 14.7 ms +- 0.5 ms -> 14.3 ms +- 0.2 ms: 1.03x faster (-3%)
- genshi_text: 90.2 ms +- 1.1 ms -> 87.8 ms +- 1.1 ms: 1.03x faster (-3%)
- django_template: 403 ms +- 5 ms -> 394 ms +- 4 ms: 1.02x faster (-2%)

Benchmark hidden because not significant (53): 2to3, ...
```

And when this patch is applied to 1a97b10cb420:

```
$ ../python.patched -m perf compare_to default.json patched.json -G --min-speed=2
Slower (6):
- call_simple: 13.5 ms +- 0.5 ms -> 14.4 ms +- 0.4 ms: 1.07x slower (+7%)
- xml_etree_generate: 270 ms +- 6 ms -> 287 ms +- 5 ms: 1.06x slower (+6%)
- xml_etree_process: 240 ms +- 6 ms -> 247 ms +- 4 ms: 1.03x slower (+3%)
- regex_compile: 429 ms +- 3 ms -> 440 ms +- 5 ms: 1.03x slower (+3%)
- call_method_unknown: 16.1 ms +- 0.2 ms -> 16.5 ms +- 0.3 ms: 1.02x slower (+2%)
- logging_simple: 31.2 us +- 0.4 us -> 32.0 us +- 0.3 us: 1.02x slower (+2%)

Faster (8):
- genshi_text: 90.6 ms +- 1.4 ms -> 87.6 ms +- 1.2 ms: 1.03x faster (-3%)
- scimark_sor: 513 ms +- 11 ms -> 497 ms +- 12 ms: 1.03x faster (-3%)
- genshi_xml: 200 ms +- 2 ms -> 194 ms +- 2 ms: 1.03x faster (-3%)
- unpickle_pure_python: 857 us +- 21 us -> 835 us +- 13 us: 1.03x faster (-3%)
- python_startup_no_site: 9.95 ms +- 0.02 ms -> 9.74 ms +- 0.02 ms: 1.02x faster (-2%)
- json_dumps: 29.7 ms +- 0.4 ms -> 29.1 ms +- 0.4 ms: 1.02x faster (-2%)
- xml_etree_iterparse: 225 ms +- 9 ms -> 220 ms +- 5 ms: 1.02x faster (-2%)
- chameleon: 31.1 ms +- 0.3 ms -> 30.5 ms +- 0.5 ms: 1.02x faster (-2%)

Benchmark hidden because not significant (50): 2to3, ...
```

I can't see any stable and significant performance regression. I'll try to create some micro benchmarks.

----------
files: dictlook-refactoring.patch
keywords: patch
messages: 285695
nosy: inada.naoki
priority: normal
severity: normal
status: open
title: dict: simplify lookup function
Added file: http://bugs.python.org/file46324/dictlook-refactoring.patch

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29304>
_______________________________________
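For readers without the patch handy, the unified "lookup inside the loop" shape can be sketched in Python. This is my own illustrative model of open addressing using CPython's probe recurrence (`i = i*5 + perturb + 1; perturb >>= PERTURB_SHIFT`), not the actual C code from the patch; the table layout (a power-of-two list of `(key, value)` tuples or `None`) and the 64-bit hash mask are assumptions made for the sketch:

```python
PERTURB_SHIFT = 5

def probe_slot(table, key):
    """Return the slot index of `key`, or of the first empty slot (a miss)."""
    mask = len(table) - 1                       # table size is a power of two
    perturb = hash(key) & 0xFFFFFFFFFFFFFFFF    # model C's unsigned size_t hash
    i = perturb & mask
    while True:                                 # single unified probe loop
        entry = table[i]
        if entry is None or entry[0] == key:
            return i                            # empty slot (miss) or hit
        # collision: advance the probe sequence, then loop back to lookup
        perturb >>= PERTURB_SHIFT
        i = (i * 5 + perturb + 1) & mask

def insert(table, key, value):
    table[probe_slot(table, key)] = (key, value)

def get(table, key):
    entry = table[probe_slot(table, key)]
    return entry[1] if entry is not None else None
```

The point of the refactoring is visible here: there is one loop body doing lookup-then-perturb, instead of a duplicated first lookup followed by a perturb-then-lookup loop.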