Guido van Rossum <gu...@python.org> added the comment:

I wrote a tiny script that calls compile() on raw bytes read from some source 
file, does this 100 times, and reports the total time. I tested the script with 
Lib/pydoc_data/topics.py (which happens to be the largest source file in the 
CPython repo, but mostly string literals) and with Lib/test/test_socket.py (the 
second-largest file).
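
The original script wasn't posted; a minimal sketch of what it likely does, 
with the function name and reporting format being my own assumptions, could be:

```python
import sys
import time

def time_compile(path, n=100):
    # Hypothetical reconstruction of the timing script described above.
    with open(path, "rb") as f:
        source = f.read()  # raw bytes, as described
    start = time.perf_counter()
    for _ in range(n):
        compile(source, path, "exec")
    total = time.perf_counter() - start
    print(f"{path}: {total:.3f} s total, {total / n * 1000:.2f} ms/compile")
    return total / n

if __name__ == "__main__":
    time_compile(sys.argv[1])
```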

I built python.exe on a Mac with PGO/LTO, from "make clean", both before and 
after PR 30177. For both files, the difference between the results is well 
within the noise caused by my machine (I don't have a systematic way to stop 
background jobs). But it's very clear that this PR cannot have been the cause 
of an 85% jump in the time taken by the python_startup benchmark in 
PyPerformance.

For topics.py, the time was around 7.2 msec/compile; for test_socket.py, it was 
around 38 msec/compile. (I am not showing separate before/after numbers because 
the noise in my data really is embarrassing.)

The compilation speed comes down to ~170,000 lines/sec on my machine (an Intel 
Mac from 2019; 2.6 GHz 6-Core Intel Core i7 running macOS Big Sur 11.6.1; it 
has clang 12.0.5).
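
As a rough sanity check of that rate (the line count below is an assumption; 
the message only reports times, and test_socket.py is on the order of 6,500 
lines):

```python
# Back-of-the-envelope check of the ~170,000 lines/sec figure.
ms_per_compile = 38    # reported time for test_socket.py
lines = 6500           # approximate size of Lib/test/test_socket.py (assumed)
lines_per_sec = lines / (ms_per_compile / 1000)
print(f"{lines_per_sec:,.0f} lines/sec")
```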

It must be something weird on the benchmark machines. I suspect that a new 
version of some package was installed in the venv shared by all the benchmarks 
(we are still using PyPerformance 1.0.2) and that affected something, perhaps 
through a .pth file?

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue46110>
_______________________________________