Steve Dower <steve.do...@python.org> added the comment:

> I'm not sure if PGO builds are reproducible, since there is a training step 
> which requires running a non-deterministic workload (the Python test suite, 
> which is somewhat randomized by I/O timings and other factors).

I'm 99% sure it's instrumented (tracing) PGO rather than sampling PGO, so 
provided everything runs in the same order and follows the same code paths, 
wall-clock timings shouldn't have a significant effect.

Without having looked at the generated code, it may be more effective to 
force the functions called from the macro to never be inlined. The optimiser 
may be missing that those calls are uncommon: it inlines them first, and then 
decides that the whole merged function is too big to inline in the places 
where inlining it would actually be useful.
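As a rough sketch of that approach (hypothetical names; CPython may already 
have its own helper macro for this, so only the raw compiler spellings below 
are the certain part):

    /* Sketch only.  GCC/Clang and MSVC spellings of "never inline this". */
    #if defined(_MSC_VER)
    #  define NO_INLINE __declspec(noinline)
    #elif defined(__GNUC__)
    #  define NO_INLINE __attribute__((noinline))
    #else
    #  define NO_INLINE
    #endif

    /* Rarely-taken slow path called from a hot macro: keeping it out of
     * line keeps the macro's expansion small, so the optimiser can still
     * inline the hot caller where that actually pays off. */
    static NO_INLINE int
    handle_error_slow_path(int err)
    {
        /* ... uncommon error handling ... */
        return err;
    }

    #define CHECK(x) ((x) == 0 ? 0 : handle_error_slow_path(x))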

There's also no particular reason we need to use these tests as the profile; 
it's just that nobody has taken the time to come up with anything better. I'd 
rather see us invest time in generating a good profile than in hand-optimising 
inlining. (Though the test suite is good in many ways, because it makes sure 
all the extension modules are covered. We definitely want as close to full 
code coverage as we can get, at least for the happy paths, but we may need 
more eval-loop coverage in the profile if that's what needs help.)

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue45116>
_______________________________________