On Feb 26, 2019, at 2:28 PM, Neil Schemenauer <nas-pyt...@python.ca> wrote:
> 
> Are you compiling with --enable-optimizations (i.e. PGO)?  In my
> experience, that is needed to get meaningful results.

I'm not, and I would worry that PGO would give less stable comparisons because 
it is highly sensitive to changes in its training set as well as to the actual 
CPython implementation (two moving targets instead of one).  That said, it 
doesn't really matter to the world how I build *my* Python.  We're trying to 
keep the Pythons that people actually use performant.  For the Mac, I think 
there are only four that matter:

1) The one we distribute on the python.org website at
   https://www.python.org/ftp/python/3.8.0/python-3.8.0a2-macosx10.9.pkg

2) The one installed by Homebrew

3) The way folks typically roll their own:
        $ ./configure && make               (or some variant of make install)

4) The one shipped by Apple and put in /usr/bin

Of the four, the ones I've been timing are #1 and #3.
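As an aside, if you want to check which of these configurations a given 
interpreter came from, the configure invocation is recorded in sysconfig.  A 
minimal sketch (the config var can be empty or unset on some vendor builds, so 
treat the result as a hint, not proof):

```python
import sysconfig

# CONFIG_ARGS records the ./configure arguments used for this build;
# it may be empty or None on vendor-supplied interpreters.
args = sysconfig.get_config_var("CONFIG_ARGS") or ""
label = "PGO build" if "--enable-optimizations" in args else "non-PGO build"
print(label)
```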

I'm happy to drop this.  I was looking for independent confirmation and didn't 
get it.  We can't move forward unless someone else also observes a consistently 
measurable regression for a benchmark they care about on a build that they care 
about.  If I'm the only one who notices, then it really doesn't matter.  Also, 
it was reassuring not to see the same effect on a GCC-8 build.
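For anyone wanting to check on their own build, pyperformance is the rigorous 
tool; as a rough sketch of what "consistently measurable" means, compare the 
best and median of repeated timings and only count a shift larger than the 
run-to-run noise (the workload here is a made-up stand-in, not one of my 
benchmarks):

```python
import statistics
import timeit

# Hypothetical stand-in workload -- substitute a benchmark you care about.
STMT = "sum(i * i for i in range(1000))"

# Repeat the measurement several times; the spread between best and
# median gives a feel for the noise floor on this machine.
runs = timeit.repeat(STMT, repeat=5, number=1000)
print(f"best={min(runs):.4f}s  median={statistics.median(runs):.4f}s")
```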

Since the effect seems to be compiler-specific, it may be that we knocked it 
out of a local minimum and that performance will return the next time someone 
touches the eval loop.


Raymond  

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev