This just went by this morning on reddit's /r/programming. It's a paper
that analyzed Python--among a handful of other languages--to answer the
question "are branch predictors still that bad at the big switch
statement approach to interpreters?" Their conclusion: no.
    Our simulations [...] show that, as long as the payload in the
    bytecode remains limited and does not feature a significant
    amount of extra indirect branches, the misprediction rate on the
    interpreter can even become insignificant (less than 0.5 MPKI).
(MPKI = mispredictions per thousand instructions)
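(If you haven't stared at an interpreter loop lately: "the big switch
statement approach" means a dispatch loop like the toy stack machine
below. This is just an illustrative C sketch with made-up opcodes, not
CPython's actual ceval loop. The whole switch typically compiles down
to one shared indirect jump, which is exactly what older predictors
choked on:)

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0;
        for (;;) {
            /* The entire switch usually becomes one jump table: a
               single shared indirect branch serves every opcode. */
            switch (*code++) {
            case OP_PUSH:  stack[sp++] = *code++;            break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3,
                             OP_ADD, OP_PRINT, OP_HALT };
        run(prog);  /* prints 5 */
        return 0;
    }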
Their best results were on simulated hardware with state-of-the-art
prediction algorithms ("TAGE" and "ITTAGE"), but they also demonstrate
that branch predictors in real hardware are getting better quickly.
When running the Unladen Swallow test suite on Python 3.3.2, compiled
with USE_COMPUTED_GOTOS turned off, Intel's Nehalem experiences an
average of 12.8 MPKI--but Sandy Bridge drops that to 3.5 MPKI, and
Haswell reduces it further to a mere *1.4* MPKI. (AFAICT they didn't
compare against Python 3.3.2 using computed gotos, either in terms of
MPKI or in overall performance.)
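(For comparison, here's the same toy VM written with the
labels-as-values / computed-gotos technique that USE_COMPUTED_GOTOS
enables in CPython--again just a sketch, relying on GCC's
labels-as-values extension rather than CPython's real code. Each
opcode handler ends in its own indirect jump, so the predictor gets
per-opcode history instead of one shared branch, which is why this
dispatch style traditionally won on older hardware:)

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        /* GCC extension: && takes the address of a label. */
        static void *dispatch[] =
            { &&op_push, &&op_add, &&op_print, &&op_halt };
        int stack[64], sp = 0;

        /* Every handler jumps straight to the next one: one indirect
           branch per opcode instead of one for the whole loop. */
    #define NEXT() goto *dispatch[*code++]
        NEXT();
    op_push:  stack[sp++] = *code++;            NEXT();
    op_add:   sp--; stack[sp - 1] += stack[sp]; NEXT();
    op_print: printf("%d\n", stack[--sp]);      NEXT();
    op_halt:  return;
    #undef NEXT
    }

    int main(void) {
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3,
                             OP_ADD, OP_PRINT, OP_HALT };
        run(prog);  /* prints 5 */
        return 0;
    }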
The paper is here:
https://hal.inria.fr/hal-01100647/document
I suppose I wouldn't propose removing the labels-as-values opcode
dispatch code yet. But perhaps that day is in sight!
//arry/