On 25/03/2016 16:22, Steven D'Aprano wrote:
> On Fri, 25 Mar 2016 10:06 pm, BartC wrote:

(OK, I'll risk the wrath of Mark Lawrence et al by daring to post my opinions.)

>> But it is an optimisation that /could/ be done by the byte-code compiler

> I would also expect that Victor Stinner's FAT Python project will
> (eventually) look at optimizing such for-loops too. If you see something
> like:
>
> for i in range(N):
>      do_stuff()
>
> and three conditions hold:
>
> (1) i is unused in the loop body;
>
> (2) range is still bound to the expected built-in function, and no
> other "range" in a nearer scope (say, a local variable called "range", or a
> global) exists to shadow it; and

I'd completely forgotten about the volatility of 'range'. Python really doesn't make this stuff easy, does it?
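To make the problem concrete, here is a minimal illustration (my own, not from the thread) of why a compiler cannot simply assume 'range' is the built-in: any binding in a nearer scope can shadow it at run time.

```python
def shadowed():
    # A local name "range" hides the built-in inside this function.
    range = lambda n: [n]        # hypothetical shadowing binding
    return list(range(3))        # uses the local lambda, not the built-in

def unshadowed():
    return list(range(3))        # resolves to the built-in range

print(shadowed())      # [3]
print(unshadowed())    # [0, 1, 2]
```

Nothing in the syntax of the call site distinguishes the two loops, which is why the optimiser needs a guard rather than a static check.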

Checking this at runtime, per loop (I don't think it needs to be per iteration), is a possibility, but it introduces an extra overhead. It might be worth it if the body of the loop is also optimised, but that is getting into the territory covered by tracing JITs.
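A sketch of what such a once-per-loop guard could look like (my own illustration, not FAT Python's actual mechanism): check that 'range' is still the built-in before taking a streamlined counted loop, and fall back otherwise.

```python
import builtins

def repeat_fast(n, body):
    # Guard checked once, on loop entry, not per iteration.
    if range is builtins.range:
        # Fast path: plain counted loop, no generator protocol.
        i = 0
        while i < n:
            body()
            i += 1
    else:
        # Slow path: honour whatever "range" has been rebound to.
        for _ in range(n):
            body()

calls = []
repeat_fast(3, lambda: calls.append(None))
print(len(calls))   # 3
```

The guard itself is a single identity comparison, so the overhead is small, but as noted it only pays off if the fast path does meaningfully less work per iteration.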

My kind of approach would try to keep it simple, and that would be helped tremendously by special language constructs (something like Ruby's 100.times) where you can use a streamlined loop, and possibly unrolling, straight off.
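For what it's worth, present-day Python does have one semi-streamlined idiom for a fixed repeat count (my suggestion, not part of the thread): itertools.repeat(None, n) drives the loop without producing a fresh integer object per iteration, a little like Ruby's 100.times with the index discarded.

```python
import itertools

total = 0
# Repeat the body exactly 5 times; no loop index is materialised.
for _ in itertools.repeat(None, 5):
    total += 1
print(total)   # 5
```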

(Personally, if I were creating a byte-code interpreter for Python as-is, I would have a -dynamic:on or -dynamic:off switch.)

> Here is one of the oldest:
>
> http://www.ibm.com/developerworks/library/l-psyco/index.html

Yes, I tried that long ago. It's very fast on certain benchmarks. (Both psyco and pypy can run my lexer at around 170Klps, and this is code using string compares and long if-elif chains that would be crazy in any other language.) From that point of view, it's very impressive (I can only get half that speed using the same methods).
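A toy fragment (illustrative only, not the actual lexer) of the style described: classifying tokens via string compares in a long if-elif chain, the kind of dispatch a tracing JIT handles well.

```python
def classify(tok):
    # Naive string-compare dispatch, one branch per token kind.
    if tok == "if":
        return "KEYWORD"
    elif tok == "while":
        return "KEYWORD"
    elif tok == "+":
        return "OPERATOR"
    elif tok.isdigit():
        return "NUMBER"
    else:
        return "IDENT"

print([classify(t) for t in ["if", "x", "+", "42"]])
# ['KEYWORD', 'IDENT', 'OPERATOR', 'NUMBER']
```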

--
Bartc
--
https://mail.python.org/mailman/listinfo/python-list
