On 09/11/2015 02:23, Steven D'Aprano wrote:
On Mon, 9 Nov 2015 09:35 am, BartC wrote:

import m
a=10
b=20
c=30
m.f()

The set of global names the compiler knows will be ("m","a","b","c").

Wrong. Up to the line "c=30", the set of names the compiler can infer are m,
a, b and c. Once the line "m.f()" executes, *all bets are off*. The
compiler can no longer infer *anything* about those names. In principle,
m.f may have reached into the globals and deleted *any* of the names,
including itself.
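For instance, a minimal sketch of such a module (a hypothetical m.py, not anything from the example above) could reach back into the importer and remove a name:

# Hypothetical m.py -- purely illustrative.
import sys

def f():
    caller = sys.modules['__main__']    # the top-level script that did "import m"
    if hasattr(caller, 'a'):
        del caller.a                    # after this call, the global "a" is gone

After m.f() returns, any use of the global a in the main module raises NameError.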


I don't believe code can remove these names (that would cause problems).

Of course names can be removed. There's even a built-in statement to do
so: "del".

I tried this code:

a=10
print (a)

del a
#print (a)   # uncommenting this raises NameError: name 'a' is not defined

a=20
print (a)

That sort of confirms what you are saying: names don't even come into existence until the first assignment to them is executed. They don't start out containing None, or some other marker value meaning the name exists but hasn't been initialised to anything. And del removes the name completely from the set that are known.

That makes Python unlike any other language I've used.
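A quick way to see that the name really is removed, rather than left bound to None or some placeholder:

a = 10
print ('a' in globals())    # True
del a
print ('a' in globals())    # False: the name is gone from the module namespace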

On the other hand, if I put the earlier del test into a function and then call the function, attempting to print a just after it has been deleted results in:

 UnboundLocalError: local variable 'a' referenced before assignment

So while local names can presumably be manipulated just like globals, that doesn't stop them being implemented via a fixed slot in a table.
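The dis module makes that slot visible; compiling the same del test as a function shows the name being handled through *_FAST opcodes that index a fixed position in the frame's local-variable array:

import dis

def f():
    a = 10
    print(a)
    del a
    a = 20
    print(a)

dis.dis(f)                       # 'a' is accessed via STORE_FAST / LOAD_FAST / DELETE_FAST
print(f.__code__.co_varnames)    # ('a',) -- the table of local slots

It's loading that slot while it's empty which produces the UnboundLocalError rather than a plain NameError.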

Therefore these names could be referred to by index.

Perhaps. But that adds significant complexity to the compiler,

It would mean that each name needs a flag indicating whether it currently exists: whether it has been brought into existence by an assignment in the program, or has since been banished by del.
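As a rough sketch of that idea (my own toy model, not how CPython actually stores globals), indexed slots plus such a flag might look like this:

# Toy model of indexed name slots with an "exists" flag.
names = ['m', 'a', 'b', 'c']                 # names known at compile time
index = {n: i for i, n in enumerate(names)}  # name -> slot index
values = [None] * len(names)                 # slot storage
bound  = [False] * len(names)                # has the slot been assigned yet?

def store(name, value):
    i = index[name]
    values[i] = value
    bound[i] = True

def load(name):
    i = index[name]
    if not bound[i]:
        raise NameError("name %r is not defined" % name)
    return values[i]

def delete(name):
    i = index[name]
    if not bound[i]:
        raise NameError("name %r is not defined" % name)
    values[i] = None
    bound[i] = False

CPython's fast locals effectively already work this way: each local has a fixed slot, and finding the slot empty is what raises the UnboundLocalError shown earlier.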

and the
performance benefits *in practice* may not be as good as you imagine. After
all, there is usually far more to real programs than just getting and
setting names.

Any program will consist largely of pushing names and constants (LOAD ops), performing some operations on them, and then sometimes popping values back into names (STORE ops).

If the names refer to complex data, then the work done on that data will dominate the timings. But often it's the 'little' variables holding indices, counts, flags and so on that are being loaded and stored most frequently.

def unopt():
    from math import *  # Defeats the local variable optimization.
    x = sin; x = cos; x = tan; x = exp; x = pi
    x = e; x = trunc; x = log; x = hypot; x = sqrt
    return

def opt():
    # Explicit imports bind these names as locals, so lookups can be optimized.
    from math import sin, cos, tan, exp, pi, e, trunc, log, hypot, sqrt
    x = sin; x = cos; x = tan; x = exp; x = pi
    x = e; x = trunc; x = log; x = hypot; x = sqrt
    return

When I run this code, I get 16.5607659817 seconds for unopt, and 3.58955097198 seconds for opt. That's a significant difference.

The optimisation means the code can use LOAD_FAST instead of LOAD_NAME. The unoptimised version still uses STORE_FAST rather than STORE_NAME for the assignments to x, so the difference could be even bigger if it also had to fall back to STORE_NAME.
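One way to confirm which opcodes each version compiles to, and to time them, assuming the two functions above are defined in the same module (a sketch of a harness; the original timings' harness isn't shown, and unopt only compiles under Python 2, since import * inside a function is a SyntaxError in Python 3):

import dis
from timeit import timeit

dis.dis(unopt)    # the math names compile to LOAD_NAME; the stores to x are still STORE_FAST
dis.dis(opt)      # the explicitly imported names are locals, so LOAD_FAST

print (timeit(unopt, number=1000000))
print (timeit(opt, number=1000000))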

--
Bartc
