[Jim Jewett]
> >> (2)  Add a way to say "Make this function I'm calling use *my* locals
> >> and globals."  This seems to meet all the agreed-upon-as-good use
> >> cases, but there is disagreement over how to sensibly write it.  The
> >> calling function is the place that could get surprised, but people
> >> who want thunks seem to want the specialness in the called function.

[Guido]
> > I think there are several problems with this. First, it looks
> > difficult to provide semantics that cover all the corners for the
> > blending of two namespaces. What happens to names that have a
> > different meaning in each scope?

[Jim]
> Programming error.  Same name ==> same object.

Sounds like a recipe for bugs to me. At the very least it is a total
breach of abstraction, which is the fundamental basis of the
relationship between caller and callee in normal circumstances. The
more I understand your proposal the less I like it.
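
To make the failure mode concrete, here is a rough emulation of the
blending with exec and an explicit namespace dict (not your proposed
syntax, just an approximation of the semantics): the callee silently
clobbers a caller variable it happens to share a name with.

    def caller():
        total = 100                  # the caller's own meaning of "total"
        shared = {"total": total}    # stand-in for the proposed shared scope

        # body of a "collapsed" callee that also happens to bind "total"
        callee_body = "total = 0\nfor i in range(3): total += i\n"
        exec(callee_body, globals(), shared)

        return shared["total"]       # 3, not 100 -- the caller's value is gone

    print(caller())                  # prints 3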

> If a function is using one of _your_ names for something incompatible,
> then don't call that function with collapsed scope.  The same "problem"
> happens with globals today.  Code in module X can break if module Y
> replaces (not shadows, replaces) a builtin with an incompatible object.
> 
> Except ...
> > (E.g. 'self' when calling a method of
> > another object; or any other name clash.)
> 
> The first argument of a method *might* be a special case.  It seems
> wrong to unbind a bound method.  On the other hand, resource
> managers may well want to use unbound methods for the called
> code.

Well, what would you pass in as the first argument then?
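
With what we have today the choice is between a bound method, which
already carries its instance, and an unbound one, where the caller has
to supply the instance explicitly:

    class Resource(object):
        def close(self):
            print("closing %r" % self)

    r = Resource()
    r.close()             # bound: 'self' is already r
    Resource.close(r)     # unbound (a plain function in 3.x): caller passes r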

> > Are the globals also blended?  How?
> 
> Yes.  The callee does not even get to see its normal namespace.
> Therefore, the callee does not get to use its normal name resolution.

Another breach of abstraction: if a callee wants to use an imported
module, the import should be present in the caller, not in the callee.

This seems to me to repeat all the mistakes of the dynamic scoping of
early Lisps (including GNU Emacs Lisp I believe).

It really strikes me as an endless source of errors that these
blended-scope callees (in your proposal) are ordinary
functions/methods, which means that they can *also* be called without
blending scopes. Having special syntax to define a callee intended for
scope-blending seems much more appropriate (even if there's also
special syntax at the call site).
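
Even something as crude as an explicit marker at the definition would
be an improvement. Purely as an illustration (not a proposal), with
today's tools it could look like this:

    def collapsible(func):
        # Hypothetical marker: the author declares that this function was
        # written expecting its names to be blended with the caller's.
        func.__collapsible__ = True
        return func

    @collapsible
    def foo(b):
        # would read 'a' from, and write 'c' into, the caller's scope
        pass

    # A call site (or the compiler) could then refuse to blend scopes
    # unless the callee opted in:
    assert getattr(foo, "__collapsible__", False)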

> If the name normally resolves in locals (often inlined to a tuple, today),
> it looks in the shared scope, which is "owned" by the caller.  This is
> different from a free variable only because the callee can write to this
> dictionary.

Aha! This suggests that a blend-callee needs to use different bytecode
to avoid doing lookups in the tuple of optimized locals, since the
indices assigned to locals in the callee and the caller won't match up
except by miracle.
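
Concretely, dis makes the mismatch easy to see: the fast-locals slots
are assigned per code object, so slot 0 means 'b' in the callee but 'a'
in the caller.

    import dis

    def callee(b):
        c = a          # 'a' compiles to LOAD_GLOBAL here, not a caller lookup
        return c

    def caller():
        a = "a1"
        return callee("b1")

    # callee's co_varnames is ('b', 'c'); caller's is ('a',) -- the
    # LOAD_FAST/STORE_FAST indices in the two code objects are unrelated.
    dis.dis(callee)
    dis.dis(caller)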

> If the name is free in that shared scope, (which implies that the
> callee does not bind it, else it would be added to the shared scope)
> then the callee looks up the caller's nested stack and then to the
> caller's globals, and then the caller's builtins.
> 
> > Second, this construct only makes sense for all callables;

(I meant this to read "does not make sense for all callables".)

> Agreed.

(And I presume you read it that way. :-)

> But using it on a non-function may cause surprising results
> especially if bound methods are not special-cased.
> 
> The same is true of decorators, which is why we have (at least
> initially) "function decorators" instead of "callable decorators".

Not true. It is possible today to write decorators that accept things
other than functions -- in fact, this is often necessary if you want
to write decorators that combine properly with other decorators that
don't return function objects (such as staticmethod and classmethod).
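
For example, a registration decorator stacked on top of @classmethod
receives a classmethod object, not a function, and has to pass it
through untouched:

    registry = []

    def register(obj):
        # 'obj' may be a plain function, a classmethod/staticmethod object,
        # or anything else the decorator below it produced.
        registry.append(obj)
        return obj

    class Factory(object):
        @register
        @classmethod
        def create(cls):
            return cls()

    print(type(registry[0]))    # <class 'classmethod'>
    print(Factory.create())     # still usable as a classmethod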

> > it makes no sense when the callable is implemented as
> > a C function,
> 
> Or rather, it can't be implemented, as the compiler may well
> have optimized the variables names right out.  Stack frame
> transitions between C and python are already special.

Understatement of the year. There just is no similarity between C and
Python stack frames. How much do you really know about Python's
internals???

> > or is a class, or an object with a __call__ method.
> 
> These are just calls to __init__ (or __new__) or __call__.

No they're not. Calling a class *first* creates an instance (calling
__new__ if it exists) and *then* calls __init__ (if it exists).
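
You can watch the order directly:

    class Widget(object):
        def __new__(cls, *args):
            print("1. __new__ creates the instance")
            return super(Widget, cls).__new__(cls)

        def __init__(self, name):
            print("2. __init__ initializes it")
            self.name = name

    w = Widget("gadget")    # prints the two lines in that order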

> These may be foolish things to call (particularly if the first
> argument to a method isn't special-cased), but ... it isn't
> a problem if the class is written appropriately.  If the class
> is not written appropriately, then don't call it with collapsed
> scope.

That's easy for you to say. Since the failure behavior is so messy I'd
rather not get started.

> > Third, I expect that if we solve the first two
> > problems, we'll still find that for an efficient implementation we
> > need to modify the bytecode of the called function.
> 
> Absolutely.  Even giving up the XXX_FAST optimizations would
> still require new bytecode to not assume them.  (Deoptimizing
> *all* functions, in *all* contexts, is not a sensible tradeoff.)

So you actually *agree* that blended-scope functions should be marked
as such at the callee definition, not just at the call site. Or how
else would you do this?

> Eventually, an optimizing compiler could do the right thing, but ...
> that isn't the point.
> 
> For a given simple algorithm, interpreted python is generally slower
> than compiled C, but we write in python anyhow -- it is fast enough,
> and has other advantages.  The same is true of anything that lets
> me not cut-and-paste.

Whatever. Any new feature that causes a measurable slowdown for code
that does *not* need the feature has a REALLY hard time getting
accepted, by me as well as by the Python community. Slow Python down
enough, and your target audience reduces to a small bunch of folks who
are programming for their own education.

> > Try to make sure that it can be used in a "statement context"
> > as well as in an "expression context".
> 
> I'm not sure I understand this.  The preferred way would be
> to just stick the keyword before the call.  Using 'collapse', it
> would look like:
> 
>     def foo(b):
>         c=a
>     def bar():
>         a="a1"
>         collapse foo("b1")
>         print b, c        # prints "b1", "a1"
>         a="a2"
>         foo("b2")        # Not collapsed this time
>         print b, c        # still prints "b1", "a1"

I'm trying to sensitize you to potential uses like this:

def bar():
    a = "a1"
    print collapse foo("b1")

> but I suppose you could treat it like the 'global' keyword
> 
>     def bar():
>         a="a1"
>         collapse foo   # forces foo to always collapse when called within bar
>         foo("b1")
>         print b, c        # prints "b1", "a1"
>         a="a2"
>         foo("b2")        # still collapsed
>         print b, c        # now prints "b2", "a2"

Would make more sense if the collapse keyword was at the module level.

> >> [Alternative 3 ... bigger than merely collapsing scope]
> >> (3)  Add macros.  We still have to figure out how to limit their 
> >> obfuscation.
> >> Attempts to detail that goal seem to get sidetracked.
> 
> > No, the problem is not how to limit the obfuscation. The problem is
> > the same as for (2), only more so: nobody has given even a *remotely*
> > plausible mechanism for how exactly you would get code executed at
> > compile time.
> 
> macros can (and *possibly* should) be evaluated at run-time.

We must still have very different views on what a macro is. After a
macro is run, there is new syntax that needs to be parsed and
compiled to bytecode. While Python frequently switches between compile
time and run time, anything that requires invoking the compiler each
time a macro is used will be so slow that nobody will want to use it.
(Python's compiler is very slow, and it's even slower in alternate
implementations like Jython and IronPython.)
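
A rough measurement with today's tools shows the kind of overhead I
mean (just an illustration: re-running one precompiled statement versus
recompiling it on every use):

    import timeit

    src = "x = sum(range(10))"

    precompiled = timeit.timeit(src, number=10000)
    recompiled = timeit.timeit("exec(compile(src, '<macro>', 'exec'))",
                               setup="src = %r" % src, number=10000)
    print("precompiled: %.3fs   recompiled every time: %.3fs"
          % (precompiled, recompiled))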

> Compile time should be possible (there is an interpreter running) and
> faster, but ... is certainly not required.

OK, now you *must* look at the Boo solution.
http://boo.codehaus.org/Syntactic+Macros

> Even if the macros just rerun the same boilerplate code less efficiently,
> it is still good to have that boilerplate defined once, instead of cutting
> and pasting.  Or, at least, it is better *if* that once doesn't become
> unreadable in the process.

I am unable to assess the value of this mechanism unless you make a
concrete proposal. You seem to have something in mind but you're not
doing a good job getting it into mine...

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)