At 10:20 PM 1/13/05 -0800, Guido van Rossum wrote:
[Guido]
> >This may solve the current raging argument, but IMO it would make the
> >optional signature declaration less useful, because there's no way to
> >accept other kinds of adapters. I'd be happier if def f(X: Y) implied X
> >= adapt(X, Y).

[Phillip]
> > The problem is that type declarations really want more guarantees about
> > object identity and state than an unrestricted adapt() can provide,

> I'm not so sure. When I hear "guarantee" I think of compile-time
> checking, and I thought that was a no-no.

No, it's not compile-time based; it's totally at runtime. I mean that if the implementation of 'adapt()' *generates* the adapter (cached, of course, per source/target type pair), it can trivially guarantee that the adapter is stateless. Here's a quick demo (strawman syntax) of declaring adapters...


First, a type declaring that its 'read' method has the semantics of 'file.read':

    class SomeKindOfStream:
        def read(self, byteCount) like file.read:
            ...

Second, third-party code adapting a string iterator to a readable file:

    def read(self, byteCount) like file.read for type(iter("")):
        # self is a string iterator here, implement read()
        # in terms of its .next()

And third, some standalone code implementing an "abstract" dict.update method for any source object that supports a method that's "like" dict.__setitem__:

    def update_anything(self:dict, other:dict) like dict.update for object:
        for k,v in other.items(): self[k] = v

Each of these examples registers the function as an implementation of the corresponding operation (file.read in the first two cases, dict.update in the third) for the appropriate source type. When you want to build an adapter from SomeKindOfStream or from a string iterator to the "file" type, you just walk the 'file' type's descriptors and, for each one, look up the implementation registered for the source type (SomeKindOfStream or string-iter). If there is no implementation registered for a particular descriptor of 'file', you leave the corresponding attribute off the adapter class, resulting in a class representing the subset of 'file' that can be obtained for the source class.

The result is that you generate a simple adapter class whose only state is a read-only slot pointing to the adapted object, with descriptors that bind the registered implementations to that object. That is, each descriptor returns a bound instancemethod whose im_self is the original object, not the adapter. (Thus the implementation never even gets a reference to the adapter, unless 'self' in the method is declared to be of the same type as the adapter, which would be the case for an abstract method like 'readline()' being implemented in terms of 'read'.)
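To make that concrete, here is a rough sketch, in ordinary present-day Python, of what such an adapter generator could look like. All the names here ('_registry', 'build_adapter', '__subject__') are my own illustration, not proposed API, and a real implementation would need more care about descriptor lookup and cache invalidation:

    # Implementations registered by the 'like' declarations, keyed by
    # (target descriptor, source type); the generated adapter classes
    # are cached per (source, target) type pair.
    _registry = {}
    _adapter_cache = {}

    def register(descriptor, source_type, func):
        _registry[descriptor, source_type] = func

    def build_adapter(source_type, target_type):
        try:
            return _adapter_cache[source_type, target_type]
        except KeyError:
            pass

        class Adapter(object):
            # The adapter's only state: one slot pointing at the adaptee.
            __slots__ = ('__subject__',)
            def __init__(self, subject):
                self.__subject__ = subject

        for name, descriptor in list(vars(target_type).items()):
            impl = _registry.get((descriptor, source_type))
            if impl is None:
                continue    # nothing registered: leave the attribute off,
                            # yielding a *partial* adapter
            def make_property(impl):
                # Bind the registered implementation to the *original*
                # object, so impl never even sees the adapter.
                def get(adapter):
                    subject = adapter.__subject__
                    return lambda *args, **kw: impl(subject, *args, **kw)
                return property(get)
            setattr(Adapter, name, make_property(impl))

        _adapter_cache[source_type, target_type] = Adapter
        return Adapter

Under this sketch, each 'like' declaration boils down to a register() call, and adapt(obj, file) just returns build_adapter(type(obj), file)(obj), with the cache making the per-call cost trivial.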

Anyway, it's therefore trivially "guaranteed" to be stateless (in the same way that an 'int' is "guaranteed" to be immutable), and the implementation is also "guaranteed" to be able to always get back the "original" object.

Defining adaptation in terms of adapting operations also solves another common problem with interface mechanisms for Python: the dreaded "mapping interface" and "file-like object" problem. Really, being able to *incompletely* implement an interface is often quite useful in practice, so this "monkey see, monkey do" typing ditches the whole concept of a complete interface in favor of "explicit duck typing". You're just declaring "how can X act 'like' a duck" -- emulating behaviors of another type rather than converting structure.
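Continuing the build_adapter sketch above: if a hypothetical StringIterType has only a 'read' implementation registered, the resulting adapter covers exactly that much of 'file' and no more:

    FileAdapter = build_adapter(StringIterType, file)
    f = FileAdapter(iter("some text"))
    f.read(4)       # works: a 'read' implementation was registered
    f.fileno()      # AttributeError: never registered, so never generated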


> Are there real-life uses of stateful adapters that would be thrown out
> by this requirement?

Think about this: if an adapter has independent state, that means it has a particular scope of applicability. You're going to keep the adapter and then throw it away at some point, like you do with an iterator. If it has no state, or only state that lives in the original object (by tacking annotations onto it), then it has a common lifetime with the original object.
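For example (my own illustration, nothing from PEP 246 itself): an adapter that presents a list of lines as a readable stream has to carry a cursor of its own, so *which* adapter instance you hold matters:

    class OneShotReader(object):
        # Stateful adapter: the read position lives in the adapter,
        # not in the underlying list of lines.
        def __init__(self, lines):
            self.lines = lines
            self.pos = 0                  # adapter-private state
        def readline(self):
            if self.pos >= len(self.lines):
                return ''
            line = self.lines[self.pos]
            self.pos += 1
            return line

    lines = ['a\n', 'b\n']
    r1 = OneShotReader(lines)
    r1.readline()                         # 'a\n'
    # If adaptation happens implicitly at every call boundary, a *new*
    # OneShotReader gets built each time, and the position is lost:
    r2 = OneShotReader(lines)
    r2.readline()                         # 'a\n' again, not 'b\n'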


If it has state, then you have to explicitly manage that state; and you can't do that if the only way to create an adapter is to pass the original object into some other function that does the adapting -- unless all that function is going to do is return the adapter back to you!

Thus, stateful adapters *must* be explicitly adapted by the code that needs to manage the state.

This is why I say that PEP 246 is fine, but type declarations need a more restrictive version. PEP 246 provides a nice way to *find* stateful adapters; it just shouldn't be applied implicitly to function arguments.
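In other words (reusing the OneShotReader sketch above), the caller that owns the loop creates the adapter, and the callee just uses what it's given:

    # The caller creates and owns the stateful adapter explicitly, so its
    # lifetime is unambiguous; copy_lines() does no adapting of its own.
    def copy_lines(reader, out):
        while True:
            line = reader.readline()
            if not line:
                break
            out.append(line)

    out = []
    copy_lines(OneShotReader(['a\n', 'b\n']), out)  # caller picked the adapter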


> > Even if you're *very* careful, your seemingly safe setup can be blown just
> > by one routine passing its argument to another routine, possibly causing an
> > adapter to be adapted.  This is a serious pitfall because today when you
> > 'adapt' you can also access the "original" object -- you have to first
> > *have* it, in order to *adapt* it.

> How often is this used, though? I can imagine all sorts of problems if
> you mix access to the original object and to the adapter.

Right - and early adopters of PEP 246 are warned about this, either by the PEP or by the PyProtocols docs. The PyProtocols docs have dire warnings early on about not forwarding adapted objects to other functions, unless you know the other function needs only the interface you adapted to. With type declarations, however, you may never receive the original object in the first place.
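Here's a toy version of the failure mode (all names, and the use of plain strings as "protocols", are just for illustration; PEP 246's real machinery is richer):

    class Original(object):
        pass

    class ReadableAdapter(object):
        def __init__(self, obj):
            self.obj = obj

    # Only Original itself is registered for either protocol.
    ADAPTERS = {
        (Original, 'Readable'): ReadableAdapter,
        (Original, 'Seekable'): lambda obj: obj,   # seekable as-is
    }

    def adapt(obj, protocol):
        try:
            return ADAPTERS[type(obj), protocol](obj)
        except KeyError:
            raise TypeError("can't adapt %r to %s" % (obj, protocol))

    def g(x):
        x = adapt(x, 'Seekable')    # implicit under a type declaration

    def f(x):
        x = adapt(x, 'Readable')    # x is now a ReadableAdapter...
        g(x)                        # ...so this raises TypeError: only
                                    # Original was registered as Seekable

    f(Original())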

> > But type declarations using adapt()
> > prevent you from ever *seeing* the original object within a function.  So,
> > it's *really* unsafe in a way that explicitly calling 'adapt()' is
> > not.  You might be passing an adapter to another function, and then that
> > function's signature might adapt it again, or perhaps just fail because you
> > have to adapt from the original object.

> Real-life example, please?

If you mean an example of code that's currently using adapt(), which I'd have changed to use a type declaration instead and thereby broken something, I'll have to look for one and get back to you. I have a gut feeling/vague recollection that there are some, but I don't know how many.


The problem is that the effect is inherently non-local; you can't look at a piece of code using type declarations and have a clue as to whether there's even *potentially* a problem there.


> I can see plenty of cases where this could happen with explicit
> adaptation too, for example f1 takes an argument and adapts it, then
> calls f2 with the adapted value, which calls f3, which adapts it to
> something else. Where is f3 going to get the original object?

PyProtocols warns people not to do this in the docs, but it can't do anything about enforcing it.

> But the solution IMO is not to weigh down adapt(), but to agree, as a
> user community, not to create such "bad" adapters, period.

Maybe. The thing that inspired me to come up with a new approach is that "bad" adapters are just *sooo* tempting; many of the adapters that we're just beginning to realize are "bad" were ones that Alex and I both initially thought were okay. Making the system such that you get "safe" adapters by default removes the temptation, and provides a learning opportunity to explain why the caller needs to manage the state when creating a stateful adapter. PEP 246 can still leave implicit how you *get* the adapter, but the adapter should still be created explicitly by the code that needs to manage its lifetime.

> OTOH there may be specific cases where the conventions of a particular
> application or domain make stateful or otherwise naughty adapters
> useful, and everybody understands the consequences and limitations.

Right; and I think that in those cases, it's the *caller* that needs to (explicitly) adapt, not the callee, because it's the caller that knows the lifetime for which the adapter needs to exist.

> > Clark's proposal isn't going to solve this issue for PEP 246, alas.  In
> > order to guarantee safety of adaptive type declarations, the implementation
> > strategy *must* be able to guarantee that 1) adapters do not have state of
> > their own, and 2) adapting an already-adapted object re-adapts the original
> > rather than creating a new adapter.  This is what the monkey-typing PEP and
> > prototype implementation are intended to address.

> Guarantees again. I think it's hard to provide these, and it feels
> unpythonic.

Well, right now Python provides lots of guarantees, like the guarantee that numbers are immutable. It would be no big deal to guarantee immutable adapters if Python supplies the adapter type for you.

> (2) feels weird too -- almost as if it were to require that
> float(int(3.14)) should return 3.14. That ain't gonna happen.

No, but 'int_wrapper(3.14).original_object' is trivial.

The point is that adaptation should just always return a wrapper of a type that's immutable and has a pointer to the original object.
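Something along these lines (an illustration, not a concrete proposal; a real version would also make the slot read-only):

    class int_wrapper(object):
        __slots__ = ('original_object',)   # the wrapper's only state
        def __init__(self, obj):
            self.original_object = obj
        def __int__(self):
            return int(self.original_object)

    w = int_wrapper(3.14)
    int(w)               # 3 -- behaves like the target type where needed
    w.original_object    # 3.14 -- the original is always recoverable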

If you prefer, call these characteristics "implementation requirements" rather than guarantees. :)


> Or maybe we shouldn't try to guarantee so much and instead define
> simple, "Pythonic" semantics and live with the warts, just as we do
> with mutable defaults and a whole slew of other cases where Python
> makes a choice rooted in what is easy to explain and implement (for
> example allowing non-Liskovian subclasses). Adherence to a particular
> theory about programming is not very Pythonic; doing something that
> superficially resembles what other languages are doing but actually
> uses a much more dynamic mechanism is (for example storing instance
> variables in a dict, or defining assignment as name binding rather
> than value copying).

Obviously the word "guarantee" hit a hot button; please don't let it obscure the actual merit of the approach, which does not involve any sort of compile-time checking. Heck, it doesn't even have interfaces!

