On 5/10/06, tomer filiba <[EMAIL PROTECTED]> wrote:
> i think it's only Bill who said that, but i wanted to show why interfaces
> (and inheritance) shouldn't be the basis for type-checking.
>
> here's a nice interface:
>
> class IFile(object):
>     def write(self, data):
>         pass
>     def read(self, count):
>         pass
>
> and here's a class that "implements/derives" from this interface, but
> completely ignores/breaks it:
>
> class MyFile(IFile):
>     def write(self, x, y, z):
>         ....
>
> m = MyFile()
>
> now m is an instance of IFile, but calling m.write with the signature of
> IFile.write won't work...
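[For the archive: a runnable sketch of the quoted example. The class bodies
are filled in with `pass` where the original had `....`; the demonstration
is that the isinstance() check succeeds while the call the "interface"
promises fails at runtime.]

```python
# MyFile subclasses IFile but changes write()'s signature, so the
# inheritance-based "interface" check passes while the call fails.
class IFile(object):
    def write(self, data):
        pass
    def read(self, count):
        pass

class MyFile(IFile):
    def write(self, x, y, z):   # incompatible signature
        pass

m = MyFile()
print(isinstance(m, IFile))     # True: the interface check succeeds

try:
    m.write("some data")        # the call promised by IFile.write
except TypeError as e:
    print("TypeError:", e)      # ...but the actual signature rejects it
```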
So what? It wouldn't work either if it had the correct signature but used the wrong implementation. You seem to be still under the influence of the type-safety mafia, who want to make you believe that with enough type-checking you can prevent all bugs (though they don't believe it themselves).

Python already has a very useful kind of strong type safety: it tags all data at the lowest level so it won't be interpreted the wrong way; in particular it can't be tricked into dereferencing something that's not a pointer to an object, or calling something that's not a pointer to a well-behaved function. (Extensions like ctypes excluded.) Java makes a similar guarantee. C++ and Perl don't.

Interface declarations, generic functions, and all the stuff we're discussing here are not tools for catching more bugs; they are tools for making programs more flexible. They let us overload APIs so that foo(x) can do different things based on what kind of thing x *claims to be*.

There are already lots of ad-hoc approaches to this: foo(x) could call x.__foo__(), or it could call fooregistry[type(x)](), or it could partition the type space into a few categories (e.g. sequences, dicts, files, and everything else) by doing a few hasattr() checks. Or, of course, we could have written x.foo(), which is yet another approach to dispatch. The design choices are endless, and one size is unlikely to fit all: adaptation, generic/overloaded functions, interfaces, abstract base classes (not quite the same as interfaces), typeclasses, who knows...

The only things I clearly see at this point are:

(a) The x.foo() notation will continue to handle at least 80% of the cases. Anything else we come up with is going to have to be satisfied with improving the situation for the remaining 20% or less. The mechanisms for resolving x.foo() are already extremely flexible; you can overload __getattribute__ (there you have your mutable mro!), and if that isn't enough you can write a metaclass.
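[For the archive: a minimal sketch of the flexibility point (a) refers to.
The class and attribute names are illustrative only; it shows that the
resolution of x.foo() is already hookable by overloading __getattribute__.]

```python
# Reroute attribute lookup through a per-class alias table before
# falling back to normal resolution; Flexible and _aliases are
# made-up names for illustration, not any real API.
class Flexible:
    _aliases = {"foo": "default_foo"}

    def __getattribute__(self, name):
        # Fetch the table via object.__getattribute__ to avoid recursion.
        aliases = object.__getattribute__(self, "_aliases")
        if name in aliases:
            name = aliases[name]
        return object.__getattribute__(self, name)

    def default_foo(self):
        return "resolved via the alias table"

x = Flexible()
print(x.foo())   # x.foo is rerouted to default_foo by __getattribute__
```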
I doubt that we need more flexibility in this area, and I doubt that we'll want to revise the existing approach much.

(b) It would be good if we could provide a standard mechanism or convention to handle the most common 80% of those 20% (i.e. 16% of the total). At this point I'm not sure whether this ought to be based on generic/overloaded functions, interfaces/abstract base classes, or something completely different; those two are the most likely candidates, perhaps even together.

(c) Any solution should not prevent people like Phillip Eby from implementing their own mechanisms to cover the remaining 4% of the total. (These numbers are of course just rough indicators. :-)

It would be ideal if such mechanisms could be implemented as systematic modifications of the standard approach (like metaclasses are), but that's not an absolute necessity as long as there's *some* way to implement advanced dispatch machinery.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
_______________________________________________
Python-3000 mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-3000
Unsubscribe: http://mail.python.org/mailman/options/python-3000/archive%40mail-archive.com
