Luke wrote:

    "In the absence of other information, a derived class behaves just
    like its parent."

    I can argue that one into the ground, but it is a postulate and
    doesn't fall out of anything deeper (in my thinking paradigm, I
    suppose).  My best argument is: how can you expect to add to
    something's behavior if it changes before you start?

But the act of deriving is already a change...a change in identity and in type relationships. Derivation is not symmetrical, and derived classes are not identical to their base classes. *Especially* under multiple inheritance.


    Every SMD system that I can think of obeys this principle.

Perl 5's SMD doesn't. At least, not under multiple inheritance.
Perl 6's SMD doesn't either. At least, not with respect to submethods.


    And as I showed below, the Manhattan metric for MMD dispatch is
    outside the realm of OO systems that obey this principle.

Yep. Because MMD relies on type relationships, and type relationships change under inheritance.
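To make the metric concrete, here's a minimal sketch of Manhattan-metric dispatch. It's Python rather than Perl 6, and the class names, variant names, and `manhattan_winners` helper are all invented for illustration; it models only the arithmetic, not any real dispatcher. It also shows how merely deriving an empty subclass changes which variant wins:

```python
# A toy model of Manhattan-metric MMD. The "distance" from an argument's
# type to a parameter's type is the number of inheritance steps between
# them; the winning variant is the one with the smallest SUM of those
# distances across all argument positions.

def distance(arg_type, param_type):
    """Inheritance steps from arg_type up to param_type (via Python's MRO)."""
    return arg_type.__mro__.index(param_type)

def manhattan_winners(variants, arg_types):
    """variants: list of (name, (param_type, ...)). Returns every variant
    tied for the smallest total distance; more than one means ambiguity."""
    scored = []
    for name, params in variants:
        try:
            total = sum(distance(a, p) for a, p in zip(arg_types, params))
        except ValueError:
            continue  # an argument type isn't a subtype of the parameter
        scored.append((total, name))
    best = min(total for total, _ in scored)
    return sorted(name for total, name in scored if total == best)

class A: pass
class B(A): pass
class C(A): pass
class D(C): pass   # an "empty" derived class that adds nothing

variants = [("foo(B, C)", (B, C)), ("foo(A, D)", (A, D))]

print(manhattan_winners(variants, (B, C)))  # ['foo(B, C)'] -- unambiguous
print(manhattan_winners(variants, (B, D)))  # ['foo(A, D)', 'foo(B, C)'] -- tie!
```

Passing a D where a C used to go turns a clean win for foo(B, C) into an ambiguous tie, even though D adds no behaviour at all: the toy version of the zero-effect violation under discussion.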


    In fact, I'm pretty sure (I haven't proved it) that any MMD system that relies
    on "number of derivations" as its metric will break this principle.

But, as I illustrated in my previous message, Pure Ordering breaks this principle too. Deriving a class can break existing method dispatch, even when "degree of derivation" isn't a factor.
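The same kind of breakage can be sketched for Pure Ordering. Again this is Python as a hypothetical model (the helper names are invented), not any real dispatcher: a variant wins only if it is at least as specific in *every* argument position, so a freshly derived class can turn a clean dispatch into an ambiguity.

```python
# A toy model of Pure Ordering MMD: variant X beats variant Y only if
# every parameter of X is a subtype of (or the same type as) the matching
# parameter of Y. If no variant beats all the others, the call is ambiguous.

def applicable(params, arg_types):
    return all(issubclass(a, p) for a, p in zip(arg_types, params))

def beats(x, y):
    return x != y and all(issubclass(px, py) for px, py in zip(x, y))

def pure_ordering_winners(variants, arg_types):
    live = [(name, params) for name, params in variants
            if applicable(params, arg_types)]
    return sorted(name for name, params in live
                  if all(beats(params, other)
                         for _, other in live if other != params))

class A: pass
class B(A): pass
class C(A): pass
class D(C): pass   # an "empty" derived class that adds nothing

variants = [("foo(B, C)", (B, C)), ("foo(A, D)", (A, D))]

print(pure_ordering_winners(variants, (B, C)))  # ['foo(B, C)']
print(pure_ordering_winners(variants, (B, D)))  # [] -- no winner: ambiguous
```

With a C argument only foo(B, C) is applicable and dispatch is clean; substitute a derived D and both variants apply, neither dominating the other, so the "identical to its parent" expectation fails here too.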


        But your classes *do* do something that makes them do something. They
        change the degree of generalization of the leaf classes under an L[1]
        metric. Since they do that, it makes perfect sense that they also change
        the resulting behaviour under an L[1] metric. If the resulting behaviour
        didn't change then the L[1] *semantics* would be broken.

    Yep.  And I'm saying that L[1] is stupid.

Ah yes, a compelling argument. ;-)


        What we're really talking about here is how do we *combine* the compatibility
        measures of two or more arguments to determine the best overall fit. Pure
        Ordering does it in a "take my bat and go home" manner; Manhattan distance
        does it by weighing all arguments equally.

    For some definition of "equal".

Huh. It treats all arguments as equally significant in determining overall closeness. Just like Pure Ordering does. I don't see your point.
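The disagreement reduces to how each scheme combines per-argument distances into one overall "closeness". A hypothetical pair of candidates with distance tuples (0, 3) and (1, 1) (made-up numbers, purely for illustration) shows the split:

```python
# Two ways to combine per-argument distance tuples into a dispatch decision.
# The distances here are hypothetical, chosen only to show the contrast.

def manhattan_pick(candidates):
    # Smallest sum of distances wins; ties mean ambiguity.
    best = min(sum(d) for d in candidates.values())
    return sorted(n for n, d in candidates.items() if sum(d) == best)

def pure_ordering_pick(candidates):
    # A candidate wins only if it is at least as close in EVERY position.
    def dominates(x, y):
        return x != y and all(a <= b for a, b in zip(x, y))
    return sorted(n for n, d in candidates.items()
                  if all(dominates(d, e)
                         for m, e in candidates.items() if m != n))

candidates = {"variant1": (0, 3), "variant2": (1, 1)}

print(manhattan_pick(candidates))      # ['variant2'] -- smaller total wins
print(pure_ordering_pick(candidates))  # [] -- positions disagree: ambiguous
```

Manhattan is willing to trade a worse fit in one position for a better fit in another; Pure Ordering refuses the trade and declares the call ambiguous whenever the positions disagree.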


    And now maybe you see why I am so disgusted by this metric.  You see,
    I'm thinking of a class simply as the set of all of its possible
    instances.

There's your problem. Classes are not isomorphic to sets of instances, and derived classes are not isomorphic to subsets.


    And then when you refer to L[1] on the number of
    derivations, I put it into set-subset terms, and mathematics explodes.

Sure. But if I think of a class as a piece of cheese and subclasses as mousetraps, the L[1] metric doesn't work either. The fault, dear Brutus, lies not in our metrics, but in our metaphors! ;-)


    Here's how you can satisfy me: argue that Palmer's zero-effect
    principle is irrelevant,

It's not irrelevant; it's merely insufficient. As I mentioned above, deriving a new class does not make the new class identical to the old one in any context where derivation is part of the measure of similarity. Since MMD is precisely such a context, derivation itself cannot be zero-effect.


    and explain either how Manhattan dispatch
    makes any sense in a class-is-a-set world view,

It doesn't. But that's not a problem with the Manhattan metric. ;-)


    or why that world view itself doesn't make sense.

Because a class isn't a set of instances...it's a recipe for creating instances and a specification for the unique behaviour of those instances. Two sets that happen to contain the same elements are--by definition--identical. Two classes that happen to specify the same set of instance behaviours are--again by definition--*not* identical.


    Or just don't satisfy me.

I suspect this is the option that will occur. You seem to be looking for mathematical correctness and theoretical purity...a laudable goal. But I'm merely looking for practical utility and convenience.

So far, the twain seem destined never to meet.

Damian
