Do I need "has $.foo;" for accessor-only virtual attributes?
Say I make an "accessor" method for an attribute that doesn't really exist. A good example of this is the "month_0" vs "month" properties on a date object: I want to make both look like real properties, without the users of the class knowing which one is the "real" one. Users of the class include people subclassing it, so they need to be able to use $.month_0 and $.month even though there is no "has $.month_0" declared in the class implementation, only "has $.month".

So, is $.month just shorthand for $?SELF.month, which happens to be optimised away to a variable access in the common case where no "method month" is defined, or where its declaration carries a sufficiently simple "is accessor" trait? And is $:month, in turn, actually $?SELF.(":month"), where ":month" is an alias for the submethod called "month"? After all, we want Roles used by Classes to have access to this virtual-attribute coolness, too.

These simple definitions should make all sorts of OO tricks possible, and reduce the definition of Classes to one that is purely functional (i.e. state is just a set of functions, too).

Sam.
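The idea maps directly onto accessor-based properties in other languages. Here is a rough Python analogue (the Date class and its storage layout are invented for illustration): one real attribute backs two properties that look equally "real" to users and subclasses alike.

```python
class Date:
    """One real attribute, two apparently equivalent properties."""

    def __init__(self, month):
        self._month = month          # the only real storage (1-based)

    @property
    def month(self):                 # the "real" accessor
        return self._month

    @month.setter
    def month(self, value):
        self._month = value

    @property
    def month_0(self):               # purely virtual, 0-based view
        return self._month - 1

    @month_0.setter
    def month_0(self, value):
        self._month = value + 1

d = Date(3)
assert d.month == 3 and d.month_0 == 2
d.month_0 = 0                        # writing the virtual attribute
assert d.month == 1                  # updates the real one
```

Callers (including subclasses) cannot tell which of the two is backed by storage, which is exactly the property Sam is asking for from $.month_0.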
Database Transactions and STM [was: Re: STM semantics, the Transactional role]
Yuval Kogman wrote:
> everyone gets to choose, and another thing I have in mind is the
> Transactional role...
>
>     DBI::Handle does Transactional;
>
> To the STM rollbacker and type checker thingy this means that any IO
> performed by DBI::Handle invoked code is OK - it can be reversed using
> the Transactional interface it proposes.

Is this needed, when you can just:

    atomic {
        unsafeIO { $dbh.begin_work };
        unsafeIO { $dbh.do(...) };
        unsafeIO { $dbh.commit };
    }
    CATCH { $dbh.rollback };

Of course, these unsafeIO calls can go inside a higher-level wrapper for the DBI, assuming that it is possible to detect whether or not we are running in an atomic{ }, and *which* atomic block we are in.

As for the efficiency of things, I hate to say it, but that's really up to the backend in use, and it's highly unlikely that anything other than Parrot or GHC will support atomic{ }. However, a standard Role for this kind of behaviour might make sense. Maybe draw up a prototype?

Sam.
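The shape being sketched here -- begin, do work, commit, roll back on failure -- is the classic transaction-scope pattern. A minimal Python analogue (the TxnHandle class and the atomic helper are stand-ins invented for illustration, not any real DBI API):

```python
from contextlib import contextmanager

class TxnHandle:
    """Stand-in database handle; records the calls it receives."""
    def __init__(self):
        self.log = []
    def begin_work(self): self.log.append("begin")
    def commit(self):     self.log.append("commit")
    def rollback(self):   self.log.append("rollback")
    def do(self, sql):    self.log.append("do:" + sql)

@contextmanager
def atomic(dbh):
    """Commit if the block completes; roll back if it raises."""
    dbh.begin_work()
    try:
        yield dbh
    except Exception:
        dbh.rollback()
        raise
    else:
        dbh.commit()

dbh = TxnHandle()
with atomic(dbh):
    dbh.do("UPDATE ...")
assert dbh.log == ["begin", "do:UPDATE ...", "commit"]
```

A higher-level wrapper like this is also where the "are we inside an atomic block, and which one?" check would naturally live.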
The Use and Abuse of Liskov (was: Type::Class::Haskell does Role)
"You keep using that word. I do not think it means what you think it means"
    -- Inigo Montoya

Luke Palmer wrote:
>> Recently I discussed MMD with chromatic, and he mentioned two things
>> that were very important, in my opinion:
>>
>> * The Liskov substitution principle sort of means that MMD
>>   between two competing superclasses, no matter how far, is
>>   equal

Actually, no, it doesn't mean that at all. It means that the LSP--which proposes an operational definition of strict polymorphic subtyping--simply isn't applicable to multimorphic type-hierarchy interactions such as multiple dispatch. See:

    http://users.rcn.com/jcoplien/Patterns/Symmetry/Springer/SpringerSymmetry.html

for a group-theoretic explanation of why that's the case.

> For those of you who have not been exposed to this "paradox",
> here's an example:
>
>     class A {...}
>     class B is A {...}
>     class C is B {...}
>     class D is A {...}
>
>     multi foo(C $x, A $y) { "(1)" }
>     multi foo(A $x, D $y) { "(2)" }
>
> If you call foo(C.new, D.new), (1) will be called (because it has
> distance 1, while (2) has distance 2). Now suppose I refactor to
> prepare for later changes, and add two *empty* classes:
>
>     class A {...}
>     class B is A {...}
>     class C is B {...}
>     class E is A { }
>     class F is E { }
>     class D is F {...}
>
> Now if you call foo(C.new, D.new), (2) will be called instead of (1)
> (because (1) has distance 3, while (2) still has distance 2). That is
> how Liskov subtly breaks.

Liskov isn't "broken" here...it was never applicable here. The LSP says that *semantics* mustn't change due to subtyping, not that *behaviour* mustn't change. If behaviour weren't permitted to change, then you could never redefine a method in a derived (or intermediate) class.
For example, taking this hierarchy:

    class A { method foo { "(1)" } }
    class B is A { }

    B.new.foo()    # "(1)"

and updating it with an intermediate class:

    class A { method foo { "(1)" } }
    class Z is A { method foo { "(2)" } }
    class B is Z { }

    B.new.foo()    # "(2)"

*doesn't* violate LSP. Meyer gives a much more practical definition of the intent of LSP: "A derived class cannot require more, or promise less, than a base class." This formulation still allows for the possibility that, when a hierarchy changes, the dispatch of a given method call may change and that a different method may be invoked. But merely invoking a different response after a hierarchy changes is not in itself a violation of LSP/Meyer.

In the above example, class Z *can* still be used in place of class A, without the call to C<.foo> suddenly breaking. It's the "not breaking" that is critical to LSP here. Using class Z instead of class A does change the behaviour (i.e. which C<foo> is called), but it doesn't change the semantics (i.e. the fact that it's legal and possible to call C<foo> on a B object).

What *would* break LSP is writing this instead:

    class A { method foo { "(1)" } }
    class Z is A { method foo { die } }
    class B is Z { }

    B.new.foo()    # Kaboom!

Indeed, this is one of the commonest ways of breaking LSP (almost everyone does it): you replace an inherited method with one that sometimes throws a previously unthrown exception. Class Z is now promising less than class A. Specifically, it's no longer promising to return. You can no longer treat a derived object as if it were a base object without the risk of abnormally terminating the program on some (previously working) polymorphic method call.

Interestingly, Luke's original example is closely analogous to that situation, only in multimorphic space. From a Liskov/Meyer perspective, it's perfectly okay that a change in one of the parameter hierarchies changes which multisub variant is invoked.
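The distinction translates directly into any single-dispatch OO language. A Python rendering of the same two hierarchies (class names kept from the example; Zbad/Bbad are names invented here for the violating variant):

```python
class A:
    def foo(self):
        return "(1)"

# Insert an intermediate class Z that overrides foo:
class Z(A):
    def foo(self):
        return "(2)"

class B(Z):
    pass

# Behaviour changed, but the semantics -- "you may call foo on a B
# and get an answer back" -- are intact, so LSP/Meyer is satisfied.
assert B().foo() == "(2)"

# The LSP-violating variant: the override promises less than the base
# (it no longer promises to return at all).
class Zbad(A):
    def foo(self):
        raise RuntimeError("Kaboom!")

class Bbad(Zbad):
    pass

# Bbad().foo() now blows up where A().foo() used to work.
```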
The real problem is when a change in one of the hierarchies causes a formerly legal multisub call to become illegal (i.e. fatally ambiguous). And that is one of the main reasons I advocate Manhattan Metric dispatch over Pure Ordering dispatch. Because Pure Ordering--which imposes a much stricter criterion for "unambiguous"--is far more likely to break semantics in precisely that way. Consider the following classes:

    class A {...}              #   A    B
    class B {...}              #        |
    class C is B {...}         #        C   D
    class D {...}              #         \ /
    class E is C is D {...}    #          E

    multi sub foo(A $x, B $y) { "(1)" }
    multi sub foo(A $x, C $y) { "(2)" }

    foo(A.new, E.new);

Clearly this produces "(2)" under either Pure Ordering or Manhattan Metric. Now we change the class hierarchy, adding *zero* extra empty classes (which is surely an even stricter LSP/Meyer test-case than adding one extra empty class!) We get this:

    class A {...}              #   A    B
    class B
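A small Python model of Manhattan Metric dispatch over the complete first example above, to make the arithmetic concrete. The distance and manhattan helpers are invented for illustration; real Perl 6 MMD is of course richer than this sketch.

```python
def distance(cls, base):
    """Shortest inheritance-path length from cls up to base, or None."""
    if cls is base:
        return 0
    best = None
    for parent in cls.__bases__:
        d = distance(parent, base)
        if d is not None and (best is None or d + 1 < best):
            best = d + 1
    return best

# The diamond from the example: C is B; E is C and D.
class A: pass
class B: pass
class C(B): pass
class D: pass
class E(C, D): pass

variants = {            # parameter signatures of the two multis
    "(1)": (A, B),
    "(2)": (A, C),
}

def manhattan(args, sig):
    """Sum of per-parameter distances, or None if any arg doesn't fit."""
    ds = [distance(type(a), p) for a, p in zip(args, sig)]
    return None if None in ds else sum(ds)

args = (A(), E())       # foo(A.new, E.new)
scores = {name: manhattan(args, sig) for name, sig in variants.items()}
winner = min((s, n) for n, s in scores.items() if s is not None)[1]
assert scores == {"(1)": 2, "(2)": 1} and winner == "(2)"
```

Variant (2) wins with total distance 0 + 1 against variant (1)'s 0 + 2, matching the "(2)" result claimed for both dispatch schemes.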
Re: Type::Class::Haskell does Role
On 7/17/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I have another view.
>
> The Num role and the Str role both consume the Eq role. When your
> class tries to both be a Num and a Str, == conflicts.
>
> I have two scenarios:
>
>     class Moose does Num does Str { ... }
>
>     # Moose was populated with:
>     multi method infix:<==> (Moose does Num, Moose does Num) { ... }
>     multi method infix:<==> (Moose does Str, Moose does Str) { ... }

Which is an ambiguity error.

> OR
>
>     # Str and Num try to add the same short name, and a class
>     # composition error happens at compile time.

Which is a composition error. So they both amount to the same thing.

> Recently I discussed MMD with chromatic, and he mentioned two things
> that were very important, in my opinion:
>
> * The Liskov substitution principle sort of means that MMD
>   between two competing superclasses, no matter how far, is
>   equal

Amen. For those of you who have not been exposed to this "paradox", here's an example:

    class A {...}
    class B is A {...}
    class C is B {...}
    class D is A {...}

    multi foo(C $x, A $y) { "(1)" }
    multi foo(A $x, D $y) { "(2)" }

If you call foo(C.new, D.new), (1) will be called (because it has distance 1, while (2) has distance 2). Now suppose I refactor to prepare for later changes, and add two *empty* classes:

    class A {...}
    class B is A {...}
    class C is B {...}
    class E is A { }
    class F is E { }
    class D is F {...}

Now if you call foo(C.new, D.new), (2) will be called instead of (1) (because (1) has distance 3, while (2) still has distance 2). That is how Liskov subtly breaks. As a matter of taste, classes that don't do anything shouldn't do anything! But here they do. If I had put in only E and omitted F, we would have moved from a functioning program to a broken one.

> * Coercion of parameters and a class's willingness to coerce
>   into something is a better metric of distance

Well, that's if you think metrics are a good way to do dispatch at all.
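The distance arithmetic in this example can be checked mechanically. In this Python sketch, D1 and D2 stand for class D before and after the refactoring (names invented so both hierarchies can coexist):

```python
def distance(cls, base):
    """Shortest inheritance-path length from cls up to base, or None."""
    if cls is base:
        return 0
    best = None
    for parent in cls.__bases__:
        d = distance(parent, base)
        if d is not None and (best is None or d + 1 < best):
            best = d + 1
    return best

class A: pass
class B(A): pass
class C(B): pass
class D1(A): pass      # original:   class D is A

class E(A): pass       # the two "empty" refactoring classes
class F(E): pass
class D2(F): pass      # refactored: class D is F (is E is A)

# foo(C.new, D.new) against (1) foo(C $x, A $y) and (2) foo(A $x, D $y):
before = (distance(C, C) + distance(D1, A),    # (1): 0 + 1 = 1  -> wins
          distance(C, A) + distance(D1, D1))   # (2): 2 + 0 = 2
after  = (distance(C, C) + distance(D2, A),    # (1): 0 + 3 = 3
          distance(C, A) + distance(D2, D2))   # (2): 2 + 0 = 2  -> wins
assert before == (1, 2) and after == (3, 2)
```

Adding two classes that define nothing flips the winner from (1) to (2); with only E and not F, both totals would be 2 and the call would become ambiguous.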
> Under these rules the way this would be disambiguated is one of:
>
> - Moose provided its own infix:<==>
> - Moose said which <==> it prefers, Num or Str. A syntax:
>
>     multi method infix:<==> from Str;
>
>   (this could also be used for importing just part of a
>   hierarchy?)

Well, I like your proposal. It's a very generics-oriented world view, which I hold very dear. However, you didn't actually solve anything. What happened to our numeric == and string eq? Are you proposing that we toss out string eq and let MMD do the work?

If I recall correctly, the reason == and eq exist and are distinct is so that you don't have to do:

    @foo[stuff()+$expression] == +%another{ long($hairy % expression()) }
                              ^..^

You can't see what's going on according to that operator, because the eye-scanning distance is too great. We still need to satisfy the scripters who like the distinction between numeric and string comparison. It's hard for me to argue for them, since I'm not one of them.

One more possibility is operator adverbs. We could assume that == is generic unless you give it a :num or :str adverb:

    $a == $b         # generic
    $a == $b :num    # numeric
    $a == $b :str    # string

But that has the eye-scanning distance problem again. Maybe that's a flaw in the design of adverbs this time...

Luke
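As a rough illustration of the adverb idea, here is a Python sketch in which a mode argument plays the part of :num/:str (the function name and mode strings are invented for illustration):

```python
def equals(a, b, mode="generic"):
    """Compare two scalars; mode stands in for the :num/:str adverb."""
    if mode == "num":
        return float(a) == float(b)      # forced numeric comparison
    if mode == "str":
        return str(a) == str(b)          # forced string comparison
    # generic: accept either interpretation
    try:
        if float(a) == float(b):
            return True
    except (TypeError, ValueError):
        pass
    return str(a) == str(b)

assert equals("1.0", 1)                  # generic: numerically equal
assert equals("1.0", 1, "num")           # forced numeric: equal
assert not equals("1.0", 1, "str")       # forced string: "1.0" ne "1"
```

The sketch also shows the readability worry: the comparison's meaning lives in an argument far from the operands, just as an adverb trails far behind the ==.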