Why multi-by-default is a bad idea (was: Re: Object Order of Precedence (Was: Vocabulary))

2003-12-22 Thread Dan Sugalski
At 2:21 PM -0800 12/20/03, Larry Wall wrote:
On Sat, Dec 20, 2003 at 12:41:10PM -0800, Jonathan Lang wrote:
> : So what happens if more than one of the candidates is tagged as the
> : default?  The same thing as if none of them was?  This could happen if
> : both Predator and Pet have declared their 'feed' methods as the default.

> Could blow up, or look for a more generic default that isn't in a tie.
> The latter seems more fail-soft, since something else of the same name
> is likelier to know what to do than some random exception handler in
> who-knows-what dynamic context.

It's straightforward enough to pitch an exception at sub definition 
time. That'll be a compile time error generally, which is likely 
fine, or an eval/do/require runtime error, which is also fine as 
anyone doing runtime code loading should be ready to catch 
compile-class errors. Installing new subs symbolically into symbol 
tables might be a bit more problematic, but mostly syntactically so 
it's not my problem. :)
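
A minimal sketch of that expectation in modern Raku (the language Perl 6
became): a second non-multi sub of the same name is rejected at compile
time rather than silently supplemented or merged.

    sub feed($x) { "kibble" }
    # sub feed($x) { "meat" }   # compile-time error: redeclaration of routine 'feed'
    say feed(1);                # kibble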

> : What about making multi dispatches the assumed behavior, with a C<unique>
> : keyword to explicitly shut it off (for the sake of optimization)?  That
> : is, replace the C<multi> keyword used to define routines that participate
> : in multiple dispatching with a C<unique> keyword used to define routines
> : that don't.

> Now that's...an *interesting* idea.

> But I'm getting sidetracked.  The underlying question is whether multi
> should be the default.  And that's still an interesting idea regardless
> of the syntax.

And, IMAO, a very, *very* bad one. I dunno about you, but when I 
install a sub into a symbol table I fully expect it to be the only 
one of that name, and if there's an existing sub of that name I 
expect it to be replaced, not supplemented. (Or have a warning and/or 
error pitched, that's fine too)

This also makes language interoperability somewhat tricky, as it is 
*not* the default for any other language in our class, including perl 
5. That means either we change the default behaviour of perl 5 (which 
strikes me as bad) or we retain the base default behavior of each 
language in which case you end up with subs that may or may not be 
multi depending on the order of inclusion of modules. (if you include 
two modules that define the same sub in the same namespace, one with 
multi-by-default and one without)

> Another unexplored question is how and whether to open up multiple
> dispatch to more scopes than just the first one in which you find
> the name

I can do lexically-scoped multi-method dispatch tables, the same way 
we're going to do lexically-scoped method caches, but I'm not sure 
it's a wise idea. (Well... I'm pretty sure it's an unwise one, but 
I'm unsure of how correct that is)

Could we just leave it as global multimethod subs and methods, and 
package-local multimethod subs and methods for now? We can always 
bring in the more insane^Wexpansive version later, in perl 6.2 or 
something, once we see how things are going and how people are 
dealing with it.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Object Order of Precedence (Was: Vocabulary)

2003-12-20 Thread Jonathan Lang
Larry Wall wrote:
 Jonathan Lang wrote:
 : Larry Wall wrote:
 :  Jonathan Lang wrote:

 Also, there will be access to the list of call candidates for SUPER::
 (and presumably ROLE::) such that the class's method can get explicit
 control of which super/role method or methods get called.  So we can
 have methods that fail-over to the next candidate.  It's just not the
 default way to resolve multiple methods with the same signature.

Make micromanaging possible, but not mandatory.  Sounds good to me.  

 :  Another possibility is that the class's method could be declared to 
 :  be the default multi method in case the type information is not 
 :  sufficient to decide which role's multi method should be called.  
 :  Maybe if it's declared multi it works that way.  Otherwise it's 
 :  just called first automatically.  
 : 
 : ...meaning that the question of "which role do you mean?" has already
 : been addressed by the time the ROLE:: deference gets used.
 
 No, in this case the ROLE:: deference has already given up on finding
 a unique role to call, and called the class's method to break the tie,
 or do something really generic, or call more than one of the role
 methods, or die.  

Oh; OK.  

 : Although I'm not following what you're saying here in terms of the
 : third means of disambiguation.  Could someone provide an example, 
 : please?  
 
 role Pet {
   method feed (PetFood $x) {...}
 }
 role Predator {
   method feed (PredatorFood $x) {...}
 }
 class DangerousPet does Pet does Predator {
 }
 
 If DangerousPet doesn't define a feed method at all, then we might 
 dispatch to Pet and Predator as if their methods had an implicit 
 multi.  

And the C<default> trait is the tie-breaker when several options are
equally likely candidates (in terms of type information); OK.  

So what happens if more than one of the candidates is tagged as the
default?  The same thing as if none of them was?  This could happen if
both Predator and Pet have declared their 'feed' methods as the default.  

 Arguably, the role's might be required to declare their methods multi
 if they want to participate in this, but that's one of those things
 that feel like they ought to be declared by the user rather than the
 definer.  On the other hand, maybe a role would feel that its method
 *must* be unique, and leaving out the multi is the way to do that.
 But I hate to get into the trap of culturally requiring every method
 in every role to specify multi.  It's a little too much like the C++
 ubiquitous-const problem.

What about making multi dispatches the assumed behavior, with a C<unique>
keyword to explicitly shut it off (for the sake of optimization)?  That
is, replace the C<multi> keyword used to define routines that participate
in multiple dispatching with a C<unique> keyword used to define routines
that don't.
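
For comparison, released Raku kept multi as the marked case but did
adopt an explicit keyword for shutting multiness off, spelled C<only>.
A minimal sketch:

    multi sub feed(Int $n)   { "feeding $n pets" }
    multi sub feed(Str $who) { "feeding $who" }
    only  sub water($x)      { "watering $x" }   # explicitly not a multi
    say feed(3);        # feeding 3 pets
    say feed("Spot");   # feeding Spot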

 My hope for unifying traits and superclasses is that, if you call an
 ordinary class using C<is>, the wicked thing that it does is insert
 itself into the ISA property of the class.  

When you say C<is foo>, you're inheriting if foo is a class and you're
adding a trait if foo is a trait.  OK.  This would also imply that the
proper way to access a trait's namespace would be identical to the
explicit means of accessing a superclass's namespace.  

 Where that may cause problems if you want to inherit from an existing 
 trait that does something else wicked.  But then, traits aren't often 
 going to be inherited anyway, since their purpose is to break the 
 rules.  

Unless you're trying to create a variation on an existing trait, of
course.  

 We can maybe inherit from classof(trait) to get around any difficulties.

Perhaps you could use C<type> instead of C<classof>?  More concise and
just as meaningful.  

 So I'm still thinking we do inheritance with C<is> rather than C<isa>.
 We just have to keep our names straight.  Generally, traits will
 be lowercase, and true class or role names start with an uppercase
 letter.

But then, this remains merely a convention; a sloppy programmer (or one
who isn't worried about his code being extensible) could violate it
without the compiler complaining.  

The only fear that I have here is whether we're violating the "different
things should look different" principle: are traits and superclasses
similar enough to each other to be added to a class by the same means?
It might not be a bad idea to include C<isa> as a more explicit
alternative to C<is>, with the added benefit that C<isa traitname> would
be short for C<is classof(traitname)>.  It also occurs to me that traits
can be thought of as adjectives (thus the "is trait" vs. "is a class"
distinction) - another way to attach an adjective to a noun in English
is to prepend it to the noun:

  my Dog $Spot is red; 
  my black Cat $Tom; 
  my thoughtful $Larry is overworked; 

where red, black, thoughtful, and overworked are traits.  

Or is this too much?  

In a similar vein, what about making a disjunction of classes in an C<is>
or C<isa>

Re: Object Order of Precedence (Was: Vocabulary)

2003-12-20 Thread Larry Wall
On Sat, Dec 20, 2003 at 12:41:10PM -0800, Jonathan Lang wrote:
: So what happens if more than one of the candidates is tagged as the
: default?  The same thing as if none of them was?  This could happen if
: both Predator and Pet have declared their 'feed' methods as the default.  

Could blow up, or look for a more generic default that isn't in a tie.
The latter seems more fail-soft, since something else of the same name
is likelier to know what to do than some random exception handler in
who-knows-what dynamic context.

:  Arguably, the roles might be required to declare their methods multi
:  if they want to participate in this, but that's one of those things
:  that feel like they ought to be declared by the user rather than the
:  definer.  On the other hand, maybe a role would feel that its method
:  *must* be unique, and leaving out the multi is the way to do that.
:  But I hate to get into the trap of culturally requiring every method
:  in every role to specify multi.  It's a little too much like the C++
:  ubiquitous-const problem.
: 
: What about making multi dispatches the assumed behavior, with a C<unique>
: keyword to explicitly shut it off (for the sake of optimization)?  That
: is, replace the C<multi> keyword used to define routines that participate
: in multiple dispatching with a C<unique> keyword used to define routines
: that don't.

Now that's...an *interesting* idea.  Maybe even worth some punctuation
right there in the name.  Maybe C<is unique> is written:

my sub print! (*@args) {...}

meaning: "Always call this one, dang it!"

Then maybe C<is default> could be

my sub print? (*@args) {...}

meaning: "Call this one in case of a tie?"

The character would only be in the declaration, not in the call.
(Of course, that prevents us from actually using those characters in
names like Ruby does, but I'm not sure that's a big loss.)

But I'm getting sidetracked.  The underlying question is whether multi
should be the default.  And that's still an interesting idea regardless
of the syntax.

Another unexplored question is how and whether to open up multiple
dispatch to more scopes than just the first one in which you find
the name, which is the normal Perl6 semantics.  I doubt looking
everywhere should be the default, but just as a class might call any
or all of its roles' methods, a lexically scoped sub might want to
call or dispatch to any set of subs of that name that are visible in
the current scope.  Naming such collections is an interesting problem.
Taking scope transitions into account in the distance calculation of
multi signatures could be even more interesting.  If you define your
own multi foo, and there's a global multi foo, how far away is that?
Is it worth a level of inheritance in one of the arguments?  Is it
worth more?  Less?  Is it worth anything, if you've included both
in your set of possible multis to begin with?
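
A sketch of the first part in modern Raku, where a multi candidate set
declared inside a block is lexically scoped to that block:

    {
        multi sub describe(Int $x) { "an integer" }
        multi sub describe(Str $x) { "a string" }
        say describe(42);     # an integer
        say describe("hi");   # a string
    }
    # describe() is not visible out here; the candidate set was lexical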

[Should probably change the Subject if you want to reply to this one.
In fact, I should probably have split this message into separate
responses...]

:  Where that may cause problems if you want to inherit from an existing 
:  trait that does something else wicked.  But then, traits aren't often 
:  going to be inherited anyway, since their purpose is to break the 
:  rules.  
: 
: Unless you're trying to create a variation on an existing trait, of
: course.  

Which might well be done with wrappers rather than via inheritance.  I
don't think traits are going to have a lot of methods you'd want to
inherit.

On the other hand, applying a container type to a container is currently
done with C<is>, and one might very well want to inherit from a container
type, which (like Perl 5 tie classes) might have oodles of inheritable
methods.  But then, C<is> isn't ambiguous because you're not using it
on a class at that point, so maybe it's still okay...

:  We can maybe inherit from classof(trait) to get around any difficulties.
: 
: Perhaps you could use C<type> instead of C<classof>?  More concise and
: just as meaningful.  

Unless C<type> encompasses roles and subtypes but C<classof> doesn't...
Still, if you're using C<type> on a real object, it has to be a class.

Probably ought to be C<typeof> if we want to reserve C<type> for a
declarator keyword.  Though if we end up with typed refs, we could
well end up with a situation where classof($self) returns DangerousPet
but typeof($self) returns Predator.
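
For comparison, the introspection Raku eventually shipped: C<.WHAT>
plays the classof() role, while C<.does> answers the role-membership
question, so the two can disagree in just the way Larry describes:

    role Predator { }
    class DangerousPet does Predator { }
    my $pet = DangerousPet.new;
    say $pet.WHAT;            # (DangerousPet) -- the class
    say $pet.does(Predator);  # True -- plays the role, though its class is DangerousPet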

:  So I'm still thinking we do inheritance with is rather than isa.
:  We just have to keep our names straight.  Generally, traits will
:  be lowercase, and true class or role names start with an uppercase
:  letter.
: 
: But then, this remains merely a convention; a sloppy programmer (or one
: who isn't worried about his code being extensible) could violate it
: without the compiler complaining.  

Certainly.

: The only fear that I have here is whether we're violating the different
: things should look different principle: are traits and superclasses
: similar enough to 

Re: [perl] Re: Object Order of Precedence (Was: Vocabulary)

2003-12-20 Thread Joe Gottman

- Original Message - 
From: Jonathan Lang [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, December 20, 2003 3:41 PM
Subject: [perl] Re: Object Order of Precedence (Was: Vocabulary)


 Larry Wall wrote:
  If DangerousPet doesn't define a feed method at all, then we might
  dispatch to Pet and Predator as if their methods had an implicit
  multi.

 And the C<default> trait is the tie-breaker when several options are
 equally likely candidates (in terms of type information); OK.

   I'm a little leery about calling this trait C<default>. The problem is
that we are already using default as a keyword (see the switch statement),
and having a trait with the same name as a keyword might confuse users
and/or the compiler.

Joe Gottman




Re: [perl] Re: Object Order of Precedence (Was: Vocabulary)

2003-12-20 Thread Rod Adams


Luke Palmer wrote:
Joe Gottman writes:

- Original Message - 
From: Jonathan Lang [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, December 20, 2003 3:41 PM
Subject: [perl] Re: Object Order of Precedence (Was: Vocabulary)



Larry Wall wrote:

If DangerousPet doesn't define a feed method at all, then we might
dispatch to Pet and Predator as if their methods had an implicit
multi.
And the C<default> trait is the tie-breaker when several options are
equally likely candidates (in terms of type information); OK.
  I'm a little leery about calling this trait C<default>. The problem is
that we are already using default as a keyword (see the switch statement),
and having a trait with the same name as a keyword might confuse users
and/or the compiler.


Perl's using a top-down compiler now, so it won't be looking for the
keyword variant of C<default> after C<is>.  "default" is sufficiently
overloaded in English, and by context in Perl, that I don't think anyone
will get confused.
Not to say that other names for this trait aren't welcome.

C<preferred>
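
For the record, released Raku kept both spellings without apparent
confusion: C<default> as a statement keyword inside C<given>, and
C<is default> as a trait that breaks ties between otherwise equally
matching multi candidates.  A minimal sketch:

    given 42 {
        when Str { say "a string" }
        default  { say "something else" }   # statement keyword
    }
    multi sub feed($x) is default { say "tie-breaker" }   # trait
    multi sub feed($x)            { say "other" }
    feed(1);   # tie-breaker -- the C<is default> candidate wins the tie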



Re: Object Order of Precedence (Was: Vocabulary)

2003-12-19 Thread Larry Wall
On Mon, Dec 15, 2003 at 07:02:53PM -0800, Jonathan Lang wrote:
: Larry Wall wrote:
:  Jonathan Lang wrote:
:  : Let's see if I've got this straight:
:  : 
:  : role methods supersede inherited methods;
:  
:  But can defer via SUPER::
:  
:  : class methods supersede role methods;
:  
:  But can defer via ROLE:: or some such.
: 
: Check, and check.  Of course, SUPER:: works well in single inheritance,
: but runs into problems of "which superclass?" in multiple inheritance; ROLE::
: would on the surface appear to have that same problem, except that...
: 
:  : conflicting methods from multiple roles get discarded...
:  
:  They aren't silently discarded--they throw a very public exception.
:  (But methods with differing multi signatures are not considered to
:  be conflicting, I hope.)
: 
: (OK.)  

Also, there will be access to the list of call candidates for SUPER::
(and presumably ROLE::) such that the class's method can get explicit
control of which super/role method or methods get called.  So we can
have methods that fail-over to the next candidate.  It's just not the
default way to resolve multiple methods with the same signature.

:  :   ...but the class may alias or exclude any of the conflicting methods
:  : to explicitly resolve the dispute.  
:  
:  Right.  Another possibility is that the class's method could be
:  declared to be the default multi method in case the type information
:  is not sufficient to decide which role's multi method should be called.
:  Maybe if it's declared multi it works that way.  Otherwise it's just
:  called first automatically.
: 
: ...meaning that the question of "which role do you mean?" has already been
: addressed by the time the ROLE:: deference gets used.

No, in this case the ROLE:: deference has already given up on finding
a unique role to call, and called the class's method to break the tie,
or do something really generic, or call more than one of the role methods,
or die.

: Although I'm not following what you're saying here in terms of the third
: means of disambiguation.  Could someone provide an example, please?  

role Pet {
    method feed (PetFood $x) {...}
}
role Predator {
    method feed (PredatorFood $x) {...}
}
class DangerousPet does Pet does Predator {
}

If DangerousPet doesn't define a feed method at all, then we might
dispatch to Pet and
Predator as if their methods had an implicit multi.  But maybe the
actual type of $x is sufficiently ambiguous that we can't decide whether
it's more like PetFood or PredatorFood.  In that case it would throw
an exception, just as any multimethod without a default would.  If you
define an ordinary method in DangerousPet:

class DangerousPet does Pet does Predator {
    method feed ($x) {...}
}

then you have the ordinary case.  DangerousPet::feed is always called
because the class method overrides the role methods.  Presumably the
class method can dispatch to the role methods if it so chooses.  But
if you say something like:

class DangerousPet does Pet does Predator {
    multi method feed ($x) {...}
}

then DangerousPet::feed is called only when multimethod dispatch
would have thrown an exception.  Alternately, multi's will probably have
some way of identifying the default method in any case, so maybe you
have to write it something like this: 

class DangerousPet does Pet does Predator {
    multi method feed ($x) is default {...}
}

that leaves the door open for real multi's within the class working
in parallel to the roles' methods:

class DangerousPet does Pet does Predator {
    multi method feed ($x) is default {...}
    multi method feed (DangerousPetFood $x) {...}
}

Arguably, the roles might be required to declare their methods multi
if they want to participate in this, but that's one of those things
that feel like they ought to be declared by the user rather than the
definer.  On the other hand, maybe a role would feel that its method
*must* be unique, and leaving out the multi is the way to do that.
But I hate to get into the trap of culturally requiring every method
in every role to specify multi.  It's a little too much like the C++
ubiquitous-const problem.
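
Released Raku works essentially this way: role methods declared
C<multi> compose into a single candidate set instead of conflicting,
and the class can add its own candidates, including a tie-breaker.  A
runnable sketch of Larry's example, with Int and Str standing in for
the food types to keep it self-contained:

    role Pet      { multi method feed(Int $kibble) { "pet chow" } }
    role Predator { multi method feed(Str $prey)   { "raw meat" } }
    class DangerousPet does Pet does Predator {
        multi method feed($x) is default { "whatever it wants" }
    }
    my $pet = DangerousPet.new;
    say $pet.feed(3);         # pet chow          -- Pet's candidate
    say $pet.feed("mouse");   # raw meat          -- Predator's candidate
    say $pet.feed(3.14);      # whatever it wants -- the class's catch-all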

:  : trait methods supersede class methods;
:  
:  I'm not sure traits work that way.  I see them more as changing the
:  metaclass rules.  They feel more like macros to me, where anything
:  goes, but you have to be a bit explicit and intentional.
: 
: Well, the question is: the trait is providing a method with the same name
: as a method provided by the class, and type information is insufficient to
: distinguish between them; which one do I use?  In the absence of
: additional conflict resolution code, the possible options as I see them
: would be: 
: 
: 1) the class supersedes the trait
: 2) the trait supersedes the class
: 3) an ambiguity exception gets thrown
: 4) the trait's method can't be called without explicitly naming the trait
: 
: Which of these 

Re: Object Order of Precedence (Was: Vocabulary)

2003-12-19 Thread Luke Palmer
Larry Wall writes:
 But if you say something like:
 
 class DangerousPet does Pet does Predator {
   multi method feed ($x) {...}
 }
 
 then DangerousPet::feed is called only when multimethod dispatch
 would have thrown an exception.  Alternately, multi's will probably have
 some way of identifying the default method in any case, so maybe you
 have to write it something like this: 
 
 class DangerousPet does Pet does Predator {
   multi method feed ($x) is default {...}
 }
 
 that leaves the door open for real multi's within the class working
 in parallel to the roles' methods:
 
 class DangerousPet does Pet does Predator {
   multi method feed ($x) is default {...}
   multi method feed (DangerousPetFood $x) {...}
 }
 
 Arguably, the roles might be required to declare their methods multi
 if they want to participate in this, but that's one of those things
 that feel like they ought to be declared by the user rather than the
 definer.  On the other hand, maybe a role would feel that its method
 *must* be unique, and leaving out the multi is the way to do that.
 But I hate to get into the trap of culturally requiring every method
 in every role to specify multi.  It's a little too much like the C++
 ubiquitous-const problem.

I'd like it if roles didn't have control over whether their methods are
unique, but the class did.  That is, multi dispatch would be one of the
several disambiguation possibilities for role conflicts.  But without
specification, name conflicts would always result in an error.

 [...]
 
 My hope for unifying traits and superclasses is that, if you call an
 ordinary class using C<is>, the wicked thing that it does is insert
 itself into the ISA property of the class.  Where that may cause
 problems if you want to inherit from an existing trait that does
 something else wicked.  But then, traits aren't often going to be
 inherited anyway, since their purpose is to break the rules.  We can
 maybe inherit from classof(trait) to get around any difficulties.
 So I'm still thinking we do inheritance with C<is> rather than C<isa>.
 We just have to keep our names straight.  Generally, traits will
 be lowercase, and true class or role names start with an uppercase
 letter.

I like that.  That allows for nice things like letting a class keep
track of its subclasses, or in general doing other tricky things to its
subclasses.

Inheritance is a logical association, not a functional one, so when
your classes are doing nonstandard things, you'd like to modify what
inheritance does.  Definitely good.

Luke


Re: Vocabulary

2003-12-18 Thread David Wheeler
On Dec 16, 2003, at 10:20 PM, Rafael Garcia-Suarez wrote:

> There's a need (more or less) for special blocks that can be run at the
> end of the compilation phase of any arbitrary compilation unit.

This would be especially useful in an environment such as mod_perl,
where CHECK and INIT blocks currently _never_ execute, no matter when
they're declared.

Regards,

David

--
David Wheeler AIM: dwTheory
[EMAIL PROTECTED]  ICQ: 15726394
http://www.kineticode.com/ Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]
Kineticode. Setting knowledge in motion.[sm]


Re: Vocabulary

2003-12-18 Thread David Wheeler
On Dec 17, 2003, at 1:39 AM, Simon Cozens wrote:

> The desire to optimize the hell out of Perl 6 is a good one, but surely
> you optimize when there is a problem, not before. Is there a problem
> with the speed you're getting from Perl 6 at the moment?

Yes, it's taking too long to be released! ;-)

Regards,

David (Who wants to start writing Perl 6 applications yesterday.)

--
David Wheeler AIM: dwTheory
[EMAIL PROTECTED]  ICQ: 15726394
http://www.kineticode.com/ Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]
Kineticode. Setting knowledge in motion.[sm]


Re: Vocabulary

2003-12-18 Thread Larry Wall
On Wed, Dec 17, 2003 at 06:20:22AM -, Rafael Garcia-Suarez wrote:
: Larry Wall wrote in perl.perl6.language :
:  On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote:
: : When you say CHECK time, do you mean there'll be a CHECK phase for
: : code that gets required at run time?
:  
:  Dunno about that.  When I say CHECK time I'm primarily referring
:  to the end of the main compilation.  Perl 5 appears to ignore CHECK
:  blocks declared at run time, so in the absence of other considerations
:  I suspect Perl 6 might do the same.
: 
: This has proven to be inconvenient except for a few specialized usages,
: such as the B::/O compiler framework.
: 
: There's a need (more or less) for special blocks that can be run at the
: end of the compilation phase of any arbitrary compilation unit.

Well, if you want to run at the end of the current compilation unit, a
BEGIN block at the end is close to what you want.  Admittedly, the BEGIN
block can't easily *know* that it's the last thing...
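
In Raku phaser terms, the workaround and the real thing side by side;
C<CHECK> now runs at the end of the enclosing compilation unit, which
is roughly what is being asked for:

    BEGIN say "top BEGIN: runs as soon as it is parsed";
    CHECK say "CHECK: runs when this unit finishes compiling";
    say   "runtime: runs last";
    BEGIN say "bottom BEGIN: Larry's approximation of end-of-compilation";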

That's not to say we can't improve the semantics of CHECK and INIT.

Larry


Re: Vocabulary

2003-12-17 Thread Piers Cawley
Larry Wall [EMAIL PROTECTED] writes:

 On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote:
 : When you say CHECK time, do you mean there'll be a CHECK phase for
 : code that gets required at run time?

 Dunno about that.  When I say CHECK time I'm primarily referring
 to the end of the main compilation.  Perl 5 appears to ignore CHECK
 blocks declared at run time, so in the absence of other considerations
 I suspect Perl 6 might do the same.

I feared that might be the case. 

-- 
Beware the Perl 6 early morning joggers -- Allison Randal


Re: Vocabulary

2003-12-17 Thread Simon Cozens
[EMAIL PROTECTED] (Michael Lazzaro) writes:
 Well, just for clarification; in my anecdotal case (server-side web
 applications), the speed I actually need is as much as I can get,
 and all the time.  Every N cycles I save represents an increase in
 peak traffic capabilities per server, which is, from a marketing
 perspective, essential.

The desire to optimize the hell out of Perl 6 is a good one, but surely
you optimize when there is a problem, not before. Is there a problem
with the speed you're getting from Perl 6 at the moment?

-- 
evilPetey I often think I'd get better throughput yelling at the modem.


RE: Vocabulary

2003-12-17 Thread Gordon Henriksen
Michael Lazzaro wrote:

 I don't think so; we're just talking about whether you can extend a 
 class at _runtime_, not _compiletime_.  Whether or not Perl can have 
 some degree of confidence that, once a program is compiled, it won't 
 have to assume the worst-case possibility of runtime alteration of 
 every class, upon every single method call, just in case 
 you've screwed with something.

That's a cute way of glossing over the problem.

How do you truly know when "runtime" is in the first place? Imagine an
application server which parses and loads code from files on-demand.
This shouldn't be difficult. Imagine that that code references a
system of modules.

Imagine if Perl finalizes classes after primary compilation
(after parsing, say, an ApacheHandler file), and proceeds to behave
quite differently indeed afterwards.

Imagine that a perfectly innocent coder finds that his class
library doesn't run the same (doesn't run at all) under the
application server as it does when driven from command line scripts:
His method overrides don't take effect (or, worse, Perl tells him he
can't even compile them because the class is already finalized! And
he thought Perl was a dynamic language!).

What's his recourse? Nothing friendly. Tell Perl that he's going
to subclass the classes he subclasses? Why? He already subclasses
them! Isn't that tell enough? And how? Some obscure configuration
file of the application server, no doubt. And now the app server needs
to be restarted if that list changes. His uptime just went down. And
now he can't have confidence that his system will continue to behave
consistently over time; apachectl restart becomes a part of his
development troubleshooting lexicon.

Java doesn't make him do that; HotSpot can make this optimization at
runtime and back it out if necessary. Maybe he'll just write a JSP
instead.

C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
.NET Frameworks are sure a lot less annoying than the Java class
library, after all.


Point of fact, for a large set of important usage cases, Perl simply
can't presume that classes will EVER cease being introduced into the
program. That means it can NEVER make these sorts of optimizations
unless it is prepared to back them out. Even in conventional programs,
dynamic class loading is increasingly unavoidable. Forcing virtuous
programmers to declare virtual (lest their program misbehave or
their perfectly valid bytecode fail to load, or their perfectly valid
source code fail to compile) is far worse than allowing foolish
programmers to declare final.

Making semantic distinctions of this scale between compile time
and runtime will be a significant blow to Perl, which has always been
strengthened by its dynamism. Its competitors do not include such
artifacts; they perform class finalization optimizations on the fly,
and, despite the complexity of the task, are prepared to back out these
optimizations at runtime--while the optimized routines are executing,
if necessary. Yes, this requires synchronization points, notifications
(or inline checks), and limits code motion. Better than the
alternative, I say. It is very simply a huge step backwards to
create a semantic wall between primary compilation and program
execution.

So write the complicated code to make it work right.
- or -
Take the performance hit and go home.

Dynamism has a price. Perl has always paid it in the past. What's
changed?

-- 

Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]



Re: Vocabulary

2003-12-17 Thread Larry Wall
On Wed, Dec 17, 2003 at 06:20:22AM -, Rafael Garcia-Suarez wrote:
: Larry Wall wrote in perl.perl6.language :
:  On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote:
: : When you say CHECK time, do you mean there'll be a CHECK phase for
: : code that gets required at run time?
:  
:  Dunno about that.  When I say CHECK time I'm primarily referring
:  to the end of the main compilation.  Perl 5 appears to ignore CHECK
:  blocks declared at run time, so in the absence of other considerations
:  I suspect Perl 6 might do the same.
: 
: This has proven to be inconvenient except for a few specialized usages,
: such as the B::/O compiler framework.
: 
: There's a need (more or less) for special blocks that can be run at the
: end of the compilation phase of any arbitrary compilation unit.

Well, that's what I'd call an other consideration.  :-)

Larry


Re: Vocabulary

2003-12-17 Thread Larry Wall
On Tue, Dec 16, 2003 at 06:55:56PM -0500, Gordon Henriksen wrote:
: Michael Lazzaro wrote:
: 
:  I don't think so; we're just talking about whether you can extend a 
:  class at _runtime_, not _compiletime_.  Whether or not Perl can have 
:  some degree of confidence that, once a program is compiled, it won't 
:  have to assume the worst-case possibility of runtime alteration of 
:  every class, upon every single method call, just in case 
:  you've screwed with something.
: 
: That's a cute way of glossing over the problem.
: 
: How do you truly know when "runtime" is in the first place? Imagine an
: application server which parses and loads code from files on-demand.
: This shouldn't be difficult. Imagine that that code references a
: system of modules.
: 
: Imagine if Perl finalizes classes after primary compilation
: (after parsing, say, an ApacheHandler file), and proceeds to behave
: quite differently indeed afterwards.
: 
: Imagine that a perfectly innocent coder finds that his class
: library doesn't run the same (doesn't run at all) under the
: application server as it does when driven from command line scripts:
: His method overrides don't take effect (or, worse, Perl tells him he
: can't even compile them because the class is already finalized! And
: he thought Perl was a dynamic language!).
: 
: What's his recourse? Nothing friendly. Tell Perl that he's going
: to subclass the classes he subclasses? Why? He already subclasses
: them! Isn't that tell enough? And how? Some obscure configuration
: file of the application server, no doubt. And now the app server needs
: to be restarted if that list changes. His uptime just went down. And
: now he can't have confidence that his system will continue to behave
: consistently over time; apachectl restart becomes a part of his
: development troubleshooting lexicon.

Any such application server would probably just

use DYNAMIC_EVERYTHING;

(or whatever we call it) and have done with it.

: Java doesn't make him do that; HotSpot can make this optimization at
: runtime and back it out if necessary. Maybe he'll just write a JSP
: instead.

If Parrot turns out to be able to make this optimization, then the
individual declarations of dynamism merely become hints that it's
not worth trying to optimize a particular class because it'll get
overridden anyway.  It's still useful information on an individual
class basis.  The only thing that is bogus in that case is the global
DYNAMIC_EVERYTHING declaration in the application server.  So I could
be argued into making that the default.  A program that wants a static
analysis at CHECK time for speed would then need to declare that.
The downside of making that the default is that then people won't
declare which classes need to remain extensible under such a regime.
That's another reason such a declaration does not belong with the
class itself, but with the users of the class.  If necessary, the main
program can pick out all the classes it thinks need to remain dynamic:

module Main;
use STATIC_CLASS_CHECK;
use class Foo is dynamic;
use class Bar is dynamic;

or whatever the new C<use> syntax will be in A11...
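
The declaration Raku eventually settled on is a pragma rather than a
per-class C<use> list: modifying a closed class requires announcing the
intent with C<MONKEY-TYPING>, after which C<augment> may reopen it:

    use MONKEY-TYPING;
    class Foo { method greet { "hi" } }
    augment class Foo {
        method shout { "HI!" }   # compile-time error without the pragma
    }
    say Foo.new.shout;   # HI!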

: C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
: .NET Frameworks are sure a lot less annoying than the Java class
: library, after all.

On the other hand, those guys are also doing a lot more mandatory
static typing to get their speed, and that's also annoying.
(Admittedly, they're working on supporting dynamic languages better.)

: Point of fact, for a large set of important usage cases, Perl simply
: can't presume that classes will EVER cease being introduced into the
: program. That means it can NEVER make these sorts of optimizations
: unless it is prepared to back them out. Even in conventional programs,
: dynamic class loading is increasingly unavoidable. Forcing virtuous
: programmers to declare virtual (lest their program misbehave or
: their perfectly valid bytecode fail to load, or their perfectly valid
: source code fail to compile) is far worse than allowing foolish
: programmers to declare final.

The relative merit depends on who declares the final, methinks.  But
if we can avoid both problems, I think we should.

: Making semantic distinctions of this scale between compile time
: and runtime will be a significant blow to Perl, which has always been
: strengthened by its dynamism. Its competitors do not include such
: artifacts; they perform class finalization optimizations on the fly,
: and, despite the complexity of the task, are prepared to back out these
: optimizations at runtime--while the optimized routines are executing,
: if necessary. Yes, this requires synchronization points, notifications
: (or inline checks), and limits code motion. Better than the
: alternative, I say. It is very simply a huge step backwards to
: create a semantic wall between primary compilation and program
: execution.
: 
: So write the complicated code to make it work 

Re: Object Order of Precedence (Was: Vocabulary)

2003-12-16 Thread Luke Palmer
Jonathan Lang writes:
 Larry Wall wrote:
  Well, nothing much really supersedes the class.  Even traits have
  to be requested by the class, and if you have an entirely different
  metaclass, it's probably declared with a different keyword than
  C<class>.  (But sure, multiple traits will have to be applied in order
  of declaration, and I don't doubt there will be ordering dependencies.)
 
 My apologies; I'm apparently a bit weak on my object-oriented terminology.
  I'm not quite sure what's being meant here by metaclass, other than a
 vague concept that it's somehow similar to the relationship between logic
 and metalogic.  Also, I was under the impression that the writers of the
 Traits paper that you referred us to disliked mixins largely because
 they _did_ use an order-of-precedence conflict resolution scheme; surely
 their concerns would apply equally well to what we're calling traits?  

I think metaclass is referring to the thing that knows how to associate
attributes with their corresponding objects, how to dispatch methods to
their corresponding code objects, and whatnot.
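
Concretely, in Raku the metaclass is reachable as C<.HOW>, and it is
what actually knows a class's methods:

    class DangerousPet { method feed($x) { "fed" } }
    say DangerousPet.^methods;    # (feed) -- answered by the metaclass
    say DangerousPet.HOW.^name;   # Perl6::Metamodel::ClassHOW (in Rakudo)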

  I think the normative way to supersede a class should be to
  subclass it.  That's what OO is supposed to be all about, after all.
  If we can keep that orthogonal to role composition, we stand a good
  chance of being able to do a lot of what AOP claims to do without
  the downsides of AOP's own slatheron approach.  Or more precisely,
  we can resort to AOP-style wrappers where we really need them, and
  avoid them where we don't.
 
 As I don't know what AOP is, this is largely lost on me.  But I'm all for
 keeping various aspects of perl orthogonal to each other if it's
 reasonable to do so.  Likewise, my main concern isn't so much how to
 supersede a class as it is how to keep a class from superseding a role
 that it doesn't know about.  

C<perldoc Aspect> does a pretty good job of introducing one to AOP
(aspect-oriented programming), at least to the extent to which Perl is
capable of it (which is quite a lot).

  I'm probably spouting nonsense.  I just hope it's good-sounding
  nonsense...
 
 More importantly, it seems to be _useful_ nonsense.  I just hope that _my_
 nonsense is more useful than it is annoying.  :)

Luke



Re: Vocabulary

2003-12-16 Thread Luke Palmer
Michael Lazzaro writes:
 
 On Sunday, December 14, 2003, at 06:14 PM, Larry Wall wrote:
 But the agreement could be implied by silence.  If, by the time the
 entire program is parsed, nobody has said they want to extend an
 interface, then the interface can be considered closed.  In other
 words, if you think you *might* want to extend an interface at run
 time, you'd better say so at compile time somehow.  I think that's
 about as far as we can push it in the final direction.
 
 That seems a very fair rule, especially if it adds a smidge more speed. 
  Runtime extension will likely be very unusual 

Unless you're me.  Or Damian.  Or a fair number of other programmers who
like to dive into the Perl Dark Side on a regular basis.

 -- requiring it to be explicit seems reasonable.

It seems so.  Knowing Larry, I'm sure this is an ungrounded fear, but I
definitely want to be able to declare in a module "I'm going to be
screwing with stuff; keep out of my way," so that I don't impose any
awkward declarations on my module users.  If that request can be made
more explicit in the cases where that's possible, great, but the general
declaration should be available.

Luke

 
 I'm probably spouting nonsense.  I just hope it's good-sounding 
 nonsense...
 
 It's beyond good-sounding, it's frickin' awesome.
 
 MikeL
 


Re: Vocabulary

2003-12-16 Thread Larry Wall
On Tue, Dec 16, 2003 at 07:05:19AM -0700, Luke Palmer wrote:
: Michael Lazzaro writes:
:  
:  On Sunday, December 14, 2003, at 06:14 PM, Larry Wall wrote:
:  But the agreement could be implied by silence.  If, by the time the
:  entire program is parsed, nobody has said they want to extend an
:  interface, then the interface can be considered closed.  In other
:  words, if you think you *might* want to extend an interface at run
:  time, you'd better say so at compile time somehow.  I think that's
:  about as far as we can push it in the final direction.
:  
:  That seems a very fair rule, especially if it adds a smidge more speed. 
:   Runtime extension will likely be very unusual 
: 
: Unless you're me.  Or Damian.  Or a fair number of other programmers who
: like to dive into the Perl Dark Side on a regular basis.
: 
:  -- requiring it to be explicit seems reasonable.
: 
: It seems so.  Knowing Larry, I'm sure this is an ungrounded fear, but I
: definitely want to be able to declare in a module "I'm going to be
: screwing with stuff; keep out of my way," so that I don't impose any
: awkward declarations on my module users.  If that request can be made
: more explicit in the cases where that's possible, great, but the general
: declaration should be available.

Okay, we'll call the general declaration:

use $&

or some such.  :-)

Seriously, I hope we can provide a framework in which you can screw
around to your heart's content while modules are being compiled,
and to a lesser extent after compilation.  But we'll never get to a
programming-in-the-large model if we can't limit most of the screwing
around to the lexical scope currently being compiled, or at least
to a known subset of the code.  Modules that turn off optimization
for all other modules are going to be about as popular as $&.  So
the general declaration should probably be something easy to see like:

use STRANGE_SEMANTICS_THAT_SLOW_EVERYONE_DOWN;

That will encourage people to be more specific about what they want
to pessimize.  Certainly, your fancy module should be encouraged
to declare these things on behalf of its users if it can.  I'm not
suggesting that Lukian or Damianly modules force such declarations onto
the users unless it's impossible for the module to know.  And it seems
to me that with sufficient control over the user's grammar, you can
often get that information into your own fancy module somehow.
Might take a few macros though, or analysis of the user's code at
CHECK time (or maybe just before).

And in general, it's probably not necessary to declare all the new
interfaces, but only those interfaces known at compile time that want
to stay open.  Any interfaces added at run time are probably assumed
to be open.  So in some cases you might find yourself deriving a
single open class at compile time from which you can derive other
open classes later.

But still, the principle remains that original declarer of an
interface doesn't know in general whether its users are going to want
to extend it.  At some point the users have to take responsibility
if they want their code to run fast.  Or run at all...

So we need to make it very easy for users to provide this kind of
information when it's needed.

Larry


Re: Vocabulary

2003-12-16 Thread Michael Lazzaro
On Tuesday, December 16, 2003, at 09:07 AM, Larry Wall wrote:
> Seriously, I hope we can provide a framework in which you can screw
> around to your heart's content while modules are being compiled,
> and to a lesser extent after compilation.  But we'll never get to a
> programming-in-the-large model if we can't limit most of the screwing
> around to the lexical scope currently being compiled, or at least
> to a known subset of the code.  Modules that turn off optimization
> for all other modules are going to be about as popular as $&.  So
> the general declaration should probably be something easy to see like:
>
> use STRANGE_SEMANTICS_THAT_SLOW_EVERYONE_DOWN;
>
> That will encourage people to be more specific about what they want
> to pessimize.  Certainly, your fancy module should be encouraged
> to declare these things on behalf of its users if it can.  I'm not
> suggesting that Lukian or Damianly modules force such declarations onto
> the users unless it's impossible for the module to know.  And it seems
> to me that with sufficient control over the user's grammar, you can
> often get that information into your own fancy module somehow.
> Might take a few macros though, or analysis of the user's code at
> CHECK time (or maybe just before).
>
> And in general, it's probably not necessary to declare all the new
> interfaces, but only those interfaces known at compile time that want
> to stay open.  Any interfaces added at run time are probably assumed
> to be open.  So in some cases you might find yourself deriving a
> single open class at compile time from which you can derive other
> open classes later.

Exactly, assuming I correctly understand.  :-)

My own first instinct would be that the run-time extensibility of a 
particular interface/class would simply be a trait attached to that 
class... by default, classes don't get it.  By limiting or not limiting 
the amount of runtime screwin' around you can do with the class, it is 
therefore able to control the level of optimization that calls to 
methods, etc., are given -- but specific to that particular 
interface/class, not to the module and certainly not to the program in 
general.

class Wombat is runtime_extensible { ... };

So everything is closed, except the specific classes which are not.  
Even when you are (to use an example from my own code) making runtime 
subclasses on-the-fly, you're almost always starting from some common 
base class.  (And 'almost' is probably an unneeded qualifier, there.  
As is 'probably'.)

As far as users of your class being able to specify that they want 
something runtime-extensible, when your original module didn't call for 
it, I don't see that as a problem, if they can just add the trait to 
your class shortly after they C<use> the package containing it, if such
things are possible -- or, for that matter, simply subclass your 
original into a runtime_extensible class:

  class Wombat { ... };               # Not runtime extensible
  class MyWombat is Wombat
      is runtime_extensible { ... };  # Runtime extensible

Now, it might be that declaring MyWombat to be runtime_extensible 
actually silently disables some compile-time optimizations not only for 
it, but for all its superclasses/roles/etc., depending on how 
intelligent and far reaching those optimizations may be.  Not sure.  
Still, the fact that you are _requesting_ that happen is specific to 
the particular class that needs it -- and should be associated with 
that class, such that if that class later falls into disuse, the 
optimizations silently reappear.
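
Something like Michael's hypothetical trait is expressible today as a
user-defined trait in modern Raku; the C<runtime_extensible> name and
the registry below are purely illustrative, not any shipped API:

    my %open-classes;
    multi sub trait_mod:<is>(Mu:U \type, :$runtime_extensible!) {
        %open-classes{type.^name} = True;   # a real one would notify the optimizer
    }
    class Wombat is runtime_extensible { }
    say %open-classes<Wombat>;   # True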

(At first glance, I am less sure of the need to have similar 
functionality for entire modules, as opposed to classes, but perhaps 
someone out there can come up with an example.)

MikeL



Re: Vocabulary

2003-12-16 Thread chromatic
On Tue, 2003-12-16 at 12:06, Michael Lazzaro wrote:

 My own first instinct would be that the run-time extensibility of a 
 particular interface/class would simply be a trait attached to that 
 class... by default, classes don't get it.

That doesn't sound very dynamic.

At the post-OSCON design meetings, Larry suggested that the user of a
class or library could say "I'm not going to muck about with this at
runtime, and any extra optimization would be nice, so go ahead and do
whatever you can to it."

Putting that opportunity on the user has several advantages:

- the library writer isn't responsible for getting the library
completely perfect, because library users can make changes if necessary
- the common case (run-time extension and manipulation) needs less code
(that is, you don't have to say "Mother, may I take advantage of the
features of the dynamic language I'm supposed to be?" to take advantage
of those features)
- the user of the library can choose specific optimizations when and
where he needs them

-- c



RE: Vocabulary

2003-12-16 Thread Gordon Henriksen
finally by default? None for me; thanks, though.

--
 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]



Re: Vocabulary

2003-12-16 Thread Luke Palmer
Chip Salzenberg writes:
 According to Jonathan Scott Duff:
  Those classes that are closed can be opened at run-time and the
  user pays the penalty then when they try to modify the class [...]
 
 The optimization that can be reversed is not the true optimization.

While poetic and concise, I think that statement needs to be driven into
the ground a bit more.

Over on p6i, I think we're basically in agreement that the ability to
undo optimizations is nothing we can count on.  Unless there is a
breakthrough in computer science any time soon, this while loop:

sub one() { 1 };
sub go() {
    my $x = 0;
    while $x++ < one {  # loop optimized away
        %main::{'one'} = sub { 10 };
        print "Boing!\n";
    }
}

is not something that can be re-inserted when we find out one() has
changed.  While it's possible to make it so go() is unoptimized on the
next call, that's not good enough.  We expect changes to act instantly.

But if you separate parsing and code-generation time, you can make
optimizations earlier based on declarations later, which is just fine.
It allows you to say:

use PresumptuousModule <SomeClass>;
class SomeClass is extensible { };

Then even if the writer of PresumptuousModule thinks you'll be better
off with the optimization, you can tell him otherwise.  But you have to
do it before the code is generated.

Luke


Re: Vocabulary

2003-12-16 Thread Michael Lazzaro
On Tuesday, December 16, 2003, at 12:20 PM, Gordon Henriksen wrote:
> finally by default? None for me; thanks, though.

I don't think so; we're just talking about whether you can extend a 
class at _runtime_, not _compiletime_.  Whether or not Perl can have 
some degree of confidence that, once a program is compiled, it won't 
have to assume the worst-case possibility of runtime alteration of 
every class, upon every single method call, just in case you've screwed 
with something.

They still aren't final classes, in that you can subclass them at 
will.  You just can't subclass them at _runtime_, via C<eval>, unless
you've specifically marked that you want to allow that for that 
_specific_ class.

As Larry hypothesized:
The other reason for final is to make it easy for the compiler
to optimize.  That's also problematical.  As implemented by Java,
it's a premature optimization.  The point at which you'd like to
know this sort of thing is just after parsing the entire program and
just before code generation.  And the promises have to come from
the users of interfaces, not the providers, because the providers
don't know how their services are going to be used.  Methods, roles,
and classes may never declare themselves final.  They may be declared
final only by the agreement of all their users.
But the agreement could be implied by silence.  If, by the time the
entire program is parsed, nobody has said they want to extend an
interface, then the interface can be considered closed.  In other
words, if you think you *might* want to extend an interface at run
time, you'd better say so at compile time somehow.  I think that's
about as far as we can push it in the final direction.
-and-

Actually, I think making people declare what they want to extend
might actually provide a nice little safety mechanism for what can
be modified by the eval and what can't.  It's not exactly Safe, but
it's a little safer.
-and-

Seriously, I hope we can provide a framework in which you can screw
around to your heart's content while modules are being compiled,
and to a lesser extent after compilation.  But we'll never get to a
programming-in-the-large model if we can't limit most of the screwing
around to the lexical scope currently being compiled, or at least
to a known subset of the code.


So, if I may interpret that: it might not be so bad to have to declare
whether or not you were going to extend/alter a class at runtime, in 
order that Perl could optimize what it knows at compile-time for the 
99.5% of the classes that you wouldn't be doing that for.

MikeL



Re: Vocabulary

2003-12-16 Thread Michael Lazzaro
On Tuesday, December 16, 2003, at 03:00 PM, Luke Palmer wrote:
> But Perl hinges on laziness, doesn't it?  Eh, I trust that Perl 6 will
> make it easy to figure that out in most cases.  I was coming from the
> perspective that 90% of my projects don't need speed; but I can say no
> such thing on account of my users.  And what about that unaccounted-for
> 10%?

As someone who has 90% of their projects relying very critically on 
speed, and who has had to battle a number of clients' IT departments 
over the years in defense of said speed compared to other popular 
languages which, out of spite, I will not name, I beg you to never 
speak or think that sentence again.

;-)

MikeL



Re: Vocabulary

2003-12-16 Thread Chip Salzenberg
According to Michael Lazzaro:
 As someone who has 90% of their projects relying very critically on 
 speed

... an anecdote ...

 and who has had to battle a number of clients' IT departments 
 over the years in defense of said speed compared to other popular 
 languages which, out of spite, I will not name,

... and a public relations issue.

Let us not confuse them.
-- 
Chip Salzenberg   - a.k.a. -   [EMAIL PROTECTED]
I wanted to play hopscotch with the impenetrable mystery of existence,
but he stepped in a wormhole and had to go in early.  // MST3K


Re: Vocabulary

2003-12-16 Thread Piers Cawley
Larry Wall [EMAIL PROTECTED] writes:

 On Tue, Dec 16, 2003 at 07:05:19AM -0700, Luke Palmer wrote:
 : Michael Lazzaro writes:
 :  
 :  On Sunday, December 14, 2003, at 06:14 PM, Larry Wall wrote:
 :  But the agreement could be implied by silence.  If, by the time the
 :  entire program is parsed, nobody has said they want to extend an
 :  interface, then the interface can be considered closed.  In other
 :  words, if you think you *might* want to extend an interface at run
 :  time, you'd better say so at compile time somehow.  I think that's
 :  about as far as we can push it in the final direction.
 :  
 :  That seems a very fair rule, especially if it adds a smidge more speed. 
 :   Runtime extension will likely be very unusual 
 : 
 : Unless you're me.  Or Damian.  Or a fair number of other programmers who
 : like to dive into the Perl Dark Side on a regular basis.
 : 
 :  -- requiring it to be explicit seems reasonable.
 : 
 : It seems so.  Knowing Larry, I'm sure this is an ungrounded fear, but I
 : definitely want to be able to declare in a module "I'm going to be
 : screwing with stuff; keep out of my way," so that I don't impose any
 : awkward declarations on my module users.  If that request can be made
 : more explicit in the cases where that's possible, great, but the general
 : declaration should be available.

 Okay, we'll call the general declaration:

 use $&

 or some such.  :-)

 Seriously, I hope we can provide a framework in which you can screw
 around to your heart's content while modules are being compiled,
 and to a lesser extent after compilation.  But we'll never get to a
 programming-in-the-large model if we can't limit most of the screwing
 around to the lexical scope currently being compiled, or at least
 to a known subset of the code.  Modules that turn off optimization
 for all other modules are going to be about as popular as $&.

Or the debugger. Or a refactoring tool. Or a Class browser... 

 So the general declaration should probably be something easy to see
 like:

 use STRANGE_SEMANTICS_THAT_SLOW_EVERYONE_DOWN;

No question about that.

 That will encourage people to be more specific about what they want
 to pessimize.  Certainly, your fancy module should be encouraged
 to declare these things on behalf of its users if it can.  I'm not
 suggesting that Lukian or Damianly modules force such declarations onto
 the users unless it's impossible for the module to know.  And it seems
 to me that with sufficient control over the user's grammar, you can
 often get that information into your own fancy module somehow.
 Might take a few macros though, or analysis of the user's code at
 CHECK time (or maybe just before).

When you say CHECK time, do you mean there'll be a CHECK phase for
code that gets required at run time?

-- 
Beware the Perl 6 early morning joggers -- Allison Randal


Re: Vocabulary

2003-12-16 Thread Piers Cawley
Michael Lazzaro [EMAIL PROTECTED] writes:

 On Tuesday, December 16, 2003, at 12:20 PM, Gordon Henriksen wrote:
 finally by default? None for me; thanks, though.

 I don't think so; we're just talking about whether you can extend a
 class at _runtime_, not _compiletime_.  Whether or not Perl can have
 some degree of confidence that, once a program is compiled, it won't
 have to assume the worst-case possibility of runtime alteration of
 every class, upon every single method call, just in case you've
 screwed with something.

There's still a hell of a lot of stuff you can do with 'cached'
optimization that can be thrown away if anything changes. What the
'final' type declarations would do is allow the compiler to throw away
the unoptimized paths and the checks for dynamic changes that mean the
optimization has to be thrown out and started again.
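
Something like this is the sort of promise I mean (syntax entirely
speculative, and the pragma name is invented; per Larry's digression
elsewhere in the thread, the promise comes from the *users* of the
interface, not its provider):

    # Hypothetical pragma: this program promises never to extend Dog,
    # so the compiler may discard the unoptimized paths and the
    # has-anything-changed checks for Dog's methods.
    use closed 'Dog';

    my Dog $spot = Dog.new;
    $spot.bark();   # now a candidate for a direct, devirtualized call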

-- 
Beware the Perl 6 early morning joggers -- Allison Randal


Re: Vocabulary

2003-12-16 Thread Michael Lazzaro
On Tuesday, December 16, 2003, at 04:01 PM, Chip Salzenberg wrote:

According to Michael Lazzaro:
As someone who has 90% of their projects relying very critically on
speed
... an anecdote ...
Yes.

and who has had to battle a number of clients' IT departments
over the years in defense of said speed compared to other popular
languages which, out of spite, I will not name,
... and a public relations issue.
Yes, again.

Let us not confuse them.
I'm not sure I understand which part of that is in conflict.  Is it the 
premise that some people use Perl in environments in which speed is an 
issue, the premise that Perl5 has a public relations issue about being 
inappropriate for speed-critical environments, or the conflation that 
someone who works in speed-critical environments, and wishes to use 
Perl, is going to run up against the public-relations issue?

MikeL



Re: Vocabulary

2003-12-16 Thread John Macdonald
On Wed, Dec 17, 2003 at 12:15:04AM +0000, Piers Cawley wrote:
 There's still a hell of a lot of stuff you can do with 'cached'
 optimization that can be thrown away if anything changes. What the
 'final' type declarations would do is allow the compiler to throw away
 the unoptimized paths and the checks for dynamic changes that mean the
 optimization has to be thrown out and started again.

As Luke pointed out in an earlier message,
you can encounter grave difficulty (i.e. halting
problem unsolvable sort of difficulty) in trying to
unoptimize a piece of code that is in the middle of
being executed.  Just about any subroutine call might
(but almost always won't :-) happen to execute code
that forces the current subroutine to revert
to unoptimized (or differently optimized) form.
When that subroutine call returns after such a rare
occurrence, it can't resume in the unoptimized code
(because context may be missing: the calling routine
got this far using the optimized code and may have
skipped setup that is now necessary) and it can't
resume in the old optimized code either (because that
optimization might now be wrong).


Re: Vocabulary

2003-12-16 Thread Chip Salzenberg
According to Michael Lazzaro:
 On Tuesday, December 16, 2003, at 04:01 PM, Chip Salzenberg wrote:
 ... an anecdote ...
 ... and a public relations issue.
 Let us not confuse them.
 
 I'm not sure I understand which part of that is in conflict.

Speed is for users.  PR is for non-users.

You want speed?  OK, we can talk about the actual speed you actually
need based on your actual usage patterns.  But from a design
perspective you're a collection of anecdote, not a user base; so your
usage patterns may be irrelevant to Perl in the big picture.

In a separate matter, non-users may perceive Perl {5,6} to be too slow
for their needs; more to the point, they may *assume* that it is too
slow without research and testing.  That assumption is a public
relations issue -- ironically, one which is fundamentally disconnected
from the question of Perl's _actual_ efficiency.
-- 
Chip Salzenberg   - a.k.a. -   [EMAIL PROTECTED]
I wanted to play hopscotch with the impenetrable mystery of existence,
but he stepped in a wormhole and had to go in early.  // MST3K


Re: Vocabulary

2003-12-16 Thread Larry Wall
On Wed, Dec 17, 2003 at 12:11:59AM +0000, Piers Cawley wrote:
: When you say CHECK time, do you mean there'll be a CHECK phase for
: code that gets required at run time?

Dunno about that.  When I say CHECK time I'm primarily referring
to the end of the main compilation.  Perl 5 appears to ignore CHECK
blocks declared at run time, so in the absence of other considerations
I suspect Perl 6 might do the same.

Larry


Re: Vocabulary

2003-12-16 Thread Michael Lazzaro
On Tuesday, December 16, 2003, at 05:36 PM, Chip Salzenberg wrote:
Speed is for users.  PR is for non-users.

You want speed?  OK, we can talk about the actual speed you actually
need based on your actual usage patterns.  But from a design
perspective you're a collection of anecdote, not a user base; so your
usage patterns may be irrelevant to Perl in the big picture.
In a separate matter, non-users may perceive Perl {5,6} to be too slow
for their needs; more to the point, they may *assume* that it is too
slow without research and testing.  That assumption is a public
relations issue -- ironically, one which is fundamentally disconnected
from the question of Perl's _actual_ efficiency.


Well, just for clarification: in my anecdotal case (server-side web 
applications), the speed I actually need is as much as I can get, and 
all the time.  Every N cycles I save represents an increase in peak 
traffic capabilities per server, which is, from a marketing 
perspective, essential.

If a potential client company needs to decide between two server-based 
products -- my Perl based product, and a competing Java-based one -- 
one of the first questions they ask is how much traffic can it handle 
for X dollars of hardware and software.  I don't have to win that 
benchmark, but I have to be close.  Otherwise I don't get to play.

I agree, it is frequently the case that the question of speed is made 
critical by people who most assuredly do not need it.  But they still 
decide that way, and I have found that asserting to them that speed is 
not important has been... well, less than effective.  I do not doubt 
that P6 will be much more competitive, speed-wise, than P5 -- but if it 
could actually _win_ a few benchmarks, it would turn my company's use 
of Perl from a PR problem to a PR advantage.


your usage patterns may be irrelevant to Perl in the big picture.
The thought has crossed my mind repeatedly, believe me.

MikeL



Re: Vocabulary

2003-12-16 Thread Luke Palmer
Michael Lazzaro writes:
 I agree, it is frequently the case that the question of speed is made 
 critical by people who most assuredly do not need it.  But they still 
 decide that way, and I have found that asserting to them that speed is 
 not important has been... well, less than effective.  I do not doubt 
 that P6 will be much more competitive, speed-wise, than P5 -- but if it 
 could actually _win_ a few benchmarks, it would turn my company's use 
 of Perl from a PR problem to a PR advantage.

In the presence of parrot's JIT, competing should be no problem.  I'm
not entirely sure Perl 6 will be faster than Perl 5 on the average.  But
the difference is that Perl 6 will allow you to make fast code where you
need it.  For instance (and the main one, probably), using native
(lowercase) types allows you to JIT, and using JIT is just...  well, you
have to see it for yourself.  Amazing.  But since, as I've said, I don't
do speed-critical work, I won't usually be using lowercase types.  And
that trades away speed for flexibility.
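
Concretely, the kind of thing I mean (sketch only; lowercase-type
syntax as speculated so far):

    my int $sum = 0;                  # native type: no PMC boxing
    for 1 .. 1_000_000 -> int $i {
        $sum += $i;                   # tight loop the JIT can chew on
    }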

And from what I've seen of Java, if you need speed, hand-optimizing your
inner loop to parrot assembly should blow Java out of the water.
Without needing a C compiler (I despise XS).

Luke

 your usage patterns may be irrelevant to Perl in the big picture.
 
 The thought has crossed my mind repeatedly, believe me.
 
 MikeL
 


Re: Vocabulary

2003-12-16 Thread Rafael Garcia-Suarez
Larry Wall wrote in perl.perl6.language :
 On Wed, Dec 17, 2003 at 12:11:59AM +0000, Piers Cawley wrote:
: When you say CHECK time, do you mean there'll be a CHECK phase for
: code that gets required at run time?
 
 Dunno about that.  When I say CHECK time I'm primarily referring
 to the end of the main compilation.  Perl 5 appears to ignore CHECK
 blocks declared at run time, so in the absence of other considerations
 I suspect Perl 6 might do the same.

This has proven to be inconvenient except for a few specialized usages,
such as the B::/O compiler framework.

There's a need (more or less) for special blocks that can be run at the
end of the compilation phase of any arbitrary compilation unit.


Re: Vocabulary

2003-12-15 Thread Jonathan Scott Duff
On Sun, Dec 14, 2003 at 06:14:42PM -0800, Larry Wall wrote:
 On Sun, Dec 14, 2003 at 03:16:16AM -0600, Jonathan Scott Duff wrote:
 [ my ramblings about a mechanism for role methods to supercede class
   methods elided ]

 I think there's a simple way to solve this: If you're changing the
 policy of the class, then you're changing the class! Derive from the
 defective class and pull in the roles the way you prefer. 

D'oh!  You are absolutely correct.

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Object Order of Precedence (Was: Vocabulary)

2003-12-15 Thread Jonathan Lang
Larry Wall wrote:
 Jonathan Lang wrote:
 : Let's see if I've got this straight:
 : 
 : role methods supercede inherited methods;
 
 But can defer via SUPER::
 
 : class methods supercede role methods;
 
 But can defer via ROLE:: or some such.

Check, and check.  Of course, SUPER:: works well in single inheritance,
but runs into problems of "which superclass?" in multiple inheritance; ROLE::
would on the surface appear to have that same problem, except that...

 : conflicting methods from multiple roles get discarded...
 
 They aren't silently discarded--they throw a very public exception.
 (But methods with differing multi signatures are not considered to
 be conflicting, I hope.)

(OK.)  

 :   ...but the class may alias or exclude any of the conflicting methods
 : to explicitly resolve the dispute.  
 
 Right.  Another possibility is that the class's method could be
 declared to be the default multi method in case the type information
 is not sufficient to decide which role's multi method should be called.
 Maybe if it's declared multi it works that way.  Otherwise it's just
 called first automatically.

...meaning that the question of "which role do you mean?" has already been
addressed by the time the ROLE:: deference gets used.  

Although I'm not following what you're saying here in terms of the third
means of disambiguation.  Could someone provide an example, please?  
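
My best guess at what that would look like, borrowing the order()
example from elsewhere in the thread (all syntax speculative, and the
parameter types are invented):

    role Work { multi method order(Item $i)   {...} }
    role Lead { multi method order(Person $p) {...} }

    class Manager does Work does Lead {
        # Declared multi: called only when the argument types can't
        # pick between Work's and Lead's candidates.  Left undeclared,
        # it would simply be called first automatically.
        multi method order($x) {...}
    }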

 : trait methods supercede class methods;
 
 I'm not sure traits work that way.  I see them more as changing the
 metaclass rules.  They feel more like macros to me, where anything
 goes, but you have to be a bit explicit and intentional.

Well, the question is: the trait is providing a method with the same name
as a method provided by the class, and type information is insufficient to
distinguish between them; which one do I use?  In the absence of
additional conflict resolution code, the possible options as I see them
would be: 

1) the class supercedes the trait
2) the trait supercedes the class
3) an ambiguity exception gets thrown
4) the trait's method can't be called without explicitly naming the trait

Which of these four ought to hold true?  

Second, where does the additional conflict resolution code go?  In the
trait, in the class, or somewhere else?  

 : Am I right so far?  Maybe not; I noticed earlier that you've mentioned
 : that roles can be applied at compile-time using "does" or at run-time
 : using "but"; might _that_ be the defining feature as to whether the
 : role supercedes the class or vice versa?  "does" supercedes 
 : inheritance, "has" and "method" supercede "does", "is" and "but" 
 : supercede "has" and "method"...
 
 No, I think I'm rejecting that notion as too complicated to keep
 track of from moment to moment, and too much like slatherons in
 policy wishy-washiness.  The method precedence won't change from
 compile time to run time.

OK.  My concern is that things like properties add new factors to the
ambiguity issue that you can't expect the class to know about, because
they're being introduced after the class was written.  The fact that a
role supercedes inheritance makes sense to me (more precisely, it isn't
counterintuitive); that a class supercedes a role also makes sense to me,
as long as the role was there when the class was defined.  But when you
add a role to the class after the fact, as in the case of properties, I
don't see how you can expect the class to be able to resolve the conflict.
 What happens when the sticky note that you put on a microwave oven covers
up the display panel?  

It's not so much run-time vs. compile-time as it is "while the class is
being written" and "after the class has been written", and the principle
that he who knows the most about what's going on should make the
decisions.  

Perhaps this could be handled by requiring sticky-note roles (of which
properties are a subset) to be explicitly named when their methods are
called?  That is, "after the fact" roles don't get flattened into the
class the way that normal roles do.  That way, you're not requiring either
the class _or_ the role to resolve the conflict.  This would be similar to
the relationship between positional parameters and named parameters, in
that the latter is there to let you add capabilities to an existing
function without disrupting the way that the function normally operates. 
(OTOH, that's just about _all_ that it has in common.)  
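
So, very roughly (the role-qualified call syntax here is pure
speculation on my part):

    my Person $pete;
    $pete does Lead;         # sticky-note role, added after the fact
    $pete.order();           # still Person's order(); no flattening
    $pete.Lead::order();     # explicitly name the sticky-note role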

 : So how do you resolve conflicts between things that supercede the
 : class?  First come first serve (as per slatherons)?  
 
 Well, nothing much really supercedes the class.  Even traits have
 to be requested by the class, and if you have an entirely different
 metaclass, it's probably declared with a different keyword than
 Cclass.  (But sure, multiple traits will have to be applied in order
 of declaration, and I don't doubt there will be ordering dependencies.)

My apologies; I'm apparently a bit weak on my object-oriented terminology.
 I'm not quite sure what's being meant here by "metaclass", 

Re: Vocabulary

2003-12-15 Thread Michael Lazzaro
On Sunday, December 14, 2003, at 06:14 PM, Larry Wall wrote:
But the agreement could be implied by silence.  If, by the time the
entire program is parsed, nobody has said they want to extend an
interface, then the interface can be considered closed.  In other
words, if you think you *might* want to extend an interface at run
time, you'd better say so at compile time somehow.  I think that's
about as far as we can push it in the final direction.
That seems a very fair rule, especially if it adds a smidge more speed. 
 Runtime extension will likely be very unusual -- requiring it to be 
explicit seems reasonable.


I'm probably spouting nonsense.  I just hope it's good-sounding 
nonsense...
It's beyond good-sounding, it's frickin' awesome.

MikeL



Re: Vocabulary

2003-12-14 Thread Jonathan Scott Duff
On Sat, Dec 13, 2003 at 01:44:34PM -0800, Larry Wall wrote:
 On Sat, Dec 13, 2003 at 12:50:50PM -0500, Austin Hastings wrote:
 : It seems to me there's an argument both ways --
 : 
 : 1. Code written in the absence of a role won't anticipate the role and
 : therefore won't take (unknowable) steps to disambiguate method calls. Ergo
 : method overloads are bad.
 : 
 : 2. Roles may be written to deliberately supercede the methods of their
 : victims. Method overloads are vital.
 
 I think the default has to be 1, with an explicit way to get 2, preferably
 with the agreement of the class in question, though that's not absolutely
 necessary if you believe in AOP.

So, if we follow the rules in the Traits paper, a role may have no
semantic effect if the object's class already provides the necessary
methods.  To *guarantee* that a role will modify an object's behavior,
we need some syntactic clue.  Perhaps "shall"?

my Person $pete shall Work;

Whatever methods Work defines will override corresponding methods of the
same name (signature?) in the Person class. (With will or does just
the opposite is true) And that same idea could extend to roles
overriding roles easy enough:

my Person $pete will Work shall Lead;
$pete.order();  # calls Lead.order

i.e. if the class Person, and the roles Work and Lead all define a
method called order(), then the Lead role's order() will be the one
called.

I'm not sure that "will" works that way, but you get the idea.

WRT the class's cooperation, should a class be able to say "You
can't override this method"?

method foo is forever { ... }

Seems like it would make it harder for the programmers to debug in the
event of problems.  I guess that could be mitigated by clear error
reporting and rampant object introspection.

 : This doesn't take into account role vs. role conflicts (which seem more
 : likely to be serendipitous).
 : 
 : Perhaps an exact signature rule would work? Such that if the method was an
 : exact replacement, then no error occurs, otherwise it becomes either an
 : error or a multi?
 
 Er, which method?
 
 : Alternatively, perhaps a role must declare its method to be multi or not,
 : and if not then the role's method overloads the original class's.
 
 No, I think the default needs to be such that the class's method is
 expected to dispatch to the role's method.  If no such method exists
 then it falls back on the normal role method dispatch.  In either case,
 it would almost always be the case that you'd want multimethod dispatch
 to the set of role methods of the same name.
 
 I'm starting to think that any methods declared in roles are automatically
 considered multi when composed, whether so declared or not.

Hmm. So that would mean that we'd need a syntax for method replacement
when we wanted it (which we'd need anyway if the method was our smallest
unit of reuse rather than the role) and the only time we would get an
error is when 2 (or more) roles included in a composition have exactly
the same signature.  [slight digression, but methods are really
singleton roles, aren't they?]

I'm not sure whether I like the idea of "shall" at all now. It seems
better to just have every method of a role declare that it replaces the
same-name method of the class. If the class doesn't want future roles
replacing a given method, then we get an error. Oh, but without "shall",
composition order would matter and there'd be no visual cue at
composition time that a particular role is special. Drat.

Surely we would need a way to do same-signature replacement on methods
too? This would mildly argue against an implicit multi.

 So whenever you bind a run-time role, the class looks to see if it
 already knows how to do the combination of roles this object wants,
 and if so, the role binding is very fast.  Otherwise it creates the new
 composition, checks for conflicts, resolves them (or doesn't), and then
 binds the new composition as the object's current view of its class.

Neat.

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Vocabulary

2003-12-14 Thread Piers Cawley
Jonathan Scott Duff [EMAIL PROTECTED] writes:

 On Sat, Dec 13, 2003 at 01:44:34PM -0800, Larry Wall wrote:
 On Sat, Dec 13, 2003 at 12:50:50PM -0500, Austin Hastings wrote:
 : It seems to me there's an argument both ways --
 : 
 : 1. Code written in the absence of a role won't anticipate the role and
 : therefore won't take (unknowable) steps to disambiguate method calls. Ergo
 : method overloads are bad.
 : 
 : 2. Roles may be written to deliberately supercede the methods of their
 : victims. Method overloads are vital.
 
 I think the default has to be 1, with an explicit way to get 2, preferably
 with the agreement of the class in question, though that's not absolutely
 necessary if you believe in AOP.

 So, if we follow the rules in the Traits paper, a role may have no
 semantic effect if the object's class already provides the necessary
 methods.  To *guarantee* that a role will modify an object's behavior,
 we need some syntactic clue.  Perhaps "shall"?

   my Person $pete shall Work;

But presumably 

my Person $self will Work;

Using a pair of words that change their meaning depending on the
subject of the verb seems to be a courageous choice of language and
rather too contextual even for Perl.

-- 
Beware the Perl 6 early morning joggers -- Allison Randal


Re: Vocabulary

2003-12-14 Thread Larry Wall
On Sun, Dec 14, 2003 at 03:16:16AM -0600, Jonathan Scott Duff wrote:
: So, if we follow the rules in the Traits paper, a role may have no
: semantic effect if the object's class already provides the necessary
: methods.  To *guarantee* that a role will modify an object's behavior,
: we need some syntactic clue.  Perhaps "shall"?
: 
:   my Person $pete shall Work;
: 
: Whatever methods Work defines will override corresponding methods of the
: same name (signature?) in the Person class. (With "will" or "does", just
: the opposite is true.) And that same idea could extend to roles
: overriding roles easy enough:
: 
:   my Person $pete will Work shall Lead;
:   $pete.order();  # calls Lead.order
: 
: i.e. if the class Person, and the roles Work and Lead all define a
: method called order(), then the Lead role's order() will be the one
: called.
: 
: I'm not sure that will works that way, but you get the idea.

I think there's a simple way to solve this: If you're changing the
policy of the class, then you're changing the class!  Derive from the
defective class and pull in the roles the way you prefer.  If we're
taking the job of code reuse away from classes and giving it to
roles, then the only job left to classes is object policy.  Let's not
take that away too.

: WRT the class's cooperation, should a class be able to say "You
: can't override this method"?
: 
:   method foo is forever { ... }
: 
: Seems like it would make it harder for the programmers to debug in the
: event of problems.  I guess that could be mitigated by clear error
: reporting and rampant object introspection.

I'm deeply suspicious of any trait resembling final.  The way
to prevent people from overriding your interface is to write the
interface good enough that no one will want to override it.  Good
luck.  :-)

[begin digression]

The other reason for final is to make it easy for the compiler
to optimize.  That's also problematical.  As implemented by Java,
it's a premature optimization.  The point at which you'd like to
know this sort of thing is just after parsing the entire program and
just before code generation.  And the promises have to come from
the users of interfaces, not the providers, because the providers
don't know how their services are going to be used.  Methods, roles,
and classes may never declare themselves final.  They may be declared
final only by the agreement of all their users.

But the agreement could be implied by silence.  If, by the time the
entire program is parsed, nobody has said they want to extend an
interface, then the interface can be considered closed.  In other
words, if you think you *might* want to extend an interface at run
time, you'd better say so at compile time somehow.  I think that's
about as far as we can push it in the final direction.

[end digression]

:  I'm starting to think that any methods declared in roles are automatically
:  considered multi when composed, whether so declared or not.
: 
: Hmm. So that would mean that we'd need a syntax for method replacement
: when we wanted it (which we'd need anyway if the method was our smallest
: unit of reuse rather than the role) and the only time we would get an
: error is when 2 (or more) roles included in a composition have exactly
: the same signature.

Right, I think.

: [slight digression, but methods are really singleton roles aren't they?]

In a sense, yes.  Though just as with singleton classes, the type and
the thing it represents should not be confused.

: I'm not sure whether I like the idea of "shall" at all now. It seems
: better to just have every method of a role declare that it replaces the
: same-name method of the class. If the class doesn't want future roles
: replacing a given method, then we get an error. Oh, but without "shall",
: composition order would matter and there'd be no visual cue at
: composition time that a particular role is special. Drat.
: 
: Surely we would need a way to do same-signature replacement on methods
: too? This would mildly argue against an implicit multi.

I still think the right way to change policy is to write your own class
that *doesn't* define the methods you want the roles to override.  The
name of a class has to represent *something*, if it doesn't represent
a fixed set of methods.  I submit that it represents a consistent policy.
A different policy should have a different class name.
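
Concretely, something like (speculative syntax, names invented):

    class Person does Lead {
        method order() {...}     # Person's policy beats Lead's
    }

    # Different policy, different class: no order() declared here, so
    # Lead's order() composes in over the merely *inherited* one.
    class Follower is Person does Lead {}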

So I think that classes have to be in charge of role composition.
The actor chooses how to play the part, not vice versa.  To the extent
that the actor can't choose, we're looking at the actor's traits,
not the actor's roles.

I try not to confuse roles and traits in my own life.  Being the Perl
god is a role.  Being a stubborn cuss is a trait.  :-)

Larry


Re: Vocabulary

2003-12-14 Thread Chip Salzenberg
According to Larry Wall:
 If, by the time the entire program is parsed, nobody has said they
 want to extend an interface, then the interface can be considered
 closed.

What with Ceval STRING and its various wrappers, when can the program
be said to be fully parsed?  - anticipating Mu
-- 
Chip Salzenberg   - a.k.a. -   [EMAIL PROTECTED]
I wanted to play hopscotch with the impenetrable mystery of existence,
but he stepped in a wormhole and had to go in early.  // MST3K


Re: Vocabulary

2003-12-14 Thread Jonathan Lang
Larry Wall wrote:
 I think the class is still the final arbiter of what its objects
 are--there is no other entity that holds all the reins.  If a class
 chooses to include a role, and that role violates the normal rules of
 roles, the class is still responsible for that (or else you need some
 babysitting code somewhere, hopefully in the metaclass).  Maybe that's
 what a trait is--a renegade role.  

Let's see if I've got this straight:

role methods supercede inherited methods;
class methods supercede role methods;
trait methods supercede class methods;
conflicting methods from multiple roles get discarded...
  ...but the class may alias or exclude any of the conflicting methods to 
 explicitly resolve the dispute.  

Am I right so far?  Maybe not; I noticed earlier that you've mentioned
that roles can be applied at compile-time using "does" or at run-time
using "but"; might _that_ be the defining feature as to whether the role
supercedes the class or vice versa?  "does" supercedes inheritance, "has"
and "method" supercede "does", "is" and "but" supercede "has" and
"method"...

So how do you resolve conflicts between things that supercede the class? 
First come first serve (as per slatherons)?  

=
Jonathan Dataweaver Lang



RE: Vocabulary

2003-12-13 Thread Austin Hastings


 -Original Message-
 From: Larry Wall [mailto:[EMAIL PROTECTED]
 Sent: Friday, December 12, 2003 8:30 PM

 On Fri, Dec 12, 2003 at 05:17:37PM -0500, Austin Hastings wrote:

 : Good. I like the mixin being available at either time. This makes properties
 : a lot more useful since I can provide default or "normal" values:
 :
 :   role Celebrated
 : does Date
 : does {
 :   method celebrated($d) { return $d.date; }
 :   }
 :
 :   class Birthday does Celebrated {
 : has $.date;
 :   }
 :
 :   my Birthday $d = Birthday.new('February', 29, 2004) but
 : Celebrated('March', 01, 2004);
 :
 :   print My birthday is celebrated $d.celebrated;

 More generally, you can write the rest of the class knowing that the
 role is there if it's compiled in.

 : I presume that the linear order (compile time) or chronological order of
 : applying roles decides the order in which overlaid methods are
 : Cwrapped/overlaid.

 The original Traits paper specifies that it's illegal to compose two
 methods of the same name into the class, and you have to rename one of
 them to get them both visible.  This is why the authors specifically
 rejected mixins, because they hide errors like this.

I'm not convinced these are errors. Having a role override methods makes
sense in a lot of ways.

Consider, for example, a caching or persistence implementation that
overrides the .STORE method of its victims.
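
E.g., roughly (write_through is invented here, and "call" defers to
the overridden method, as in the wrapping examples further down):

    role Persistent {
        method STORE($v) {
            .write_through($v);  # persist the new value first...
            call;                # ...then let the victim's own STORE run
        }
    }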

It seems to me there's an argument both ways --

1. Code written in the absence of a role won't anticipate the role and
therefore won't take (unknowable) steps to disambiguate method calls. Ergo
method overloads are bad.

2. Roles may be written to deliberately supercede the methods of their
victims. Method overloads are vital.

This doesn't take into account role vs. role conflicts (which seem more
likely to be serendipitous).

Perhaps an exact signature rule would work? Such that if the method was an
exact replacement, then no error occurs, otherwise it becomes either an
error or a multi?

Alternatively, perhaps a role must declare its method to be multi or not,
and if not then the role's method overloads the original class's.

(Which takes us to retroactive multi-fication. Ugh.)

Or perhaps you just have to say "this method overloads".

 As for the relationship of Trait methods to other methods
 declarations, an explicit method declaration in the class proper
 overrides the composed methods, while composed methods override
 anything else in the inheritance hierarchy.

At compile time, right? Whereas composition (but=) overrides declaration at
run-time?

 : Which is it, by the way? Or is there MTOWTDI, such as a method
 modifier for
 : specifying polymorph behavior?

 The default way might well be the way specified in the Traits paper.
 However, their underlying language didn't support any kind of multi
 dispatch.  Perl 6 will be able to multi any set of names in the same
 namespace as long as the arguments are differentiable by type.  So it
 might be possible to insert a stub method declaration in the class
 proper that says treat all composed methods of this name as multis.
 That presumes the methods take differing arguments, of course.

 :   method CONFORM is wrapped { ... call ... }

 That would be another way to do it, except that you might still have
 to switch on something to tell it which role method to call.

 :  A property is a simple kind of role that supplies a single attribute.
 :  The type of a property is identical to its role name.  Roles can have
 :  subtypes that function as enums when the subtypes are constrained to a
 :  single value.
 :
 : This seems really clunky for enums. It works okay for boolean, but even
 : doing month-names is going to suck pretty hard:
 :
 :   role Month;
 :
 :   role January   does Month[0];
 :   role February  does Month[1];
 :   role March does Month[2];
 :   role April does Month[3];
 :   role May   does Month[4];
 :   role June  does Month[5];
 :   role July  does Month[6];
 :   role Augustdoes Month[7];
 :   role September does Month[8];
 :   role October   does Month[9];
 :   role November  does Month[10];
 :   role December  does Month[11];
 :
 :   role Month does Int[January..December];

 That's why I suggested some syntactic sugar for it.  But I admit that
 treating each enum as a subtype is a stretch.  They could be constant
 methods, for instance.  In any event, the various enum names should
 probably be hidden in the Month role and not be exported by default.

Yeah, the concept is useful enough that it's probably worth a spoonful of
sugar. Perhaps it would be better to think of a clever way of defining a batch
of named constants in a class declaration, so that enums could be full
classes if they want to be:

  class Month is Int {
method name {...};
has values [ January => 0, February, ..., December ];
  }

 :  You can use one of these subtypes without specifically
 implying the role
 : name.  So saying
 : 
 :  $bar but Red
 : 
 

Re: Vocabulary

2003-12-13 Thread Larry Wall
On Sat, Dec 13, 2003 at 12:07:40PM -0500, Austin Hastings wrote:
:  From: Larry Wall [mailto:[EMAIL PROTECTED]
:  The behavior probably doesn't expire unless you've cloned the object
:  and the clone expires.  However, if a role goes out of its lexical
:  scope, it can't be named, so it's effectively not usable unless you
:  dig out a name for it via reflection.  But the information is still
:  cached there, so the object could be smarter the next time it takes
:  on the same role.
: 
: It's a role closure, in other words?

Erm.  That's a fancy word, and I don't claim to know what it means
all the time.  I suspect the name of the role is closed but the role
itself isn't.  Alice: If the name of the role is called Teach...

: That being the case, how to you unapply a role?
: 
:   $frank does no Teach;
: 
:   $frank doesnt Teach;

$frank.role_manager(
action => delete,
mode => override_all_safety_mechanisms, 
name_the_role_is_called => Teach
);

Or something like that.  :-)

:  That being said, a role applied with Ctemp probably *should* be
:  stripped out when it goes out of scope.  Could get messy though...
: 
: I can't think of a way to apply a role with temp (to a non-temp object). How
: do you do it?

Well, we did set up a way for a method to be temporizable, so it
probably comes down to whether Cbut is just syntactic sugar for a
method call that knows how to undo itself.

Larry


Re: Vocabulary

2003-12-13 Thread Larry Wall
On Sat, Dec 13, 2003 at 12:50:50PM -0500, Austin Hastings wrote:
:  -Original Message-
:  From: Larry Wall [mailto:[EMAIL PROTECTED]
:  Sent: Friday, December 12, 2003 8:30 PM
: 
:  On Fri, Dec 12, 2003 at 05:17:37PM -0500, Austin Hastings wrote:
:  : I presume that the linear order (compile time) or chronological order of
:  : applying roles decides the order in which overlaid methods are
:  : Cwrapped/overlaid.
: 
:  The original Traits paper specifies that it's illegal to compose two
:  methods of the same name into the class, and you have to rename one of
:  them to get them both visible.  This is why the authors specifically
:  rejected mixins, because they hide errors like this.
: 
: I'm not convinced these are errors. Having a role override methods makes
: sense in a lot of ways.

A role method certainly overrides inherited methods, so it's only
methods defined in the class itself we're talking about here.
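
In other words (speculative syntax, names invented):

    class Animal { method speak() { print "..." } }

    role Barker  { method speak() { print "Woof" } }

    class Dog is Animal does Barker {}   # Barker's speak() clearly
                                         # beats the inherited one
    class GuardDog is Animal does Barker {
        method speak() { print "WOOF!" }  # the contested case: the
    }                                     # class's own method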

: Consider, for example, a caching or persistence implementation that
: overrides the .STORE method of its victims.

I think the class is still the final arbiter of what its objects
are--there is no other entity that holds all the reins.  If a class
chooses to include a role, and that role violates the normal rules of
roles, the class is still responsible for that (or else you need some
babysitting code somewhere, hopefully in the metaclass).  Maybe that's
what a trait is--a renegade role.  Instead of

does Storable

maybe it's

is storable

: It seems to me there's an argument both ways --
: 
: 1. Code written in the absence of a role won't anticipate the role and
: therefore won't take (unknowable) steps to disambiguate method calls. Ergo
: method overloads are bad.
: 
: 2. Roles may be written to deliberately supercede the methods of their
: victims. Method overloads are vital.

I think the default has to be 1, with an explicit way to get 2, preferably
with the agreement of the class in question, though that's not absolutely
necessary if you believe in AOP.

: This doesn't take into account role vs. role conflicts (which seem more
: likely to be serendipitous).
: 
: Perhaps an exact signature rule would work? Such that if the method was an
: exact replacement, then no error occurs, otherwise it becomes either an
: error or a multi?

Er, which method?

: Alternatively, perhaps a role must declare its method to be multi or not,
: and if not then the role's method overloads the original class's.

No, I think the default needs to be such that the class's method is
expected to dispatch to the role's method.  If no such method exists
then it falls back on the normal role method dispatch.  In either case,
it would almost always be the case that you'd want multimethod dispatch
to the set of role methods of the same name.

I'm starting to think that any methods declared in roles are automatically
considered multi when composed, whether so declared or not.

: (Which takes us to retroactive multi-fication. Ugh.)

More like proactive multi-fication, I think.

: Or perhaps you just have to say "this method overloads".

If it's part of the role contract, it's part of the contract.  But maybe
that makes it a trait.

:  As for the relationship of Trait methods to other methods
:  declarations, an explicit method declaration in the class proper
:  overrides the composed methods, while composed methods override
:  anything else in the inheritance hierarchy.
: 
: At compile time, right? Whereas composition (but=) overrides declaration at
: run-time?

I don't think the rules for run-time roles should be different than the
rules for compile-time roles (because Perl doesn't make a hard-and-fast
distinction between compile time and run time).  And the arbitration
logic has to be somehow associated with the class, either explicitly
by the class's declarations, or by some babysitting code telling the
class how to behave given the new composition.

I've been talking about singleton classes to implement run-time
roles, but that's not quite right.  I think a class caches its
various compositions and either reuses an existing one or creates
a new composition depending on which set of roles has been bound to
the current object.  It might seem like you could have a combinatorial
explosion of the possible number of compositions if multiple properties
are applied at run time, but when you think about it, the number of those
cached compositions has to be equal or less than the number of singleton
classes you'd get if you created a new one for every object with one
or more properties.  In general there will be many fewer compositions
than singletons.

So whenever you bind a run-time role, the class looks to see if it
already knows how to do the combination of roles this object wants,
and if so, the role binding is very fast.  Otherwise it creates the new
composition, checks for conflicts, resolves them (or doesn't), and then
binds the new composition as the object's current view of its class.
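
A crude sketch of that lookup (everything here is hypothetical,
including all the names):

    our %compositions;   # per-class cache, keyed by the set of roles

    sub bind_roles($obj, @roles) {
        my $key = @roles.map({ .name }).sort.join(',');
        unless %compositions{$key} {
            # slow path: compose, check conflicts, resolve (or don't)
            %compositions{$key} = compose($obj.class, @roles);
        }
        $obj.reclass(%compositions{$key});   # fast path ever after
    }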

In a sense, these are 

Vocabulary

2003-12-12 Thread Luke Palmer
So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
makes things pretty hard to understand.

So here's how Perl 6 is using said inconsistent terms, AFAIK:

- attribute
  A concrete data member of a class.  Used with Chas.

- property
  An out-of-band sticky note to be placed on a single object.
Used with Cbut.

- trait
  A compile time sticky note to be placed on a wide variety of things. 
Used with Cis.

- role
  A collection of methods to be incorporated into a class sans
inheritance (and maybe some other stuff, too).  Used with Cdoes.

So for example:

class Dog
does Boolean# role
is extended # trait
is Mammal   # [1]
{
has $.tail; # attribute
has @.legs; # attribute
}

my $fido = Dog.new
but false;  # property

Hope that clears things up.

Luke

[1] This is a base class, which is an overloaded use of Cis.  Though,
upon A12 release, we'll probably find out that it's not overloaded but
instead, elegantly unified, somehow. 


Re: Vocabulary

2003-12-12 Thread Jonathan Scott Duff
On Fri, Dec 12, 2003 at 04:23:02AM -0700, Luke Palmer wrote:
 So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
 makes things pretty hard to understand.
 
 So here's how Perl 6 is using said inconsistent terms, AFAIK:
 
 - attribute
   A concrete data member of a class.  Used with Chas.
 
 - property
   An out-of-band sticky note to be placed on a single object.
 Used with Cbut.

I think an important aspect of properties that you left out here is
that they are run-time.

 - trait
   A compile time sticky note to be placed on a wide variety of things. 
 Used with Cis.
 
 - role
   A collection of methods to be incorporated into a class sans
 inheritance (and maybe some other stuff, too).  Used with Cdoes.

s/class/object/ 

Roles are like sticky behavior (as I understand them) just as properties
are sticky state. And I assume that roles are run-time so that you can
have your objects obtain new behavior (fulfill new roles) as needed
without having to use eval all the time.

I think I'm getting it but I'm not sure.  Does something like this
work?

my role Teach { ... }
my role Operate { ... }
my role Learn { ... }

my Person $frank;
{ temp $frank_the_teacher = $frank does Teach; ... }
{ temp $frank_the_doctor = $frank does Operate; ... }
{ temp $frank_the_student = $frank does Learn; ... }

I.e., we can use dynamic scoping to control how long an object
fulfills a particular role?  Maybe it could also be written like so:

my Person $frank;
{ my role Teach { ... }; $frank does Teach; ... }
{ my role Operate { ... }; $frank does Operate; ... }
{ my role Learn { ... } $frank does Learn; ... }

so that when the role goes out of scope, the object no longer
possesses the abilities of that role.

I confuse myself every time I think about this stuff.

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Vocabulary

2003-12-12 Thread Larry Wall
On Fri, Dec 12, 2003 at 04:23:02AM -0700, Luke Palmer wrote:
: So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
: makes things pretty hard to understand.

Agreed.

: So here's how Perl 6 is using said inconsistent terms, AFAIK:
: 
: - attribute
:   A concrete data member of a class.  Used with Chas.

Declared with Chas is a little more precise.

: - property
:   An out-of-band sticky note to be placed on a single object.
: Used with Cbut.

Maybe "applied with"?

: - trait
:   A compile time sticky note to be placed on a wide variety of things. 
: Used with Cis.

Fine.  (Though I like to hyphenate compile-time when it's an adjective,
and not when it's a noun.  Same for run-time, just to be consistent.)

: - role
:   A collection of methods to be incorporated into a class sans

A role can also supply one or more attributes.

: inheritance (and maybe some other stuff, too).  Used with Cdoes.

Here it gets a little fuzzier.  A role can be applied to a class
at compile time via does, or to an object at run time via but.
A property is a simple kind of role that supplies a single attribute.
The type of a property is identical to its role name.  Roles can have
subtypes that function as enums when the subtypes are constrained to a
single value.  You can use one of these subtypes without specifically
implying the role name.  So saying

$bar but Red

might give you a value with the property Color.  You can write the corresponding
boolean test using the smart match operator:

$bar ~~ Red

and it (smartly) picks out the Color property to compare with, provided
it's unambiguous.  You can use that syntax to compare against any
subtype or junction of subtypes:

$bar ~~ Redish|Whiteish # pinkish

: So for example:
: 
: class Dog
: does Boolean# role
: is extended # trait
: is Mammal   # [1]
: {
: has $.tail; # attribute
: has @.legs; # attribute
: }
: 
: my $fido = Dog.new
: but false;  # property
: 
: Hope that clears things up.

Yes, it does.

: Luke
: 
: [1] This is a base class, which is an overloaded use of Cis.  Though,
: upon A12 release, we'll probably find out that it's not overloaded but
: instead, elegantly unified, somehow. 

If not, it'll be easy to turn it into an isa.

Larry


RE: Vocabulary

2003-12-12 Thread Austin Hastings


 -Original Message-
 From: Jonathan Scott Duff [mailto:[EMAIL PROTECTED]
 Sent: Friday, December 12, 2003 11:13 AM
 To: Luke Palmer
 Cc: Language List
 Subject: Re: Vocabulary


 On Fri, Dec 12, 2003 at 04:23:02AM -0700, Luke Palmer wrote:
  So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
  makes things pretty hard to understand.
 
  So here's how Perl 6 is using said inconsistent terms, AFAIK:
 
  - attribute
A concrete data member of a class.  Used with Chas.
 
  - property
An out-of-band sticky note to be placed on a single object.
  Used with Cbut.

 I think an important aspect of properties that you left out here is
 that they are run-time.

  - trait
A compile time sticky note to be placed on a wide variety
 of things.
  Used with Cis.
 
  - role
A collection of methods to be incorporated into a class sans
  inheritance (and maybe some other stuff, too).  Used
 with Cdoes.

 s/class/object/

 Roles are like sticky behavior (as I understand them) just as properties
 are sticky state. And I assume that roles are run-time so that you can
 have your objects obtain new behavior (fulfill new roles) as needed
 without having to use eval all the time.


This seems needlessly restrictive. If we're defining roles as having mixin
capabilities, we should be able to use them in classes. Laziness.

 role Work {...};
 role Management {...};

 class Employee {...};

 class Worker is Employee does Work;
 class Manager is Employee does Management;

 role PHB does Manager[does no Work];

 I think I'm getting it but I'm not sure.  Does something like this
 work?

   my role Teach { ... }
   my role Operate { ... }
   my role Learn { ... }

   my Person $frank;
   { temp $frank_the_teacher = $frank does Teach; ... }
   { temp $frank_the_doctor = $frank does Operate; ... }
   { temp $frank_the_student = $frank does Learn; ... }

 I.e., we can use dynamic scoping to control how long an object
 fulfills a particular role?  Maybe it could also be written like so:

   my Person $frank;
   { my role Teach { ... }; $frank does Teach; ... }
   { my role Operate { ... }; $frank does Operate; ... }
   { my role Learn { ... } $frank does Learn; ... }

 so that when the role goes out of scope, the object no longer
 possesses the abilities of that role.

 I confuse myself every time I think about this stuff.

That's brilliant, if twisted. The object persists, but the behaviors expire.
There's a paradigm there, man. Write a book.

(It's double-e Eevil, too, but that's Damian's problem. :-)

=Austin



FW: Vocabulary

2003-12-12 Thread Austin Hastings

 -Original Message-
 From: Luke Palmer [mailto:[EMAIL PROTECTED]
 Sent: Friday, December 12, 2003 6:23 AM
 
 So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
 makes things pretty hard to understand.
 
 So here's how Perl 6 is using said inconsistent terms, AFAIK:
 
 - attribute
   A concrete data member of a class.  Used with Chas.
 
 - property
   An out-of-band sticky note to be placed on a single object.
 Used with Cbut.
 
 - trait
   A compile time sticky note to be placed on a wide variety 
 of things. Used with Cis.

Did I miss something with IS and OF?

That is, I think:

  Cis means storage type, while Cof means trait or class:

  my @a is Herd of Cat;

declares a Herd (presumably a base class of some collection type) with the trait that, 
in this case, members will be of Class Cat.

Did this change when I wasn't looking?

 - role
   A collection of methods to be incorporated into a class sans
 inheritance (and maybe some other stuff, too).  Used with Cdoes.

No comment, since this is still hovering (see Larry's reply).

 
 So for example:
 
 class Dog
 does Boolean# role
 is extended # trait
 is Mammal   # [1]

The only difference I can see here between Cdoes Boolean and Cis extended would be 
the declaration of Boolean or extended (unless Cis can only be used with built-in 
traits, which seems unnecessarily restrictive...)

 {
 has $.tail; # attribute
 has @.legs; # attribute
 }
 
 my $fido = Dog.new
 but false;  # property
 
 Hope that clears things up.
 
 Luke
 
 [1] This is a base class, which is an overloaded use of Cis.  Though,
 upon A12 release, we'll probably find out that it's not overloaded but
 instead, elegantly unified, somehow. 

Thanks for bringing this out.

=Austin



RE: Vocabulary

2003-12-12 Thread Austin Hastings


 -Original Message-
 From: Larry Wall [mailto:[EMAIL PROTECTED]
 Sent: Friday, December 12, 2003 12:17 PM

 : - role
 :   A collection of methods to be incorporated into a class sans

 A role can also supply one or more attributes.

So a role can constrain values and add behavior and attributes. Presumably
it can do both at the same time?

  enum ParityMode values P_ODD P_EVEN P_NONE;

  role Byte
    does Int[0..255]   # Value constraint
    does {   # extending by adding attributes & methods, and by
             # overriding the STORE method
      has ParityMode $.parity_mode = P_NONE;
      has bit $.parity;

      # .CONFORM is redundant with the Value constraint above,
      # which autogenerates this.
      method CONFORM(Int $i) { SUPER && 0 <= $i <= 255; }
      method STORE(Int $i: $v) { $i = .CONFORM($v) || fail; set_parity; }
      method set_parity {...}
  };

 : inheritance (and maybe some other stuff, too).  Used
 with Cdoes.

 Here it gets a little fuzzier.  A role can be applied to a class
 at compile time via does, or to an object at run time via but.

Good. I like the mixin being available at either time. This makes properties
a lot more useful since I can provide default or "normal" values:

  role Celebrated
does Date
does {
  method celebrated($d) { return $d.date; }
  }

  class Birthday does Celebrated {
has $.date;
  }

  my Birthday $d = Birthday.new('February', 29, 2004) but
Celebrated('March', 01, 2004);

  print My birthday is celebrated $d.celebrated;

I presume that the linear order (compile time) or chronological order of
applying roles decides the order in which overlaid methods are
Cwrapped/overlaid.

Which is it, by the way? Or is there MTOWTDI, such as a method modifier for
specifying polymorph behavior?

  method CONFORM is wrapped { ... call ... }

 A property is a simple kind of role that supplies a single attribute.
 The type of a property is identical to its role name.  Roles can have
 subtypes that function as enums when the subtypes are constrained to a
 single value.

This seems really clunky for enums. It works okay for boolean, but even
doing month-names is going to suck pretty hard:

  role Month;

  role January   does Month[0];
  role February  does Month[1];
  role March does Month[2];
  role April does Month[3];
  role May   does Month[4];
  role June  does Month[5];
  role July  does Month[6];
  role Augustdoes Month[7];
  role September does Month[8];
  role October   does Month[9];
  role November  does Month[10];
  role December  does Month[11];

  role Month does Int[January..December];

 You can use one of these subtypes without specifically implying the role
name.  So saying

 $bar but Red

 might give you a value with the property Color.

This is smart and helpful. I like it. However, there needs to be a way to
specify what to do when multiple roles share the same values. For example,
if I have NeededBy and Estimated roles:

  my $birthday = "02/29/2004" but March;
  my $ship_date = "01/01/01" but NeededBy(February);

 You can write the corresponding boolean test using the smart match
operator:

 $bar ~~ Red

 and it (smartly) picks out the Color property to compare with, provided
 it's unambiguous.  You can use that syntax to compare against any
 subtype or junction of subtypes:

 $bar ~~ Redish|Whiteish   # pinkish


Disambiguation?

  $bar ~~ NeededBy(February)

or

  $bar.NeededBy ~~ February

=Austin



Re: Vocabulary

2003-12-12 Thread Stéphane Payrard
 
 A role can also supply one or more attributes.
 
 : inheritance (and maybe some other stuff, too).  Used with Cdoes.

The Smalltalk paper you mentioned, which talked about roles (under
the name of traits), said that roles were stateless.

What are the consequences of using stateful roles?

A related question: will getter and setter methods have the
same name as the underlying accessed attributes?


--
 stef


Re: Vocabulary

2003-12-12 Thread Dan Sugalski
At 9:16 AM -0800 12/12/03, Larry Wall wrote:
On Fri, Dec 12, 2003 at 04:23:02AM -0700, Luke Palmer wrote:
: - property
:   An out-of-band sticky note to be placed on a single object.
: Used with Cbut.
Maybe applied with?

: - trait
:   A compile time sticky note to be placed on a wide variety of things.
: Used with Cis.
Fine.  (Though I like to hyphenate compile-time when it's an adjective,
and not when it's a noun.  Same for run-time, just to be consistent.)
I would really, *really* like to kill the whole "It's a sticky note!" 
metaphor dead. If I understand the changes proposed in properties as 
part of the whole shift-to-roles thing, they aren't anything like 
sticky notes at all, as they dynamically subclass the object.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Vocabulary

2003-12-12 Thread Larry Wall
On Fri, Dec 12, 2003 at 07:12:40PM +0100, Stéphane Payrard wrote:
:  
:  A role can also supply one or more attributes.
:  
:  : inheritance (and maybe some other stuff, too).  Used with Cdoes.
: 
: The Smalltalk paper you mentioned, which talked about roles (under
: the name of traits), said that roles were stateless.

Though they did point out that state was one of the thing they
wanted to look into.

: What are the consequences of using stateful roles?

That's what we're trying to figure out.  :-)

It seems to me that as long as the attributes declared in the role are
elaborated into the real object at class composition time, there's
really very little problem with doing that part of it.  You have to
watch for collisions just as you do with method names, of course.
The tricky part comes later when you start to use it.  The trickiest
thing will be to know if some change in the rest of the object has
invalidated your role's state such that you have to recompute it.

: A related question. Will getter and setter methods will have the
: same name as the underlying accessed attributes?

Yes, the getter/setter method has the same name as the attribute.
(There's only one method, but it can be an lvalue method for rw
attributes).
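
E.g., presumably (sketch only):

    class Dog {
        has $.tail is rw;    # one accessor method, named .tail
    }

    my $d = Dog.new;
    $d.tail = 'waggly';      # the lvalue method as setter
    print $d.tail;           # the very same method as getter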

Larry


Re: FW: Vocabulary

2003-12-12 Thread Larry Wall
On Fri, Dec 12, 2003 at 04:31:32PM -0500, Austin Hastings wrote:
:  - trait
:A compile time sticky note to be placed on a wide variety 
:  of things. Used with Cis.
: 
: Did I miss something with IS and OF?
: 
: That is, I think:
: 
:   Cis means storage type, while Cof means trait or class:
: 
:   my @a is Herd of Cat;
: 
: declares a Herd (presumably a base class of some collection type) with the trait 
: that, in this case, members will be of Class Cat.
: 
: Did this change when I wasn't looking?

No, it hasn't changed.  Generally Cis specifies the storage class when
you're applying it to a variable.  We've just been using it a little
weirdly on things that aren't variables, such as class declarations.

:  - role
:A collection of methods to be incorporated into a class sans
:  inheritance (and maybe some other stuff, too).  Used with Cdoes.
: 
: No comment, since this is still hovering (see Larry's reply).

Flutter, flutter.

:  So for example:
:  
:  class Dog
:  does Boolean# role
:  is extended # trait
:  is Mammal   # [1]
: 
: The only difference I can see here between Cdoes Boolean and Cis extended would 
: be the declaration of Boolean or extended (unless Cis can only be used with built-in 
: traits, which seems unnecessarily restrictive...)

Traits are seeming a lot more like roles than like superclasses
these days.  But they may still be different beasties.  A role will
have some rules about how it's composed into a class, while a trait
can presumably do anything it jolly well pleases.  They may unify at
some point, but maybe only at a temperature of billions of degrees.

Larry


Re: Vocabulary

2003-12-12 Thread Casey West
It was Friday, December 12, 2003 when Luke Palmer took the soap box, saying:
: So I'm seeing a lot of inconsistent OO-vocabulary around here, and it
: makes things pretty hard to understand.

Awesome.  I've taken your original, plus comments so far and created
perlvocab.pod.  Let's give it a couple of go-rounds and it can be stored
in CVS for safe keeping (and maintaining).  Send me diffs if you like.

Document below sig.

  Casey West

-- 
Usenet is like Tetris for people who still remember how to read. 
  -- Button from the Computer Museum, Boston, MA

=pod

=head1 NAME

perlvocab - Perl Vocabulary and Glossary

=head1 SYNOPSIS

This document authoritatively defines many potentially ambiguous terms in Perl.

=head1 DESCRIPTION

=head2 Object Oriented Terminology

=over 4

=item attribute

A concrete data member of a class.  Declared with Chas.

=item property

A run-time, out-of-band sticky note to be placed on a single object,
applied with Cbut.

A property is a simple kind of role that supplies a single attribute.
The type of a property is identical to its role name.  Roles can have
subtypes that function as enums when the subtypes are constrained to a
single value.  You can use one of these subtypes without specifically
implying the role name.  So saying

$bar but Red

might give you a value with the property Color.  You can write the corresponding
boolean test using the smart match operator:

$bar ~~ Red

and it (smartly) picks out the Color property to compare with, provided
it's unambiguous.  You can use that syntax to compare against any
subtype or junction of subtypes:

    $bar ~~ Redish|Whiteish    # pinkish
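
As a sketch of the whole dance (role names hypothetical, and assuming
C<but> mixes a role into a single value at run time):

    role Color {}
    role Red   does Color {}
    role White does Color {}

    my $bar = 'wine' but Red;      # sticky note on this one value only

    say $bar ~~ Red;               # True -- the mixed-in role answers ~~
    say $bar ~~ White;             # False
    say so $bar ~~ Red | White;    # True -- junction of subtypes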

=item trait

A compile-time sticky note to be placed on a wide variety of things.
Used with C<is>.

=item role

A collection of methods and/or attributes to be incorporated into a class
sans inheritance (and maybe some other stuff, too).  A role can be applied
to a class at compile time via C<does>, or to an object at run time via
C<but>.

So for example:

    class Dog
        does Boolean    # role
        is extended     # trait
        is Mammal       # base class
    {
        has $.tail;     # attribute
        has @.legs;     # attribute
    }

    my $fido = Dog.new
        but false;      # property

In this example, C<Mammal> is a base class, which is an overloaded use of
C<is>.  Though, upon A12's release, we'll probably find out that it's not
overloaded but instead elegantly unified, somehow.

=head1 AUTHOR

Luke Palmer, Original Document

Contributions by Larry Wall and Jonathan Scott Duff

Compilation by Casey West

=cut


Re: Vocabulary

2003-12-12 Thread Larry Wall
On Fri, Dec 12, 2003 at 05:17:37PM -0500, Austin Hastings wrote:
:  -Original Message-
:  From: Larry Wall [mailto:[EMAIL PROTECTED]
:  Sent: Friday, December 12, 2003 12:17 PM
: 
:  : - role
:  :   A collection of methods to be incorporated into a class sans
: 
:  A role can also supply one or more attributes.
: 
: So a role can constrain values and add behavior and attributes. Presumably
: it can do both at the same time?

I suspect so.  Some added behaviors may only make sense on a constrained
set of values.

:   enum ParityMode values <P_ODD P_EVEN P_NONE>;
: 
:   role Byte
: does Int[0..255]   # Value constraint
: does { # extending by adding attributes & methods, and by
:        # overriding the STORE method
:   has ParityMode $.parity_mode = P_NONE;
:   has bit $.parity;
: 
:   # .CONFORM is redundant with the Value constraint above,
:   # which autogenerates this.
:   method CONFORM(Int $i) { SUPER && 0 <= $i <= 255; }
:   method STORE(Int $i: $v) { $i = .CONFORM($v) || fail; set_parity; }
:   method set_parity {...}
: };

Yes, though CONFORM is likely to be spelled PRE.

And I'm not sure your STORE is gonna work by clobbering the invocant
reference like that.  More likely you have to assign to .value or some
such accessor provided by Int.  Or since roles compose into the class,
it may be okay for roles to access attribute variables directly,
and set $.value (presuming that's the attribute provided by Int).
Depends on how fancy we want to get with the cross checking at
composition time.
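
For a rough picture of the composed result, a sketch under those
assumptions (the C<UInt8> subset and the method names are mine, not
anything specced):

    subset UInt8 of Int where { 0 <= $_ <= 255 };   # value constraint as subtype

    role Parity {
        has Bool $.parity is rw;
        method set-parity(Int $v) { $!parity = so $v +& 1; }   # odd bit set?
    }

    class Byte does Parity {
        has UInt8 $.value is rw;   # the constrained storage
    }

    my $b = Byte.new(value => 7);
    $b.set-parity($b.value);
    say $b.parity;                 # True, since 7 is odd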

:  : inheritance (and maybe some other stuff, too).  Used with C<does>.
: 
:  Here it gets a little fuzzier.  A role can be applied to a class
:  at compile time via does, or to an object at run time via but.
: 
: Good. I like the mixin being available at either time. This makes properties
: a lot more useful since I can provided default or normal values:
: 
:   role Celebrated
: does Date
: does {
:   method celebrated($d) { return $d.date; }
:   }
: 
:   class Birthday does Celebrated {
: has $.date;
:   }
: 
:   my Birthday $d = Birthday.new('February', 29, 2004) but
: Celebrated('March', 01, 2004);
: 
:   print "My birthday is celebrated $d.celebrated";

More generally, you can write the rest of the class knowing that the
role is there if it's compiled in.
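
Assuming the role's attribute composes in as if declared in the class
proper, Austin's example might boil down to a sketch like this (details
mine):

    role Celebrated {
        has $.celebrated is rw;    # the role supplies the attribute
    }

    class Birthday does Celebrated {
        has $.date;
    }

    # The rest of the class (and its users) can count on the role:
    my $d = Birthday.new(date => '2004-02-29', celebrated => '2004-03-01');
    say "My birthday is celebrated on $d.celebrated()";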

: I presume that the linear order (compile time) or chronological order of
: applying roles decides the order in which overlaid methods are
: C<wrap>ped/overlaid.

The original Traits paper specifies that it's illegal to compose two
methods of the same name into the class, and you have to rename one of
them to get them both visible.  This is why the authors specifically
rejected mixins, because they hide errors like this.

As for the relationship of Trait methods to other method
declarations, an explicit method declaration in the class proper
overrides the composed methods, while composed methods override
anything else in the inheritance hierarchy.
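
A sketch of both rules at once (names invented; assuming a composition
conflict is fatal unless the class supplies its own method):

    role Walking  { method move { "walks" } }
    role Swimming { method move { "swims" } }

    # Composing both with no help is a composition-time error over 'move'.
    # A method in the class proper overrides the composed candidates:
    class Duck does Walking does Swimming {
        method move { "waddles, mostly" }
    }

    say Duck.new.move;   # waddles, mostly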

: Which is it, by the way? Or is there MTOWTDI, such as a method modifier for
: specifying polymorph behavior?

The default way might well be the way specified in the Traits paper.
However, their underlying language didn't support any kind of multi
dispatch.  Perl 6 will be able to multi any set of names in the same
namespace as long as the arguments are differentiable by type.  So it
might be possible to insert a stub method declaration in the class
proper that says "treat all composed methods of this name as multis".
That presumes the methods take differing arguments, of course.
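
Such a stub might look like the C<proto> below (a sketch, assuming
same-named C<multi> methods from different roles compose into one
candidate set):

    role Walking  { multi method move(Int $steps)  { "walks $steps steps" } }
    role Swimming { multi method move(Str $stroke) { "swims the $stroke" } }

    class Duck does Walking does Swimming {
        proto method move(|) {*}   # stub: treat composed 'move's as multis
    }

    my $duck = Duck.new;
    say $duck.move(3);        # Int argument -> Walking's candidate
    say $duck.move('crawl');  # Str argument -> Swimming's candidate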

:   method CONFORM is wrapped { ... call ... }

That would be another way to do it, except that you might still have
to switch on something to tell it which role method to call.

:  A property is a simple kind of role that supplies a single attribute.
:  The type of a property is identical to its role name.  Roles can have
:  subtypes that function as enums when the subtypes are constrained to a
:  single value.
: 
: This seems really clunky for enums. It works okay for boolean, but even
: doing month-names is going to suck pretty hard:
: 
:   role Month;
: 
:   role January   does Month[0];
:   role February  does Month[1];
:   role March does Month[2];
:   role April does Month[3];
:   role May   does Month[4];
:   role June  does Month[5];
:   role July  does Month[6];
:   role Augustdoes Month[7];
:   role September does Month[8];
:   role October   does Month[9];
:   role November  does Month[10];
:   role December  does Month[11];
: 
:   role Month does Int[January..December];

That's why I suggested some syntactic sugar for it.  But I admit that
treating each enum as a subtype is a stretch.  They could be constant
methods, for instance.  In any event, the various enum names should
probably be hidden in the Month role and not be exported by default.
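
For comparison, an C<enum> declarator along those sugary lines could
collapse the twelve-role version to a couple of lines (exact surface
syntax still speculative here):

    enum Month <January February March April May June
                July August September October November December>;

    say January.value;    # 0 -- values count up from zero
    say Month(2);         # March
    say July ~~ Month;    # True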

:  You can use one of these subtypes without specifically implying the role
:  name.