Re: Vocabulary

2003-12-17 Thread Piers Cawley
Larry Wall [EMAIL PROTECTED] writes:

 On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote:
 : When you say CHECK time, do you mean there'll be a CHECK phase for
 : code that gets required at run time?

 Dunno about that.  When I say CHECK time I'm primarily referring
 to the end of the main compilation.  Perl 5 appears to ignore CHECK
 blocks declared at run time, so in the absence of other considerations
 I suspect Perl 6 might do the same.

I feared that might be the case. 

-- 
Beware the Perl 6 early morning joggers -- Allison Randal
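
[The Perl 5 behavior Larry describes is easy to verify directly. A
minimal sketch: a CHECK block compiled with the main program runs at
the end of main compilation, while one introduced later through a
string eval is discarded (with a "Too late to run CHECK block"
warning when warnings are enabled).]

```perl
use strict;

our $ran;                         # set by whichever CHECK blocks actually run
CHECK { $ran = 'compile-end' }    # queued: runs at the end of main compilation

# Declared at run time via string eval -- Perl 5 discards it
# (warning "Too late to run CHECK block" under -w):
eval 'CHECK { $ran = "runtime" }';

print "$ran\n";    # prints "compile-end"
```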


Re: Vocabulary

2003-12-17 Thread Simon Cozens
[EMAIL PROTECTED] (Michael Lazzaro) writes:
 Well, just for clarification; in my anecdotal case (server-side web
 applications), the speed I actually need is as much as I can get,
 and all the time.  Every N cycles I save represents an increase in
 peak traffic capabilities per server, which is, from a marketing
 perspective, essential.

The desire to optimize the hell out of Perl 6 is a good one, but surely
you optimize when there is a problem, not before. Is there a problem
with the speed you're getting from Perl 6 at the moment?

-- 
&lt;evilPetey&gt; I often think I'd get better throughput yelling at the modem.


RE: Vocabulary

2003-12-17 Thread Gordon Henriksen
Michael Lazzaro wrote:

 I don't think so; we're just talking about whether you can extend a 
 class at _runtime_, not _compiletime_.  Whether or not Perl can have 
 some degree of confidence that, once a program is compiled, it won't 
 have to assume the worst-case possibility of runtime alteration of 
 every class, upon every single method call, just in case 
 you've screwed with something.

That's a cute way of glossing over the problem.

How do you truly know when runtime is in the first place? Imagine an
application server which parses and loads code from files on-demand.
This shouldn't be difficult. Imagine that that code references a
system of modules.

Imagine if Perl finalizes classes after primary compilation
(after parsing, say, an ApacheHandler file), and proceeds to behave
quite differently indeed afterwards.

Imagine that a perfectly innocent coder finds that his class
library doesn't run the same (doesn't run at all) under the
application server as it does when driven from command line scripts:
His method overrides don't take effect (or, worse, Perl tells him he
can't even compile them because the class is already finalized! And
he thought Perl was a dynamic language!).

What's his recourse? Nothing friendly. Tell Perl that he's going
to subclass the classes he subclasses? Why? He already subclasses
them! Isn't that tell enough? And how? Some obscure configuration
file of the application server, no doubt. And now the app server needs
to be restarted if that list changes. His uptime just went down. And
now he can't have confidence that his system will continue to behave
consistently over time; apachectl restart becomes a part of his
development troubleshooting lexicon.

Java doesn't make him do that; HotSpot can make this optimization at
runtime and back it out if necessary. Maybe he'll just write a JSP
instead.

C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
.NET Frameworks are sure a lot less annoying than the Java class
library, after all.


Point of fact, for a large set of important usage cases, Perl simply
can't presume that classes will EVER cease being introduced into the
program. That means it can NEVER make these sorts of optimizations
unless it is prepared to back them out. Even in conventional programs,
dynamic class loading is increasingly unavoidable. Forcing virtuous
programmers to declare virtual (lest their program misbehave or
their perfectly valid bytecode fail to load, or their perfectly valid
source code fail to compile) is far worse than allowing foolish
programmers to declare final.

Making semantic distinctions of this scale between compile time
and runtime will be a significant blow to Perl, which has always been
strengthened by its dynamism. Its competitors do not include such
artifacts; they perform class finalization optimizations on the fly,
and, despite the complexity of the task, are prepared to back out these
optimizations at runtime--while the optimized routines are executing,
if necessary. Yes, this requires synchronization points, notifications
(or inline checks), and limits code motion. Better than the
alternative, I say. It is very simply a huge step backwards to
create a semantic wall between primary compilation and program
execution.

So write the complicated code to make it work right.
- or -
Take the performance hit and go home.

Dynamism has a price. Perl has always paid it in the past. What's
changed?
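
[Perl 5 already pays exactly this price: it invalidates its method
cache whenever a package's symbol table or @ISA changes, so a runtime
override takes effect on the very next call -- optimization backed out
on the fly. A minimal sketch, using a hypothetical Counter class for
illustration:]

```perl
use strict;
use warnings;

package Counter;
sub new  { bless {}, shift }
sub step { 1 }

package main;
my $c = Counter->new;
my $before = $c->step;            # resolves to the original method: 1

# Runtime redefinition: Perl 5 flushes its method cache when the
# symbol table changes, so the very next call sees the new code.
{
    no warnings 'redefine';
    *Counter::step = sub { 2 };
}

my $after = $c->step;             # now 2
print "$before -> $after\n";      # prints "1 -> 2"
```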

-- 

Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]



Re: Vocabulary

2003-12-17 Thread Larry Wall
On Wed, Dec 17, 2003 at 06:20:22AM -, Rafael Garcia-Suarez wrote:
: Larry Wall wrote in perl.perl6.language :
:  On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote:
: : When you say CHECK time, do you mean there'll be a CHECK phase for
: : code that gets required at run time?
:  
:  Dunno about that.  When I say CHECK time I'm primarily referring
:  to the end of the main compilation.  Perl 5 appears to ignore CHECK
:  blocks declared at run time, so in the absence of other considerations
:  I suspect Perl 6 might do the same.
: 
: This has proven to be inconvenient except for a few specialized usages,
: such as the B::/O compiler framework.
: 
: There's a need (more or less) for special blocks that can be run at the
: end of the compilation phase of any arbitrary compilation unit.

Well, that's what I'd call an other consideration.  :-)

Larry


Re: Vocabulary

2003-12-17 Thread Larry Wall
On Tue, Dec 16, 2003 at 06:55:56PM -0500, Gordon Henriksen wrote:
: Michael Lazzaro wrote:
: 
:  I don't think so; we're just talking about whether you can extend a 
:  class at _runtime_, not _compiletime_.  Whether or not Perl can have 
:  some degree of confidence that, once a program is compiled, it won't 
:  have to assume the worst-case possibility of runtime alteration of 
:  every class, upon every single method call, just in case 
:  you've screwed with something.
: 
: That's a cute way of glossing over the problem.
: 
: How do you truly know when runtime is in the first place? Imagine an
: application server which parses and loads code from files on-demand.
: This shouldn't be difficult. Imagine that that code references a
: system of modules.
: 
: Imagine if Perl finalizes classes after primary compilation
: (after parsing, say, an ApacheHandler file), and proceeds to behave
: quite differently indeed afterwards.
: 
: Imagine that a perfectly innocent coder finds that his class
: library doesn't run the same (doesn't run at all) under the
: application server as it does when driven from command line scripts:
: His method overrides don't take effect (or, worse, Perl tells him he
: can't even compile them because the class is already finalized! And
: he thought Perl was a dynamic language!).
: 
: What's his recourse? Nothing friendly. Tell Perl that he's going
: to subclass the classes he subclasses? Why? He already subclasses
: them! Isn't that tell enough? And how? Some obscure configuration
: file of the application server, no doubt. And now the app server needs
: to be restarted if that list changes. His uptime just went down. And
: now he can't have confidence that his system will continue to behave
: consistently over time; apachectl restart becomes a part of his
: development troubleshooting lexicon.

Any such application server would probably just

use DYNAMIC_EVERYTHING;

(or whatever we call it) and have done with it.

: Java doesn't make him do that; HotSpot can make this optimization at
: runtime and back it out if necessary. Maybe he'll just write a JSP
: instead.

If Parrot turns out to be able to make this optimization, then the
individual declarations of dynamism merely become hints that it's
not worth trying to optimize a particular class because it'll get
overridden anyway.  It's still useful information on an individual
class basis.  The only thing that is bogus in that case is the global
DYNAMIC_EVERYTHING declaration in the application server.  So I could
be argued into making that the default.  A program that wants a static
analysis at CHECK time for speed would then need to declare that.
The downside of making that the default is that then people won't
declare which classes need to remain extensible under such a regime.
That's another reason such a declaration does not belong with the
class itself, but with the users of the class.  If necessary, the main
program can pick out all the classes it thinks need to remain dynamic:

module Main;
use STATIC_CLASS_CHECK;
use class Foo is dynamic;
use class Bar is dynamic;

or whatever the new C&lt;use&gt; syntax will be in A11...

: C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
: .NET Frameworks are sure a lot less annoying than the Java class
: library, after all.

On the other hand, those guys are also doing a lot more mandatory
static typing to get their speed, and that's also annoying.
(Admittedly, they're working on supporting dynamic languages better.)

: Point of fact, for a large set of important usage cases, Perl simply
: can't presume that classes will EVER cease being introduced into the
: program. That means it can NEVER make these sorts of optimizations
: unless it is prepared to back them out. Even in conventional programs,
: dynamic class loading is increasingly unavoidable. Forcing virtuous
: programmers to declare virtual (lest their program misbehave or
: their perfectly valid bytecode fail to load, or their perfectly valid
: source code fail to compile) is far worse than allowing foolish
: programmers to declare final.

The relative merit depends on who declares the final, methinks.  But
if we can avoid both problems, I think we should.

: Making semantic distinctions of this scale between compile time
: and runtime will be a significant blow to Perl, which has always been
: strengthened by its dynamism. Its competitors do not include such
: artifacts; they perform class finalization optimizations on the fly,
: and, despite the complexity of the task, are prepared to back out these
: optimizations at runtime--while the optimized routines are executing,
: if necessary. Yes, this requires synchronization points, notifications
: (or inline checks), and limits code motion. Better than the
: alternative, I say. It is very simply a huge step backwards to
: create a semantic wall between primary compilation and program
: execution.
: 
: So write the complicated code to make it work