Re: Next Apocalypse

2004-06-29 Thread Jonadab the Unsightly One
Austin Hastings [EMAIL PROTECTED] writes:

 Of course, how hard can it be to implement the .parent property?

.parent and also .children, plus .moveto and .remove (which doesn't
actually destroy the object but sets its parent to undef, basically,
cleaning up the .children property of its parent), and a couple of
extra routines for testing ancestor relationships and stuff, but...

 You'll want it on just about everything, though, 

Right, it would be less useful if not all objects had it.  Although it
would be easy enough to implement a class of container objects that
implemented the forest, each holding a .node which could point to
some other kind of object.  That adds a layer of indirection, but it
would allow arbitrary objects to be organized using the forest.

In Inform any object with no parent is on the forest floor, but it
would be easier to implement I think by making the forest floor an
object, and having all of the forest-container objects be located
there by default and having .remove move them there.  Then the forest
floor's .children would take care of the ability to iterate over all
the toplevel objects.

This approach has the advantage of not needing any changes in core.
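
For concreteness, something like this rough Perl 5 sketch (the class name,
method names, and the little usage scene are all mine, purely illustrative;
nothing here is settled Perl 6 syntax):

    package Forest::Node;
    use strict;
    use warnings;

    # Each container wraps an arbitrary payload in .node and keeps
    # parent/children links of its own.
    sub new {
        my ($class, %args) = @_;
        my $self = bless { node => $args{node}, parent => undef, children => [] },
            $class;
        $self->moveto($args{parent}) if $args{parent};
        return $self;
    }

    sub node     { $_[0]{node} }
    sub parent   { $_[0]{parent} }
    sub children { @{ $_[0]{children} } }

    # Detach from the current parent, cleaning up its .children list.
    sub remove {
        my ($self) = @_;
        if (my $p = $self->{parent}) {
            @{ $p->{children} } = grep { $_ != $self } @{ $p->{children} };
            $self->{parent} = undef;
        }
        return $self;
    }

    # Reparent; our own children come along implicitly, since they still
    # point at us.
    sub moveto {
        my ($self, $new_parent) = @_;
        $self->remove;
        $self->{parent} = $new_parent;
        push @{ $new_parent->{children} }, $self;
        return $self;
    }

    sub is_ancestor_of {
        my ($self, $other) = @_;
        for (my $p = $other->parent; $p; $p = $p->parent) {
            return 1 if $p == $self;
        }
        return 0;
    }

    package main;

    # The forest floor is just another node; toplevel containers live there.
    my $floor  = Forest::Node->new(node => 'floor');
    my $pocket = Forest::Node->new(node => 'pocket', parent => $floor);
    my $coin   = Forest::Node->new(node => 'coin',   parent => $pocket);
    $coin->moveto($floor);                    # drop the coin on the floor
    print scalar($floor->children), "\n";     # 2 toplevel containers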

Maybe I just won't sweat it.  A lot of the things Inform programmers
do with the object forest can be done in Perl in other ways, combining
hashes and references and arrays and stuff (stuff Inform doesn't
really have per se; well, it has (statically-sized) arrays, and it sort
of fakes references... but it's not the same).  Properties are the
thing whose absence was harder to work around, and we're getting those :-)




Re: Next Apocalypse

2004-06-29 Thread Dan Sugalski
On Tue, 29 Jun 2004, Jonadab the Unsightly One wrote:

 Austin Hastings [EMAIL PROTECTED] writes:

  Of course, how hard can it be to implement the .parent property?

 .parent and also .children, plus .moveto and .remove (which doesn't
 actually destroy the object but sets its parent to undef, basically,
 cleaning up the .children property of its parent), and a couple of
 extra routines for testing ancestor relationships and stuff, but...

Sure, no big deal. Also, don't forget the trivial matter of moving from a
class-based object system to a prototype-based one (since right now
objects don't *have* parent objects, just parent classes, or child
anythings), while making things still look like they're a class-based
system for all the code and programmers who're used to that.

No problems there, I'm sure. Patches, of course, are welcome.

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk



Re: Next Apocalypse

2004-06-29 Thread Jonadab the Unsightly One
Dan Sugalski [EMAIL PROTECTED] writes:

 Sure, no big deal. Also, don't forget the trivial matter of moving
 from a class-based object system

No, the object system in question is still class-based.  The object
forest is orthogonal to that.




Re: Next Apocalypse

2004-06-28 Thread Jonadab the Unsightly One
Dan Sugalski [EMAIL PROTECTED] writes:

 Speaking of objects...  are we going to have a built-in object
 forest, like Inform has, where irrespective of class any given
 object can have up to one parent at any given time,

 Multiple parent classes, yes. 

Not remotely the same thing.

 Parent objects, no.

Oh, well.
  
 and be able to declare objects as starting out their lives with a
 given parent object, move them at runtime from one parent to
 another (taking any of their own children that they might have
 along with them), fetch a list of the children or siblings of an
 object, and so forth?

 Erm I don't think so. I get the feeling that Inform had a
 different view of OO than we do.

I was asking mainly because Perl6 was moving in that general
direction.  Having compile-time traits but also being able to tag
properties on at runtime is very Inform-like.  Inform's object model
also fits pretty well with Perl's notions of context, things like
being able to treat someobject.someproperty as a value and not care
whether it's actually a value (the more common case) or whether it's
really a routine that returns a value each time (the more flexible
case), for example.  In Perl6 this will more likely be accomplished by
returning an object that has the desired numerify routine, so that the
caller can just assume it's a number and not get surprised, but that
ultimately amounts to the same flexibility.
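
For what it's worth, Perl 5's overload module already gives roughly that
shape today; a tiny sketch (the Counter class is mine, just for
illustration):

    package Counter;
    use strict;
    use warnings;
    use overload
        '0+'     => sub { $_[0]->next_value },   # numify by running a routine
        fallback => 1;

    sub new        { bless { n => 0 }, shift }
    sub next_value { ++$_[0]{n} }

    package main;
    my $c = Counter->new;
    # The caller just treats $c as a number; each use re-runs the routine.
    printf "%d %d %d\n", $c + 0, $c + 0, $c + 0;   # 1 2 3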

So I was just wondering about the _other_ useful feature of Inform's
object model, the object forest.  However, this is the sort of thing
that could be added to a later version without breaking any existing
code.




Re: Next Apocalypse

2004-06-28 Thread Austin Hastings
--- Jonadab the Unsightly One [EMAIL PROTECTED] wrote:
 Dan Sugalski [EMAIL PROTECTED] writes:
 
  Speaking of objects...  are we going to have a built-in object
  forest, like Inform has, where irrespective of class any given
  object can have up to one parent at any given time,
 
  Multiple parent classes, yes. 
 
 Not remotely the same thing.
 
  Parent objects, no.
 
 Oh, well.

Of course, how hard can it be to implement the .parent property?

You'll want it on just about everything, though, so the change will
probably be to CORE::MetaClass. It still shouldn't be that hard to do.
Maybe Luke Palmer will post a solution... :-)

=Austin


Re: Next Apocalypse

2004-06-28 Thread Luke Palmer
Austin Hastings writes:
 Of course, how hard can it be to implement the .parent property?
 
 You'll want it on just about everything, though, so the change will
 probably be to CORE::MetaClass. It still shouldn't be that hard to do.
 Maybe Luke Palmer will post a solution... :-)

use Class::Classless;

?

Luke


Re: Next Apocalypse

2003-09-19 Thread Dan Sugalski
On Thu, 18 Sep 2003, Andy Wardley wrote:

 chromatic wrote:
  The thinking at the last design meeting was that you'd explicitly say
  "Consider this class closed; I won't muck with it in this application"
  at compile time if you need the extra optimization in a particular
  application.
 
 In Dylan, this is called a sealed class.  It tells the compiler that it's 
 safe to resolve method names to slot numbers at parse time, IIRC.  Seems
 like a nice idea.

We'll probably have an optimizer setting for this so you can declare 
classes sealed at the end of compilation. Parrot'll have to have a means 
of yelling loudly (and probably throwing a fatal exception) if you try and 
alter a sealed class at runtime.

This'll likely be a language-dependent setting, as some languages will 
seal classes by default, which makes some amount of sense in some 
circumstances.

Dan



RE: Next Apocalypse

2003-09-19 Thread Gordon Henriksen
chromatic wrote:

 The point is not for module authors to say no one can ever extend or 
 modify this class.  It's for module users to say I'm not 
 extending or modifying this class.

Ah, shouldn't optimization be automatic? Much preferable to provide
opt-out optimizations instead of opt-in optimizations. C++ const
qualifiers, anybody? final in Java? Compressible line noise in most
programs, those. Even--especially--programmers who are writing code
in main:: can be naïve about the code they call; they're no more
trustworthy than the module author. It's the local user of a class who
knows what's going on, and not even then in the case of polymorphic
classes. The optimizations enabled by final/sealed are very broadly
applicable to most code; it would be good to turn them on by default and
automatically turn them off when they become inapplicable. DWIM +
performance + flexibility. Thus the notifications discussion.

If the method-call-to-function-call optimization were dicking with a routine
and making it misbehave (e.g., forcing excessive recompilation), though,
I would want to see a plain old pragma to broadly turn off the
optimization. Just:

no optimized method_calls;

That way, I could move the pragma down as far as an unnamed block, if I
wanted to isolate its effects to one method call, or as far out as the
entire module if I was lazy and wanted to do that instead. But no
I-promise-not-to-override-methods-of-this-class-anywhere-in-the-entire-program
pragma for me, thanks. Way too much action at a distance.

Potentially disruptive optimizations, off by default, could be
intuitively enabled by the same pragma, too:

# Try hard to vectorize hyper arithmetic for SSE and AltiVec.
use optimized vector_hyper_ops;

And it sets up a namespace for optimizations, which might help make
optimizations extensible, transparent, or even pluggable.

--
 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]




Re: Next Apocalypse

2003-09-19 Thread Stéphane Payrard
On Thu, Sep 18, 2003 at 02:12:31PM -0700, chromatic wrote:
 On Thursday, September 18, 2003, at 12:33 PM, Gordon Henriksen wrote:
 
 Ah, shouldn't optimization be automatic? Much preferable to provide
 opt-out optimizations instead of opt-in optimizations.
 
 No.  That's why I tend to opt-out of writing in C and opt-in to writing 
 Perl.
 
 Perl (all versions) and Parrot are built around the assumption that 
 just about anything can change at run-time. Optimizing the language for 
 the sake of optimization at the expense of programmer convenience 
 doesn't feel very Perlish to me.

With Perl6, few people will compile whole libraries; most
will load bytecode. At this late stage there is little room for
tunable optimization except JITting, or it would defeat the
sharing of such code between different instances of Perl6. Nothing
will preclude dynamically extending classes. I note that in Perl6
many optimizations amount to autoloading: deferring compilation of
material until it's really needed. With bytecode, it makes sense
(at least optimization-wise) for the programmer to decide whether his
classes will be sealed or some methods final, because at the
user level it is too late to decide.

--
 stef


 
 -- c
 


Re: Next Apocalypse

2003-09-18 Thread Andy Wardley
chromatic wrote:
 The thinking at the last design meeting was that you'd explicitly say
 "Consider this class closed; I won't muck with it in this application"
 at compile time if you need the extra optimization in a particular
 application.

In Dylan, this is called a sealed class.  It tells the compiler that it's 
safe to resolve method names to slot numbers at parse time, IIRC.  Seems
like a nice idea.

A


Re: Next Apocalypse

2003-09-18 Thread Austin Hastings

--- Andy Wardley [EMAIL PROTECTED] wrote:
 chromatic wrote:
  The thinking at the last design meeting was that you'd explicitly
 say
  Consider this class closed; I won't muck with it in this
 application
  at compile time if you need the extra optimization in a particular
  application.
 
 In Dylan, this is called a sealed class.  It tells the compiler that
 it's safe to resolve method names to slot numbers at parse time, 
 IIRC. Seems like a nice idea.

Sounds like a potential keyword, or perhaps a ubiquitous method, or
both. But how to differentiate sealed under optimization versus
sealed under inheritance? 

Perhaps it would be better to specify an optimizability attribute at
some level?

  package Foo is optimized all;
  sub foo is optimized(!call) {...}

=Austin



Re: Next Apocalypse

2003-09-18 Thread chromatic
On Thursday, September 18, 2003, at 07:49 AM, Austin Hastings wrote:

 Sounds like a potential keyword, or perhaps a ubiquitous method, or
 both. But how to differentiate sealed under optimization versus
 sealed under inheritance?
I don't understand the question.

The point is not for module authors to say "no one can ever extend or
modify this class."  It's for module users to say "I'm not extending or
modifying this class."

 Perhaps it would be better to specify an optimizability attribute at
 some level?
That seems possible, from the same level.

-- c



Re: Next Apocalypse

2003-09-18 Thread Austin Hastings

--- chromatic [EMAIL PROTECTED] wrote:
 On Thursday, September 18, 2003, at 07:49 AM, Austin Hastings wrote:
 
  Sounds like a potential keyword, or perhaps a ubiquitous method, or
  both. But how to differentiate sealed under optimization versus
  sealed under inheritance?
 
 I don't understand the question.

I want CSE and loop unrolling, say, but don't want to prevent
polymorphic dispatch by declaring C<my Dog $spot is sealed;> -- if
someone gives me a Beagle, I want to call Beagle::bark, not Dog::bark.

 
 The point is not for module authors to say no one can ever extend or
 
 modify this class.  It's for module users to say I'm not extending
 or 
 modifying this class.
 
  Perhaps it would be better to specify an optimizability attribute
 at
  some level?
 
 That seems possible, from the same level.

Yes.

=Austin



Re: Next Apocalypse

2003-09-18 Thread chromatic
On Thursday, September 18, 2003, at 12:33 PM, Gordon Henriksen wrote:

 Ah, shouldn't optimization be automatic? Much preferable to provide
 opt-out optimizations instead of opt-in optimizations.
No.  That's why I tend to opt-out of writing in C and opt-in to writing 
Perl.

Perl (all versions) and Parrot are built around the assumption that 
just about anything can change at run-time. Optimizing the language for 
the sake of optimization at the expense of programmer convenience 
doesn't feel very Perlish to me.

-- c



Speculative optimizations (was RE: Next Apocalypse)

2003-09-17 Thread Gordon Henriksen
Austin Hastings wrote:

 [... code example ...]

Good point to raise, but I'm not sure about your conclusion.

12 and 13 don't exist *in registers,* but they certainly do exist at
various points: in the original source, in the AST, and in the
unoptimized PASM (if any). The registers were optimized away because the
values are statically knowable. So the variables could be resurrected,
but the variable-to-register mapping would need to be adjusted. So the
optimizer could just emit "temp_a = constant 12" instead of "temp_a =
I8" in its metadata for the sequence point.
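
(Purely as an illustration of the shape of that metadata--nothing parrot
actually defines--it might look like a little per-sequence-point table
along these lines, keys and all invented here:)

    # Hypothetical deoptimization metadata for one sequence point (the call
    # to othersub in Austin's example).
    my %sequence_point = (
        pc        => 7,                # source "line" of the call
        variables => {
            '$a' => { where => 'constant', value => 13 },  # folded away
            '$b' => { where => 'constant', value => 12 },  # but recoverable
            '$c' => { where => 'dead' },                   # not yet computed
        },
    );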

So my conclusion is not that this optimization is impossible, but that
register file consistency isn't enough: High-level variables need to be
brought into a findable state. If straight PBC/PASM (not IMCC) were
being optimized, it would be the contents of the parrot register file
(as written) that must be findable, even though the registers might have
been reallocated. By contrast, when IMCC was being optimized (as in your
example), then its variables are much more like C variables than they
are like registers.

The level at which consistency is achieved at sequence points must be
the same level at which the speculative optimization was performed.
parrot just provides a particularly large number of intermediate
representations, and thus a large number of levels at which said
speculative optimizations might be performed: Perl 6 ==parser=> AST
==compiler&optimizer=> IMCC ==compiler&optimizer=> PASM ==assembler=>
PBC ==compiler&optimizer=> machine code. So eek. But any optimizer in
that chain which doesn't attempt speculative optimizations wouldn't have
to worry about them.


 2- In the living dangerously category, go with my original
 suggestion: GC the compiled code blocks, and just keep executing what
 you've got until you leave the block.

Now, maybe when the invalidated compilation is set loose, parrot could
check whether it's on the stack and just flat out emit a warning if so,
letting execution of the routine continue even knowing that it might now
misbehave. But the programmer will have been informed of it through the
warning, so it's kind of okay--so long as the optimizations are applied
in a consistent manner. i.e., It is^Wwould be unacceptable that^Wif
these crashes^Wwarnings only appear after serving a web page 100,000
times, when HotSpot^Wparrot finally decides to attempt a
heavily-optimized compile of your jsp^WMason component.

This strategy makes some amount of sense. Rewriting optimized stack
frames is a VERY hard problem--Java 1.4.x provides prior art of that,
demonstrating a very long period of instability after HotSpot was
introduced. It is VERY difficult to exercise the code thoroughly, has
plenty of opportunity to make everything crash, and it's a lot of work
(and a lot of code) for something which probably doesn't affect much
good code anyhow.

How much work is it actually worth to solve this problem, rather than
giving the programmer (a) enough information to isolate and diagnose the
side-effects and (b) pragmas to turn off the optimizations when needed?
The advantages of speculative optimizations and dynamism can both be
*easily* retained if some (relatively minor?) caveats are accepted.

Let's compare that to Perl 5.

The first example was of an inlined sub returning a constant. This
strategy is better than perl 5 would do--perl 5 would just emit a
warning and continue to use the old inlined value.

Advantage: parrot
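
(That perl 5 behavior is easy to see with inlinable constant subs; a quick
Perl 5 demo, warnings left on, names mine:)

    use strict;
    use warnings;

    sub TWENTY_FIVE () { 25 }    # empty prototype + constant body: inlinable

    sub foo { print TWENTY_FIVE, "\n" }   # this call site gets the literal 25

    foo();                                # 25

    # Rebind the sub at runtime.  Perl warns ("Constant subroutine ...
    # redefined") and keeps going; foo()'s call site keeps the stale value.
    *TWENTY_FIVE = sub () { 36 };

    foo();                                      # still 25
    print main->can('TWENTY_FIVE')->(), "\n";   # 36 -- a fresh lookup sees it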

My example was a method-call-becomes-[inlined]-function-call
optimization, like that one in HotSpot that Sun's so proud of. For this,
perl 5 does better: It would never attempt the optimization, and thus
would always behave correctly.

Advantage: perl 5

Your example is overloading the infix + operator in mid-stream. Again,
Perl 5 doesn't attempt the optimization, so always does the right thing.

Advantage: perl 5

So not entirely clear-cut, although perl 5 does provide (very) limited
prior art of the (limited) acceptability of side-effects due to
speculative optimizations.


 Arguably, sequence points could be used here to partition the blocks
 into smaller elements.

(Breaking code fragments down at sequence points creates a lot more
memory fragmentation, reduces locality, adds memory allocation overhead,
and complicates branching from a standard PC-relative branch to, what, a
PC-relative load + register indirect branch? Ick. I don't think branch
history caches would be very happy.)

Sounds like Dan's not keen on sequence points in the first place, since
sequence points prohibit code motion optimizations. Assuming that
code-motion optimizations take precedence over speculative
optimizations, then stack frame re-writing is impossible within that
framework, and this entire class of speculative optimizations either (a)
must not be implemented, (b) must check before proceeding with the
optimized path [and that check may be more expensive than not performing
the optimization in the first place], or (c) might 

Re: Next Apocalypse

2003-09-16 Thread Jonathan Scott Duff
On Mon, Sep 15, 2003 at 03:30:06PM -0600, Luke Palmer wrote:
 The focus here, I think, is the following problem class:
 
 sub twenty_five() { 25 }# Optimized to inline
 sub foo() {
 print twenty_five;  # Inlined
 twenty_five := { 36 };
 print twenty_five;  # Uh oh, inlined from before
 }
 
 The problem is we need to somehow un-optimize while we're running.  That
 is most likely a very very hard thing to do, so another solution is
 probably needed.

A naive approach would be to cache the names and positions of things
that are optimized, such that when one of the cached things is
modified, the optimization could be replaced with either another
optimization (as in the case above) or an instruction to execute some
other code (when we can't optimize the change).
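
(In Perl 5 terms the bookkeeping might look vaguely like this; every name
below is invented for the sketch:)

    use strict;
    use warnings;

    # Map each name whose definition was baked into optimized code to the
    # undo actions that would put the unoptimized code back.
    my %deopt_actions;    # name => [ coderefs that back the optimization out ]

    sub note_optimization {
        my ($name, $undo) = @_;
        push @{ $deopt_actions{$name} }, $undo;
    }

    # Rebinding a tracked name runs the recorded undo actions first.
    sub rebind {
        my ($name, $new_impl) = @_;
        $_->() for @{ delete($deopt_actions{$name}) || [] };
        no strict 'refs';
        *{"main::$name"} = $new_impl;
    }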

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Next Apocalypse

2003-09-16 Thread Dan Sugalski
On Tue, 16 Sep 2003 [EMAIL PROTECTED] wrote:

 On Mon, 15 Sep 2003, Dan Sugalski wrote:
   Great. But will it also be possible to add methods (or modify them)
   to an existing class at runtime?
 
  Unless the class has been explicitly closed, yes.
 
 That strikes me as back-to-front.
 
 The easy-to-optimise case should be the easy-to-type case; otherwise a lot
 of optimisation that should be possible isn't because the programmers are
 too inexperienced/lazy/confused to put the closed tags in.

It would, in fact, be back-to-front if performance was the primary 
goal. And, while it is *a* goal, it is not *the* goal.

From Parrot's standpoint, it doesn't make much difference--once you start 
spitting out assembly you can be expected to be explicit, so it's a matter 
of what the language designer wants. While Larry's the ultimate arbiter, 
and I am *not* Larry, generally he favors flexibility over speed as the 
default, especially when you can get it at the current speed (that is, 
perl 5's speed) or faster. You don't lose anything over what you have now 
with that flexibility enabled, and if you want to restrict yourself for 
the extra boost, you can explicitly do that.

Then again, he may also decide that things are open until the end of 
primary compilation, at which point things are closed--you never know... 
:)

Dan



Re: Next Apocalypse

2003-09-16 Thread Robin Berjon
My, is this a conspiracy to drag -internals onto -language to make it look alive? :)

You guys almost made me drop my coffee mug...

--
Robin Berjon [EMAIL PROTECTED]
Research Scientist, Expway  http://expway.com/
7FC0 6F5F D864 EFB8 08CE  8E74 58E6 D5DB 4889 2488


Re: Next Apocalypse

2003-09-16 Thread Dan Sugalski
On Tue, 16 Sep 2003, Ph. Marek wrote:

  You can, of course, stop even potential optimization once the first I can
  change the rules operation is found, but since even assignment can change
  the rules that's where we are right now. We'd like to get better by
  optimizing based on what we can see at compile time, but that's a very,
  very difficult thing to do.

 How about retaining some debug info (line numbers come to mind), but only at
 expression level?

This is insufficient, since many (potentially most) optimizations result 
in reordered, refactored, moved, and/or mangled code that doesn't have a 
line-for-line, or expression-for-expression, correspondence to the 
original. If it did, this would all be much easier.

The alternative, of course, is to not apply those transforms, but then 
you're left with pretty much no optimizations.

Dan



Re: Next Apocalypse

2003-09-16 Thread David Storrs
On Mon, Sep 15, 2003 at 11:49:52AM -0400, Gordon Henriksen wrote:
 Austin Hastings wrote:
 
  Given that threads are present, and given the continuation based
  nature of the interpreter, I assume that code blocks can be closured.
  So why not allocate JITed methods on the heap and manage them as first
  class closures, so that the stackref will hold them until the stack
  exits?
 
 
 Austin,
 
 That's a fine and dandy way to do some things, like progressive
 optimization ala HotSpot. (e.g., Oh! I've been called 10,000 times.
 Maybe you should bother to run a peephole analyzer over me?) But when
 an assumption made by the already-executing routine is actually
 violated, it causes incorrect behavior. Here's an example:
 
 class My::PluginBase;
 
 method say_hi() {
 # Default implementation.
 print "Hello, world.\n";
 }
 
 
 package main;
 
 sub load_plugin($filepath) { ... }
 
 my $plugin is My::PluginBase;
 $plugin = load_plugin($ARGV[0]);
 $plugin.say_hi();
 
 Now, while it would obviously seem a bad idea to you, it would be
 reasonable for perl to initially optimize the method call
 $plugin.say_hi() to the function call My::PluginBase::say_hi($plugin).
 But when load_plugin loads a subclass of My::PluginBase from the file
 specified in $ARGV[0], then that assumption is violated. Now, the
 optimization has to be backed out, or the program will never call the
 subclass's say_hi. Letting the GC clean up the old version of main when
 the notification is received isn't enough--the existing stack frame must
 actually be rewritten to use the newly-compiled version.


This discussion seems to contain two separate problems, and I'm not
always sure which one is being addressed.  The components I see are:

1) Detecting when the assumptions have been violated and the code has
   to be changed; and,

2) Actually making the change after we know that we need to.


I have at least a vague idea of why #1 would be difficult.  As to
#2...assuming that the original source is available (or can be
recovered), then regenerating the expression does not seem difficult.
Or am I missing something?


--Dks


RE: Next Apocalypse

2003-09-16 Thread Gordon Henriksen
David Storrs wrote:

 This discussion seems to contain two separate problems, and I'm not
 always sure which one is being addressed.  The components I see are:
 
 1) Detecting when the assumptions have been violated and the code has
to be changed; and,
 
 2) Actually making the change after we know that we need to.
 
 
 I have at least a vague idea of why #1 would be difficult.  As to
 #2...assuming that the original source is available (or can be
 recovered), then regenerating the expression does not seem difficult.
 Or am I missing something?

David,

Recompiling isn't hard (assuming that compiling is already implemented).
Nor is notification of changes truly very difficult.

What you're missing is what I was trying to demonstrate with my plugin
example, and what Dan also pointed out with his example of mutating a
subroutine that returns a constant (and was presumably inlined). If the
routine is RUNNING at the time an assumption made by the optimizer becomes
invalid, then the stack frame needs to be munged from the old, optimized
compilation to the new, pessimized version. THAT is the hard
problem--one of register & variable remapping and PC mutation--and it is
impossible to solve after code motion optimizations, for the same reason
that C++ debuggers get horribly confused when running over -O3 code.

--
 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]




RE: Next Apocalypse

2003-09-16 Thread Austin Hastings

--- Gordon Henriksen [EMAIL PROTECTED] wrote:
 David Storrs wrote:
 
  This discussion seems to contain two separate problems, and I'm not
  always sure which one is being addressed.  The components I see
 are:
  
  1) Detecting when the assumptions have been violated and the code
 has
 to be changed; and,
  
  2) Actually making the change after we know that we need to.
  
  
  I have at least a vague idea of why #1 would be difficult.  As to
  #2...assuming that the original source is available (or can be
  recovered), then regenerating the expression does not seem
 difficult.
  Or am I missing something?
 
 David,
 
 Recompiling isn't hard (assuming that compiling is already
 implemented).
 Nor is notification of changes truly very difficult.
 

Let's try again:

 1: sub othersub {...}
 2:
 3: sub foo {
 4:   my $a = 13;
 5:   my $b = 12;
 6: 
 7:othersub;
 8:my $c = $a + $b;
 9: 
10:print $c;
11: }
12:
13: eval sub othersub { ::infix+ := ::infix-; };
14: foo;

In theory, we should get Object(1) as a result.

So let's compile that:

4: temp_a = 13;
5: temp_b = 12;
7: call othersub
8: push temp_b
8: push temp_a
8: call ::infix+
9: call print

Now let's optimize it:

; temp_a, temp_b are MY, so suppress them.
7:  call othersub
 ; We do CSE at compile time
 ; We don't use $c after the print, so drop it
9:  push 25
9:  call print

So when we execute othersub, and notice that the world has shifted
beneath our feet, what do we do?

We don't even have the right registers laying about. There are no a
and b values to pass to the operator+ routine. (An advantage,
actually: losing an intermediate value would be a worse scenario.)

Two possibilities:

1- Some classes of optimization could be forced to occupy a certain
minimum size. The act of breaking the optimization assumptions could
replace the optimization (e.g., CSE) with a thunk.

  Example:
cse: push 25
 call print

  Becomes:
cse: push 25
 branch $+3
 nop
 nop
 call print

  So that we could replace it with:
cse: call undo_cse
 call print

  Where presumably undo_cse performed the operations in source code
order. (What's more, it would make perl binaries *very* compressible
:-)

2- In the living dangerously category, go with my original
suggestion: GC the compiled code blocks, and just keep executing what
you've got until you leave the block. Arguably, sequence points could
be used here to partition the blocks into smaller elements.

This tends to make event loops act really stupid, but ...

=Austin



Re: Next Apocalypse

2003-09-15 Thread Piers Cawley
Luke Palmer [EMAIL PROTECTED] writes:
 Also, the standard library, however large or small that will be, will
 definitely be mutable at runtime.  There'll be none of that Java you
 can't subclass String, because we think you shouldn't crap.

Great. But will it also be possible to add methods (or modify them)
to an existing class at runtime? You only have to look at a Smalltalk
image to see packages adding helper methods to Object and the like
(better to add a do-nothing method to Object than find yourself doing
C<$thing.do_that if $thing.can('do_that')> all the time...)
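
(In Perl 5 that fallback is a one-liner on UNIVERSAL -- exactly the sort of
thing people grumble about elsewhere in this thread, but it does kill the
can() checks; the Widget/Gadget classes below are made up for illustration:)

    use strict;
    use warnings;

    # Fallback: every object can do_that, and by default it does nothing.
    sub UNIVERSAL::do_that { return }

    package Widget;
    sub new     { bless {}, shift }
    sub do_that { print "widget doing that\n" }

    package Gadget;
    sub new { bless {}, shift }    # no do_that of its own

    package main;
    # No more "$thing->do_that if $thing->can('do_that')" at every call site:
    $_->do_that for Widget->new, Gadget->new;   # one line printed; Gadget no-ops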


Re: Next Apocalypse

2003-09-15 Thread Luke Palmer
Piers Cawley writes:
 Luke Palmer [EMAIL PROTECTED] writes:
  Also, the standard library, however large or small that will be, will
  definitely be mutable at runtime.  There'll be none of that Java you
  can't subclass String, because we think you shouldn't crap.
 
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? 

Parrot supports it, so I don't see why Perl wouldn't. 

 You only have to look at a Smalltalk image to see packages adding
 helper methods to Object and the like (better to add a do-nothing
 method to Object than find yourself doing C<$thing.do_that if
 $thing.can('do_that')> all the time...)

Agreed completely.  Plus, there are some cool things you can do by
mutating methods of Object, like implementing auto-rollback variables.
(A6 C<let foo()> behavior).

Luke


Re: Next Apocalypse

2003-09-15 Thread Piers Cawley
Luke Palmer [EMAIL PROTECTED] writes:

 Piers Cawley writes:
 Luke Palmer [EMAIL PROTECTED] writes:
  Also, the standard library, however large or small that will be, will
  definitely be mutable at runtime.  There'll be none of that Java you
  can't subclass String, because we think you shouldn't crap.
 
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? 

 Parrot supports it, so I don't see why Perl wouldn't. 

 You only have to look at a Smalltalk image to see packages adding
  helper methods to Object and the like (better to add a do-nothing
  method to Object than find yourself doing C<$thing.do_that if
  $thing.can('do_that')> all the time...)

 Agreed completely.  Plus, there are some cool things you can do by
 mutating methods of Object, like implementing auto-rollback variables.
  (A6 C<let foo()> behavior).

Shhh!


Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
On Sun, 14 Sep 2003, Gordon Henriksen wrote:

 On Saturday, September 13, 2003, at 11:33 , [EMAIL PROTECTED] 
 wrote:
 
  On Sat, 13 Sep 2003, Luke Palmer wrote:
 
  Of course having a no subclasses tag means the compiler can change a 
  method call into a direct subroutine call, but I would hope that method 
  calling will be fast enough that it won't need to.
 
 A strategy to actually keep that optimization, and apply it to much more 
 code, would be that the JIT compiler could optimize for the case that 
 there are no known subclasses, and pessimize that only if a subclass 
 were later loaded.
 
 I think this is one of the features Leo's been on about with respect to 
 notifications.

That's one of the reasons notifications were designed in, yes. There's a 
growing body of interesting work on what's essentially disposable 
or partially-useful optimizations. Given the dynamic nature of most of the 
languages we care about for parrot, throwaway optimizations make a lot of 
sense--we can build optimized versions of functions for the current 
structure, and redo them if the structure changes.

This isn't entirely an easy task, however, since you can't throw away or 
redo a function/method/sub/whatever that you're already in somewhere in 
the call-chain, which means any optimizations will have to be either 
checked at runtime or undoable when code is in the middle of them. (Which 
is a decidedly non-trivial thing, and impossible in general, though 
possibly not in specific cases)

I don't see any reason not to allow marking a class as final (though
that's not hugely useful) or closed (which is far more useful), or declaring
objects as exact types rather than subtypable. (Which is, in conjunction
with closing a class, hugely useful)

Which is to say that marking a class as unenhanceable and Foo variables as 
holding *only* objects of type Foo and not child classes, gets us a lot 
more than marking a class as final. (Which does the same thing, but 
globally, and not necessarily usefully)

Dan



Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
On 13 Sep 2003, Jonadab the Unsightly One wrote:

 Dan Sugalski [EMAIL PROTECTED] writes:
 
  Next Apocalypse is objects, and that'll take time. 
 
 Objects are *worth* more time than a lot of the other topics.
 Arguably, they're just as important as subroutines, in a modern
 language.

Oh, I dunno -- it's not like there's all that much to objects, but I might 
be a touch biased here. (I'd say they're worth more time because people 
get so worked up over them, not because they're particularly complex, 
complicated, or difficult)
 
 Speaking of objects...  are we going to have a built-in object forest,
 like Inform has, where irrespective of class any given object can have
 up to one parent at any given time,

Multiple parent classes, yes. Parent objects, no. (Unless you consider 
composition of objects from multiple parent classes with each class having 
instance variables in the objects as multiple parent objects. In which 
case the answer's yes)

 which can change at runtime, 

Well, the inheritance hierarchy for a class can change at runtime, though 
we'd really rather you didn't do that, so I suppose you could do it for 
individual objects--they'd just get a transparent singleton class that 
you'd mess around with from there. I think I may be missing your point.
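
(In Perl 5 today you can fake that transparent singleton class by reblessing
into a generated one-off subclass; a rough sketch, helper name mine:)

    use strict;
    use warnings;

    my $singleton_counter = 0;

    # Give one object a private class so its methods (or parents) can change
    # without touching any other instance of the original class.
    sub singletonize {
        my ($obj)  = @_;
        my $orig   = ref $obj;
        my $class  = sprintf '%s::__SINGLETON_%d__', $orig, ++$singleton_counter;
        no strict 'refs';
        @{"${class}::ISA"} = ($orig);
        return bless $obj, $class;
    }

    package Dog;
    sub new  { bless {}, shift }
    sub bark { print "woof\n" }

    package main;
    my $spot = Dog->new;
    singletonize($spot);
    {
        no strict 'refs';
        *{ ref($spot) . '::bark' } = sub { print "yip\n" };  # only $spot changes
    }
    $spot->bark;        # yip
    Dog->new->bark;     # woof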

 and
 be able to declare objects as starting out their lives with a given
 parent object, move them at runtime from one parent to another (taking
 any of their own children that they might have along with them), fetch
 a list of the children or siblings of an object, and so forth?

Erm I don't think so. I get the feeling that Inform had a different 
view of OO than we do.

Dan



Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
On Mon, 15 Sep 2003, Piers Cawley wrote:

 Luke Palmer [EMAIL PROTECTED] writes:
  Also, the standard library, however large or small that will be, will
  definitely be mutable at runtime.  There'll be none of that Java you
  can't subclass String, because we think you shouldn't crap.
 
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? 

Unless the class has been explicitly closed, yes.

Dan



Re: Next Apocalypse

2003-09-15 Thread Mark J. Reed
[Recipients trimmed back to just the list, because it had gotten very
silly.  When replying to someone who's on the list, there's no need to
copy them personally, too; they just end up with duplicates. :)]

On 2003-09-15 at 09:21:18, Piers Cawley wrote:
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? You only have to look at a Smalltalk
 image to see packages adding helper methods to Object and the like
 (better to add a do-nothing method to Object than find yourself doing
 C<$thing.do_that if $thing.can('do_that')> all the time...)

No need to go as far as Smalltalk; just look at Ruby (which probably took
the idea from Smalltalk, but that's beside the point. :)).  There are
all sorts of libraries that do their thing by adding methods to
other classes.  Not just to Object, but to specific built-in classes
such as Time, Date, Numeric, Fixnum, String, etc.  

I'm not saying that this is necessarily the cleanest way to
implement an add-on; it's easy to argue that this sort of thing is
best accomplished by a static class or module method that takes an
instance of the class in question as a parameter.  But it's a nice
thing to have in the toolkit.

-- 
Mark REED| CNN Internet Technology
1 CNN Center Rm SW0831G  | [EMAIL PROTECTED]
Atlanta, GA 30348  USA   | +1 404 827 4754


Re: Next Apocalypse

2003-09-15 Thread Simon Cozens
[EMAIL PROTECTED] (Piers Cawley) writes:
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? You only have to look at a Smalltalk
 image to see packages adding helper methods to Object and the like

People get upset when CPAN authors add stuff to UNIVERSAL:: :)

-- 
Anyone attempting to generate random numbers by deterministic means is, of
course, living in a state of sin.
-- John Von Neumann


Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
On 15 Sep 2003, Simon Cozens wrote:

 [EMAIL PROTECTED] (Piers Cawley) writes:
  Great. But will it also be possible to add methods (or modify them)
  to an existing class at runtime? You only have to look at a Smalltalk
  image to see packages adding helper methods to Object and the like
 
 People get upset when CPAN authors add stuff to UNIVERSAL:: :)

Yeah, but does that actually *stop* anyone? :-P

Dan



Re: Next Apocalypse

2003-09-15 Thread Austin Hastings

--- Dan Sugalski [EMAIL PROTECTED] wrote:
 On Sun, 14 Sep 2003, Gordon Henriksen wrote:
 
  On Saturday, September 13, 2003, at 11:33 , [EMAIL PROTECTED]
 
  wrote:
  
   On Sat, 13 Sep 2003, Luke Palmer wrote:
  
   Of course having a no subclasses tag means the compiler can
 change a 
   method call into a direct subroutine call, but I would hope that
 method 
   calling will be fast enough that it won't need to.
  
  A strategy to actually keep that optimization, and apply it to much
 more 
  code, would be that the JIT compiler could optimize for the case
 that 
  there are no known subclasses, and pessimize that only if a
 subclass 
  were later loaded.
  
  I think this is one of the features Leo's been on about with
 respect to 
  notifications.
 
 That's one of the reasons notifications were designed in, yes.
 There's a growing body of interesting work on what's essentially
 disposable or partially-useful optimizations. Given the dynamic
 nature of most of the languages we care about for parrot, 
 throwaway optimizations make a lot of 
 sense--we can build optimized versions of functions for the current 
 structure, and redo them if the structure changes.
 
 This isn't entirely an easy task, however, since you can't throw away
 or redo a function/method/sub/whatever that you're already in 
 somewhere in the call-chain, which means any optimizations will 
 have to be either checked at runtime or undoable when code is in 
 the middle of them.

Why is this?

Given that threads are present, and given the continuation based nature
of the interpreter, I assume that code blocks can be closured. So why
not allocate JITed methods on the heap and manage them as first class
closures, so that the stackref will hold them until the stack exits?

=Austin



Re: Next Apocalypse

2003-09-15 Thread Piers Cawley
Simon Cozens [EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] (Piers Cawley) writes:
 Great. But will it also be possible to add methods (or modify them)
 to an existing class at runtime? You only have to look at a Smalltalk
 image to see packages adding helper methods to Object and the like

 People get upset when CPAN authors add stuff to UNIVERSAL:: :)

More fool people.


Re: Next Apocalypse

2003-09-15 Thread Piers Cawley
Austin Hastings [EMAIL PROTECTED] writes:
 There's a growing body of interesting work on what's essentially
 disposable or partially-useful optimizations. Given the dynamic
 nature of most of the languages we care about for parrot, throwaway
 optimizations make a lot of sense--we can build optimized versions
 of functions for the current structure, and redo them if the
 structure changes.
 
 This isn't entirely an easy task, however, since you can't throw
 away or redo a function/method/sub/whatever that you're already in
 somewhere in the call-chain, which means any optimizations will
 have to be either checked at runtime or undoable when code is in
 the middle of them.

 Why is this?

 Given that threads are present, and given the continuation based
 nature of the interpreter, I assume that code blocks can be
 closured. So why not allocate JITed methods on the heap and manage
 them as first class closures, so that the stackref will hold them
 until the stack exits?

Ooh, cunning.


Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
On Mon, 15 Sep 2003, Austin Hastings wrote:

 --- Dan Sugalski [EMAIL PROTECTED] wrote:
  This isn't entirely an easy task, however, since you can't throw away
  or redo a function/method/sub/whatever that you're already in 
  somewhere in the call-chain, which means any optimizations will 
  have to be either checked at runtime or undoable when code is in 
  the middle of them.
 
 Why is this?

Because there are some assertions that can lead the optimizer to make some 
fundamental assumptions, and if those assumptions get violated or 
redefined while you're in the middle of executing a function that makes 
use of those assumptions, well...

Changing a function from pure to impure, adding an overloaded operator, or 
changing the core structure of a class can all result in code that needs 
regeneration. That's no big deal for code you haven't executed yet, but if 
you have:

a = 1;
b = 12;
foo();
c = a + b;

and a and b are both passive classes, that can get transformed to

a = 1;
b = 12;
foo();
c = 13;

but if foo changes the rules of the game (adding an overloaded + to a or
b's class) then the code in that sub could be incorrect.

You can, of course, stop even potential optimization once the first "I can
change the rules" operation is found, but since even assignment can change
the rules that's where we are right now. We'd like to get better by
optimizing based on what we can see at compile time, but that's a very, 
very difficult thing to do.

Dan



RE: Next Apocalypse

2003-09-15 Thread Gordon Henriksen
Austin Hastings wrote:

 Dan Sugalski [EMAIL PROTECTED] wrote:
 
  There's a growing body of interesting work on what's essentially
  disposable or partially-useful optimizations. Given the dynamic
  nature of most of the languages we care about for parrot, 
  throwaway optimizations make a lot of sense--we can build optimized
  versions of functions for the current structure, and redo them if
  the structure changes.
  
  This isn't entirely an easy task, however, since you can't throw
  away or redo a function/method/sub/whatever that you're already in 
  somewhere in the call-chain, which means any optimizations will 
  have to be either checked at runtime or undoable when code is in 
  the middle of them.
 
 Why is this?
 
 Given that threads are present, and given the continuation based
 nature of the interpreter, I assume that code blocks can be closured.
 So why not allocate JITed methods on the heap and manage them as first
 class closures, so that the stackref will hold them until the stack
 exits?


Austin,

That's a fine and dandy way to do some things, like progressive
optimization ala HotSpot. (e.g., "Oh! I've been called 10,000 times.
Maybe you should bother to run a peephole analyzer over me?") But when
an assumption made by the already-executing routine is actually
violated, it causes incorrect behavior. Here's an example:

class My::PluginBase;

method say_hi() {
# Default implementation.
print "Hello, world.\n";
}


package main;

sub load_plugin($filepath) { ... }

my $plugin is My::PluginBase;
$plugin = load_plugin($ARGV[0]);
$plugin.say_hi();

Now, while it would obviously seem a bad idea to you, it would be
reasonable for perl to initially optimize the method call
$plugin.say_hi() to the function call My::PluginBase::say_hi($plugin).
But when load_plugin loads a subclass of My::PluginBase from the file
specified in $ARGV[0], then that assumption is violated. Now, the
optimization has to be backed out, or the program will never call the
subclass's say_hi. Letting the GC clean up the old version of main when
the notification is received isn't enough--the existing stack frame must
actually be rewritten to use the newly-compiled version.
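
(A Perl 5 analogue of the hazard, with the "optimization" done by hand by
resolving the method once up front; the Shouty subclass stands in for
whatever load_plugin would pull in:)

    use strict;
    use warnings;

    package My::PluginBase;
    sub new    { bless {}, shift }
    sub say_hi { print "Hello, world.\n" }   # default implementation

    package main;

    # Hand-rolled "devirtualization": resolve say_hi once, ahead of time.
    my $say_hi = My::PluginBase->can('say_hi');

    # Later, a plugin subclass shows up at runtime.
    {
        package My::Plugin::Shouty;
        our @ISA = ('My::PluginBase');
        sub say_hi { print "HELLO, WORLD!\n" }
    }

    my $plugin = bless {}, 'My::Plugin::Shouty';
    $plugin->say_hi();     # HELLO, WORLD!  -- real dispatch sees the override
    $say_hi->($plugin);    # Hello, world.  -- the cached resolution does not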

--
 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]




Re: Next Apocalypse

2003-09-15 Thread Nicholas Clark
On Mon, Sep 15, 2003 at 11:19:22AM -0400, Dan Sugalski wrote:

 Changing a function from pure to impure, adding an overloaded operator, or 
 changing the core structure of a class can all result in code that needs 
 regeneration. That's no big deal for code you haven't executed yet, but if 
 you have:
 
 a = 1;
 b = 12;
 foo();
 c = a + b;

 but if foo changes the rules of the game (adding an overloaded + to a or
 b's class) then the code in that sub could be incorrect.
 
 You can, of course, stop even potential optimization once the first I can 
 change the rules operation is found, but since even assignment can change 
 the rules that's where we are right now. We'd like to get better by 
 optimizing based on what we can see at compile time, but that's a very, 
 very difficult thing to do.

Sorry if this is a crack fuelled idea, and sorry that I don't have a patch
handy to implement it, but might the following work:

0: retain the original bytecode
1: JIT the above subroutine as if a and b remain integers
   However, at all the "change the world" points
   (presumably they are de facto sequence points, and will we need to
take the concept from C?)
   put an op in the JIT stream: "check if world changed"
2: If the world has changed, jump out of the JIT code back into the
   bytecode interpreter at that point

I fear that this would mean that the JIT wouldn't just have to insert lots
of "has world changed" ops, but also an awful lot of fixup code that
nearly never gets executed; code to stuff values back from processor
registers into parrot registers.

Then again, as this code is rarely executed and isn't speed critical
(after the world has changed you're likely to be in the switch core,
which isn't slow) maybe the fixup would actually be better as densely
compressed special bytecode instructions on which processor registers to
save where in the parrot interpreter struct.

Effectively the JIT code would have an escape hatch instruction for every
debuggable position in the original parrot source. This seems expensive on
space, but is the only way I can think of to implement JITting of (say)
arithmetic on loops where there are calls midway which could tie/overload/
whatever the very variables used in the loops.
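
(The guard idea can be mocked up at the Perl level with a generation
counter; a toy sketch only, nothing to do with the real JIT, all names
invented:)

    use strict;
    use warnings;

    my $world_generation = 0;    # bumped by anything that "changes the rules"
    my $current_value    = 25;   # what the unoptimized code would compute

    sub change_the_rules {
        $current_value = 36;
        $world_generation++;
    }

    # Build a fast path that is only valid for the generation it was built in.
    sub compile_fast {
        my ($slow)    = @_;
        my $built_for = $world_generation;
        my $fast      = sub { 25 };      # optimized form: the value was inlined
        return sub {
            $world_generation == $built_for
                ? $fast->()              # guard passed: stay on the fast path
                : $slow->();             # world changed: escape to the slow path
        };
    }

    my $slow  = sub { $current_value };  # stands in for the bytecode interpreter
    my $thunk = compile_fast($slow);

    print $thunk->(), "\n";   # 25, via the fast path
    change_the_rules();
    print $thunk->(), "\n";   # 36: the guard failed, so the slow path ran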

Nicholas Clark


Re: Next Apocalypse

2003-09-15 Thread Luke Palmer
Nicholas Clark writes:
 On Mon, Sep 15, 2003 at 11:19:22AM -0400, Dan Sugalski wrote:
 
  Changing a function from pure to impure, adding an overloaded operator, or 
  changing the core structure of a class can all result in code that needs 
  regeneration. That's no big deal for code you haven't executed yet, but if 
  you have:
  
  a = 1;
  b = 12;
  foo();
  c = a + b;
 
  but if foo changes the rules of the game (adding an overloaded + to a or
  b's class) then the code in that sub could be incorrect.
  
  You can, of course, stop even potential optimization once the first I can 
  change the rules operation is found, but since even assignment can change 
  the rules that's where we are right now. We'd like to get better by 
  optimizing based on what we can see at compile time, but that's a very, 
  very difficult thing to do.
 
 Sorry if this is a crack fuelled idea, and sorry that I don't have a patch
 handy to implement it, but might the following work:
 
 0: retain the original bytecode
 1: JIT the above subroutine as if a and b remain integers
However, at all the change the world points
(presumably they are de facto sequence points, and will we need to
 take the concept from C?)
put an op in the JIT stream check if world changed
 2: If the world has changed, jump out of the JIT code back into the
bytecode interpreter at that point

No, I think Parrot will still only JIT IN registers.  Optimization
includes way more than just JIT.

I was thinking, the compiler could emit two functions:  one that works
on regular PMC's, and one that works with I registers.  The latter would
then be JITted as usual, and JIT doesn't need to do anything special.

The focus here, I think, is the following problem class:

sub twenty_five() { 25 }# Optimized to inline
sub foo() {
print twenty_five;  # Inlined
twenty_five := { 36 };
print twenty_five;  # Uh oh, inlined from before
}

The problem is we need to somehow un-optimize while we're running.  That
is most likely a very very hard thing to do, so another solution is
probably needed.

Luke


Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
At 3:30 PM -0600 9/15/03, Luke Palmer wrote:
 The problem is we need to somehow un-optimize while we're running.  That
 is most likely a very very hard thing to do, so another solution is
 probably needed.
It is, indeed, a very hard problem. It's solvable if you disallow 
several classes of optimization (basically ones that involve code 
motion) that make things less than optimal. You can also scatter a 
lot of tests for invalidations and have the notification system set 
the flags, though there are still code motion problems there. (Loops 
are particularly troublesome)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Next Apocalypse

2003-09-15 Thread Dan Sugalski
At 5:07 PM -0500 9/15/03, Jonathan Scott Duff wrote:
 On Mon, Sep 15, 2003 at 03:30:06PM -0600, Luke Palmer wrote:
  The focus here, I think, is the following problem class:

  sub twenty_five() { 25 }# Optimized to inline
  sub foo() {
  print twenty_five;  # Inlined
  twenty_five := { 36 };
  print twenty_five;  # Uh oh, inlined from before
  }

  The problem is we need to somehow un-optimize while we're running.  That
  is most likely a very very hard thing to do, so another solution is
  probably needed.

 A naive approach would be to cache the names and positions of things
 that are optimized such that when one of the cached things are
 modified, the optimization could be replaced with either another
 optimization (as in the case above) or an instruction to execute some
 other code (when we can't optimize the change).
That doesn't work in the face of code motion, reordering, or 
simplification, unfortunately. :(
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Next Apocalypse

2003-09-15 Thread martin
On Mon, 15 Sep 2003, Dan Sugalski wrote:
  Great. But will it also be possible to add methods (or modify them)
  to an existing class at runtime?

 Unless the class has been explicitly closed, yes.

That strikes me as back-to-front.

The easy-to-optimise case should be the easy-to-type case; otherwise a lot
of optimisation that should be possible isn't because the programmers are
too inexperienced/lazy/confused to put the closed tags in.

And it would be a better chance to warn at compile time about doing things
which are potentially troublesome.

But whichever way this goes, I take it we'll have warnings like:

Changed method definition Class::foo may not take effect in
pending initialiser
 at program.pl line 9.

Overridden method definition MyClass::foo (new subclass of Class) may not
take effect in pending function bar()
 in zot() at Zot.pm line 5
 in other() at Other.pm line 10
 at program.pl line 123.




Re: Next Apocalypse

2003-09-15 Thread chromatic
On Mon, 2003-09-15 at 17:39, [EMAIL PROTECTED] wrote:

 The easy-to-optimise case should be the easy-to-type case; otherwise a lot
 of optimisation that should be possible isn't because the programmers are
 too inexperienced/lazy/confused to put the closed tags in.

The thinking at the last design meeting was that you'd explicitly say
"Consider this class closed; I won't muck with it in this application"
at compile time if you need the extra optimization in a particular
application.

It's up to the user of a library to ask for that much optimization, not
the library designer to consult the entrails whether anyone might ever
possibly consider wanting to do something he didn't foresee.

-- c



Re: Next Apocalypse

2003-09-15 Thread Ph. Marek
 Because there are some assertions that can lead the optimizer to make some
 fundamental assumptions, and if those assumptions get violated or
 redefined while you're in the middle of executing a function that makes
 use of those assumptions, well...

 Changing a function from pure to impure, adding an overloaded operator, or
 changing the core structure of a class can all result in code that needs
 regeneration. That's no big deal for code you haven't executed yet, but if
 you have:

 a = 1;
 b = 12;
 foo();
 c = a + b;

 and a and b are both passive classes, that can get transformed to

 a = 1;
 b = 12;
 foo();
 c = 13;

 but if foo changes the rules of the game (adding an overloaded + to a or
 b's class) then the code in that sub could be incorrect.

 You can, of course, stop even potential optimization once the first I can
 change the rules operation is found, but since even assignment can change
 the rules that's where we are right now. We'd like to get better by
 optimizing based on what we can see at compile time, but that's a very,
 very difficult thing to do.
How about retaining some debug info (line numbers come to mind), but only at
expression level?
So in your example if foo() changed the + operator, it would return into the 
calling_sub() at expression 4 (numbered from 1 here :-), notice that 
something has changed, recompile the sub, and continue processing at 
expression 4.

Phil



Re: Next Apocalypse

2003-09-14 Thread Gordon Henriksen
On Saturday, September 13, 2003, at 11:33 , [EMAIL PROTECTED] 
wrote:

 On Sat, 13 Sep 2003, Luke Palmer wrote:

  Also, the standard library, however large or small that will be,
  will definitely be mutable at runtime.  There'll be none of that Java
  you can't subclass String, because we think you shouldn't crap.

 Java's standard class library is a mishmash of things that represent
 containers (variables) and things that represent values (and even some
 broken things that try to be both), with no syntactic help to
 distinguish them.  And its syntax reserves const but doesn't use it
 for anything.

 As long as we have is rw and its friends, we can -- with suitable
 care -- make sure that a subclass of a value-representing class is also
 a value-representing class, so there's no semantic need to say never
 any subclasses but we can still do CSE and other neat stuff at compile
 time.

 Of course having a no subclasses tag means the compiler can change a
 method call into a direct subroutine call, but I would hope that method
 calling will be fast enough that it won't need to.

A strategy to actually keep that optimization, and apply it to much more 
code, would be that the JIT compiler could optimize for the case that 
there are no known subclasses, and pessimize that only if a subclass 
were later loaded.

I think this is one of the features Leo's been on about with respect to 
notifications. The class loader could fire a notification that a new 
subclass had been loaded. The subroutine which needs to pessimize should 
its assumption be violated would observe that notification, so that it 
could discard its compiled form and be re-JITted when next invoked. 
(This gets harder when the routine is running, though. Prior art: A good 
2 years of infuriating instability in the 1.4.x JDK after HotSpot was 
introduced.)

Even better, the JIT could optimize for the case that there are no known 
overrides of a method, which would allow this optimization to apply to 
much, much more code. Or it could mix the two for reduced overhead: 
Observe classWasSubclassed if there are no subclasses at compile time. 
If there are subclasses at compile time, but no overrides of the method 
which was invoked, observe methodWasOverridden instead.

These strategies get the best of both dynamism and performance whenever 
possible, not just when the programmer felt like hamstringing himself in 
advance by declaring a class or method to be final.
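
(The notification half of that is plain observer plumbing; a minimal Perl 5
mock-up of the two events named above -- none of this is parrot's actual
notification API, and every name is invented:)

    use strict;
    use warnings;

    my %observers;    # event name => list of callbacks

    sub observe { push @{ $observers{ $_[0] } }, $_[1] }
    sub notify  { my $event = shift; $_->(@_) for @{ $observers{$event} || [] } }

    # A compiled sub registers the assumption its optimized form depends on...
    my $say_hi_is_direct = 1;
    observe(methodWasOverridden => sub {
        my ($class, $method) = @_;
        $say_hi_is_direct = 0 if $method eq 'say_hi';   # discard the fast form
    });

    # ...and the class loader fires events when assumptions break.
    notify(classWasSubclassed  => 'My::PluginBase', 'My::Plugin::Shouty');
    notify(methodWasOverridden => 'My::Plugin::Shouty', 'say_hi');

    print $say_hi_is_direct ? "still direct\n" : "re-JIT on next call\n";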



Gordon Henriksen
[EMAIL PROTECTED]


Re: Next Apocalypse

2003-09-13 Thread Jonadab the Unsightly One
Dan Sugalski [EMAIL PROTECTED] writes:

 Next Apocalypse is objects, and that'll take time. 

Objects are *worth* more time than a lot of the other topics.
Arguably, they're just as important as subroutines, in a modern
language.

Speaking of objects...  are we going to have a built-in object forest,
like Inform has, where irrespective of class any given object can have
up to one parent at any given time, which can change at runtime, and
be able to declare objects as starting out their lives with a given
parent object, move them at runtime from one parent to another (taking
any of their own children that they might have along with them), fetch
a list of the children or siblings of an object, and so forth?




Re: Next Apocalypse

2003-09-13 Thread Luke Palmer
Jonadab the Unsightly One writes:
 Dan Sugalski [EMAIL PROTECTED] writes:
 
  Next Apocalypse is objects, and that'll take time. 
 
 Objects are *worth* more time than a lot of the other topics.
 Arguably, they're just as important as subroutines, in a modern
 language.
 
 Speaking of objects...  are we going to have a built-in object forest,
 like Inform has, where irrespective of class any given object can have
 up to one parent at any given time, which can change at runtime, and
 be able to declare objects as starting out their lives with a given
 parent object, move them at runtime from one parent to another (taking
 any of their own children that they might have along with them), fetch
 a list of the children or siblings of an object, and so forth?

Not.. exactly that.

There are a lot of useful object systems around, and it's not like Perl
to choose just one of them.  In Perl 5, it was possible to change your
parent I<classes> at runtime, but that's because data wasn't a part of
Perl 5's (minimalist) classes.  In Perl 6, classes associate attributes
with themselves, so I imagine that it's only possible to switch parent
objects, not parent classes.

And I also presume that an object can have as many parents as it likes.

Also, the standard library, however large or small that will be, will
definitely be mutable at runtime.  There'll be none of that Java "you
can't subclass String, because we think you shouldn't" crap.

Luke


Re: Next Apocalypse

2003-09-13 Thread martin
On Sat, 13 Sep 2003, Luke Palmer wrote:
 Also, the standard library, however large or small that will be, will
 definitely be mutable at runtime.  There'll be none of that Java you
 can't subclass String, because we think you shouldn't crap.

Java's standard class library is a mishmash of things that represent
containers (variables) and things that represent values (and even some
broken things that try to be both), with no syntactic help to distinguish
them.  And its syntax reserves const but doesn't use it for anything.

As long as we have is rw and its friends, we can -- with suitable care --
make sure that a subclass of a value-representing class is also a
value-representing class, so there's no semantic need to say never any
subclasses but we can still do CSE and other neat stuff at compile time.

Of course having a "no subclasses" tag means the compiler can change a
method call into a direct subroutine call, but I would hope that method
calling will be fast enough that it won't need to.

Will we require methods in subclasses to use the same signatures as the
methods they're overriding?

-Martin

-- 
4GL ... it's code Jim, but not as we know it.




Re: Next Apocalypse

2003-09-10 Thread Andy Wardley
Jonathan Scott Duff wrote:
 This is mostly just a gratuitous message so that Piers has something
 to talk about in the next summary 

I bet Leon has something to say about that.

 Better would be "We're working on X and have hashed out the details
 of Y but are having problems with Z"

Something like: "We're working on Perl 6 and have hashed out the details
of Perl 6 but are having problems with users who keep wanting regular
updates"  :-)

A



Re: Next Apocalypse

2003-09-10 Thread Dan Sugalski
On Tue, 9 Sep 2003, Jonathan Scott Duff wrote:

 
 This is mostly just a gratuitous message so that Piers has something
 to talk about in the next summary ;-), but when's the next
 Apocalypse due out?

Well, I don't know if Leon (Hi Piers!) has better information than I do,
but the short answer is "Not for a while." Next Apocalypse is objects, and
that'll take time. Damian may well get E7, formats, out sooner, but he's
on vacation for the first time in too long, so he'd better not answer for
a few weeks.  :)

Dan