Re: JIT compilation

2001-11-18 Thread Benoit Cerrina

 There is an effort to compile ruby to the CLR, I don't know more,
 because I can't read japanese :-) And there is someone working on python
 support in the mono compiler, too.
 BTW: we just got our compiler running on linux and compiling a simple
 program, hopefully by next week it can be used for some more real code
 generation.

 lupus
I know, I follow the mono list too.  I don't doubt that dynamic languages
can be implemented on the CLR; like I said, experience shows that it can be
done on the JVM, which is more restrictive.  What I'm not sure about (and I
don't think there are precedents for this) is whether they can be efficient
when implemented on the CLR.
Benoit





Re: JIT compilation

2001-11-18 Thread Paolo Molaro

On 11/17/01 Dan Sugalski wrote:
 BTW: we just got our compiler running on linux and compiling a simple
 program, hopefully by next week it can be used for some more real code
 generation.
 
 Yahoo! Congrats. Are we still slower than you are? :)

I've been in features-and-correctness mode for a couple of months, so I guess
the current mono interpreter is slower than parrot: we have a design to
make it twice as fast as it is now, but that would be a waste of time
since with the JIT we are at least 30-40 times faster anyway :-)

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: JIT compilation

2001-11-17 Thread Dan Sugalski

At 03:32 PM 11/16/2001 +0100, Paolo Molaro wrote:
On 11/08/01 Benoit Cerrina wrote:
  I heard that. I was thinking that it would be great to run ruby on mono, but
  ruby is very dynamic (like perl, but since it's so much easier to use and
  program, methods get redefined more easily and more often).

There is an effort to compile ruby to the CLR, I don't know more,
because I can't read japanese :-) And there is someone working on python
support in the mono compiler, too.

Keen. We'll do so at some point too, I expect.

BTW: we just got our compiler running on linux and compiling a simple
program, hopefully by next week it can be used for some more real code
generation.

Yahoo! Congrats. Are we still slower than you are? :)

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JIT compilation

2001-11-16 Thread Simon Cozens

On Fri, Nov 16, 2001 at 03:32:06PM +0100, Paolo Molaro wrote:
 And there is someone working on python
 support in the mono compiler, too.

I know, I've just seen. Wouldn't it be really wonderful, Paolo, if
someone wrote some Perl bindings for it as well? :)

-- 
I did write and prove correct a 20-line program in January, but I made
the mistake of testing it on our VAX and it had an error, which two
weeks of searching didn't uncover, so there went one publication out the
window.  - David Gries, 1980



Re: JIT compilation

2001-11-16 Thread Paolo Molaro

On 11/16/01 Simon Cozens wrote:
 On Fri, Nov 16, 2001 at 03:32:06PM +0100, Paolo Molaro wrote:
  And there is someone working on python
  support in the mono compiler, too.
 
 I know, I've just seen. Wouldn't it be really wonderful, Paolo, if
 someone wrote some Perl bindings for it as well? :)

It would be wonderful :-)
It would be even better if the ActiveState people could share
either the code for the work they already did or their wisdom so that
whoever undertakes the task knows what development paths to avoid.
I know they found it quite hard to implement perl on the CLR, I guess both
speed-wise and feature-wise: their input would be much appreciated, and I
guess they are listening on this list :-)
If there is already more info in the package they provide on the web
site, I'll go read that, provided I can download it in some useful format
(hint, hint).

Anyway, implementing perl is hard in any language, as parrot too will show
soon, when the real guts of perl need to be implemented.
I wonder if the main problem they had was mapping a perl scalar to a
System.String? I'd also like to know what system they used for dynamic
invocation of methods...

lupus / back to System.Reflection.Emit ...

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: JIT compilation

2001-11-11 Thread James Mastros

On Sat, 10 Nov 2001, Dan Sugalski wrote:
 There's a minimum charge you're going to have to pay for the privilege of 
 dynamicity, or running a language not built by an organization with 20 
 full-time engineers dedicated to it.
Umm, this isn't really the place for it, so just a quick question:
How many people work for ActiveState? (For a broad definition of "work
for"; "is on [a] board" counts.)  How many of the things on their
Initiatives page do we consider to be major targets of Parrot?

BTW, my quick count for that second question is 5 of 8.  Perl, PHP,
Python, and Tcl we plan (AFAIK) on being compilable to Parrot, and
possibly XSLT as well.  (I have no idea how much sense it makes to
compile XSLT to a VM.)

Mozilla, .NET Framework, Web Services, and XML I see as N/A on a Parrot
level.

-=-James Mastros
-- 
Put bin Laden out like a bad cigar: http://www.fieler.com/terror
You know what happens when you bomb Afghanistan?  That's right, you knock
over the rubble.   -=- SLM




Re: JIT compilation

2001-11-10 Thread Dan Sugalski

At 07:13 PM 11/8/2001 -0500, Ken Fox wrote:
Dan Sugalski wrote:
  [native code regexps] There's a hugely good case for JITting.

Yes, for JITing the regexp engine. That looks like a much easier
problem to solve than JITing all of Parrot.

The problem there is no different from the one for the rest of the opcodes.
Granted, doing a real JIT of the integer math routines is a lot easier than
doing it for, say, the map opcode, because they generally decompose to a
handful of machine instructions that are easy to generate with templates.
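
To make the template idea concrete, here's a rough sketch (not Parrot's
actual code generator; it assumes a 32-bit build and simplifies the operand
addressing to absolute addresses) of emitting IA-32 code for an integer add:

  /* Hypothetical template for an "add two integer registers" op.  The byte
   * values are IA-32 encodings; a real JIT would address the interpreter's
   * registers relative to a base pointer rather than by absolute address. */
  #include <string.h>

  static unsigned char *
  emit_int_add(unsigned char *code, int *dst, int *src1, int *src2)
  {
      *code++ = 0xA1;                  /* mov eax, [src1] */
      memcpy(code, &src1, 4); code += 4;
      *code++ = 0x03; *code++ = 0x05;  /* add eax, [src2] */
      memcpy(code, &src2, 4); code += 4;
      *code++ = 0xA3;                  /* mov [dst], eax  */
      memcpy(code, &dst, 4); code += 4;
      return code;   /* caller strings templates together, then emits a ret */
  }

Each core op gets a template like that, and the compiler just concatenates
them in place of the dispatch loop.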

  If you think about it, the interpreter loop is essentially:
 
 while (code) {
   code = (func_table[*code])();
 }

That's *an* interpreter loop. ;)

Yes, I know.

Some day I'm going to have to write 
HOW_TO_RECOGNIZE_CODE_WHICH_IS_SIMPLIFIED_FOR_PURPOSES_OF_EXAMPLE.pod... :)

 you pay a lot of overhead there in looping (yes, even with the computed
 goto, that computation's loop overhead), plus all the pipeline misses from
 the indirect branching. (And the contention for D cache with your real data)

The gcc goto-label dispatcher cuts the overhead to a single indirect
jump through a register.

Based on data in the D stream. Say Hi to Mr. Pipeline Flush there.

If Perl code *requires* that everything goes through vtable PMC ops,
then the cost of the vtable fetch, the method offset, the address
fetch and the function call will completely dominate the dispatcher.

Oh, absolutely. Even with perl 5, the opcode overhead's not a hugely 
significant part of the cost of execution. Snipping out 3-5% isn't shabby, 
though, and we *will* need all the speed we can muster with regexes being 
part of the generic opcode stream.

  Dynamic languages have potential overheads that can't be generally factored
  out the way you can with static languages--nature of the beast, and one of
  the things that makes dynamic languages nice.

I know just enough to be dangerous. Lots of people have done work in
this area -- and on languages like Self and Smalltalk which are as
hard to optimize as Perl. Are we aiming high enough with our
performance goals?

Our aim is higher than perl 5. No, it's not where a lot of folks want to 
aim, but we can get there using well-known techniques and proven theories. 
Perl is, above all, supposed to be practical. The core interpreter's not a 
place to get too experimental or esoteric. (That's what the cores written 
to prove I'm a chuckle-headed moron with no imagination are for :)

I'll be happy with a clean re-design of Perl. I'd be *happier* with
an implementation that only charges me for cool features when I use
them.

There's a minimum charge you're going to have to pay for the privilege of 
dynamicity, or running a language not built by an organization with 20 
full-time engineers dedicated to it. Yes, with sufficient cleverness we can 
identify those programs that are, for optimization purposes, FORTRAN in 
perl, but that sort of cleverness is in short supply, and given a choice 
I'd rather it go to making method dispatch faster, or making the parser 
interface cleaner.

(Oops. That's what Bjarne said and look where that took him... ;)

Yeah, that way lies madness. Or C++. (But I repeat myself)

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JIT compilation

2001-11-08 Thread Norbert Bollow

Ken Fox [EMAIL PROTECTED] wrote:
 Simon Cozens wrote:
  ... Mono's work on JIT compilation ... they've got some pretty
  interesting x86 code generation stuff going on.
 
 Mono is doing some very cool stuff, but it's kind of hard
 to understand at this time. The x86 code generation macros are
 easy to use, but the instruction selection is based on a
 re-implementation (?) of BURG and it will take some time to
 dig through the way that code works. BURG is a code-generator
 generator in the style of yacc. (More trips to the library...)

BURG means Bottom-Up Rewrite Grammar... a way to generate
optimized code quickly *if* you have plenty of memory available.
We're not planning to use this technique in DotGNU because we
think that the resulting heap thrash problem will likely
destroy the performance gains from BURG.

 If all the current JIT work is focused on JVM and CIL, then Parrot's
 JIT is going to break new ground.
 
 There's a more fundamental issue though. After spending time
 looking at the benefits of a JIT and thinking about the yet
 another switch/goto implementation conversation, I'm starting
 to think that a JIT will be almost useless for Parrot.

I don't think so.  Of course Perl is rich enough that you can
create code which is so intricately complicated that trying to
JIT it makes no sense.

But I believe that such code will always be the exception, not
the rule.

 JITs help when the VM is focused on lots of small instructions
 with well-known, static semantics. Perl's use of Parrot is going
 to be focused almost completely on PMC vtable ops.

A lot of real-life Perl code contains just function calls and
operations on scalar variables that contain either a string or a
number, as well as arrays and hashes which contain such values.
I would like this kind of code to be compiled to instructions
with static meaning, together with some kind of marker that this
chunk of code should be JITed.

 Is there any interest in a less dynamic dialect of Perl that can
 take advantage of a JIT?

I think the compiler should automatically identify the parts of
a program where JITing makes sense.

(Disclaimer:  I'm not volunteering to actually implement
this... I'm just speaking as a user who highly values Perl's
flexibility but who still writes pretty simple, straightforward
code most of the time.)

Greetings, Norbert.

-- 
A member of FreeDevelopers and the DotGNU Steering Committee: dotgnu.org
Norbert Bollow, Weidlistr.18, CH-8624 Gruet   (near Zurich, Switzerland)
Tel +41 1 972 20 59   Fax +41 1 972 20 69  http://thinkcoach.com
Your own domain with all your Mailman lists: $15/month  http://cisto.com



Re: JIT compilation

2001-11-08 Thread Paolo Molaro

On 11/07/01 Ken Fox wrote:
 Simon Cozens wrote:
  ... Mono's work on JIT compilation ... they've got some pretty
  interesting x86 code generation stuff going on.
 
 Mono is doing some very cool stuff, but it's kind of hard
 to understand at this time. The x86 code generation macros are
 easy to use, but the instruction selection is based on a
 re-implementation (?) of BURG and it will take some time to
 dig through the way that code works. BURG is a code-generator
 generator in the style of yacc. (More trips to the library...)

monoburg is different because it allows using functions to determine the
cost of a rule, instead of using constants only. It makes it easier to
create optimizing rules (for a specific processor, for example).
Moreover, the previous *burg implementations were not free software.
You can read any BURG related papers to understand how that code works.

 I also poked around the Lightning project which is documented a
 bit better. It's also more widely ported (although I'm sure Mono
 is going to catch up soon -- those guys code like maniacs.)
 Here's a quote from Lightning that worries me a bit:

GNU lightning is fine for some quick and dirty JIT work, but it produces
code in a platform-independent way and that prevents optimizations...
We are not interested in maximum performance right now in mono, but we
want to be able to get there down the line, so we have a design that
lets us move quickly with the implementation and we can optimize later.

 There's a more fundamental issue though. After spending time
 looking at the benefits of a JIT and thinking about the yet
 another switch/goto implementation conversation, I'm starting
 to think that a JIT will be almost useless for Parrot.

It depends how much the dispatch is going to cost in an average parrot
program and how smart the runtime can be: if the interp can detect that
that the arguments to a method are of a specific type, it can create a
jitted implementation and exec that, instead of going with the slow
path. This is the approach taken by the python optimizer, IIRC.
Or all the code can be JITted and you avoid at least the dispatch cost.
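
As a rough sketch of that idea (the names are all invented, this isn't mono
or parrot code), the call site keeps a guarded fast path next to the
generic one:

  /* Hypothetical type-specialized dispatch: if the arguments have the type
   * we compiled a fast path for, run the jitted specialization, otherwise
   * fall back to the generic slow path with the full dynamic semantics. */
  typedef struct Value { int type; void *data; } Value;

  enum { TYPE_INT = 1 };

  extern void generic_add(Value *dst, Value *a, Value *b);    /* slow path */
  extern void jitted_add_int(Value *dst, Value *a, Value *b); /* fast path */

  static void do_add(Value *dst, Value *a, Value *b)
  {
      if (a->type == TYPE_INT && b->type == TYPE_INT)
          jitted_add_int(dst, a, b);   /* specialized native code */
      else
          generic_add(dst, a, b);      /* generic vtable/PMC path */
  }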

 About the only place where a JIT might really win big is in
 regexps.

Ever since I wrote x86-codegen.h I've wanted to use it to compile a regex
down to native code; I'm convinced it can be a *huge* win.
Alas, I have very little free time lately.

 Is there any interest in a less dynamic dialect of Perl that can
 take advantage of a JIT? Should we feed a request to p6-language
 to think about this?

IMHO, a less dynamic perl is perl no more, though some consideration
should be made to make it easier to implement the language on virtual
machines such as the JVM and CLR.
That said, I'm open to sneaking opcode handling into mono that may make it
easier to run parrot code there, if it is needed.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: JIT compilation

2001-11-08 Thread Paolo Molaro

On 11/08/01 Norbert Bollow wrote:
 BURG means Bottom-Up Rewrite Grammar... a way to generate
 optimized code quickly *if* you have plenty of memory available.

Maybe, if 32 KB for a large method is plenty (about 600 bytes of IL code):
basically all the methods in our corlib are below that size.
That can be improved when the time for optimization comes (we already use a
mempool to greatly reduce the calls to malloc). And anyway a method is
jitted only when it's actually called, so I don't think this is a big
deal.

 We're not planning to use this technique in DotGNU because we
 think that the resulting heap thrash problem will likely
 destroy the performance gains from BURG.

There are two main advantages with BURG: it can generate optimized code
fast, yes, but the bigger one is that it makes porting easier.
Anyway, time will tell; we still have a lot of development to do.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: JIT compilation

2001-11-08 Thread Dave Goehrig

On Wed, Nov 07, 2001 at 06:46:20PM -0500, Ken Fox wrote:
 
 JITs help when the VM is focused on lots of small instructions
 with well-known, static semantics. Perl's use of Parrot is going
 to be focused almost completely on PMC vtable ops. A JIT has
 no advantage over a threaded interpreter.

Unless we can get Perl to use the other features of Parrot outside
the scope of the PMC vtable ops, Parrot won't give you any real
benefits other than a clean rewrite of the Perl internals to
look more like Ruby :)

 About the only place where a JIT might really win big is in
 regexps.

It would be interesting to see how JITted regexes would compare with
something like libpcre (which is pretty damn fast).

 Have other people come to the same conclusion?

I think there is still a good cause for JIT.

 Is there any interest in a less dynamic dialect of Perl that can
 take advantage of a JIT?

I will admit to probably being one of the worst offenders with the special
cases Dan has been mentioning, but I know that most of my code is of
the sort that would benefit from JIT.  More importantly, I think a lot
of what keeps Parrot from really doing its job well is the nasty habit
people have of relying on the side effects of perl5 lexical scoping.
The same results could be had, but in a different fashion.






Re: JIT compilation

2001-11-08 Thread Benoit Cerrina


 IMHO, a less dynamic perl is perl no more, though some consideration
 should be made to make it easier to implement the language on virtual
 machines such as the JVM and CLR.
 That said, I'm open to sneaking opcode handling into mono that may make it
 easier to run parrot code there, if it is needed.

 lupus

I heard that. I was thinking that it would be great to run ruby on mono, but
ruby is very dynamic (like perl, but since it's so much easier to use and
program, methods get redefined more easily and more often).
This is actually the main reason why I'm interested in Parrot: to use it to
interpret ruby programs, since it is supposed to work for languages like
python and perl which have the same type of flexibility.
Benoit

 --
 -
 [EMAIL PROTECTED] debian/rules
 [EMAIL PROTECTED] Monkeys do it better




Re: JIT compilation

2001-11-08 Thread Benoit Cerrina

 On Wed, Nov 07, 2001 at 06:46:20PM -0500, Ken Fox wrote:
  
  JITs help when the VM is focused on lots of small instructions
  with well-known, static semantics. Perl's use of Parrot is going
  to be focused almost completely on PMC vtable ops. A JIT has
  no advantage over a threaded interpreter.
 
 Unless we can get Perl to use the other features of Parrot outside
 the scope of the PMC vtable ops, Parrot won't give you any real
 benefits other than a clean rewrite of the Perl internals to
 look more like Ruby :)
That would be great... I recently had to write a perl extension and
the same for ruby; writing the ruby version after the perl one was such
a joy I almost wept (note the almost, I'm not that desperate).
Benoit




Re: JIT compilation

2001-11-08 Thread Dan Sugalski

At 02:46 AM 11/8/2001 -0600, Dave Goehrig wrote:
On Wed, Nov 07, 2001 at 06:46:20PM -0500, Ken Fox wrote:
 
  JITs help when the VM is focused on lots of small instructions
  with well-known, static semantics. Perl's use of Parrot is going
  to be focused almost completely on PMC vtable ops. A JIT has
  no advantage over a threaded interpreter.

Unless we can get Perl to use the other features of Parrot outside
the scope of the PMC vtable ops, Parrot won't give you any real
benefits other than a clean rewrite of the Perl internals to
look more like Ruby :)

Never underestimate the advantages of a clean rewrite. Threads, anyone? :)

We should be faster as well, and we get the side-effect of interoperability 
with other languages that compile down to parrot code.

Faster and clean are the two core purposes behind the rewrite for perl 6. 
The rest is (highly desirable, and very leverageable) gravy.

  About the only place where a JIT might really win big is in
  regexps.

It would be interesting to see how JITted regexes would compare with
something like libpcre (which is pretty damn fast).

Unless libpcre generates machine code, a JITted interpreter would blow the 
doors off of it.

The RE engines I've seen, which include perl's, are essentially small 
interpreters. (Or state machines, if you want to look at it that way) Ones 
that are very special-purpose, sure, but interpreters anyway, complete with 
the overhead that goes with them. You can make them fast, sure--recent 
experiments with parrot bear that one out. But regardless, no matter what 
you do to optimize things, this:

  do_x
  do_y
  do_z

will be faster than

  do_x
  goto op[*pc++]
  do_y
  goto op[*pc++]
  do_z

Fewer instructions executed, and fewer branches presented to the CPU which 
means fewer pipeline flushes.

Plus it also means that your RE, being executable, lives in your 
processor's I cache, rather than competing with your data for D cache space.

  Have other people come to the same conclusion?

I think there is still a good cause for JIT.

There's a hugely good case for JITting.

  Is there any interest in a less dynamic dialect of Perl that can
  take advantage of a JIT?

I will admit to probably being one of the worst offenders with the special
cases Dan has been mentioning, but I know that most of my code is of
the sort that would benefit from JIT.  More importantly, I think a lot
of what keeps Parrot from really doing its job well is the nasty habit
people have of relying on the side effects of perl5 lexical scoping.
The same results could be had, but in a different fashion.

Everyone's code can benefit from JIT compilation. It doesn't suffer from 
the issues that come up with optimizations. In fact, the only real issue 
is the need to invalidate the JIT version of a sub if that sub changes.

JITting is really just the next step past TIL. It doesn't reorder the code, 
inline things, or otherwise transform your program. All it really does is 
cut out a lot of the otherwise unneeded overhead.

If you think about it, the interpreter loop is essentially:

   while (code) {
     code = (func_table[*code])();
   }

you pay a lot of overhead there in looping (yes, even with the computed 
goto, that computation's loop overhead), plus all the pipeline misses from 
the indirect branching. (And the contention for D cache with your real data)

TIL code turns the program into

function_a();
function_b();
function_c();
function_a();

which cuts out all the loop overhead, turns your program into executable 
code so it lives in I cache, and makes things generally more predictable 
for the processor, so that's good.

JITting your code turns the program into

{insert function_a body}
{insert function_b body}
{insert function_c body}

which, in addition to the wins from TILling the code cuts out the function 
preamble and postambles.

Now, we can't do this for all our opcodes as we're allowing them to be 
overridden (and we require it for Safe mode execution to be reasonably 
safe) but for the core opcodes, which I'd guess something like 85-95% of a 
program will be made up of, we can do it with no change in program 
behaviour other than the odd speedup.
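
Roughly, the compile loop could decide per-op like this (a sketch with
invented helper names, not the real Parrot code):

  /* Inline an op only if its handler is still the stock core handler; an
   * overridden op keeps going through the function table so Safe mode and
   * user overrides still work.  emit_template()/emit_call() are invented
   * helpers standing in for the real code emitters. */
  typedef void (*op_func)(void);

  extern op_func core_table[];   /* handlers as shipped          */
  extern op_func func_table[];   /* handlers currently installed */

  extern unsigned char *emit_template(unsigned char *code, int op);
  extern unsigned char *emit_call(unsigned char *code, op_func f);

  static unsigned char *compile_op(unsigned char *code, int op)
  {
      if (func_table[op] == core_table[op])
          return emit_template(code, op);      /* core op: safe to inline   */
      return emit_call(code, func_table[op]);  /* overridden: keep the call */
  }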

Dynamic languages have potential overheads that can't be generally factored 
out the way you can with static languages--nature of the beast, and one of 
the things that makes dynamic languages nice. So maybe perl code will never 
be as fast as the equivalent FORTRAN. That's life, but it doesn't mean that 
Parrot executing perl code can't be a darned sight faster than Perl 5 
executing the same code...

Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JIT compilation

2001-11-08 Thread Ken Fox

Dan Sugalski wrote:
 [native code regexps] There's a hugely good case for JITting.

Yes, for JITing the regexp engine. That looks like a much easier
problem to solve than JITing all of Parrot.

 If you think about it, the interpreter loop is essentially:
 
   while (code) {
     code = (func_table[*code])();
   }

That's *an* interpreter loop. ;)

 you pay a lot of overhead there in looping (yes, even with the computed 
 goto, that computation's loop overhead), plus all the pipeline misses from 
 the indirect branching. (And the contention for D cache with your real data)

The gcc goto-label dispatcher cuts the overhead to a single indirect
jump through a register. No table lookups. No extra branches. This is
even on the register-starved x86. Yeah, there's a stall potential,
but it looks easy enough to fill with other stuff.
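
For reference, here's a stripped-down sketch of a goto-label dispatcher
(a made-up three-op machine, using GCC's labels-as-values extension):

  /* Computed-goto dispatch: each op ends by jumping indirectly through the
   * label table, so dispatch is a single indirect jump per op. */
  #include <stdio.h>

  enum { OP_INC, OP_PRINT, OP_HALT };

  static int run(const int *pc)
  {
      static void *labels[] = { &&op_inc, &&op_print, &&op_halt };
      int acc = 0;

      goto *labels[*pc++];
  op_inc:
      acc++;
      goto *labels[*pc++];
  op_print:
      printf("%d\n", acc);
      goto *labels[*pc++];
  op_halt:
      return acc;
  }

  int main(void)
  {
      int prog[] = { OP_INC, OP_INC, OP_PRINT, OP_HALT };
      return run(prog) != 2;
  }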

If Perl code *requires* that everything goes through vtable PMC ops,
then the cost of the vtable fetch, the method offset, the address
fetch and the function call will completely dominate the dispatcher.
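
Spelled out as a sketch (the struct names are invented, but the shape of
the cost is the point), every such op does something like:

  /* One PMC vtable op per dispatch: load the vtable pointer, load the
   * method slot, then make an indirect call -- all before any real work. */
  typedef struct PMC PMC;

  typedef struct {
      void (*add)(PMC *dst, PMC *a, PMC *b);
      /* ... many more slots ... */
  } VTable;

  struct PMC {
      VTable *vtable;   /* fetched on every op */
      void   *data;
  };

  static void op_add(PMC *dst, PMC *a, PMC *b)
  {
      a->vtable->add(dst, a, b);   /* vtable load + slot load + indirect call */
  }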

 Dynamic languages have potential overheads that can't be generally factored 
 out the way you can with static languages--nature of the beast, and one of 
 the things that makes dynamic languages nice.

I know just enough to be dangerous. Lots of people have done work in
this area -- and on languages like Self and Smalltalk which are as
hard to optimize as Perl. Are we aiming high enough with our
performance goals?

I'll be happy with a clean re-design of Perl. I'd be *happier* with
an implementation that only charges me for cool features when I use
them. (Oops. That's what Bjarne said and look where that took him... ;)

- Ken



Re: JIT compilation

2001-11-07 Thread Ken Fox

Simon Cozens wrote:
 ... Mono's work on JIT compilation ... they've got some pretty
 interesting x86 code generation stuff going on.

Mono is doing some very cool stuff, but it's kind of hard
to understand at this time. The x86 code generation macros are
easy to use, but the instruction selection is based on a
re-implementation (?) of BURG and it will take some time to
dig through the way that code works. BURG is a code-generator
generator in the style of yacc. (More trips to the library...)

I also poked around the Lightning project which is documented a
bit better. It's also more widely ported (although I'm sure Mono
is going to catch up soon -- those guys code like maniacs.)
Here's a quote from Lightning that worries me a bit:

from doc/body.texi:
| @lightning{} has been useful in practice; however, it does have
| at least four drawbacks: it has limited registers ...
|
| The low number of available registers (six) is also an important
| limitation.  However, let's take the primary application of dynamic
| code generation, that is, bytecode translators.  The underlying
| virtual machines tend to have very few general purpose registers
| (usually 0 to 2) and the translators seldom rely on sophisticated
| graph-coloring algorithms to allocate registers to temporary
| variables. ...

If all the current JIT work is focused on JVM and CIL, then Parrot's
JIT is going to break new ground.

There's a more fundamental issue though. After spending time
looking at the benefits of a JIT and thinking about the yet
another switch/goto implementation conversation, I'm starting
to think that a JIT will be almost useless for Parrot.

JITs help when the VM is focused on lots of small instructions
with well-known, static semantics. Perl's use of Parrot is going
to be focused almost completely on PMC vtable ops. A JIT has
no advantage over a threaded interpreter.

About the only place where a JIT might really win big is in
regexps.

Have other people come to the same conclusion?

Is there any interest in a less dynamic dialect of Perl that can
take advantage of a JIT? Should we feed a request to p6-language
to think about this?

- Ken



Re: JIT compilation

2001-11-07 Thread Uri Guttman

 KF == Ken Fox [EMAIL PROTECTED] writes:

  KF JITs help when the VM is focused on lots of small instructions
  KF with well-known, static semantics. Perl's use of Parrot is going
  KF to be focused almost completely on PMC vtable ops. A JIT has
  KF no advantage over a threaded interpreter.

there is also TIL code, which is a way to remove the dispatch loop and
make that happen as inline machine code. we generate machine code calls
to each op instead of the op code loop calling them. the calls will
still be made through vtables but the overhead of the dispatcher will be
gone. if the vtable code is of decent speed, a TIL system could get nice
results. maybe some of the vtable access and handling could also be
optimized at machine level as well.

a full JIT would require the same info that dan says we won't have for
the optimizer. you have to guarantee that none of those sneaky and
powerful backdoor tricks (like string eval, tying, %MY::, etc) are being
done before you can generate pure machine code. that is one major win
and loss with highly dynamic languages like perl - runtime and compile
time can affect each other so much.

uri

-- 
Uri Guttman  --  [EMAIL PROTECTED]   http://www.stemsystems.com
-- Stem is an Open Source Network Development Toolkit and Application Suite -
- Stem and Perl Development, Systems Architecture, Design and Coding 
Search or Offer Perl Jobs    http://jobs.perl.org