Re: "All tests successful" considered harmful

2005-11-01 Thread Piers Cawley
chromatic <[EMAIL PROTECTED]> writes:

> On Thu, 2005-10-27 at 10:26 -0700, jerry gay wrote:
>
>> we're missing some parts of a testing framework. we don't have the
>> ability to write test files in PIR, so we're dependent on a perl
>> install for testing. perl's a great language for writing tests anyway,
>> and right now we're dependent on perl for parrot's configure and build
>> as well. that said, breaking this dependency will make parrot just a
>> bit closer to standing on its own.
>
> We have a Test::Builder port in PIR.  I will move up my plan to port
> Parrot::Test to use it.

Somewhere I have the beginnings of an xUnit-style 'parrotunit' testing
framework. It was written ages ago, so I'd need to start it again, but it
wasn't that hard to implement.

-- 
Piers Cawley <[EMAIL PROTECTED]>
http://www.bofh.org.uk/


Re: loadlib and libraries with '.' in the name

2005-09-26 Thread Piers Cawley
Joshua Juran <[EMAIL PROTECTED]> writes:

> On Sep 23, 2005, at 3:47 AM, Leopold Toetsch wrote:
>
>> On Sep 23, 2005, at 7:51, Ross McFarland wrote:
>>
>>> i was planning on playing around with gtk+ bindings and parrot and went
>>> about looking around for the work that had already been done and didn't turn
>>> anything up. if anyone knows where i can find it or who i should talk to i
>>> would appreciate that info as well.
>>
>> Google for "NCI gtk". There is also a weekly summary entry but the xrl.us
>> shortcut seems to have expired.
>
> I was wondering about that.  I Googled for "tinyurl considered harmful" and
> was surprised to find only one message, discussing the phishing risks.  I found no
> mention of the risk of outsourcing a bottleneck to a third party who has zero
> obligation or direct interest to continue providing the service.
>
>  From <http://metamark.net/about#expire>:
>
>> Do Metamark links expire?
>>
>> The Metamark urls expire after five years or two years after the last usage -
>> whichever comes later. However, if a link is never used, it will expire after
>> two years. This should mean that as long as a link is on a public page, some
>> search engine will visit it and keep it alive.
>>
>> Of course, this is subject to change and is no promise but just my intentions
>> as of this writing. If you want guarantees you can make your own service.
>
> To be quite frank, I'm astonished the practice exists here in the first place.
> In my opinion it goes directly against the spirit of the Web envisioned by Tim
> Berners-Lee.  A better practice would be to post long URL's within angled
> brackets.  And there's no reason you can't do both, either.

Which is why the archived summaries at dev.perl.org and perl.com all use the
long form URLs. The metamarked URLs only ever appear as a convenience for
readers on the mailing list. I am not about to start polluting my mailed
summaries with such monstrosities as

<http://groups.google.com/[EMAIL PROTECTED]>

any time soon. You're welcome to write your own summaries that do use the full
URLs of course. Or, if it bothers you that much, write something to run from
cron once a month or so that grabs shortened summary URLs and does a simple GET
on them.

-- 
Piers Cawley <[EMAIL PROTECTED]>
http://www.bofh.org.uk/


Re: Summarizer Suggestion...

2005-07-06 Thread Piers Cawley
Matt Fowles <[EMAIL PROTECTED]> writes:

> Will~
>
> On 7/6/05, Will Coleda <[EMAIL PROTECTED]> wrote:
>> 
>> It would be nice if the summarizers also summarized the various
>> Planet RSS feeds of journal entries, if those entries were
>> sufficiently relevant.
>
> I would be willing to do that, but I can't speak for Piers...

From the various journal entries I've read, they often stand pretty well as
summaries anyway. Which is why, in my last summary, instead of summarizing
journals I simply pointed to planetsix -- I presume there are others.


Re: Attack of the fifty foot register allocator vs. the undead continuation monster

2005-06-12 Thread Piers Cawley
Matt Fowles <[EMAIL PROTECTED]> writes:

> Chip~
>
> On 6/12/05, Chip Salzenberg <[EMAIL PROTECTED]> wrote:
>> I'd like like to note for other readers and the p6i archives that
>> Piers has failed to grasp the problem, so the solution seems pointless
>> to him.  I'm sorry that's the case, but I've already explained enough.
>
> This response worries me first because of its rudeness and second
> because of the problem itself.  As I see it there are four
> possibilities:
>
> 1) Chip is right, Piers is wrong.  This is a complex problem and
> refusing to explain it means that others will doubtless also
> misunderstand it, which you have a chance to preempt here.
>
> 2) Chip is wrong, Piers is right.  This is a complex problem and
> refusing discussion on it would be a costly mistake.
>
> 3) Chip is right, Piers is right. The two of you are working from
> a different base set of definitions/axioms or misunderstood each other
> in some other way.
>
> 4) Chip is wrong, Piers is wrong.  Shutting down open conversation so
> forcefully and caustically will prevent discussion in the future and
> this problem will continue to haunt parrot as no viable solution has
> been seen.
>
> Regardless of which of these possibilities is true, I see a need for
> more discussion of this issue.  Preferably a discussion that does not
> degrade into backhanded insults.  I have my own ideas about this
> problem, but I will save that for another response.

Don't worry Matt, we're still talking. It takes more than sarcasm to stop me.


Re: Attack of the fifty foot register allocator vs. the undead continuation monster

2005-06-12 Thread Piers Cawley
Chip Salzenberg <[EMAIL PROTECTED]> writes:

> On Sun, Jun 12, 2005 at 03:15:22PM +0100, Piers Cawley wrote:
>> But if you follow the calling conventions that looks like:
>> 
>>    sub foo {
>>      $a = 1;
>>      $c = 10;
>>      print $c
>>      save_dollar_a_and_only_dollar_a_because_im_going_to_use_it_after_this_function_call
>>      foo()
>>    _implicit_label_for_return_continuation:
>>      restore_dollar_a
>>    _ooh_i_dont_have_to_save_anything
>>      $b = bar()
>>    _nor_do_i_have_to_restore_anything
>>      print $b
>>    }
>
> You have greatly misunderstood.  We're talking about how &foo manages
> its callee-saves registers.  The registers involved, the ones that I'm
> calling $a and $b, are P16-P31.
>
>> Of course, if you're going to actually use GOTO to get to some label
>> that you should only get to via a continuation ...
>
> For purposes of allocating the callee-saves registers, a continuation
> may as well _be_ a goto.

No it's not. A continuation should carry all the information required to
restore the registers to the correct state when it is taken. A goto
doesn't. For the purposes of allocating the registers in foo you can allocate
$a to P16 and $b to P16 as well, because when the call to bar takes the
continuation back into foo, the 'restore' phase should grab $a from the
continuation and bung it back in P16. The continuation doesn't even need to
know where to restore $a to, because the 'caller restores' code should take
care of that.

> Don't feel bad, though.  I thought the same thing the first time *I*
> heard about this problem.

I think you should have held that thought.
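The 'restore phase' Piers describes can be sketched as a toy Python model. This is an illustration only (Parrot's register file is not a Python dict, and the names are made up): the continuation carries a snapshot of the caller-saved registers, so the allocator may safely reuse P16 for $b.

```python
# Toy model: a continuation is "goto plus register restore".
registers = {"P16": "value of $a"}   # $a allocated to P16

snapshot = dict(registers)           # carried by the continuation
registers["P16"] = "value of $b"     # allocator reuses P16 for $b

registers.update(snapshot)           # taking the continuation restores P16
print(registers["P16"])              # -> value of $a
```

Because the restore copies the snapshot back into the register file, the continuation never needs to know which register $a ended up in at the point it is taken.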


Re: Attack of the fifty foot register allocator vs. the undead continuation monster

2005-06-12 Thread Piers Cawley
Chip Salzenberg <[EMAIL PROTECTED]> writes:

> On Wed, Jun 08, 2005 at 10:26:59PM +0100, The Perl 6 Summarizer wrote:
>>   Loop Improvements
>> Oh no! It's the register allocator problems again. One of these days I
>> swear I'm going to swot up on this stuff properly, work out whether it's
>> really the case that full continuations break any conceivable register
>> allocator and summarize all the issues for everyone in a nice white
>> paper/summary.
>
> "It's not really that complicated.  It just takes a long time to explain."
>-- Dr. Howland Owll, on SubGenius doctrine
>
> Consider this code:
>
> sub example1 {
>my $a = foo();
>print $a;
>my $b = bar();
>print $b;
> }
>
> It's obvious that $a and $b can be allocated the same register because
> once $b has been set, $a is never used again.
>
> (Please ignore that $a and $b are variables that can't be stored
> purely in registers.  The real world case that I'm illustrating deals
> with temporaries and other values that lack user-visible names.  But
> the issue is best illustrated this way.)
>
> Now consider this:
>
> sub example2 {
>my $a = foo();
>do {
>print $a;
>$b = bar();
>print $b;
>} while ($b);
> }
>
> You can see that it's now *not* OK to allocate $a and $b to the same
> register, because the flow of control can jump back to the print $a
> even after $b is assigned.
>
> Look at the first function again, and consider what happens if &foo
> captures its return continuation _and_&bar_invokes_it_.  It would
> effectively amount to the same issue as example2:
>
> sub foo {
>$a = 1;
>foo();
>  _implicit_label_for_return_continuation:
>print $a;
>$b = bar();
>print $b;
> }
>
> bar() {
>if rand() < 0.5 { goto _implicit_label_for_return_continuation }
>return "lucky";
> }
>
> Therefore, register allocation must allow for implicit flow of control
> from *every* function call to *every* function return ... or, more
> precisely, to where *every* continuation is taken, including function
> return continuations.

But if you follow the calling conventions that looks like:

   sub foo {
     $a = 1;
     $c = 10;
     print $c
     save_dollar_a_and_only_dollar_a_because_im_going_to_use_it_after_this_function_call
     foo()
   _implicit_label_for_return_continuation:
     restore_dollar_a
   _ooh_i_dont_have_to_save_anything
     $b = bar()
   _nor_do_i_have_to_restore_anything
     print $b
   }

That's what caller saves means. You only have to save everything that you're
going to care about after the function returns. You don't have to save the
world, because if it was important it's already been saved further up the call
chain.

This means, of course, that the continuation needs to save the state of the
restore stack, but I thought we already knew that.

Of course, if you're going to actually use GOTO to get to some label that you
should only get to via a continuation (as you do in the code example) then you
deserve everything you've got coming to you. A continuation must contain
everything needed to restore the user registers to the correct state, no
matter how many times it is taken.

I really don't see how this affects register allocation; the call to bar
doesn't need to save $a because it's not referred to (lexically) after the call
returns. So what if the call to bar might take a continuation. Taking that
continuation should be exactly equivalent to returning from the call to
foo. You really have to stop thinking of continuations as gotos.
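The two positions in this exchange can be contrasted in a small Python model (toy names, not Parrot code). If taking a continuation is a bare goto, letting $b reuse $a's register clobbers $a on re-entry; if the continuation restores the caller-saved registers on (re-)entry, the reuse is safe:

```python
def run(continuation_restores):
    """Model: $a = foo(); print $a; $b = bar(); print $b, with $a and $b
    both allocated to register r0, and bar() re-taking foo's return
    continuation exactly once."""
    reg = {"r0": None}
    printed = []
    reg["r0"] = "A"              # $a = foo()
    saved = {"r0": reg["r0"]}    # snapshot carried by the return continuation
    reentered = False
    while True:                  # loop models the continuation-induced edge
        if continuation_restores:
            reg.update(saved)    # 'caller restores' on every (re-)entry
        printed.append(reg["r0"])    # print $a
        reg["r0"] = "B"              # $b = bar(): reuses r0
        printed.append(reg["r0"])    # print $b
        if not reentered:
            reentered = True
            continue             # bar() invoked foo's return continuation
        return printed

print(run(False))  # ['A', 'B', 'B', 'B'] -- the hazard: second print $a is wrong
print(run(True))   # ['A', 'B', 'A', 'B'] -- with restore, the reuse is safe
```

The first run shows the problem Chip describes; the second shows why, in Piers's model, restore-on-invocation makes register reuse safe even under full continuations.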


Re: Objects, classes, metaclasses, and other things that go bump in the night

2004-12-16 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 11:13 AM +0100 12/14/04, Leopold Toetsch wrote:
>>Dan Sugalski <[EMAIL PROTECTED]> wrote:
>>
>>>   subclass - To create a subclass of a class object
>>
>>Is existing and used.
>
> Right. I was listing the things we need in the protocol. Some of them 
> we've got, some we don't, and some of the stuff we have we probably 
> need to toss out or redo.
>
>>  >  add_parent - To add a parent to the class this is invoked on
>>>   become_parent - Called on the class passed as a parameter to 
>>> add_parent
>>
>>What is the latter used for?
>
> To give the newly added parent class a chance to do some setup in the 
> child class, if there's a need for it. There probably won't be in 
> most cases, but when mixing in classes of different families I think 
> we're going to need this.

The chap who's writing Ruby on Rails, a very capable framework, reckons
that some of these 'meta' method calls that Ruby has in abundance have
really made his life a lot easier; they're not the sort of thing you
need to use very often, but they're fabulously useful when you do.


Re: continuation enhanced arcs

2004-12-08 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>>  ... While S registers hold pointers, they have
>>>  value semantics.
>
>> Is that guaranteed? Because it probably needs to be.
>
> It's the current implementation and tested.
>
>>>  This would restore the register contents to the first state shown above.
>>>  That is, not only I and N registers would be clobbered also S registers
>>>  are involved.
>
>> That's correct. What's the problem? Okay, you've created an infinite
>> loop, but what you're describing is absolutely the correct behaviour for
>> a continuation.
>
> Ok. It's a bit mind-twisting but OTOH it's the same as setjmp/longjmp
> with all implications on CPU registers. C has the volatile keyword to
> avoid clobbering of a register due to a longjmp.
>
>>>  Above code could only use P registers. Or in other words: I, N, and S
>>>  registers are almost[1] useless.
>
>> No they're not. But you should expect them to be reset if you take a
>> (full) continuation back to them.
>
> The problem I have is: do we know where registers may be reset? For
> example:
>
> $I0 = 10
>   loop:
> $P0 = shift array
> dec $I0
> if $I0 goto loop
>
> What happens if the array PMC's C<shift> gets overloaded and does some
> fancy stuff with continuations. My gut feeling is that the loop might
> suddenly turn into an infinite loop, depending on some code behind the
> scenes ($I0 might be allocated into the preserved register range or not
> depending on allocation pressure).
>
> Second: if we don't have a notion that a continuation may capture and
> restore a register frame, a compiler can hardly use any I,S,N registers
> because some library code or external function might just restore these
> registers.

This is, of course, why so many languages that have full continuations
use reference types throughout, even for numbers. And immutable strings...


Re: continuation enhanced arcs

2004-12-07 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> Further to my last response. If you have things set up so that you can
>> return multiple times from the same function invocation then the return
>> continuation should have been replaced with a full continuation before
>> the first return, so even the first return will use copying semantics,
>> and the registers will always be restored to the state they were in when
>> the function was first called, which is absolutely the right thing to
>> do.
>
> Here is again the example I've brought recently. Please go through it
> and tell me what's wrong with my conclusion.
>
>
>    $I0 = 42     # set I16, 42       42
>    $N0 = 2.5    # set N16, 2.5      2.5
>    $S0 = "a"    # set S16, "a"      0x1004 -> "a"
>    $P0 = "a"    # set P16, "a"      0x2008 -> "a"
>   loop:
>    foo()        # set P0, ...; invokecc
>
>  We have some temporary variables and a function call. Variables are used
>  beyond that point, so the register allocator puts these in the preserved
>  register range. The function C<foo> might or might not capture the
>  continuation created by the C<invokecc> opcode.
>
>  Let's assume, it is captured, and stored into a global, if it wasn't
>  already, i.e. the first time. According to Dan's plan, the function
>  return restores the register contents to the state of the creation of
>  the return continuation, which is shown in the right column.
>
>    $I0 += 1        # add I16, 1        43
>    $N0 *= 2.0      # mul N16, 2.0      5.0
>    $S0 .= "b"      # concat S16, "b"   0x1008 -> "ab"
>    inc $P0         # inc P16           0x2008 -> "b"
>    dec a           # dec P17           0x200c -> 1
>    if a goto loop  # if P17, loop
>
>  A note WRT strings: the concat might or might not assign a new string to
>  S16. It depends on the capacity of the string buffer. But generally:
>  string operations do create new string headers with a different memory
>  address like shown here. While S registers hold pointers, they have
>  value semantics.

Is that guaranteed? Because it probably needs to be.

>
>  Now we loop once over the function call. This creates a new return
>  continuation and on function return registers are restored to their new
>  values (44, 10.0, "abb", "c"). All fine till here.
>
>  The loop counter "a" reaches zero. Now the next instruction is
>  another function call.
>
>    bar()        # set P0, ...; invokecc
>
>  The "bar()" function extracts the return continuation captured in the
>  first call to "foo()" from the global and invokes it. Control flow
>  continues right after the "invokecc" opcode that called "foo()".
>
>  This would restore the register contents to the first state shown above.
>  That is, not only I and N registers would be clobbered also S registers
>  are involved.

That's correct. What's the problem? Okay, you've created an infinite
loop, but what you're describing is absolutely the correct behaviour for
a continuation. If you need any state to be 'protected' from taking the
continuation then it needs to be in a lexical or a mutated PMC. This is
just how continuations are supposed to work. 

>  Above code could only use P registers. Or in other words: I, N, and S
>  registers are almost[1] useless.

No they're not. But you should expect them to be reset if you take a
(full) continuation back to them. 

Presumably if foo() doesn't store a full continuation, the restoration
just reuses an existing register frame and, if foo has made a full
continuation its return does a restore by copying?
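The "lexical or a mutated PMC" rule above can be sketched in Python (an analogy only; a dict stands in for a mutable PMC). The continuation snapshot copies the register's *pointer*, so a mutation made through the shared object survives the restore, while rebinding the register does not:

```python
# P-register model: snapshots copy pointers, not object contents.
registers = {"P16": None}
cell = {"value": 0}               # stands in for a mutable PMC / lexical
registers["P16"] = cell

snapshot = dict(registers)        # captured by the continuation
cell["value"] = 42                # mutation: goes through the shared pointer
registers["P16"] = {"value": 99}  # rebinding: replaces the pointer

registers.update(snapshot)        # take the continuation
print(registers["P16"]["value"])  # -> 42: the mutated state was 'protected'
```

This is why state that must survive a continuation being taken belongs in a lexical or a mutated PMC rather than in a bare value register.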


Re: continuation enhanced arcs

2004-12-06 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Matt Fowles <[EMAIL PROTECTED]> wrote:
>>>
>>>> Thanks for the clear explanation.  I did not realize that S registers
>>>> could switch pointers, that does make things a little harder.  I have
>>>> a recommendation for a possible hybrid solution.  Incur the cost of
>>>> spilling I,S,N registers heavily.  Restore the state of P register.
>>>
>>> My conclusion was that with the copying approach I,S,N registers are
>>> unusable.
>
>> But you only need to copy when the frame you're restoring is a full
>> continuation
>
> Yes. With the effect that semantics of I,S,N (i.e. value registers)
> suddenly changes.
>
>> I'd submit that, in the vast majority of cases you're not going to be
>> dealing with full continuations, and on the occasions when you are the
>> programmer using them will be aware of the cost and will be willing to
>> pay it.
>
> *If* the programmer is aware of the fact that a subroutine can return
> multiple times, he can annotate the source so that a correct CFG is
> created that prevents register reusing altogether. The problem is
> gone in the first place.
>
> *If* that's not true, you'd get the effect that suddenly I,S,N registers
> restore to some older values which makes this registers de facto
> unusable.

Further to my last response. If you have things set up so that you can
return multiple times from the same function invocation then the return
continuation should have been replaced with a full continuation before
the first return, so even the first return will use copying semantics,
and the registers will always be restored to the state they were in when
the function was first called, which is absolutely the right thing to
do.



Re: continuation enhanced arcs

2004-12-06 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Matt Fowles <[EMAIL PROTECTED]> wrote:
>>>
>>>> Thanks for the clear explanation.  I did not realize that S registers
>>>> could switch pointers, that does make things a little harder.  I have
>>>> a recommendation for a possible hybrid solution.  Incur the cost of
>>>> spilling I,S,N registers heavily.  Restore the state of P register.
>>>
>>> My conclusion was that with the copying approach I,S,N registers are
>>> unusable.
>
>> But you only need to copy when the frame you're restoring is a full
>> continuation
>
> Yes. With the effect that semantics of I,S,N (i.e. value registers)
> suddenly changes.
>
>> I'd submit that, in the vast majority of cases you're not going to be
>> dealing with full continuations, and on the occasions when you are the
>> programmer using them will be aware of the cost and will be willing to
>> pay it.
>
> *If* the programmer is aware of the fact that a subroutine can return
> multiple times, he can annotate the source so that a correct CFG is
> created that prevents register reusing altogether. The problem is
> gone in the first place.
>
> *If* that's not true, you'd get the effect that suddenly I,S,N registers
> restore to some older values which makes this registers de facto unusable.

But they're bloody value registers. They're *supposed* to restore to the
state they were in when the function was originally called. Which is
what copying semantics does.



Re: continuation enhanced arcs

2004-12-05 Thread Piers Cawley
Luke Palmer <[EMAIL PROTECTED]> writes:

> Piers Cawley writes:
>> I'd submit that, in the vast majority of cases you're not going to be
>> dealing with full continuations, and on the occasions when you are the
>> programmer using them will be aware of the cost and will be willing to
>> pay it.
>
> Yeah probably.  Except the problem isn't the cost.  The problem is the
> semantics.  If you copy the registers, then when you invoke the
> continuation, their *values* restore to what they were when you made the
> continuation.  These are not proper semantics, and would result in
> subtle, incorrect infinite loops.

PMCs don't relocate, so the values you're restoring are simply the
addresses of said PMCs. The numeric registers are value registers
anyway, so no problem there (since there's no way of making a pointer to
the contents of such a register, AFAICT). I'm not sure about string
registers.

And anyway, copying is how it used to work, and work it did, albeit slowly.


Re: continuation enhanced arcs

2004-12-05 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Matt Fowles <[EMAIL PROTECTED]> wrote:
>
>> Thanks for the clear explanation.  I did not realize that S registers
>> could switch pointers, that does make things a little harder.  I have
>> a recommendation for a possible hybrid solution.  Incur the cost of
>> spilling I,S,N registers heavily.  Restore the state of P register.
>
> My conclusion was that with the copying approach I,S,N registers are
> unusable.

But you only need to copy when the frame you're restoring is a full
continuation (and, actually, if copy on write works at a per register
level, copy on write might be the way to go). If it's a return
continuation you can simply use the stored state. 

I'd submit that, in the vast majority of cases you're not going to be
dealing with full continuations, and on the occasions when you are the
programmer using them will be aware of the cost and will be willing to
pay it.



Re: continuation enhanced arcs

2004-11-28 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> We don't have a problem WRT register preservation, the problem arises
>>> due to register re-using.
>
>> Ah! [a light goes on over Piers's head].
>
>>>> Or am I missing something fundamental?
>
>>> I don't know ;)
>
>> I was. Hmm... bugger. So, unless we make the register allocator solve
>> the halting problem, the rule becomes "If you're playing silly beggars
>> with continuations and you're expecting to get at something in a
>> 'surprising' way, stuff it in a lexical or we guarantee that you will be
>> anally violated by an enraged waterbuffalo that's just sick to death of
>> non-determinism"?
>
> This would make quite a fine explanation in the docs, except that's a
> bit unclear about "stuff *it*". The waterbuffalo is concerned about
> preserved *temporary* variables too.

I just thought of a heuristic that might help with register
preservation:

A variable/register should be preserved over a function call if either of the
following is true:

1. The variable is referred to again (lexically) after the function has
   returned.  
2. The variable is used as the argument of a function call within the
   current compilation unit.

Condition 2 is something of a bugger if you have big compilation units,
but register allocation is always going to be a pain when there are big
compilation units around.
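The two conditions can be sketched as a liveness check over a toy instruction list (the IR shape here is made up for illustration; Parrot's allocator does not use this representation):

```python
def must_preserve(var, call_index, instrs):
    """Preserve `var` across the call at `call_index` if (1) it is used
    after the call returns, or (2) it is ever a call argument in this
    compilation unit."""
    # Condition 1: referred to (lexically) after the call returns.
    used_after = any(var in ins.get("uses", ())
                     for ins in instrs[call_index + 1:])
    # Condition 2: passed as an argument to some call in the unit.
    call_arg = any(ins.get("op") == "call" and var in ins.get("args", ())
                   for ins in instrs)
    return used_after or call_arg

instrs = [
    {"op": "set",   "defs": ("a",)},
    {"op": "call",  "args": ("a",)},   # a escapes via a call
    {"op": "set",   "defs": ("b",)},
    {"op": "call",  "args": ()},
    {"op": "print", "uses": ("b",)},   # b is live across the call
]
print(must_preserve("a", 1, instrs))   # True (condition 2)
print(must_preserve("b", 3, instrs))   # True (condition 1)
```

Condition 2 is what makes big compilation units painful: it scans the whole unit for every variable, regardless of whether the argument-taking call can actually capture a continuation.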



Re: continuation enhanced arcs

2004-11-26 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Okay, I'm confused, I thought that the whole point of a caller saves,
>> continuation passing regime was that the caller only saves what it's
>> interested in using after the function returns.
>
> We don't have a problem WRT register preservation, the problem arises
> due to register re-using.
>
>> ... Exactly *where* that
>> return happens, and whether it happens more than once, is completely
>> irrelevant from the point of view of the caller.
>
> The return can only happen, where the normal function call would have
> returned, but anyway.
>
>> ... ISTM that the register
>> allocator should work on the principle that anything it didn't save
>> before it made the call will be toast afterwards.
>
> Yes. But - please remember your example "Fun with nondeterministic searches".
> Here's the relevant piece of code from main:
>
>arr1=[1,3,5]
>arr2=[1,5,9]
>x = choose(arr1)
>y = choose(arr2)
>$P0 = find_lex "fail"
>$P0()
>
> You know, both "choose" calls capture the continuation and backtrack via
> "fail" (basically). But the register allocator isn't aware of that. The
> control flow graph (CFG) is linear top down, with new basic blocks
> starting after each function call. "arr2" is obviously used around a
> call and allocated in the preserved (non-volatile) register area. This
> works fine.
>
> Now the register allocator assigns a register to "$P0". It finds the
> register that "arr2" had usable, because in a linear CFG, there's no way
> that "arr2" might be used again. So that register is considered being
> available. Now if $P0 happens to get the register that "arr2" had,
> backtracking through the call to "fail()" obviously fails, as "arr2" is
> now the Closure PMC. And that was exactly the case.

Ah! [a light goes on over Piers's head].

>
>> Or am I missing something fundamental?
>
> I don't know ;) 

I was. Hmm... bugger. So, unless we make the register allocator solve
the halting problem, the rule becomes "If you're playing silly beggars
with continuations and you're expecting to get at something in a
'surprising' way, stuff it in a lexical or we guarantee that you will be
anally violated by an enraged waterbuffalo that's just sick to death of
non-determinism"?
 




Re: continuation enhanced arcs

2004-11-25 Thread Piers Cawley
Okay, I'm confused, I thought that the whole point of a caller saves,
continuation passing regime was that the caller only saves what it's
interested in using after the function returns. Exactly *where* that
return happens, and whether it happens more than once, is completely
irrelevant from the point of view of the caller. ISTM that the register
allocator should work on the principle that anything it didn't save
before it made the call will be toast afterwards. Doing anything more
sophisticated than optimizing register allocation on a sub by sub basis
seems like a license for getting completely and utterly plaited.

Or am I missing something fundamental?


Re: Why is the fib benchmark still slow - part 1

2004-11-05 Thread Piers Cawley
Miroslav Silovic <[EMAIL PROTECTED]> writes:

> Leopold Toetsch wrote:
>
>>> I believe that you shouldn't litter (i.e. create an immediately
>>> GCable object) on each function call - at least not without
>>> generational collector specifically optimised to work with this.
>>
>>
>> The problem isn't the object creation per se, but the sweep through
>> the *whole object memory* to detect dead objects. It's of course true,
>> that we don't need the return continuation PMC for the fib benchmark.
>
> Well, creation is also the problem if you crawl the entire free heap
> before triggering the next GC round. You get a potential cache miss on
> each creation and on each mark and on each destruction. To keep GC out
> of the way, the entire arena has to be confined to cache size or less.
>
>> But a HLL translated fib would use Integer PMCs for calculations.
>
> Hmm, I'm nitpicking here, but it's not how e.g. Psyco works. It
> specialises each function to specific argument types and recompiles for
> each new argument type set. Assuming that you'll call only very few
> functions with more than 1-2 type combinations, this is a good tradeoff.
> It also removes a lot of consing, especially for arithmetics.
>
>>
>>> ...  This would entail the first generation that fits into the CPU
>>> cache and copying out live objects from it. And this means copying GC
>>> for Parrot, something that (IMHO) would be highly nontrivial to
>>> retrofit.
>>
>>
>> A copying GC isn't really hard to implement. And it has the additional
>> benefit of providing better cache locality. Nontrivial to retrofit or
>> not, we need a generational GC.

The catch with generational GC is that, once you have guaranteed that
destructors are called promptly, you still have to sweep the whole
arena every time you leave a scope.


Re: Closures and subs

2004-11-05 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Klaas-Jan Stol <[EMAIL PROTECTED]> wrote:
>>>> Hello,
>>>
>>>> I've been playing with closures and subs but I have a little bit of
>>>> trouble with those.
>>>
>>>  newsub $P0, .Closure, _foo
>>>  $P0(q)
>>>  newsub $P0, .Closure, _foo
>>>  $P0(q)
>>>
>>> Closures have to be distinct.
>
>> Does this *really* mean that, if I create a closure in a function and
>> return it to my caller, that closure can only be invoked once?
>
> No, it can be invoked as often you like.
>
> But above case seems to be different and very similar to what I already
> asked:
>
>   (define (choose . all-choices)
> (let ((old-fail fail))
>   (call-with-current-continuation
>
> You remember that snippet, it's now a test in t/op/gc.t. I had to insert
> the line below XXX and use a second closure, which has the "arr2" in it's
> context.
>
>  newsub choose, .Closure, _choose
>  x = choose(arr1)
>
>  # XXX need this so these closures have different state
>  newsub choose, .Closure, _choose
>  y = choose(arr2)
>
> The question was, if that's technically correct.

Ah... of course, I was asking a stupid question. Always the most
plausible hypothesis, I think. 

-- 
Piers 
Oh, predicting the future's easy, you just make a continuation at the
point you're asked, say anything and head off into the future to find what
happens, then take the continuation back to the question and give a more
accurate answer. -- me in #parrot



Re: Closures and subs

2004-11-04 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Klaas-Jan Stol <[EMAIL PROTECTED]> wrote:
>> Hello,
>
>> I've been playing with closures and subs but I have a little bit of
>> trouble with those.
>
>  newsub $P0, .Closure, _foo
>  $P0(q)
>  newsub $P0, .Closure, _foo
>  $P0(q)
>
> Closures have to be distinct.

Does this *really* mean that, if I create a closure in a function and
return it to my caller, that closure can only be invoked once?

If it does, this is slightly more broken than a very broken thing.


Re: [CVS ci] indirect register frames 14 - cache the register frame

2004-11-02 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 2:30 PM -0500 11/2/04, Matt Fowles wrote:
>>All~
>>
>>I don't like the idea of having to dig down through the entire return
>>chain promoting these guys.  Is there a reason not to use DOD/GC to
>>recycle continuations?
>
> Yes. Speed.
>
> While you can skip some of the digging (since you can stop at the 
> first promoted one) the reality is that 90%+ of the return 
> continuations will *never* need promoting. Not bothering to make the 
> return continuations true full continuations until they're actually 
> needed as one lets us immediately recycle return continuations as 
> soon as they're used in the near-overwhelming majority of the cases 
> -- that is, when nothing can possibly have a hold of 'em. That leaves 
> a lot fewer objects for the DOD to sweep through, as well as speeding 
> up allocation (since we're more likely to have ones at hand, and 
> likely in cache too) of the things in the first place.

And, dammit, making a full continuation isn't something a programmer
should do lightly. 


Re: Are we done with big changes?

2004-11-02 Thread Piers Cawley
Jeff Clites <[EMAIL PROTECTED]> writes:

> On Nov 1, 2004, at 6:14 AM, Dan Sugalski wrote:
>
>> Because I need to get strings working right, so I'm going to be
>> implementing the encoding/charset library stuff, which is going to
>> cause some major disruptions.
>
> Please tag cvs before checking this in.

Release candidate?



Re: [pid-mode.el] cannot edit

2004-10-04 Thread Piers Cawley
Stéphane Payrard <[EMAIL PROTECTED]> writes:

> On Fri, Oct 01, 2004 at 06:09:37PM +0200, Jerome Quelin wrote:
>> Hi,
>> 
>> I tried the pir-mode provided in the editor/ subdir. And when opening a
>> .imc file (I've associated .pir with pir-mode + font-lock-mode), I
>> cannot type spaces or carriage returns:
>> 
>> (24) (warning/warning) Error caught in `font-lock-pre-idle-hook':
>> (invalid-regexp Invalid syntax designator)
>> 
>> And the minibuffer tells me:
>> Symbol's function definition is void: line-beginning-position
>> 
>> I'm using xemacs 21.4.14
>> 
>> Is the pir-mode.el file complete? Or am I encountering a bug in
>> it?
>
> This function is defined in emacs:
>
>   line-beginning-position is a built-in function.
>   (line-beginning-position &optional N)
>
>   Return the character position of the first character on the current line.
>   With argument N not nil or 1, move forward N - 1 lines first.
>   If scan reaches end of buffer, return that position.
>
>   The scan does not cross a field boundary unless doing so would move
>   beyond there to a different line; if N is nil or 1, and scan starts at a
>   field boundary, the scan stops as soon as it starts.  To ignore field
>   boundaries bind `inhibit-field-text-motion' to t.
>
>   This function does not move point.
>
> switch to emacs. :)

Or patch pir-mode.el, your choice.


Re: Continuation re-invocation

2004-09-27 Thread Piers Cawley
Jeff Clites <[EMAIL PROTECTED]> writes:

> Two questions:
>
> 1) Is it supposed to be possible to invoke a given continuation more 
> than once, or is it "used up" once invoked?

Yes, you should be able to invoke one more than once.

>
> 2) Am I supposed to be able to "jump down" the stack by invoking a 
> continuation? To be specific, if "A calls B calls C", and C invokes the 
> return continuation which takes it directly back to A, can something 
> later invoke the return continuation which leads to B? (Emphasizing 
> there the notion that something "skipped over" a continuation, then 
> later came back and used it.)

Yes.
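Multi-shot continuations of the kind being asked about can be loosely mimicked with explicit continuation-passing style. A hypothetical Python sketch (not Parrot's mechanism, just the control-flow shape): C captures B's "return continuation", control returns all the way to A, and the captured continuation is later re-invoked.

```python
saved = []

def c(value, k):
    saved.append(k)      # capture B's return continuation for later
    k(value + 1)

def b(value, k):
    # k plays the role of B's caller's (A's) continuation.
    c(value, lambda r: k(r * 2))

results = []
b(10, results.append)    # normal flow: (10 + 1) * 2 = 22
saved[0](100)            # re-invoke the captured continuation: 100 * 2 = 200
assert results == [22, 200]
```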


Re: towards a new call scheme

2004-09-23 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski wrote:
>
>> At 4:15 PM +0200 9/23/04, Leopold Toetsch wrote:
>>>   get_cc(OUT Px)  # 1) get current continuation, i.e. the return cont.
>> In a rare, possibly unique burst of opcode parsimoniousness... perhaps
>> this would be a good thing for the interpinfo op. 
>
> That's fine too.
>
>
>>>   return_cc() # 2) return via current continuation
>>>
>>> 1) is only needed for special purposes, like passing the continuation
>>> on to a different place. The normal way to return from a sub will be
>>> 2)
>>>
>>> If that's in, access to P1 as a return continuation will be
>>> deprecated.
>> Well... should we? We're still passing the return continuation *in* in
>> P1, unless you want to move it out and unconditionally make it a
>> parameter to invoke. 
>
> Well, basically, if an interpreter template is used (hanging off the 
> subroutine), the previous interpreter template is the return 
> continuation. That means that for the usual case (function call and 
> return) there isn't any need to construct a return continuation and put 
> it somewhere. The return continuation would only be needed to create a 
> real continuation out of it and pass it along somewhere.
>
> If that calling scheme doesn't fly, its still better to have specific 
> locations in the interpreter context that hold the sub and the return 
> continuation to allow simple function backtrace for the error case or 
> introspection. That's currently not possible, because P0 and P1 can be 
> swapped out into the register backing stack and reused to hold something 
> totally different.
>
>> I think I'd also like to make a change to sub invocation, too, to
>> allow passing in alternate return continuations, which makes tail
>> calls easy. 
>
> Ok. Good point. We could get rid of C (call the sub in Px w/o 
> calling conventions) in favor of C. Anyway, the 
> visible part of a return continuation (in above opcode or C 
> would be a continuation. The "normal" case (call/return) could just have 
> that continuation internally in the context. No additional PMC is 
> constructed if not needed.

I could be wrong here, but it seems to me that having a special
'tailinvoke' operator which simply reuses the current return
continuation instead of creating a new one would make for rather faster
tail calls than fetching the current continuation out of the interpreter
structure, then invoking the target sub with this new
continuation (ISTM that doing it this way means you're going to end up
doing a chunk of the work of creating a new return continuation anyway,
which rather defeats the purpose.)
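The `tailinvoke` idea — hand the callee the caller's own return continuation instead of minting a new one — reads naturally in continuation-passing style. A hypothetical Python sketch (invented function names):

```python
def add_cps(x, y, k):
    k(x + y)

def double_cps(x, k):
    # Tail call: reuse our own continuation k directly, instead of
    # allocating a fresh continuation that would only forward to k.
    add_cps(x, x, k)

out = []
double_cps(21, out.append)
assert out == [42]
```

When `add_cps` invokes `k`, control returns straight to `double_cps`'s caller — no intermediate return frame was ever created, which is exactly the saving Piers is after.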



Re: Tight typing by default?

2004-08-25 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> It seems pretty clear that the general opinion is that operations 
> should produce the tightest reasonable type for an operation--integer 
> multiplication should produce an integer unless it can't, for example.
>
> For our purposes I think the typing should go:
>
>platform int->float->bignum
>
> with an operation producing a type no tighter than the loosest type 
> in the operation. (so int/float gives a float, float-bignum gives a 
> bignum)
>
> This seem reasonable?

No. int->bignum->float 

In other words, floats only happen if you specifically introduce them
(or take a square root or something).
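Python 3's arithmetic happens to follow the ordering argued for here: integers widen to arbitrary precision automatically, and floats appear only when explicitly introduced.

```python
import math

n = 2 ** 100            # int widens to a bignum exactly; no float involved
assert n + 1 - n == 1   # still exact at this magnitude

x = n / 1               # true division introduces a float...
assert isinstance(x, float)
assert x + 1 - x == 0.0 # ...and exactness is gone

r = math.sqrt(2)        # "take a square root or something"
assert isinstance(r, float)
```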


Re: Numeric semantics for base pmcs

2004-08-25 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>> At 8:45 PM +0200 8/24/04, Leopold Toetsch wrote:
>>>Dan Sugalski <[EMAIL PROTECTED]> wrote:
>
  Nope -- we don't have bigints. :)
>>>
>>>Pardon, sir?
>
>> We've got the big number code, but I don't see much reason to
>> distinguish between integers and non-integers at this level -- the
>> only difference is exponent twiddling.
>
> Ah, ok. BigInt as a degenerated BigNum. I still prefer the notion that
> adding or multiplying to integers give a BigInt on overflow.
>
> While at num vs int: do we automatically downgrade to int again?
>
>   6.0/2.0 = 3.0 or 3 ?

No. Once a real, always a real. I see no harm in collapsing appropriate
rationals to ints mind...
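Python's numeric tower behaves the same way on both points: float results never silently downgrade to int, while exact rationals do collapse to integral values.

```python
from fractions import Fraction

assert 6.0 / 2.0 == 3.0
assert isinstance(6.0 / 2.0, float)   # once a real, always a real

q = Fraction(6, 2)                    # exact rational arithmetic...
assert q == 3 and q.denominator == 1  # ...collapses to an integral value
```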


Re: The new Perl 6 compiler pumpking

2004-08-04 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> There's not been a big public announcement, so it's time to change that.
>
> I'd like everyone to give a welcome to Patrick Michaud, who's 
> volunteered to officially take charge of getting the Perl 6 compiler 
> module written. I've put in yet another nudge to get the 
> parrot-compilers list started, and get the perl6-internals list 
> renamed to parrot-internals (which is what it really is) so we can 
> get things properly sorted out, as I expect Patrick will be digging 
> into the fun stuff pretty darn soon.

Mmm... three lists to summarize...



Re: This week's summary

2004-07-29 Thread Piers Cawley
Brent 'Dax' Royal-Gordon <[EMAIL PROTECTED]> writes:

> Piers Cawley wrote:
>> Brent 'Dax' Royal-Gordon <[EMAIL PROTECTED]> writes:
>>>Care to explain what those are, O great math teacher?
>> What's a math teacher?
>
> It's the right^H^H^H^H^HAmerican way to say "maths teacher".

You mean American and 'right' are not equivalent? Wow.



Re: This week's summary

2004-07-28 Thread Piers Cawley
Brent 'Dax' Royal-Gordon <[EMAIL PROTECTED]> writes:

> The Perl 6 Summarizer wrote:
>>   The infinite thread
>> Pushing onto lazy lists continued to exercise the p6l crowd (or at
>> least, a subset of it). Larry said that if someone wanted to hack
>> surreal numbers into Perl 6.1 then that would be cool.
>
> Care to explain what those are, O great math teacher?

What's a math teacher?


Re: This week's summary

2004-07-21 Thread Piers Cawley
Austin Hastings <[EMAIL PROTECTED]> writes:

> --- The Perl 6 Summarizer <[EMAIL PROTECTED]> wrote:
>>  Okay, so the interview was on Tuesday 13th of July. 
>> It went well; I'm going to be a maths teacher.

[...]

> As we all know, time flies like an arrow, but fruit flies like a
> banana. If you found this mathematical summary helpful, please consider
> paying your tuition you ungrateful little bastards."

** Gurfle **


Re: the whole and everything

2004-07-20 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>
>> Leo, we've talked about this before. The sensible and straightforward
>> thing to do in a case like this is to tag in the sub pmc which
>> register frames are used by the sub.
>
> And what, if the sub calls another sub?

Then the call to the inner sub saves the registers that are used by the
inner sub.


Re: The Pie-thon benchmark

2004-06-26 Thread Piers Cawley
Ask Bjørn Hansen <[EMAIL PROTECTED]> writes:

>> Andy Wardley <[EMAIL PROTECTED]> writes:
>>
>>> Dan Sugalski wrote:
 it's not exactly exciting watching two people hit return three times
 in front of a roomful of people.
>>>
>>> Although watching two people hit each other in the face with custard
>>> pies three times in front of a roomful of people may be a lot more
>>> fun.
>>
>> Mutter. Mutter. Lack of bloody money. Mutter. Not coming to
>> OSCON. Mutter. Hope someone videos it. Mutter. And sticks it on the
>> web. Mutter.
>
> I'm planning to bring a DV camera to OSCON -- and I'm sure others will
> too.  =)
>
>
>   - ask
>
> ps. Andy, sucks we won't get to see you though!

Err... I think you got your quoting wrong; it was me that did the
muttering. 



Re: The Pie-thon benchmark

2004-06-24 Thread Piers Cawley
Andy Wardley <[EMAIL PROTECTED]> writes:

> Dan Sugalski wrote:
>> it's not exactly exciting watching two people hit return three times 
>> in front of a roomful of people.
>
> Although watching two people hit each other in the face with custard 
> pies three times in front of a roomful of people may be a lot more
> fun.

Mutter. Mutter. Lack of bloody money. Mutter. Not coming to
OSCON. Mutter. Hope someone videos it. Mutter. And sticks it on the
web. Mutter.


Re: Another small task for the interested

2004-06-24 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> On Sun, 20 Jun 2004, Ion Alexandru Morega wrote:
>
>> Dan Sugalski wrote:
>> > I checked in more of PDD 17, detailing parrot's base types. Some of
>> > those types definitely don't exist (like, say, the string and bignum
>> > type...) and could definitely use implementing. Should be fairly
>> > straightforward, and would be a good way to get up to speed on writing
>> > PMC classes.
>>
>> Hello, i'm new to this list and to parrot programming, so i decided to
>> start with something simple. I implemented a String PMC that is pretty
>> much complete, it compiles, but i haven't tested it yet. It would be
>> great if someone had a look at it, and later when i write some tests
>> i'll check in a patch. The .pmc is attached.
>
> Sorry this one sat so long. (Piers reminded me with the summary) 
  
It worked then!

> I checked in the PMC. Tests would be cool, to be sure. :)



Re: [Summary] Help

2004-06-19 Thread Piers Cawley
Piers Cawley <[EMAIL PROTECTED]> writes:

[...]

> Thanks in advance.

And thanks to Sebastian Riedel, Brent Royal-Gordon, Robert Spier and
the aforementioned Jeffrey Dik I have filled in my lacunae and I'm
ready to get my summary on when Monday rolls around.



Re: [Summary] Help

2004-06-19 Thread Piers Cawley
Piers Cawley <[EMAIL PROTECTED]> writes:

> For various annoying reasons involving a pernickety external drive and
> a service centre that, after more than a week *still* hasn't taken a
> look at my main machine, I find myself missing a tranche of messages to
> perl6-internals and perl6-language. If some kind soul were to send me
> mbox files containing messages in the period from, say, the first of
> June through to the 17th, then they would have earned my undying
> gratitude. 
>
> Thanks in advance.

Thanks to Jeffrey Dik's extreme promptness, I now have an archive for
perl6-internals and I'm just looking for perl6-language. 


[Summary] Help

2004-06-19 Thread Piers Cawley
For various annoying reasons involving a pernickety external drive and
a service centre that, after more than a week *still* hasn't taken a
look at my main machine, I find myself missing a tranche of messages to
perl6-internals and perl6-language. If some kind soul were to send me
mbox files containing messages in the period from, say, the first of
June through to the 17th, then they would have earned my undying
gratitude. 

Thanks in advance.

-- 
Piers


Re: PerlHash using PMCs for keys?

2004-06-07 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Just use a Key PMC for $P2.
>
>> So, just for fun I added the following test to t/pmc/perlhash.t:
>
>> new P10, .PerlHash
>> new P1, .PerlString
>> set P1, "Bar"
>> new P2, .Key
>> set P2, P1
>   ^^^
>
> C aliases the two PMCs, both are pointing to the same PMC
> then. You'd like to use C here.

Of course I would. Oops.

>
>> new P3, .Key
>> set P2, P1
>   ^^
> typo?

Yes, and should be an assign.


>> Perl 6 supports using full on objects as keys in its hashes. It seems
>> that having parrot do the same would be a Good Thing.
>
> The main problem here is, what does ...
>
>   set P0, P1[P2]
>
> ... mean: keyed_string or keyed_integer, i.e. hash or array lookup.
> The Key PMC provides this information, a plain string could be used in
> both ways:
>
>   set P2, "42"
>   set P0, P1[P2]
>
> It depends of course on the aggregate, what kind of key it would like to
> use, but e.g. for an OrderedHash PMC, which supports lookup by string
> *and* by integer, the usage is ambiguous.

I'm not advocating removing the key object for disambiguating this sort
of thing. 

> We could change C to extract a number with
> C from any PMC and do the same with C but some
> keyed usage needs a defined type.

I'd argue that at the base level all objects should have a 'hash_int'
method (or some other name to be decided) which generates a hash
integer appropriate to the object. See countless smalltalk images for
such usage. 
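The "every object answers a hash" convention Piers points to in Smalltalk is also how Python dictionaries take arbitrary objects as keys: the object supplies `__hash__` (the `hash_int` equivalent) plus `__eq__`. A sketch with a hypothetical Key class mirroring the perlhash.t experiment:

```python
class Key:
    def __init__(self, value):
        self.value = value

    def __hash__(self):                 # the "hash_int" method
        return hash(self.value)

    def __eq__(self, other):
        return isinstance(other, Key) and self.value == other.value

h = {}
h[Key("Bar")] = "Food\n"
# Two Keys built from the same value retrieve the same entry --
# the behaviour the t/pmc/perlhash.t test expected.
assert h[Key("Bar")] == "Food\n"
```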


Re: PerlHash using PMCs for keys?

2004-06-03 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Togos <[EMAIL PROTECTED]> wrote:
>> Should aggregate PMCs (like PerlHash) be able to take
>> PMCs as keys? I mean so that:
>
>>   $P0 = $P1[$P2]
>
> Just use a Key PMC for $P2.
>
>   $P2 = new Key
>   $P2 = "key_string"
>   ...

So, just for fun I added the following test to t/pmc/perlhash.t:

new P10, .PerlHash
new P1, .PerlString
set P1, "Bar"
new P2, .Key
set P2, P1

set P10[P2], "Food\n"
set S0, P10[P2]
print S0

new P3, .Key
set P2, P1
set S1, P10[P2]
print S1

end

Just to test that two Keys created from the same PMC would fetch the
same thing from the hash. 

Imagine my surprise when the test blew up with 'Key not a string!'
before producing any output.

Perl 6 supports using full on objects as keys in its hashes. It seems
that having parrot do the same would be a Good Thing.


Re: One more thing...

2004-06-03 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> Then, at runtime, 'fred' gets set up as the implementation for an op.
>
>> Which, given your implementation, means that each function call that
>> fred makes should be protected with savetop/restoretop pairs. Oops.
>
> The implementation checks register usage of the called sub at *runtime*,
> or more precisely at first invocation of the sub and caches the value.
> It would need a notification (similar to the method cache), if the sub
> got recompiled.

Who's talking about recompiling? I'm talking about fred being
registered as the handler for some op at runtime. ISTM that either
fred has to get recompiled (assuming the source is kicking about) so
that every function call it makes is guarded with a saveall, or you
*always* do a saveall as you call fred so that the compiled in,
optimized saves will continue to work.

With the fingerprint approach I outlined, one can at least avoid the
saveall in some cases. 


Re: One more thing...

2004-06-01 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley wrote:
>> But under this scheme, the implementing function will have to do a
>> saveall for every function it calls because it doesn't know what
>> registers its caller cares about. And you're almost certainly going
>> to want to call other functions to do the heavy lifting for all the
>> usual reasons of code reuse. 
>
> Yep that's true. As well as with real caller saves. Which leads back to
> my (almost) warnocked "proposal":

Consider a sub, call it fred, that calls other subs and only uses PMC
registers. At compile time, you wrap those calls in appropriate
pushtopp/poptopp pairs.

Then, at runtime, 'fred' gets set up as the implementation for an op. 

Which, given your implementation, means that each function call that
fred makes should be protected with savetop/restoretop pairs. Oops.





Re: One more thing...

2004-05-26 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
> [ calculating registers to save ]
>
>>> ... once per sub per location where the sub is called from. But there
>>> isn't any knowledge that a sub might be called. So the cost is actually
>>> more per PMC instruction that might eventually run a PASM MMD. This is,
>>> when its done right, or ...
>
>> No. Once per compilation unit.
>
> An example:
>
>  .sub foo
>
># a lot of string handling code
># and some PMCs
>$P0 = concat $P0, $S0# <<< 1) calculate: save P, S here
># now a lot of float code
># no strings used any more
># and no branch back to 1)
>$N1 = 47.11  # $N1's live starts here
>$P0 = $P1 + $N1  # <<< 2) calculate: save P, N regs
>$P2 = $P0 + $N1  # <<< 3) calculate: save P regs
># no N reg used here
>  .end
>
> At 1) the caller is not interested in preserving N-registers, these
> aren't used there. Saving everything, the caller needs saving, ends up
> with C in non trivial subroutines.
>
> Using your proposal would need a lot of storage for the saved
> register ranges.
>
> If the calculation is done based on the called subroutine, it's not
> unlikely that only a few registers have to be preserved, e.g. no
> N-registers for the overloaded C and no string registers for the
> overloaded C.
>
> This doesn't violate the principle of caller saves: all that needs
> preserving from the caller's POV is preserved.

But under this scheme, the implementing function will have to do a
saveall for every function it calls because it doesn't know what
registers its caller cares about. And you're almost certainly going
to want to call other functions to do the heavy lifting for all the
usual reasons of code reuse. I can see a situation where you end up
with 

   .sub implementing_function
  saveall
  invokecc user_callable_implementing_function
  restoreall
  invoke P1
   .end

   .sub user_callable_implementing_function
  do_this(...)
  do_that(...)
  do_the_other(...)
  ...
   .end

simply because you want to follow good coding practice. You're right
that, in the limiting case, my 'fingerprinting' approach is going to
reduce to a saveall, but the example you give could be broken
up into 

   .sub foo
 $P0 = stringy_stuff($P0)
 ($P0, $P2) = floaty_stuff($P0)
 ...
   .end

which will simply need save P registers (and the called functions will
be able to arrange for efficient saves too...)



Re: One more thing...

2004-05-11 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>> 
>>>Not quite for this case. Or in theory yes, but... As calling the
>>>subroutine mustn't have any changes to the caller's registers, it's just
>>>simpler to save these registers that the subroutine might change.
>
>> But generating the save signature for a given sub is a compile time cost
>> that only needs to be paid once for each sub and shoved on an I register
>
> ... once per sub per location where the sub is called from. But there
> isn't any knowledge that a sub might be called. So the cost is actually
> more per PMC instruction that might eventually run a PASM MMD. This is,
> when its done right, or ...

No. Once per compilation unit. Stick it in a high register and keep it nailed there
for the duration of the sub. Specify this register as part of the
calling conventions; the right value will then get restored at any
function return and there's no need to regenerate it. 


Re: One more thing...

2004-05-11 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> - if it calls a PASM routine, registers have to be preserved. Which
>>>   registers depend on the subroutine that actually gets called (ok, this
>>>   information - which registers are changed by the sub - can be attached
>>>   to the Sub's metadata)
>
>> No, we're in caller saves remember.
>
> Ok, yes. But MMD and delegated functions are a bit different. The caller
> isn't knowing that it's a caller. The PASM is run from the inside of the
> C code.
>
>> ... The registers that need saving are
>> dependent on the caller.
>
> Not quite for this case. Or in theory yes, but... As calling the
> subroutine mustn't have any changes to the caller's registers, it's just
> simpler to save these registers that the subroutine might change.
>
>> ... Since the registers used by a function at any
>> point are statically determined, maybe add's signature could be altered
>> to take an integer 'save flags' argument specifying which registers
>> need to be preserved for the caller,
>
> This has a performance penalty for the non-MMD case. I can imagine that
> overloaded MMD functions are simpler (in respect of register usage) then
> the caller's code. So it seems that saving, what the MMD sub might
> change on behalf of the caller is just more effective.

But generating the save signature for a given sub is a compile time cost
that only needs to be paid once for each sub and shoved on an I register
(which could, of course, be standardized). An MMD sub with a PASM
implementation simply looks at the appropriate register, saves the right
stuff, sets up a return continuation and has the interpreter invoke
it. Which leaves a correctly set up continuation chain and a PASM
implementation which can do whatever the heck it likes, including
making continuations, closures etc that can be returned to multiple
times because it got invoked in the normal runloop.

The work has to be done either way, but by arranging things so that
everything looks like caller saves (and so that there is no MMD barrier
to continuations) just seems to make the most sense. BTW, if it's a
continuation barrier does that also mean it's an exception barrier?


Re: One more thing...

2004-05-07 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>> At 11:35 AM +0200 4/30/04, Leopold Toetsch wrote:
>>>Dan Sugalski <[EMAIL PROTECTED]> wrote:
  If we go MMD all the way, we can skip the bytecode->C->bytecode
  transition for MMD functions that are written in parrot bytecode, and
  instead dispatch to them like any other sub.
>>>
>>>Not really. Or not w/o significant overhead for MMD functions
>>>implemented in C.
>
>> Well... about that. It's actually easily doable with a bit of
>> trickery. We can either:
>
> This still doesn't work. Function calls just look different then
> "plain" opcodes like "add Px, Py, Pz".
> - it's not known, if C calls a PASM subroutine
> - if it calls a PASM routine, registers have to be preserved. Which
>   registers depend on the subroutine that actually gets called (ok, this
>   information - which registers are changed by the sub - can be attached
>   to the Sub's metadata)

No, we're in caller saves remember. The registers that need saving are
dependent on the caller. Since the registers used by a function at any
point are statically determined, maybe add's signature could be altered
to take an integer 'save flags' argument specifying which registers
need to be preserved for the caller, then if MMD determines that the
call needs to go out to a PASM function, the appropriate registers can
be saved.



Re: A12: The dynamic nature of a class

2004-04-28 Thread Piers Cawley
chromatic <[EMAIL PROTECTED]> writes:

> On Fri, 2004-04-23 at 05:42, Dan Sugalski wrote:
>
>> Since any type potentially has assignment behaviour, it has to be a 
>> constructor. For example, if you've got the Joe class set such that 
>> assigning to it prints the contents to stderr, this:
>> 
>> my Joe $foo;
>> $foo = 12;
>> 
>> should print 12 to stderr. Can't do that if you've not put at least a 
>> minimally constructed thing in the slot.
>
> (hypothetical pre-breakfasty musings)
>
> Such as a PerlUndef with the 'expected_type' property set to 'Joe'?

PerlUndef's behaviour is one of the Parroty things that makes me rather
nervous. It blurs (obliterates) the line between container and
value rather spectacularly. ISTM that declaring 'my Joe $foo' should
create a PerlScalar with an expected type of Joe and a PerlUndef as its
contents. The PerlScalar's definedness test would simply be a test for
the definedness of its contents.




Re: A12: The dynamic nature of a class

2004-04-25 Thread Piers Cawley
Jeff Clites <[EMAIL PROTECTED]> writes:

> On Apr 23, 2004, at 11:04 AM, Simon Cozens wrote:
>
>> [EMAIL PROTECTED] (Jeff Clites) writes:
>>> So what does "$foo = 12" in that context actually mean in Perl6?
>>
>> Another interesting question is "in Perl 6, are variables typed,
>> values typed,
>> or a little of both?"
>>
>> It seems that Parrot has been working primarily on the assumption that
>> it's
>> values that are typed, and punting variable typing to the IMCC code
>> generation
>> layer.
>
> That's my worry--whether we have a problem. (No problem with typed
> values, of course.) If variables are typed, _and_ that typing is
> lexically scoped, then I think that we don't have a problem, and it can
> be handled at the compilation level, by inserting the appropriate type
> checks right before each assignment.

Which begs the question of what happens when you do something like:

my Dog $spot;
some_func(\$spot);

...

sub some_func($thing) {
$$thing = Cat.new();
}

It could be argued that our current PerlUndef is all very well, but it
confuses the roles of container and value. Maybe the declaration
above should translate to something like:

new $P0, 'PerlScalar';
$P0.set_type('Dog');
store_lex '$spot', -1, $P0
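The container/value split Piers sketches — a typed scalar whose *contents* carry definedness — can be modelled directly. A hypothetical Python sketch of that PerlScalar-with-PerlUndef-contents idea:

```python
class TypedScalar:
    """A container with an expected type; definedness lives in the contents."""
    def __init__(self, expected_type):
        self.expected_type = expected_type
        self.value = None               # the "PerlUndef" contents

    def assign(self, value):
        if not isinstance(value, self.expected_type):
            raise TypeError("expected " + self.expected_type.__name__)
        self.value = value

    def defined(self):
        return self.value is not None

class Dog: pass
class Cat: pass

spot = TypedScalar(Dog)                 # my Dog $spot;
assert not spot.defined()               # typed container, undef contents
spot.assign(Dog())
assert spot.defined()
try:
    spot.assign(Cat())                  # $$thing = Cat.new() through a ref
except TypeError:
    pass
else:
    raise AssertionError("type check should reject Cat")
```

Because the check lives in the container, it still fires when the assignment arrives through a reference, which is the case the `some_func(\$spot)` example raises.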
 


Re: OO benches

2004-04-17 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Aaron Sherman <[EMAIL PROTECTED]> wrote:
>> On Fri, 2004-04-16 at 18:18, Leopold Toetsch wrote:
>
>> Sorry, I gave the wrong impression. I meant it looks suspiciously like
>> Python is doing a lazy construction on those objects, not that there is
>> anything wrong with the benchmark.
>
> No, I don't think that this is happening. Parrot's slightly slower
> object instantiation is due to register preserving mainly. The "__init"
> code is run from inside the "new PObj, IClass" opcode. As its not known
> that a method call is happening here, we can't use register preserving
> operations that only save needed registers--we have to save all
> registers. These two memcpys are the most heavy part of the operation.

Maybe we should rethink that then and make allocation and
initialization two different phases. Or dictate that 

   new PObj, IClass

should be treated as if it were a function call with all the caller
saves implications that go with it. 
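The allocate/initialize split proposed here is how Python object creation already works: `__new__` does the bare allocation, and `__init__` runs afterwards as an ordinary (caller-visible) call — which is exactly what would let the runtime apply normal caller-saves rules to the second phase. A minimal sketch:

```python
class Widget:
    def __new__(cls):
        obj = super().__new__(cls)      # phase 1: bare allocation
        obj.log = ["allocated"]
        return obj

    def __init__(self):                 # phase 2: ordinary method call
        self.log.append("initialized")

w = Widget()
assert w.log == ["allocated", "initialized"]
```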




Re: Attribute questions

2004-04-12 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 6:53 PM +0100 4/8/04, Mark Sparshatt wrote:
>>I've got a couple of questions about Attributes in Parrot.
>>
>> PDD15 says that both classes and objects have a list of attributes and
>> it is possible to add or remove attributes to a class but not an
>> object.
>>
>> Am I right in thinking that the attribute list for an object is just a
>> copy of the attribute list for the class, which is used to store the
>> objects values?
>>
>> It seems that for Ruby instance variables can be modelled using
>> attributes, but I couldn't see any way of handling class
>> variables. So, what is the recommended way of handling them?
>
> First, one takes the bat labeled "Metaclasses" and smacks the designer
> in the head with it. Then, once enlightenment has hit, we implement
> metaclasses and you add class variables as new attributes on a class'
> metaclass. I think.

Enlightenment *has* hit then?

> Alternately you can stick 'em in as plain variables in the class
> namespace, which is probably the right answer for the moment, though not
> for the long run.

I already (sort of) think of classes and objects as being 'a bit like
namespaces', so that makes a certain kind of sense.


Re: This week's Summary

2004-04-08 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> The Perl 6 Summarizer <[EMAIL PROTECTED]> wrote:
>>   Subroutine calls
>> Leo announced that he's added a "pmc_const" opcode to parrot. The idea
>> being that, [ ... ]
>>  you would instead fetch a preexisting Subroutine PMC
>> from the PMC constant pool.
>
> Not quite. I've implemented it here locally and I'm awaiting some
> comments ;)

If it's not obvious from the summary, I think it's a cracking idea.


Re: Behaviour of PMCs on assignment

2004-04-07 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Togos <[EMAIL PROTECTED]> wrote:
>
>>   $I1 = $I2 + $I3
>
>>   $P1 = $P2 + $P3
>
>> Which, of course, doesn't work. But this is what
>> languages like Python or Ruby would expect to be able
>> to do, as they don't need Perl's fancy variable
>> objects -- a register is good enough.
>
> That and other arguments are of course all correct. I just have the gut
> feeling that having both opcode and vtable variants blows core size up
> to an insane value.

Couldn't you have a single opcode, C, which uses
Pn's vtable assignment, and make C etc simply use
the 'make a new PMC' thing. That means that, to get the current
semantics you'd have to do 

$P2 = $P3 + $P4
assign_content $P1, $P2

But I'm not sure that's an enormous loss. 



Re: Fun with nondeterministic searches

2004-04-02 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> When you make a full continuation with clone, can't you chase up its
>> continuation chain and mark its reachable continuations (and only those
>> continuations) as non recyclable? (This is one of the reasons I think
>> that a Continuation should have an explicit copy of the continuation
>> that was current when it was made, rather than relying on
>> savetop/pushtopp to capture it.)
>
> We need getting at the call chain anyway. But storing P1 elsewhere seems
> not to be the right thing. OTOH a subroutine using integers only would
> preserve its context just with C, if P1 is saved elsewhere.
> Your proposal smells like: the return continuation is normally hidden
> (i.e.  not in any register, just in the context). Some opcode like
> C makes it available for backtracking or such.

That certainly makes sense to me; can anyone think of cases where
having/making an explicit return continuation is a good thing?


Making tail calls/invoking continuations

2004-04-02 Thread Piers Cawley
Is there any syntactic sugar in IMCC for making a tail call? Right
now, AFAICT, the trick is to do:

   .pcc_begin
   .arg Foo
   .arg bar
   .pcc_call sub, P1 # Or whatever the syntax is to get
 # the current continuation
   .pcc_end

But, looking at the PASM generated by pcc_4.imc (which is where I
picked this up from) that doesn't seem to actually have any benefit
because the resulting code still does a 'savetop' and an 'updatecc',
both of which are utterly unnecessary for a tail call. As Dan's pointed
out on IRC, having IMCC detect tail calls and automatically optimize
them is a no no too, but it'd be very handy if there were some sugar to
allow me to specify that it's a tail call (or a continuation
invocation). How plausible is:

   foo(...), nosave

or 

   foo(...) nosave

AFAICT the grammar should be able to accommodate such changes to the
syntax.

Thoughts?
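To illustrate why a tail call needs no savetop: the callee can simply be handed the caller's return continuation, so there is nothing left to save. A trampoline (illustrative Python, not IMCC) makes this explicit: each bounce is a tail call, and no caller frame is kept around.

```python
# Illustrative sketch: a trampoline models tail calls without frame growth.

def trampoline(fn, *args):
    result = fn(*args)
    while callable(result):
        result = result()       # each bounce is the "tail call"
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)   # tail position: nothing to save

print(trampoline(countdown, 100000))  # prints: done
```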


Re: Fun with nondeterministic searches

2004-04-01 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley wrote:
>
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>> 
>>>At (1) the continuation is marked with C. In C
>>>this flag is propagated to the stacks in the continuation's context. At
>>>(2) or any other place, where this stacks are popped off, the stack
>>>chunks are not put onto the stack chunk freelist.
>>>
>> That seems to make sense. 
>
> And it works (for your code)
>
> Here is a proof of concept patchoid:

Fabulous

>
> 1) change to your example code:
>   $P1 = clone P1
>   store_lex 1, "cc", $P1
> (the clone strips off all recycle flags)

Oh nice, much neater than what I was thinking of involving making a
'real' continuation and copying context info across from the return
continuation. Does this pretty much remove the last distinction between
RetContinuation and Continuation?



Re: Fun with nondeterministic searches

2004-04-01 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Piers Cawley <[EMAIL PROTECTED]> wrote:
>>>
>>>> Remember how Leo wanted an example of how continuations were used?
>>>
>>> Great example - I don't understand how it works though :) - but I
>>> understand, why the PIR code might fail:
>
>> Okay, I'll try and explain it.
>
> Great thanks. (I was just going through it and sometimes I have a
> slight clue how it works (or better I know what's going on but I'm for
> sure unable to write such a piece of code from scratch (I'm missing some
> experience with this kind of programming languages (like lisp et al
>
>> $P0 = find_lex("fail")
>> $P0() # Why can't we do this? Does $P0.() work any better?
>
> It used to give tons of reduce conflicts and wrong code ... wait ... try
> again ... now it works ... fixed.

Oh, cool.

> $P0.() would be a method call w/o method, i.e. a parser error.

Yeah, I was thinking of Perl 6's proposed syntax, where $foo.(...)
says to treat $foo as a function reference and call it with appropriate
arguments.


Re: Fun with nondeterministic searches

2004-04-01 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley wrote:
>
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>>>
>>>Here is a proof of concept patchoid:
>>>
>> Fabulous
>> 
>>>1) change to your example code:
>>>  $P1 = clone P1
>>>  store_lex 1, "cc", $P1
>>>(the clone strips off all recycle flags)
>>>
>> Oh nice, much neater than what I was thinking of involving making a
>> 'real' continuation and copying context info across from the return
>> continuation.
>
> Yep. That was the reason I rewrote the clone code in the first place.
>
>> ... Does this pretty much remove the last distinction between
>> RetContinuation and Continuation?
>
> Pretty much, yes. Continuation still have one relict from COWing times:
> these are warnings and errors flags buffers. But it's very likely, that
> COW copying these buffers is wrong too and a plain copy will do it, as
> it works with all stacks now. When this is removed, RetContinuations and
> Continuations are the same. It looks like the only distinction might be
> the creation of the Continuation:
>
>invokecc   # create Continuation for recycling or
>callmethodcc # same
>
>newsub $P1, .Continuation, label# or
>$P1 = clone P1  # recycling disabled
>
> This OTOH means, that a Continuation created with invokecc shall be
> never silently reused. There is currently one protection in the code
> against that: If ever one Continuation is created explicitly,
> RetContinuation recycling is disabled - forever.

When you make a full continuation with clone, can't you chase up its
continuation chain and mark its reachable continuations (and only those
continuations) as non recyclable? (This is one of the reasons I think
that a Continuation should have an explicit copy of the continuation
that was current when it was made, rather than relying on
savetop/pushtopp to capture it.)


Re: Fun with nondeterministic searches

2004-04-01 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> Remember how Leo wanted an example of how continuations were used?
>
> Great example - I don't understand how it works though :) - but I
> understand, why the PIR code might fail:

Okay, I'll try and explain it.

Every time you call choose, it grabs the current continuation (the
place that this particular call to choose will return to) and stuffs it
in a lexical variable and saves the current fail closure as
'old_fail'. Then it creates a closure called try and sticks that in the
lexical pad too. Finally, it calls try with the list of possible
choices.

Try checks to see if there are any choices left. If there are, it pulls
the first item off the list of choices for a return value, and sets up
'fail' so that, when invoked it will simply call 'try' with the
remainder of the choices. Then it uses the saved continuation to return
the first item to the point where choose was called from. 

If there *aren't* any choices, it sets fail to be the 'fail' that is
saved in its 'old_fail' and simply calls that. 


So, what does that mean when you have:

x = _choose(array1) # cont1
y = _choose(array2) # cont2, cont3

$I0 = x
$I1 = y
$I2 = x * y

if $I2 == 15 goto success
$P0 = find_lex("fail")
$P0() # Why can't we do this? Does $P0.() work any better?
branch the_end
  success:
print x
print " * "
print y
print " == 15\n"
  the_end:
...


Calling the first choose sets up a try (call it try1), saves 'fail' in
that try's 'old_fail', sets up the new fail to call 'try1(3,5)' and returns
1.

The second choose call sets up a new try (try2), saves the fail that would
call 'try1(3,5)' in its old_fail, sets up the new fail to call 'try2(5,9)' and
returns 1

Obviously, 1 * 1 doesn't equal 15, so fail gets called, which in turn
calls 'try2(5,9)' which sets fail up to call 'try2(9)' and returns 5 to
cont2

y is now 5, but 1 * 5 still doesn't equal 15, so fail calls try2(9),
which sets up fail to call try2() and returns 9 to cont2

9 * 1 doesn't equal 15 either, so we call fail, which calls try2(). 

try2() doesn't have any choices left, so it reinstates its old_fail and
invokes that, which calls try1(3,5).

try1(3,5) sets up fail to call try1(5) and returns 3 to cont1.

So now we call _choose(1,5,9) again, which makes a new try (try3),
which sets fail up to call try3(5,9) and returns 1 to cont3.

Guess what? 1 * 3 *still* doesn't equal 15, so we fail again, which
calls try3(5,9), which returns 5 to cont3 and does what we expect to
the current fail. And finally, 3 * 5 equals 15, so we announce the glad
tidings and continue on our merry way. 

If I weren't such a lazy programmer I'd have implemented a
'reset_searchs' to set fail back to the original 'This search didn't
work!' value, allowing the saved failure 'continuations' to get garbage
collected, and setting things up for new choices.

See, I told you it was easy.
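The whole dance above can be rendered in illustrative Python (names mirror the PIR, but this is a sketch, not Parrot). Python has no call/cc, so the captured continuation becomes an explicit callback k, and the saved 'fail'/'old_fail' closures live on an explicit stack instead:

```python
# Illustrative CPS sketch of choose/try/fail; not Parrot code.

class Failed(Exception):
    pass

def search():
    results = []
    fail_stack = []                     # most recent retry closure on top

    def fail():
        if not fail_stack:
            raise Failed("Program failed")
        fail_stack.pop()()              # resume the latest saved 'try'

    def choose(choices, k):
        # k plays the part of the captured continuation 'cc'.
        def try_(rest):
            if not rest:
                fail()                  # no choices left: fall back
            else:
                fail_stack.append(lambda: try_(rest[1:]))
                k(rest[0])              # "return" the first choice
        try_(list(choices))

    def with_y(x):
        choose([1, 5, 9], lambda y: check(x, y))

    def check(x, y):
        if x * y == 15:
            results.append((x, y))
        else:
            fail()

    choose([1, 3, 5], with_y)
    return results

print(search())                         # -> [(3, 5)]
```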


Re: Fun with nondeterministic searches

2004-03-31 Thread Piers Cawley
[EMAIL PROTECTED] (Leopold Toetsch) writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> Remember how Leo wanted an example of how continuations were used?
>
> Great example - I don't understand how it works though :) - but I
> understand, why the PIR code might fail:
>
>> .sub _choose
>
> [ ... ]
>
>>  store_lex 1, "cc", P1
>
> You aren't allowed to do that. While P1 is the return continuation
> passed into C<_choose> it's not (or may be not) in P1 during the whole
> function body. That's not guaranteed. It's guaranteed that returning from
> the sub will use this continuation, that's all.
> You can't assume any fixed PASM register for some PIR item.
>
> WRT RetContinuation vs Continuation - they are still not the same. I'll
> try hard to keep the distinction that RetContinuations are only used
> once (which your program seems not to do). Having that distinction would
> give us more than 50% speed up in plain function (and method) calls.
>
> You can turn off this distinction in src/objects.c:782 with
>
>   #define DISBALE_RETC_RECYCLING 1
>
> or pass to _choose a real Continuation PMC in P1.
>
> (the program still fails, but now differently, I'll have a closer look
> at it tomorrow)

Okay, I fixed it (and slowed it down dramatically) by removing Return
Continuations and getting rid of the stack freelist (you can't stick a
stack frame on the freelist just because you've popped it, there might
be a continuation still looking at it). Honestly, with one item per
chunk and immutable stacks, there really is no point in having a special-case
RetContinuation; you just need to maintain a continuation freelist.

I'm going to go through the DOD stuff to see about adding continuation
and stack chunk free lists to the interpreter structure. Then when the
DOD finds dead stack chunks or continuations during the course of its
job, it just shoves 'em onto the appropriate free list and carries on
its way. The new_return_continuation_pmc and new_stack_chunk or
whatever they're called can pull structures off the appropriate free
lists. 

Also, if stack chunks get garbage collected, which they should be,
there's little point in having a doubly linked stack (you can't have
'prev' link back to a free stack frame because the stack is actually a
tree).
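That last point can be sketched in a few lines (illustrative Python, not Parrot's C): with one item per immutable chunk, two continuations can share a common 'prev' chain, so "the stack" is really a tree and a popped chunk can't be blindly recycled.

```python
# Illustrative sketch: one-item-per-chunk stack chunks forming a tree.

class Chunk:
    def __init__(self, value, prev):
        self.value = value
        self.prev = prev        # parent frame; shared, never mutated

def push(top, value):
    return Chunk(value, top)

def pop(top):
    # 'top' must NOT go straight onto a free list here: a captured
    # continuation may still reference it. Let the DOD find dead chunks
    # and move them to the free list instead.
    return top.value, top.prev

base = push(None, "caller frame")
branch_a = push(base, "frame A")    # one continuation's view
branch_b = push(base, "frame B")    # another branch off the same parent
```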


Fun with nondeterministic searches

2004-03-31 Thread Piers Cawley
Remember how Leo wanted an example of how continuations were used?

Well, I ported the following Scheme code to PIR. (The PIR is appended
to this message...

  ;;; Indicate that the computation has failed, and that the program
  ;;; should try another path.  We rebind this variable as needed.
  (define fail
(lambda () (error "Program failed")))
  
  ;;; Choose an arbitrary value and return it, with backtracking.
  ;;; You are not expected to understand this.
  (define (choose . all-choices)
(let ((old-fail fail))
  (call-with-current-continuation
   (lambda (continuation)
 (define (try choices)
   (if (null? choices)
   (begin
 (set! fail old-fail)
 (fail))
   (begin
  (set! fail
   (lambda () (continuation (try (cdr choices)))))
  (car choices))))
  (try all-choices)))))
  
  ;;; Find two numbers with a product of 15.
  (let ((x (choose 1 3 5))
(y (choose 1 5 9)))
(for-each display `("Trying " ,x " and " ,y #\newline))
(unless (= (* x y) 15)
  (fail))
(for-each display `("Found " ,x " * " ,y " = 15" #\newline)))

Which (as anyone can plainly see) implements a nondeterministic
search (and something like it could come in handy when implementing
Perl 6 Junctions). 

I think I've triggered a bug in the GC somewhere because 'parrot -t
choose.imc' and 'parrot -tG choose.imc' fail in different places.

Also, I thought Leo's patches to the stacks meant that RetContinuations
had been done away with, but the trace output implies otherwise, and it
may be that the code is failing because of this difference. The call to
fail *should* return to just after the second call to choose by
invoking the lexically held continuation, but this isn't what happens

Rejigging IMCC to use Continuations instead of RetContinuations (using
a simple minded search & replace) makes things fall over with a Bus
Error.

Enjoy.


.sub main
 .local pmc arr1
 .local pmc arr2
 .local pmc x
 .local pmc y
 .local pmc choose
 .local pmc fail
 new_pad 0
 $P0 = new PerlArray
 store_lex 0, "*paths*", $P0
 $P0 = new PerlString
 $P0 = "@"
 store_lex 0, "failsym", $P0
 store_lex 0, "choose", $P0
 store_lex 0, "fail", $P0
 newsub choose, .Closure, _choose
 store_lex "choose", choose
 newsub fail, .Closure, _fail
 store_lex "fail", fail
 arr1 = new PerlArray
 arr1[0] = 1
 arr1[1] = 3
 arr1[2] = 5
 arr2 = new PerlArray
 arr2[0] = 1
 arr2[1] = 5
 arr2[2] = 9
 
 x = choose(arr1)
 print "Chosen "
 print x
 print " from arr1\n"
 y = choose(arr2)
 print "Chosen "
 print y
 print " from arr2\n"
 $I1 = x
 $I2 = y
 $I0 = $I1 * $I2
 if $I0 == 15 goto success
 fail = find_lex "fail"
 fail()
 print "Shouldn't get here without a failure report\n"
 branch the_end
success:
 print x
 print " * "
 print y
 print " == 15!\n"
the_end:
 end
.end
 


.sub _choose
 .param PerlArray choices
 .local pmc our_try
 print "Choose: "
 $S0 = typeof choices
 print $S0
 print "\n"
 new_pad 1
 find_lex $P0, "fail"
 store_lex 1, "old_fail", $P0
 store_lex 1, "cc", P1
 newsub our_try, .Closure, _try
 store_lex 1, "try", our_try
 $P2 = our_try(choices)
 .pcc_begin_return
 .return $P2
 .pcc_end_return
.end

.sub _try
 .param PerlArray choices
 print "In try\n"
 $S0 = typeof choices
 print $S0
 print "\n"
 new_pad 2
 clone $P0, choices
 store_lex 2, "choices", $P0
 if choices goto have_choices
 $P1 = find_lex "old_fail"
 store_lex "fail", $P1
 invokecc $P1
have_choices:
 newsub $P2, .Closure, new_fail
 store_lex "fail", $P2
 $P3 = find_lex "choices"
 $S0 = typeof $P3
 print $S0
 print "\n"
 shift $P4, $P3

 .pcc_begin_return
 .return $P4
 .pcc_end_return

new_fail:
 .local pmc our_try
 .local pmc our_cc
 save P1
 print "In new_fail\n"
 our_cc = find_lex "cc"
 our_try = find_lex "try"
 $P2 = find_lex "choices"
 $S0 = typeof $P2
 print $S0
 print "\n"
 $P3 = our_try($P2)
 restore P1
 unless our_cc == P1 goto do_return
 print "Something's very wrong with continuations!\n"
do_return:
 our_cc($P3)
.end


.sub _fail
 print "Program failed\n"
 .pcc_begin_return
 .pcc_end_return
.end

 
 

Re: Optimizations for Objects

2004-03-29 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:
> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> C becomes C if only P-registers are used. Saving only
>>> 3 or 5 registers isn't possible yet. We have no opcodes for that.
>
>>   save Pn
>>   save Pm
>
> Well, these are already used for pushing items onto the user stack. It
> could be
>
>pushtopp 3   # save P16..P18
>savetop 4# save 4 regs from all I16 ..., S16, N16, P16 ... P19
>
> and so on.

Out of interest, why do we have distinct register and user stacks?

[...]

>> ... Presumably, because IMCC knows that
>> cont_ret is a continuation target, it can create the appropriate
>> real_cont_ret and add the appropriate stack manipulation code in there?
>> This would be really cool.
>
> The code for creating the continuation and the return must be in sync.
> When you pass the continuation on into a subroutine and want to return
> either normally or through the continuation, we need something like:
>
> $P0 = newcont dest_label FOR _invoker
> _invoker($P0)
> ...
>   dest_label:
> ...

But the function the continuation gets passed to is completely
irrelevant. When you make a continuation you want to save exactly the
same state as you'd save if you were making a function call at the same
point. 

Say you had code like

 ...
 $P0 = "Something"
 $P1 = "Something else"
 $P2 = "Some other thing"
 .newcont $P3, dest_label
 ...
   do_return:
 .pcc_begin_return
 .pcc_end_return


   dest_label:
 print $P0
 print $P2
 branch do_return

Then it'd be cool if IMCC could look ahead to see that when (if) the
continuation is invoked, the only registers that get used are $P0 and
$P2 and emit something like:


 ...
 $P0 = "Something"
 $P1 = "Something else"
 $P2 = "Some other thing"
 save P1
 save P2
 save $P0
 save $P2
 $P3 = newcont Continuation, dest_label
 restore $P2
 restore $P0
 restore P2
 restore P1
 ...
   do_return:
 .pcc_begin_return
 .pcc_end_return


   dest_label:
 restore $P2
 restore $P0
 restore P2
 restore P1
 print $P0
 print $P2
 branch do_return

But I have the feeling I'm thinking IMCC is rather more sophisticated
than it is in real life. From the point of view of a programmer, the
important thing is that invoking a continuation should return the upper
and control registers (but not the argument registers) to the state
they were in when the continuation was made. How the continuation is
subsequently stored/passed is completely irrelevant to this. 

> Creating correct code from that is a bit ugly, because the continuation
> is created, before the actual call sequence is generated. So a bit more
> verbose:
>
> .pcc_begin prototyped
> .arg_newcont dest_label
> .pcc_call _invoker
> .pcc_end
> ...
>
>   .CONT_TARGET dest_label:
>
> That's still complicated but doable. That would need searching the
> current unit for subroutine calls that have a C<.arg_newcont> argument,
> compare the labels and create finally the very same register
> restore opcode(s) that the function call has.
>
> OTOH storing a continuation inside a global can't prepare any code for
> the continuation return, because nothing about the continuation's usage
> is known.

This is irrelevant.

>>>   assign $P0, P1   # vtable->set_pmc  (N/Y)
>
>> Assign would be good. I can't really think of an occasion when you'd
>> want to copy anything less than the full context held by the
>> continuation.
>
> What about:
>
>   .sym pmc cont
>   cont = newcont dest
>   ...
>   updatecc cont   # assign current P1's context to cont
>
> With
>
>   assign cont, P1
>
> we need another syntax extension to actually get the real P1 register:
>
>   assign cont, current_P1
>
> or P1 is always restored immediately after function calls (like
> currently P2).
>
> I think, C and restoring P1 immediately would be the most useful
> combination.

Absolutely.


Re: Continuations, stacks, and whatnots

2004-03-29 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 8:46 AM +0100 3/23/04, Leopold Toetsch wrote:
>>Piers Cawley <[EMAIL PROTECTED]> wrote:
>>>  Dan Sugalski <[EMAIL PROTECTED]> writes:
>>
>>>>> And what control stack? The continuation chain is the control
>>>>> stack, surely?
>>>>
>>>>  Nope. There's the exception handlers, at the very least.
>>
>>>  Just add a field to the continuation structure "NextExceptionHandler"
>>>  which points to the continuation of the next exception handler in the
>>>  chain.
>>
>>What about C code that either installs exception handlers or throws
>>exceptions?

C code that installs exception handlers is (admittedly) tricky, but C
code throwing an exception seems reasonably straightforward. Wrap the C
call in a basic exception handler that catches the C exception and
rethrows to the current continuation's exception continuation. 

> Or multiple nested exception handlers, 

Invoke the exception continuation, which restores the appropriate
current continuation (and associated exception continuation); rethrow
the exception by invoking the new current exception continuation, which
restores a new current continuation (and associated exception
continuation). Rinse. Repeat.


> or serial exception handlers in a block...

Installing an exception handler sets the current exception handler,
removing it unsets it, then you install a new one. Any function calls
will get appropriate exception continuations depending on the currently
set exception handler.

> And then there's the fun with exception handlers and
> coroutines.
>
> It's a stack, like it or not.

So what happens when you invoke a continuation that jumps deep into a
function call chain with a couple of exception handlers? What happens
when you do it again? ie: Is the control stack properly garbage collected?
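The handler-chain idea runs like this in miniature (illustrative Python, not Parrot): each handler records the next one out, throwing pops the head of the chain, and a rethrow inside a handler therefore reaches the next handler naturally.

```python
# Illustrative sketch of exception handlers as a linked chain.

class Handler:
    def __init__(self, fn, next_handler):
        self.fn = fn
        self.next = next_handler

current_handler = None

def install(fn):
    global current_handler
    current_handler = Handler(fn, current_handler)

def throw(exc):
    global current_handler
    if current_handler is None:
        raise RuntimeError("unhandled: %r" % (exc,))
    h = current_handler
    current_handler = h.next    # so a rethrow goes one level out
    h.fn(exc)

log = []
install(lambda e: log.append(("outer", e)))
install(lambda e: (log.append(("inner", e)), throw(e)))  # rethrows
throw("boom")
# log is now [("inner", "boom"), ("outer", "boom")]
```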



Re: Windows tinder builds

2004-03-29 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> I finally figured out why the windows machine wasn't showing in the
> tinderbox, and fixed that. (System dates. D'oh!) We now have (again) a
> reliable windows machine building parrot for test, both under Cygwin and
> Visual Studio/.NET (though it builds a native executable there rather
> than a .NET one)
>
> The VS/.NET build works fine, though three of the tests fail for odd
> reasons. Those look like potential test harness errors.
>
> The cygwin build sorta kinda works OK, but the link fails because of a
> missing _inet_pton. I seem to remember this cropping up in the past and
> I thought we'd gotten it fixed, but apparently not.
>
> Anyway, these are set for hourly builds at half-hour offsets, so if you
> check in any significant changes it'd be advisable to take a look at the
> results. For those that don't know, all the tinderbox info is
> web-accessable at
> http://tinderbox.perl.org/tinderbox/bdshowbuild.cgi?tree=parrot

Hmm... I note that there appear to be no Macintoshes in the
tinderbox. I can probably spare some cycles to this; what's the
procedure?


ParrotUnit

2004-03-25 Thread Piers Cawley
Here's version 0.01 of ParrotUnit, my port of the xUnit testing
framework to Parrot. It allows you to write your tests for parrot
applications using object oriented parrot. Untar it in your parrot
directory then do 

$ parrot t/test.imc
1..3
ok 1 testTemplateMethod
ok 2 testTestFailure
ok 3 testTestCount

and away you go.

Right now you have to do a largish amount by hand because Parrot lacks
the reflective capabilities needed to do automatic generation of
TestSuites etc, and I've yet to come up with a good set of 'assert_*'
methods, but what's there is usable (and splendidly undocumented). Enjoy.





parrotunit-0.01.tar.gz
Description: Binary data


Re: Ulterior Reference Counting for DoD?

2004-03-25 Thread Piers Cawley
"Butler, Gerald" <[EMAIL PROTECTED]> writes:

> How do you make the copy/move of the object from one location in
> memory and the update of the pointer to the pointer ATOMIC? If you
> don't, it doesn't matter how many layers of indirection you have, it
> will still be a problem  ;^)

You only do it during allocation so nothing outside the allocator sees
anything move. 


Re: Ulterior Reference Counting for DoD?

2004-03-25 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> On 25/03/2004, at 9:01 PM, Leopold Toetsch wrote:
>
>>> All these generational collectors don't work with Parrot objects. We
>>> guarantee that objects don't move around.
>
>> Oh, I didn't see a mention of this in a PDD.  What's the reason for why
>> you provide such a guarantee?  Just curious.
>
> PMCs are passed on to (external) C code. When, in the midst of C code, the
> PMC moves around, things break horribly--as they now break with the
> copying collector when you do:
>
>   char *c = string->strstart;
>   ...
>   
>   ...
>   do something with c

I know it's extra indirection, but maybe we should be passing pointers
to pointers rather than plain pointers to C functions. It could be a
win if that gets us faster GC...
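The double-indirection idea looks roughly like this (Python standing in for the C-level mechanics): callers hold a handle, i.e. a pointer to a pointer, so the collector can move an object and update only the handle table.

```python
# Illustrative sketch of GC handles; names and layout are mine, not Parrot's.

heap = {}      # "address" -> payload
table = {}     # handle -> current address
_next = [0]

def alloc(payload):
    addr = _next[0]; _next[0] += 1
    heap[addr] = payload
    handle = len(table)
    table[handle] = addr
    return handle

def deref(handle):
    return heap[table[handle]]

def gc_move(handle, new_addr):
    heap[new_addr] = heap.pop(table[handle])
    table[handle] = new_addr    # outstanding handles stay valid
```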


Re: [PATCH] single item stack chunks

2004-03-24 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> I've stripped down the whole stack code to use one item per chunk. It
> passes all tests (3 disabled that push infinitely and check for
> CHECK_LIMIT and t/pmc/eval_6 which is broken).
>
> This slows down register saving (and other stack operations)
> considerably whithout any additional measures[1]:
>
> $ perl tools/dev/parrotbench.pl -c=parrotbench.conf -b='^oof' -t
> Numbers are cpu times in seconds. (lower is better)
>  p-j-Oc  parr-j  parr-C  perl-th perlpython  ruby
> oofib   4.150s  11.530s 12.450s 4.100s  3.540s  2.140s  2.170s
>
> p-j-Oc = parrot -j -Oc where savetop is optimized to pushtopp
> parr-j = parrot -j, all 4 registers are saved (both unoptimized build)

Interesting. I redid oofib.imc to only save the registers it cares
about rather than using savetop, and here are my numbers (admittedly on
a PowerMac G5):

parrot  parrotj parrotC perlpython  ruby
oofib   3.770s  3.190s  2.950s  2.210s  1.100s  1.770s
oofibt  7.750s  7.370s  6.960s  2.210s  1.140s  1.800s


oofibt is the original version, oofib is my rewrite (attached). The
perl, python & ruby equivalents were generated with a simple copy...

For reference, here are the numbers using a CVS fresh parrot:

parrot  parrotj parrotC perlpython  ruby
oofib   3.770s  3.150s  3.100s  2.210s  1.080s  1.960s
oofibt  6.700s  6.240s  6.170s  2.330s  1.040s  1.890s

So it looks like saving single registers is a win whichever parrot
you're using...


.pcc_sub _main prototyped
.param pmc argv
.sym int argc
argc = argv
.sym pmc N
N = new PerlInt
N = 28
if argc <= 1 goto noarg
$S0 = argv[1]
N = $S0
noarg:
.sym float start
time start

.local pmc A
.local pmc B
.local pmc b

A = newclass "A"
B = subclass  A, "B"

find_type $I0, "B"
b = new  $I0

.sym pmc r
r = b."fib"(N)

.sym float fin
time fin
print "fib("
print N
print ") = "
print r
print " "
sub fin, start
print fin
print "s\n"
end
.end

.namespace ["A"]

.sub fib method
.param pmc n
if n >= 2 goto rec
.pcc_begin_return
.return n
.pcc_end_return
rec:
.sym pmc n1
.sym pmc n2
.sym pmc r1
.sym pmc r2
n1 = new PerlInt
n2 = new PerlInt
n1 = n - 1
n2 = n - 2
P5 = n1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fibA"
save P1
save n2
save self
callmethodcc
restore self
r1 = P5
restore P5

I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fibB"
save r1
callmethodcc
restore r1 
restore P1
P5 = P5 + r1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
invoke P1
end
.end

.sub fibA method
.param pmc n
if n >= 2 goto rec
.pcc_begin_return
.return n
.pcc_end_return
rec:
.sym pmc n1
.sym pmc n2
.sym pmc r1
.sym pmc r2
n1 = n - 1
n2 = n - 2
P5 = n1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fib"
save P1
save n2
save self
callmethodcc
restore self
r1 = P5
restore P5

I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fibB"
save r1
callmethodcc
restore r1 
restore P1
P5 = P5 + r1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
invoke P1
.end

.namespace ["B"]

.sub fibB method
.param pmc n
if n >= 2 goto rec
.pcc_begin_return
.return n
.pcc_end_return
rec:
.sym pmc n1
.sym pmc n2
.sym pmc r1
.sym pmc r2
n1 = new PerlInt
n2 = new PerlInt
n1 = n - 1
n2 = n - 2
P5 = n1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fib"
save P1
save n2
save self
callmethodcc
restore self
r1 = P5
restore P5

I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
S1 = "fibA"
save r1
callmethodcc
restore r1 
restore P1
P5 = P5 + r1
I0 = 1
I1 = 0
I2 = 0
I3 = 1
I4 = 0
invoke P1
.end


Re: Continuations, stacks, and whatnots

2004-03-22 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 12:59 AM + 3/23/04, Piers Cawley wrote:
>>Leopold Toetsch <[EMAIL PROTECTED]> writes:
>>
>>>  Dan Sugalski <[EMAIL PROTECTED]> wrote:
>>>
>>>>  ... If we go with a one
>>>>  frame stack chunk then we don't have to bother with COW-ing
>>>>  *anything* with the stack.
>>>
>>>  BTW: which stacks: Register frames of course. What about Pad, User, and
>>>  Control?
>>
>>I hope he means "All of 'em".
>>
>>And what control stack? The continuation chain is the control stack, surely?
>
> Nope. There's the exception handlers, at the very least. 

Just add a field to the continuation structure "NextExceptionHandler"
which points to the continuation of the next exception handler in the
chain. To throw an exception you invoke that exception. If that
exception handler needs to rethrow the exception, its P1 will contain
the appropriate continuation.

> Possibly some lexical pad stuff. (Though of that I'm less sure)

I've always wondered why lexical pads have their own stack. I'd hang it
off the Sub object and, when the sub's invoked, shove the current pad
into a control register, which then gets closed over by any
continuations that get made. Invoking a continuation restores the pad
register and away you go. 
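The pad-in-a-register idea is tiny when sketched (illustrative Python; 'regs' stands in for the interpreter's registers): making a continuation closes over the current pad, invoking it restores that pad.

```python
# Illustrative sketch: the pad lives in a "control register" that
# continuations capture and restore.

regs = {"pad": None}

def make_continuation():
    saved_pad = regs["pad"]
    def invoke():
        regs["pad"] = saved_pad     # restore on continuation invocation
    return invoke

regs["pad"] = {"x": 1}              # pad of the current sub
k = make_continuation()
regs["pad"] = {"y": 2}              # a callee installs its own pad
k()                                 # back to the captured pad
```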




Re: Continuations, stacks, and whatnots

2004-03-22 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>
>> ... If we go with a one
>> frame stack chunk then we don't have to bother with COW-ing
>> *anything* with the stack.
>
> BTW: which stacks: Register frames of course. What about Pad, User, and
> Control?

I hope he means "All of 'em".

And what control stack? The continuation chain is the control stack, surely?


Re: Using Ruby Objects with Parrot

2004-03-22 Thread Piers Cawley
Nick Ing-Simmons <[EMAIL PROTECTED]> writes:

> Mark Sparshatt <[EMAIL PROTECTED]> writes:
>>
>>I'm not 100% certain about the details but I think this is how it works.
>>
>>In languages like C++, objects and classes are completely separate.
>>Classes form an inheritance hierarchy and objects are instances of a
>>particular class.
>>
>>However in some languages (I think that Smalltalk was the first) there's
>>the idea that everything is an object, including classes. So while an
>>object is an instance of a class, that class is an instance of another
>>class, which is called the metaclass. I don't think there's anything special
>>about these classes other than the fact that their instances are also
>>classes.
>>
>>
>>Thinking about it I think you may have the relationship between
>>ParrotObject and ParrotClass the wrong way around. Since a class is an
>>object but and object isn't a class it would be better for ParrotClass
>>to inherit from ParrotObject, rather than the other way round.
>>
>>In Ruby when you create a class Foo, the Ruby interpreter automatically
>>creates a class Foo' and sets the klass attribute of Foo to point to Foo'.
>>
>>This is important since class methods of Foo are actually instance
>>methods of Foo'. Which means that method dispatch is the same whether
>>you are calling an instance or a class method.
>
> So in perl5-ese when you call 
>
>Foo->method
>
> you are actually calling sub Foo::method which is in some sense
> a "method" of the %Foo:: "stash" object.
>
> So what you suggest is as if perl5 compiled Foo->method
> into (\%Foo::)->method and the %Foo:: 'stash' was blessed...

Personally, I've always wished that Perl5 *had* done that. I've toyed
with the idea of blessing Stashes, but never got around to actually
implementing anything.




This week's Summary

2004-03-22 Thread Piers Cawley
 to do the actual
execution; then parrot would use the same string conversion routines at
compile and run time.

Leo fixed it.

http://tinyurl.com/yrfvk

  Configure.pl and the history of the world
Dan pointed out that, as the Ponie work goes on, integrating Parrot with
Perl 5, we need to get the embedding interface fixed up so that it plays
well with others.

He was also concerned that we seemed to be reinventing perl's
Configure.SH in a horribly piecemeal fashion and suggested that we
should just dig all the stuff out in one swell foop. Larry pointed
everyone at metaconfig and discussion ensued.

Quite how metaconfig sits with the miniparrot based configuration/build
plan that Dan's talked about was left as an exercise for the interested
reader.

http://tinyurl.com/292d4

  Method caching
Work continued on making objects more efficient. The object PMC had a
good deal of fat/indirection removed, and work started on implementing a
method cache. Dan reckoned that the two most useful avenues for
exploration were method caching and thunked vtable lookups.

Zellyn Hunter suggested people take a look at papers on Smalltalk's
dispatch system by Googling for [smalltalk cache].

Mitchell N Charity suggested a couple of possible optimizations (and
benchmarks to see if they're worth trying).

There was some discussion of the costs of creating return continuations.
(My personal view is that the current continuation and stacks
implementation isn't the Right Thing, but I don't have the C skills to
implement what I perceive to be the Right Thing. Which sucks.)

Leo reckons that, with a method cache and continuation recycling, he's
seeing a 300% improvement in speed on the object oriented Fibonacci
benchmark.

http://tinyurl.com/2ypmc

http://tinyurl.com/2uvzd

  ICU incorporation
Jeff Clites gave everyone a heads up about the work he's doing on a
patch to incorporate the use of ICU (the Unicode library Parrot will be
using) and some changes to our internal representation of strings.
Apparently the changes give us a simpler and faster internal
representation, which can't be bad.

http://tinyurl.com/2gc3t

  Continuation usage
Jens Rieks and Piers Cawley both had problems with continuations. Leo
Tötsch tried to explain what they were doing wrong. There seemed to be a
fair amount of talking past each other going on (at least, that's how it
felt from my point of view) but I think communication has been
established now. Hopefully this will lead to a better set of tests for
continuation usage and a better understanding of what they're for and
how to use them.

http://tinyurl.com/yv4ag

http://tinyurl.com/2ra6h

  Optimization in context
Mitchell Charity argued that we should think carefully before doing too
much more optimization of Parrot until we've got stuff working
correctly. Leo agreed, up to a point, but pointed out that optimizing
for speed is a lot of fun. Brent Royal-Gordon thought that it was a
balancing act: some things are painfully slow and need optimizing; at
other times, things are painfully nonexistent and need to be
implemented. Objects were both of those things for a while.

Piers Cawley said that, for all that objects were slow (getting faster),
he thought they were rather lovely.

http://tinyurl.com/38tly

Meanwhile, in perl6-language
  Hash subscriptor
At the back end of the previous week, Larry introduced the idea of
subscripting hashes with "%hash«baz»" when you mean %hash{'baz'}. This
surprised John Williams (and others I'm sure, it certainly surprised me,
but it's one of those "What? Oh... that makes a lot of sense" type
surprises.) Larry explained his thinking on the issue. Apparently it
arose because ":foo('bar')" was too ugly to live, but too useful to die,
so ":foo«bar»" was invented, and once you have that, it is but a short
step to "%foo«bar»". (If you've not read Exegesis 7, you probably don't
know that ":foo«bar»" is equivalent to "foo => 'bar'", but you do now.)
John wasn't convinced though. It remains to be seen if he's convinced
Larry.

Larry: unfortunately it's an unavoidable part of my job description to
decide how people should be surprised.

http://tinyurl.com/3yju9

  Mutating methods
Oh lord... I'm really not following this particular thread. The mutating
methods thread branched out in different directions that made my head
hurt. I *think* we're still getting

$aString.lc;  # Non-mutating, returns a new lower case string
$aString.=lc; # Mutating, makes $aString lower case

Re: Optimizations for Objects

2004-03-22 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>> Okay, as I see it there are two big things that we can do to speed
>> objects up. (Well, besides speeding up the creation of continuation
>> PMCs, which I am, at the moment, sorely tempted to put in a special
>> pool for fast allocation)
>
> I thought about that already. Return continuations created via the *cc
> opcode variants are created in the opcode and used exactly once, just for
> returning from the sub.  So I'd do for these a per-interpreter
> freelist, where they get put back after invoking the return
> continuation.

How hard would it be to stick all continuations onto a 'weak'
continuation stack (not seen by DOD) then, during DOD, mark the freed
continuations (or the live ones). After DOD do the following

   # Assume: cpool_head   = top of stack
   #         cpool_last   = last continuation in stack
   #         end_of_chain = guard object, is never free

   return if cpool_head.is_free
   last = cpool_head
   this = cpool_head.next
   while !this.is_free {
     last = this
     this = this.next
   }
   last.next = end_of_chain
   cpool_last.next = cpool_head
   cpool_head = this

When you come to allocate a continuation, you know that, if the
head of the continuation list isn't free, there are no free
continuations on the list, so you allocate a new one and push it onto
the list.

If the head of the list is free, grab it, mark it as used, rerun the
above algorithm, populate your continuation and continue on your merry
way. 

If we use a single value per stack frame approach to the stack we can
recycle stack frames in the same way. 

(I know, patches welcome, but my C sucks)
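For what it's worth, the scheme above can be sketched as a toy model (Python here for brevity; all names are illustrative, not Parrot's actual C structures). The only addition to the pseudocode is an explicit check for the case where no free node exists, so the walk can't run off the end of the chain:

```python
class Cont:
    # Toy continuation node on the 'weak' pool list.
    def __init__(self, name):
        self.name = name
        self.is_free = False          # flagged by the DOD pass
        self.next = None

class ContPool:
    # Singly linked pool with a guard node; after each DOD run the
    # first free node is rotated to the front so allocation is O(1).
    def __init__(self):
        self.guard = Cont("end_of_chain")   # guard object, never free
        self.head = self.guard
        self.tail = self.guard

    def push(self, cont):
        cont.next = self.head
        self.head = cont
        if self.tail is self.guard:
            self.tail = cont

    def after_dod(self):
        # The splice from the pseudocode above, plus an explicit
        # "everything is live" check.
        if self.head is self.guard or self.head.is_free:
            return
        last, this = self.head, self.head.next
        while this is not self.guard and not this.is_free:
            last, this = this, this.next
        if this is self.guard:
            return                    # no free continuations at all
        last.next = self.guard        # close off the live prefix...
        self.tail.next = self.head    # ...and append it after the old tail
        self.tail = last
        self.head = this              # first free node becomes the head

    def allocate(self):
        # If the head isn't free, there are no free nodes anywhere.
        if self.head is not self.guard and self.head.is_free:
            node = self.head
            node.is_free = False      # grab it, mark it as used
            self.after_dod()          # rotate the next free node forward
            return node
        node = Cont("fresh")          # nothing to reuse: allocate and push
        self.push(node)
        return node
```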

-- 
Piers


Re: typeof ParrotClass

2004-03-22 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Should this really print "ParrotClass":
>  newclass P0, "Foo"
>  typeof S0, P0
>  print S0
>  print "\n"
>  find_type I0, "Foo"
>  new P1, I0
>  typeof S0, P1
>  print S0
>  print "\n"
>  end
> ParrotClass
> Foo

Yes.


Re: Optimizations for Objects

2004-03-22 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> You seem to be mixing up different issues with that statement. Using
>>> plain Continuation PMCs for returning just from subroutines was dead
>>> slow, w or w/o COWed stacks.
>
>> But when a Continuation is simply a collection of pointers to the tops
>> of the various stacks (and I really do think it should include P1 in
>> that list...) will it really be that slow? I'm surprised.
>
> You are missing object creation overhead. P1 and P2 are in the saved
> P-register frame.
>
> And finally: I really don't know, if a Continuation needs just pointers
> to the stacks (and to which) or (COWed) copies (to which).

If a 'single object per stack frame' approach is taken, the continuation only
needs pointers to the various stacktops. In fact all that anything will
need for dealing with such stacks is a pointer to their current
stacktop.

> *Will please somebody describe all the semantics of Continuation usage.*

A Continuation is a closure over the call chain, and the various stacks
in the context. Someone proposed in email that, actually, a
continuation should close over everything but the parameter
registers. Consider the following:

savetop 
$P0 = newcont target
store_global "theCont", $P0
  target:
restoretop
  here:
...

When the continuation is invoked with, say, cont("Result1", 2, "bibble"),
the registers and user stacks should look exactly the same as if you
had just returned from a normal function call that looked like:

a_function_call()
  here:

ie: P1 and P2 would be untouched, as would the high registers, and the
various parameter/return registers would be populated with the returned
values. 
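That rule can be modelled in a few lines (a hypothetical sketch, with a dict standing in for the register file and 'param' slots playing the role of the parameter/return registers):

```python
def make_continuation(regs, target):
    # Close over everything *except* the parameter registers.
    saved = {k: v for k, v in regs.items() if not k.startswith("param")}
    def invoke(*return_values):
        regs.clear()
        regs.update(saved)                      # restore closed-over state
        for i, v in enumerate(return_values):   # passed values land in the
            regs["param%d" % i] = v             # parameter/return registers
        return target                           # ...and control jumps here
    return invoke

regs = {"P1": "callers_cont", "P16": "a local", "param0": None}
cont = make_continuation(regs, "here")
regs["P16"] = "clobbered by the callee"
label = cont("Result1", 2, "bibble")
```

After the invoke, P1 and the high registers look just as they did when the continuation was taken, and the parameter registers hold the "returned" values.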

Given the way that the stack stuff works, I wonder if there's a case
for rejigging the calling conventions so that the control registers
(current continuation, self, (methodName?)) are contiguous with the
user registers. If we made P15 the current object, P14 the current
continuation, and S15 the methodname, then savetop could include them
efficiently without IMCC having to do 'set P20 = P1' at the start of
every sub that makes a function call.

If IMCC were then to always allocate 'user' registers in ascending order
from 16, presumably it'd be possible to introduce a new op:

saverangep 14, 20
saverangei 16, 18
...

Along with associated 

restorerangep , 

(or should that be
 
restorerangep , 

Thoughts?
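As a toy model of what such range ops would buy (hypothetical; Parrot has no saverange/restorerange opcodes today), with a plain list standing in for the P-register file:

```python
def saverange(regs, stack, lo, hi):
    # One push covers the whole contiguous window [lo, hi].
    stack.append(list(regs[lo:hi + 1]))

def restorerange(regs, stack, lo, hi):
    regs[lo:hi + 1] = stack.pop()

P = [None] * 32
P[14], P[15], P[16] = "current_cont", "self", "a local"
frames = []
saverange(P, frames, 14, 20)     # i.e. saverangep 14, 20
P[14] = P[16] = "clobbered"
restorerange(P, frames, 14, 20)
```

One frame push per window instead of one per register, which is the whole point of making the control registers contiguous with the user registers.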



Re: Optimizations for Objects

2004-03-22 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> The thing is, 'pushtop' is almost certainly the wrong thing to do. You
>> should only push the registers you care about onto the register
>> stacks.
>
> Yes:
>
> $ time parrot -j  oofib.imc
> fib(28) = 317811 3.050051s
>
> real0m3.077s
>
> $ time parrot -j -Oc  oofib.imc
> fib(28) = 317811 2.234049s
>
> real0m2.262s
>
> C becomes C if only P-registers are used. Saving only
> 3 or 5 registers isn't possible yet. We have no opcodes for that.

  save Pn
  save Pm
  ...
  restore Pm
  restore Pn

Admittedly we don't have opcodes for storing multiple registers at a
time, and having them would be a useful optimization...

>> The catch is that, whilst I know which registers *I* care about, I don't
>> know which registers IMCC cares about. So maybe IMCC's 'newcont' should
>> expand to:
>
>>  save 'imcc_magic_register1'
>>  save 'imcc_magic_register2'
>>  target = newsub .Continuation, dest
>>  restore 'imcc_magic_register2'
>>  restore 'imcc_magic_register1'
>
>> Notice how making a continuation looks remarkably like making a function
>> call.
>
> The usage of above Continuation would be to pass it on into a subroutine
> and eventually return through it to an alternate position? 

Have you looked at the parrotunit code that I (finally) posted? That
creates a continuation and calls a method to set it as an object
attribute. If you could fix whatever I'm doing wrong there I'd be
obscenely grateful btw.

Incidentally, the latest rewrite has the continuation created and saved
on the object using:

 savetop
 P5 = new Continuation
 set_addr P5, catch0
 I0 = 1
 I1 = 0
 I2 = 0
 I3 = 1
 I4 = 0
 S0 = "setFailureHandler"
 callmethodcc
 restoretop

 ...
   finally:
 thisObject."tearDown"() # thisObject is self in a high register...
 .pcc_begin_return
 .pcc_end_return

   catch0:
 restoretop
 ...
 branch finally
  .end   

and the continuation is invoked using handler("a string") rather than
simply invoking it. The catch is, 'finally' gets jumped to twice.

Other usages include stuffing the continuation into a
global. 


> If yes, then the Continuation should be created inside the call
> sequence to avoid duplicate register saving and restoring code:
>
>   res = subroutine(args, NEWCONT)



> But that's only one side of the business. The code at the return label
> (C<cont_ret_target>) has to restore the same registers too.
>
> If the target of the Continuation isn't the function return, it has to
> look somethin like this:
>
>   goto around_cont_ret
>   cont_ret_target:
>   restoretop   # or whatever was saved
>   around_cont_ret:

Oh yes, I'm fine with that. The problem I have at the moment is that I
don't know *what* to save and restore. It appears that IMCC uses at
least one register in P16-32 to save a copy of P1 so that it'll be
caught by a savetop and, in the cases where I was saving individual
registers, creating the continuation, and restoring the registers, I
was failing to save the current continuation because I didn't know
where it was (this is why I want P1 to be invariant over function
calls/continuation creation). Presumably, because IMCC knows that
cont_ret is a continuation target, it can create the appropriate
real_cont_ret and add the appropriate stack manipulation code in there?
This would be really cool.

>> If the destination of the continuation is within the current
>> compilation unit (which it probably should be, or things get *very*
>> weird) then, potentially, IMCC knows what registers the continuation
>> target cares about and can automagically save the current .locals as
>> well.
>
> Yes.
>
>> Would it be possible to arrange things so that
>
>>  $P0 = new .Continuation
>>  $P0 = P1 # The current RetContinuation
>
>> makes $P0 into a full continuation equivalent to the RetContinuation?
>
> Sure. It depends on what part of the context should be copied into the
> Continuation:
>
>   get_addr Idest, P1
>   set_addr $P0, Idest   # assign dest - implemented
>
> or
>
>   assign $P0, P1   # vtable->set_pmc  (N/Y)

Assign would be good. I can't really think of an occasion when you'd
want to copy anything less than the full context held by the
continuation. 

>
> which could do whatever is appropriate.
>
> ($P0 = P1 just aliases the two and isn't usable for assignment)

D'oh.


Re: Optimizations for Objects

2004-03-21 Thread Piers Cawley
Luke Palmer <[EMAIL PROTECTED]> writes:

> Piers Cawley writes:
>> > You seem to be mixing up different issues with that statement. Using
>> > plain Continuation PMCs for returning just from subroutines was dead
>> > slow, w or w/o COWed stacks.
>> 
>> But when a Continuation is simply a collection of pointers to the tops
>> of the various stacks (and I really do think it should include P1 in
>> that list...) will it really be that slow? I'm surprised.
>
> I implemented this in the register stacks, but I didn't spend any time
> on optimization (and who knows, I may have even been marking the stacks
> COW unnecessarily).  Continuation creation, call, and return had great
> performance.  But the problem was that pushtop, etc. were much slower
> than with the current scheme.  

The thing is, 'pushtop' is almost certainly the wrong thing to do. You
should only push the registers you care about onto the register
stacks. I still find it odd that pushing 16 registers onto the stack is
quicker than pushing 3 (for appropriate values of 3).

The catch is that, whilst I know which registers *I* care about, I don't
know which registers IMCC cares about. So maybe IMCC's 'newcont' should
expand to:

 save 'imcc_magic_register1'
 save 'imcc_magic_register2'
 target = newsub .Continuation, dest
 restore 'imcc_magic_register2'
 restore 'imcc_magic_register1'

Notice how making a continuation looks remarkably like making a function
call.

If the destination of the continuation is within the current
compilation unit (which it probably should be, or things get *very*
weird) then, potentially, IMCC knows what registers the continuation
target cares about and can automagically save the current .locals as
well. 


> I would love to see RetContinuation leave, honestly.  One of the
> greatest things about CPS is that you can grab anybody's continuation
> that they passed you and store it somewhere.  RetContinuation is just a
> computed goto.

Would it be possible to arrange things so that 

 $P0 = new .Continuation
 $P0 = P1 # The current RetContinuation

makes $P0 into a full continuation equivalent to the RetContinuation?



Re: Optimization in context

2004-03-21 Thread Piers Cawley
Brent 'Dax' Royal-Gordon <[EMAIL PROTECTED]> writes:

> Other times, we add lots of new features, and then stop to test them
> and find they're incredibly slow.  (That's objects right now.)

In objects' defence, I'd just like to say that they are rather lovely.



Re: Continuations (again)

2004-03-21 Thread Piers Cawley
Piers Cawley <[EMAIL PROTECTED]> writes:

> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>> Piers Cawley <[EMAIL PROTECTED]> wrote:
>>> So, I'm trying to get my head 'round parrot's continuations. It's my
>>> understanding that, at creation time, a Continuation closes over the
>>> current user stacks, control stack and lexical pad (and possibly some
>>> other stuff but those'll do for now).
>>
>> Yes register stacks. Which is one problem of your code. The calling
>> sequence has a "savetop" inside, which isn't in the Continuations
>> context.
>
> But why do I need the savetop? I only care about the

Oops... what happened to that sentence?

Anyhoo, I looked again at the generated .pasm and noticed that P1 was
getting copied up to P20 so, if you don't do the savetop you're never
going to get the current continuation back. (I know, I'm telling you
something you already know). It seems to me that there's a good case for
saying that P1 should be closed over when a continuation is made and
restored when it's invoked. Not having to manage P1 explicitly until and
unless you want to promote it to a full continuation should help
enormously when you're wanting to write code that does (amongst other
things) tail call optimization because you know that, unless you've
fiddled with it yourself, P1 is always the current continuation. (Right
now you have to restore P1 from whichever hidden PMC register IMCC has
hidden it in, set up the various call registers by hand, stick the sub
in the right place and call it with invoke not invokecc. Since you don't
actually know *where* IMCC has shoved the continuation this is a little
tricky. You end up having to duplicate the work.)

Dan? Could you mandate this? Please?

Preserving self and the current function object could also be rather
handy...






Re: Optimizations for Objects

2004-03-21 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>
>> ... I strongly advocate rejigging the
>> stacks so that one stack frame = 1 stacked thing + 1 link to the next
>> thing in the chain.
>
> Let's do things in correct order. First was method cache. 2nd the
> debatable return continuation recycling. Both accummulated, sum up to
> 300% speedup. Next is - that's true - stack code or better register
> preservation costs.
>
> Please note that the current stack code is in, to make it working again.
> It was broken a long time. Now its by far less broken. All improvements
> are welcome.
>
>> ... No need for COW, no need for memcpy when allocating
>> continuations, no worrying complexity to deal with while you're trying
>> to get the behaviour right.
>
> Implementations of better schemes are much appreciated.
>
>> Oh, and no need for RetContinuations either.
>
> You seem to be mixing up different issues with that statement. Using
> plain Continuation PMCs for returning just from subroutines was dead
> slow, w or w/o COWed stacks.

But when a Continuation is simply a collection of pointers to the tops
of the various stacks (and I really do think it should include P1 in
that list...) will it really be that slow? I'm surprised.



Re: Optimizations for Objects

2004-03-21 Thread Piers Cawley
Matt Fowles <[EMAIL PROTECTED]> writes:

> All~
>
> Piers Cawley wrote:
>> I argue that we have the problems we do (incorrect behaviour of
>> continuations, horrible allocation performance) because we chose the
>> wrong optimization in the first place. The stack optimizations that are
>> in place make sense when you don't have continuations, but once you do,
>> the cost of allocating a continuation and maintaining all that COW
>> complexity becomes prohibitive. I strongly advocate rejigging the
>> stacks so that one stack frame = 1 stacked thing + 1 link to the next
>> thing in the chain. No need for COW, no need for memcpy when allocating
>> continuations, no worrying complexity to deal with while you're trying
>> to get the behaviour right. Oh, and no need for RetContinuations either.
>> 
>
> I feel like Piers has asked this several times and never gotten an
> answer about why it is (or is not) a good idea.  I agree with him that
> it is a good idea, but I am far from an authoritative source.
>
> Also, Piers didn't you implement this once already?  If could you update
> it to the current system without too much trouble?  It would be nice to
> have numbers on this approach.

Nope, never implemented it. I have the C skills of a teeny tiny kitten.



Re: Continuations (again)

2004-03-21 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> So, I'm trying to get my head 'round parrot's continuations. It's my
>> understanding that, at creation time, a Continuation closes over the
>> current user stacks, control stack and lexical pad (and possibly some
>> other stuff but those'll do for now).
>
> Yes register stacks. Which is one problem of your code. The calling
> sequence has a "savetop" inside, which isn't in the Continuations
> context.

But why do I need the savetop? I only care about the

>
> I already posted an working example.
> Here is another one (comments indicate problems with your code:
>
> .sub main
> $P0 = new PerlUndef
> $P0 = "Howdy, world\n"
> save $P0
> # create continuation inside, so that it has this in context
> savetop
> $P1 = newcont after
> P5 = $P1
> P0 = find_global "_invoker"
> invokecc
> after:
> restoretop
> restore $P2
> print $P2
>
> end
> # end *is* needed in main
> .end
>
> .sub _invoker
> #^^ global labels have an underscore
> .param pmc a_cont
> invoke a_cont
> .end
>
>> Weird hunh?
>
> As long as no one really can tell the semantics of Continuations, they
> have to be hand-crafted like above.

So why does the generated pasm work where the PIR doesn't? 

I can see why saving P0-2 would be a good idea, but after doesn't need
any of the other registers.


Continuations (again)

2004-03-21 Thread Piers Cawley
So, I'm trying to get my head 'round parrot's continuations. It's my
understanding that, at creation time, a Continuation closes over the
current user stacks, control stack and lexical pad (and possibly some
other stuff but those'll do for now). 

So, it seems to me that the following code should print "Howdy,
world". 

  .sub main
$P0 = new PerlUndef
$P0 = "Howdy, world\n"
save $P0
$P1 = newcont after
  #$P1 = new .Continuation
  #set_addr $P1, after
invoker($P1)
  sub_end:
.pcc_begin_return
.pcc_end_return

  after:
restore $P2
print $P2
branch sub_end
  .end

  .sub invoker 
.param pmc a_cont
invoke a_cont
.pcc_begin_return
.pcc_end_return
  .end

Except, what actually happens is: 

Parrot VM: PANIC: Illegal rethrow!
C file src/exceptions.c, line 356 
Parrot file (unknown file), line 0

Which isn't quite what I had in mind. Bizarrely, if I do:

$ parrot -o howdy.pasm howdy.imc
$ parrot howdy.pasm
Howdy, world
$

everything's fine.

Weird hunh?


Re: Optimizations for Objects

2004-03-20 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Larry Wall <[EMAIL PROTECTED]> wrote:
>> On Fri, Mar 19, 2004 at 08:57:28AM +0100, Leopold Toetsch wrote:
>
>>: I'd like to have, if possible a clear indication: that's a plain
>>: function or method call and this is not. I think the possible speedup is
>>: worth the effort.
>
>> I have no problem with "plain" being the default.  I suppose you could
>> declare it explicitly if you like:
>
>> sub foo () is plain {...}
>
>> Then obviously the other kind would be:
>
>> sub bar () is peanut {...}
>
> There is of course no need to declare the default, which will be very
> likely beyond 99%. I won't discuss P6 syntax here, which could lead to
>
>   sub bar() will reuse_P1 { ... }
>
> Piers' statement was related to solving the halting problem, which isn't
> really positive. I don't throw or catch pies, but CPS returns are
> currently the major performance loss Parrot has.

I argue that we have the problems we do (incorrect behaviour of
continuations, horrible allocation performance) because we chose the
wrong optimization in the first place. The stack optimizations that are
in place make sense when you don't have continuations, but once you do,
the cost of allocating a continuation and maintaining all that COW
complexity becomes prohibitive. I strongly advocate rejigging the
stacks so that one stack frame = 1 stacked thing + 1 link to the next
thing in the chain. No need for COW, no need for memcpy when allocating
continuations, no worrying complexity to deal with while you're trying
to get the behaviour right. Oh, and no need for RetContinuations either.
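The 'one stack frame = one stacked thing + one link' layout is cheap to sketch (an illustrative Python model, not an implementation proposal): because frames are immutable, capturing a continuation is just keeping a pointer.

```python
class Frame:
    # One stacked thing plus a link to the next frame down.
    __slots__ = ("value", "prev")
    def __init__(self, value, prev):
        self.value, self.prev = value, prev

def push(top, value):
    return Frame(value, top)     # old frames are never mutated, so no COW

def pop(top):
    return top.value, top.prev

top = None
top = push(top, "a")
top = push(top, "b")
saved = top                      # 'taking a continuation' of this stack
top = push(top, "c")
v, top = pop(top)                # normal use carries on...
top = saved                      # ...invoking it just restores the pointer
```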


Re: Oops, here's the full parrotunit

2004-03-19 Thread Piers Cawley


parrotunit.tar.gz
Description: Binary data


Oops, here's the full parrotunit

2004-03-19 Thread Piers Cawley
I knew I forgot something in my last post... 

If you unpack this in your parrot directory you'll get 

library/parrotunit.imc
library/TestCase.imc
library/TestResult.imc
library/WasRun.imc
t/test.imc

Go to the parrot directory and, do ./parrot t/test.imc and the tests
will run. Annoyingly, everything works perfectly if none of the tests
fail...


Something rotten with the state of continuations...

2004-03-19 Thread Piers Cawley
I've been trying to implement a Parrot port of xUnit so we can write
tests natively in parrot and things were going reasonably well until I
reached the point where I needed to do exception handling. 

Exception handling hurt my head, badly, so eventually I gave up and
used a continuation instead. Here's the basic 'run' and 'exception
failure' parts of my code (The current full suite is in the tar file
attached):

  .sub run method
 .param pmc testResult
 .local Sub handler
 .local Sub exceptRet
 .local pmc name
 .local string nameString

 name = self."name"()
 self."setResult"(testResult)
 save nameString
 handler = new Continuation
 set_addr handler, catch0
 self."setFailureHandler"(handler)
 self."setUp"()
 self.name()

 testResult."pass"(nameString)
  finally:
 self."tearDown"()
 .pcc_begin_return
 .pcc_end_return

  catch0:
 P17 = self."name"()
 S16 = P17
 P16 = P2."testResult"()
 P16."fail"(S16)
 branch finally
  .end

  .sub assertion_failed method
.param string rawMsg
.local pmc handler
handler = self."failureHandler"()
  #  invoke handler
handler()
  .end

Now, depending on whether I use C<handler()> or C<invoke handler>, tearDown
gets run three (handler()) or two (invoke) times.

If I create the handler continuation using C<newcont>
then it makes it to catch0, but there are problems with self: the first
method call works fine, but after that things go haywire and self
appears to get trashed so method lookups fail after the first one. Very
frustrating. 

BTW, you can make Leo's test program go into an infinite loop simply by
replacing the line

conti = newcont _end

with 

new conti, .Continuation
set_addr conti, _end

which is weird because I *thought* they were supposed to do equivalent
things. 


Re: Continuation usage

2004-03-19 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Jens Rieks <[EMAIL PROTECTED]> writes:
>
>>> Hi,
>>>
>>> does the attached test use the Continuation in a correct way?
>>> The test fails, what am I doing wrong?
>
>> Without running it I'm guessing that it prints out something like
>
>> 456=789
>> 456=456
>> 123=123
>
> Why would it print 3 lines?

Actually, it doesn't, I was going mad and mixing it up with the problem
I've been having with ParrotUnit, which does bad things with returning...


Re: Optimizations for Objects

2004-03-19 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>> At 10:38 PM +0100 3/18/04, Leopold Toetsch wrote:
>>>
>>>Which brings up again my warnocked question: How can return
>>>continuations get reused?
>
>> Works like this. (No pasm, but it should be obvious)
>
> Ok. First (and that applies to Jens example too), I'd like to outline
> continuation vs return continuation and its usage:
>
> 1) First there was only a Continuation PMC. Performance with CPS sucked.
> So I did invent a RetContinuation PMC. Performance sucked less. The
> difference between these two is: the former has COW copied stacks inside
> its context, the latter has only pointers of the stacks in its context.
>
> 2) A return continuation is automatically created by
>
>   invokecc
>   callmethodcc
>
> It's totally hidden when using PIR function or method call syntax
>
>   _func(a, b)
>   obj."meth"(args)
>
> These internally create above opcodes and create a new return
> continuation on each sub or method invocation
>
> 3) Returning from the sub or method in PIR syntax again totally hides
> the return continuation
>
>   .pcc_begin_return
>   .return result
>   .pcc_end_return
>   .end
>
> or just
>
>   .sub _foo
>   .end# automatic return sequence inserted.
>
> 4) From PIR level there is no official way to even reference this return
> continuation - its invisible.
>
> 5) *But*
>
>> If a continuation's taken from within a sub somewhere, return
>> continuations may (probably will) be used multiple times, once for
>> the original return then once for each time the continuation is
>> invoked.
>
> 6) Yes, that's true. So the question is: Can we invent some HLL PIR
> syntax that clearly indicates: this subroutine will return multiple
> times through the same return continuation. We have already a similar
> construct for Coroutines:
>
>   .pcc_begin_yield
>   .result  # optional
>   .pcc_end_yield
>
> This is invoking the coroutine and returns to the caller.
>
> What's the usage of Continuations from HLLs point of view? Can we get
> some hints, what is intended?
>
> I'd like to have, if possible a clear indication: that's a plain
> function or method call and this is not. I think the possible speedup is
> worth the effort.

I think that requires solving the halting problem.


Re: Continuation usage

2004-03-18 Thread Piers Cawley
Jens Rieks <[EMAIL PROTECTED]> writes:

> Hi,
>
> does the attached test use the Continuation in a correct way?
> The test fails, what am I doing wrong?

Without running it I'm guessing that it prints out something like

456=789
456=456
123=123

And on running it, I see that I'm right.

Remember that invoking a continuation does nothing to the existing
register state (which is as it should be).

So why does it work like this?

Letting IMCC handle calling _func saves all the registers and sets P1
to a RetContinuation that points to _func's return point (which is sort
of inside the _func() line because of the stuff IMCC does on a function
return). 

_func then sets P16-18 to 4-6 respectively and you make the
continuation.

Calling _do_something saves all the registers and sets P1 to
(essentially) _end

_do_something sets P16-18 to 7-9 then jumps through the continuation
back to _end.

Which prints out '456=789' because P16-18 haven't been restored

Then it returns. Returning invokes P1, which is currently set to point
to _end. However, returning using .pcc_*_return also restores the
registers, which means P16-18 now contain 4-6, so it prints out
456=456 and returns through P1, which now points back to the original
place that _func was called from.

Easy.

One part of your problem (The state of P16-18) is, therefore, a bug in
your program. The other part seems to be a bug in the current
implementation of Continuation.

A new Continuation should grab the current P1 continuation. If you
later invoke that Continuation, it should make the jump and reset
P1. Until that's done, all we have is a heavyweight goto.
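The difference can be modelled in a few lines (a toy sketch of the *proposed* semantics, not Parrot's current behaviour):

```python
def make_goto(target):
    # What Continuation currently gives you: a jump that leaves P1 alone.
    def invoke(regs):
        return target
    return invoke

def make_full_continuation(regs, target):
    saved_p1 = regs["P1"]        # grab the current P1 at creation time
    def invoke(regs):
        regs["P1"] = saved_p1    # make the jump *and* reset P1
        return target
    return invoke

regs = {"P1": "return_to_main"}
goto = make_goto("_end")
cont = make_full_continuation(regs, "_end")
regs["P1"] = "_end"              # a later call has clobbered P1...
goto(regs)                       # heavyweight goto: P1 still stale
label = cont(regs)               # full continuation: P1 restored
```

With the stale P1, the next .pcc return goes to _end again instead of back to the original caller, which is exactly the 456=456 symptom above.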



Re: Method caches

2004-03-18 Thread Piers Cawley
Larry Wall <[EMAIL PROTECTED]> writes:

> On Wed, Mar 17, 2004 at 12:41:20PM -0500, Dan Sugalski wrote:
> : Currently I'm figuring on just nuking the whole cache in any of these 
> : cases. Later on we can consider doing Clever Things, if it seems 
> : worthwhile.
>
> That's what Perl 5 does, FWIW.  But your caching scheme seems way
> too complicated to me.  In Perl 5, you cache the method simply by
> making it look like it's a member of the current class.  There's very
> little extra mechanism.  You just have to know which methods are the
> "fake" ones you can blow away.

And this is why Perl 5 can't work out SUPER:: type stuff at
runtime. It's possible through cleverness to find out where you were
found the *first* time you're called, but the information isn't
retained in the cache. Which is a complete and utter PITA.
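A sketch of the distinction (a hypothetical Python model, not Perl 5's actual cache): if each cache entry keeps the class the method was found in, SUPER:: can be resolved at run time; caching the code ref alone, as Perl 5 does, throws exactly that away.

```python
class Cls:
    def __init__(self, name, parents=(), methods=None):
        self.name, self.parents = name, parents
        self.methods = methods or {}

def mro(cls):
    # Simple breadth-first linearization; good enough for a sketch.
    seen, order, queue = set(), [], [cls]
    while queue:
        c = queue.pop(0)
        if c.name not in seen:
            seen.add(c.name)
            order.append(c)
            queue.extend(c.parents)
    return order

CACHE = {}

def find_method(cls, name):
    # Cache (code, defining_class) -- the origin is what Perl 5 discards.
    key = (cls.name, name)
    if key not in CACHE:
        for c in mro(cls):
            if name in c.methods:
                CACHE[key] = (c.methods[name], c)
                break
        else:
            raise AttributeError(name)
    return CACHE[key]

def call_super(cls, name):
    # Resume the search *after* the class that defined the cached method.
    _, origin = find_method(cls, name)
    chain = mro(cls)
    for c in chain[chain.index(origin) + 1:]:
        if name in c.methods:
            return c.methods[name]
    raise AttributeError("SUPER::" + name)

A = Cls("A", methods={"greet": lambda: "A::greet"})
B = Cls("B", (A,), {"greet": lambda: "B::greet"})
C = Cls("C", (B,))
```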



Re: Methods and IMCC

2004-03-16 Thread Piers Cawley
Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 9:49 AM +0100 3/12/04, Leopold Toetsch wrote:
>>Dan Sugalski wrote:
>>
>>>Calling a method:
>>>
>>>object.variable(pararms)
>>
>>Do we need the more explicit pcc_call syntax too:
>>
>>.pcc_begin
>>.arg x
>>.meth_call PObj, ("meth" | PMeth ) [, PReturnContinuation ]
>>.result r
>>.pcc_end
>
> Sure. Or we could make it:
>
> .pcc_begin
> .arg x
> .object y
> .meth_call "foo"
> .result r
> .pcc_end
>
> to make things simpler.

So long as you can also do 

.meth_call "foo", PReturnContinuation 



Re: Classes and metaclasses

2004-03-14 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Piers Cawley <[EMAIL PROTECTED]> wrote:
>> Leopold Toetsch <[EMAIL PROTECTED]> writes:
>
>>> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>>>
>>>> The one question I have is whether we need to have a "call class
>>>> method" operation that, when you invoke it, looks up the class of the
>>>> object and redispatches to it, or whether class methods are just
>>>> methods called on class objects.
>>>
>>> The terms are misleading a bit here.
>>> - a ParrotClass isa delegate PMC
>>> - a ParrotObject isa ParrotClass
>> ^^
>> That definitely seems to be the wrong way 'round.
>
> Why? A ParrotClass is responsible for the method dispatch. The ParrotObject
> inherits that behavior.

But logically, a Class is an Object, and an object is an *instance* of a
class. Surely a class should be responsible for storing and finding
methods, but some other thing, call it a Dispatcher object (in Smalltalk
it's the Interpreter, but we've got one of those already), is
responsible for the actual dispatch. By making the dispatcher drive the
dispatch sequence you can do nice things like decoupling the method
cache from the class itself; just have the dispatcher maintain its own
cache. Then when something changes that might invalidate the cache you
just tell the dispatcher to flush its cache and carry on; no need to go
finding every affected class and having them flush their caches. Having
a dispatcher object helps with multimethod dispatch too of course (and
where are you going to stick your multimethod lookup cache if you
*don't* have a dispatcher object?).

Of course, if you have OO languages that have weird dispatch rules, you
might need to have multiple dispatchers hanging around but (I'd argue)
you're still better attaching them to classes using composition rather
than inheritance.
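Here's a minimal Python sketch of that dispatcher idea. All the names here (`Class_`, `Dispatcher`, `dispatch`, `flush`) are assumed for illustration; this is the design being argued for, not Parrot's actual API.

```python
# Toy dispatcher (assumed design): the class only *stores* methods; a
# separate Dispatcher owns the lookup cache, so cache invalidation is a
# single flush() on the dispatcher rather than a walk over every class.

class Class_:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.methods = name, parent, {}

class Dispatcher:
    def __init__(self):
        self.cache = {}                      # (class name, method) -> callable

    def dispatch(self, klass, name, *args):
        key = (klass.name, name)
        if key not in self.cache:
            c = klass
            while c is not None and name not in c.methods:
                c = c.parent                 # walk the inheritance chain
            if c is None:
                raise AttributeError(name)
            self.cache[key] = c.methods[name]
        return self.cache[key](*args)

    def flush(self):
        self.cache.clear()                   # one call invalidates everything

animal = Class_("Animal")
animal.methods["speak"] = lambda: "generic noise"
dog = Class_("Dog", parent=animal)

d = Dispatcher()
assert d.dispatch(dog, "speak") == "generic noise"   # filled the cache
d.flush()                                            # classes untouched
assert d.dispatch(dog, "speak") == "generic noise"   # re-resolved cleanly
```

The classes themselves never learn about the cache, which is the decoupling point: swap in a different Dispatcher (or several, for languages with odd dispatch rules) by composition, without touching the class hierarchy.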


Re: Classes and metaclasses

2004-03-14 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>
>> The one question I have is whether we need to have a "call class
>> method" operation that, when you invoke it, looks up the class of the
>> object and redispatches to it, or whether class methods are just
>> methods called on class objects.
>
> The terms are misleading a bit here.
> - a ParrotClass isa delegate PMC
> - a ParrotObject isa ParrotClass
^^
That definitely seems to be the wrong way 'round. 

> - a HLL class isa Parrotclass and a low-level PMC object
> - a HLL object isa all of above



Re: LANGUAGES.STATUS

2004-03-01 Thread Piers Cawley
Mitchell N Charity <[EMAIL PROTECTED]> writes:

> (1) LANGUAGES.STATUS is out of date.
>
> I found (on linux x86 [1]):
>
> These languages failed to build:
>   BASIC/interpreter
>   jako
>   miniperl
>   tcl
>
> And these languages were quite broken (bad make test failures):
>   BASIC/compiler [2]
>   m4
>   ruby
>   scheme
>
> LANGUAGES.STATUS says they all work.
>
> If my result is typical, adding a
>   S: Not working as of 0.0.14 release.
> line to each of these entries seems appropriate.
>
>
> (2) Also, these languages' directories could really use README's:
>   python
>   plot
> README's saying just what their LANGUAGES.STATUS entries say
> ("elsewhere" and "broken and abandoned", respectively).

Ditch plot, it was a quick and dirty attempt at a scheme interpreter
based on closures at OSCON last year that failed dismally. I keep
meaning to do something properly now that we have objects, but it won't
be in the parrot tree 'til it's working.


Re: Release doc tasks

2004-02-20 Thread Piers Cawley
Leopold Toetsch <[EMAIL PROTECTED]> writes:

> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>
>> Nah. Modern filesystems understand that A and a are the same letter.
>> It's those old antique filesystems that don't understand that... :-P
>
> yeP dAn, yOu ArE rIghT.

Still the same letters. Just a strange choice of glyphs.



Re: Some minor decisions and timetables

2004-02-07 Thread Piers Cawley
Uri Guttman <[EMAIL PROTECTED]> writes:

>> "DS" == Dan Sugalski <[EMAIL PROTECTED]> writes:
>
>   DS> At 4:28 PM +0100 2/4/04, Leopold Toetsch wrote:
>   >> Dan Sugalski <[EMAIL PROTECTED]> wrote:
>   >>> Okay, here's a quick scoop and status.
>   >> 
>   >>> *) I'd like to shoot for a Feb 14th release. Names wanted. (I'm
>   >>> partial to the bleeding heart release, but not that partial)
>   >> 
>   >> I had planned towards Feb 29th. A nice dated too this year.
>
>   DS> Works for me.
>
> then how about calling it the bleaping  release? :)

The Leaping Kakapo release? 


Re: References to hash elements?

2004-01-13 Thread Piers Cawley
Simon Cozens <[EMAIL PROTECTED]> writes:

> Arthur Bergman:
>> I am wondering how the references to hash elements are planned to be 
>> done? The call to set_ must somehow be delayed until the time is right.
>
> I would have thought that a hash element would itself be a PMC rather
> than an immediate value, so a reference to that should be treated just
> like any other reference to a PMC.

I believe the correct name for this is 'Pair', isn't it?


Re: Continuations don't close over register stacks

2004-01-08 Thread Piers Cawley
Melvin Smith <[EMAIL PROTECTED]> writes:
> At 06:37 PM 1/7/2004 -0700, Luke Palmer wrote:
>>Leopold Toetsch writes:
>> > Jeff Clites <[EMAIL PROTECTED]> wrote:
>> > > On Jan 7, 2004, at 1:46 AM, Leopold Toetsch wrote:
>> > >> That part is already answered: create a buffer_like structure.
>> > >> *But* again register backing stacks are *not* in the interpreter
>> > >> context.
>> >
>> > > I don't understand what you are getting at. They are not physically
>> > > part of Parrot_Interp.ctx, but it holds pointers to them, right?
>> >
>> > No, they were in the context but aren't any more.
>> >
>> > > ... So,
>> > > they need to be copied when the context is being duplicated. Is that
>> > > your point, or are you trying to say that they are not _logically_ part
>> > > of the context, or are not supposed to be?
>> >
>> > Exactly the latter:
>> > That was AFAIK a design decision, when Dan did introduce CPS. At this
>> > time register backing stacks went out of the continuation or whatelse
>> > context - IIRC did Dan commit that to CVS himself.
>>
>>In which case I feel obliged to contest that decision.  The register
>>backing stacks are as much a part of the current state as the program
>>counter is.
>
> I tend to agree, but maybe Dan can explain. I looked back at the
> CVS history and when I put continuations in, I did originally have
> register stacks in the Parrot_Context (although they weren't yet
> garbage collected). Dan since reverted that and put them back to
> the top level interpreter object.

I also agree. Continuations that don't save the register stacks are
about as much use as a chocolate teapot. Maybe it was supposed to be a
temporary reversion until GC got sorted.


Re: This week's summary

2004-01-05 Thread Piers Cawley
Melvin Smith <[EMAIL PROTECTED]> writes:

> At 09:30 PM 1/5/2004 +0000, Piers Cawley wrote:
>>Melvin Smith <[EMAIL PROTECTED]> writes:
>>
>> > At 07:55 PM 1/5/2004 +0100, Lars Balker Rasmussen wrote:
>> >>The Perl 6 Summarizer <[EMAIL PROTECTED]> writes:
>> >> > people's salaries will depend on Parrot. I confess I
>> >> > wouldn't be surprised if, by the end of the year, we
>> >> > haven't seen the full implementation of at least one of
>> >> > the big non-Perl scripting languages on top of Parrot.
>> >>
>> >>I'm confused, are you optimistic or pessimistic in that last
>> >>sentence?
>> >
>> > Knowing Piers, I would guess: optimistic. :)
>>
>>Have we met? You're right though.
>
> Unless you count our chats on IRC, no.
>
> I can deduce that much from IRC and summaries. We do read them, you
> know. :)

Thank heavens for that. I thought people printed them out and used
them to roll cigarettes with.

-- 
Piers


Re: This week's summary

2004-01-05 Thread Piers Cawley
Melvin Smith <[EMAIL PROTECTED]> writes:

> At 07:55 PM 1/5/2004 +0100, Lars Balker Rasmussen wrote:
>>The Perl 6 Summarizer <[EMAIL PROTECTED]> writes:
>> > people's salaries will depend on Parrot. I confess I wouldn't be
>> > surprised if, by the end of the year, we haven't seen the full
>> > implementation of at least one of the big non-Perl scripting languages
>> > on top of Parrot.
>>
>>I'm confused, are you optimistic or pessimistic in that last sentence?
>
> Knowing Piers, I would guess: optimistic. :)

Have we met? You're right though.

-- 
Beware the Perl 6 early morning joggers -- Allison Randal


Re: This week's summary

2004-01-05 Thread Piers Cawley
Lars Balker Rasmussen <[EMAIL PROTECTED]> writes:
> The Perl 6 Summarizer <[EMAIL PROTECTED]> writes:
>> Me? I think Perl 6's design 'in the large' will be pretty much
>> done once Apocalypse 12 and its corresponding Exegesis are
>> finished. Of course, the devil is in the details, but I don't
>> doubt that the hoped for existence of a working Perl6::Rules by
>> the end of April is going to provide us with a great deal of
>> the leverage we need to get a working Perl 6 alpha ready for
>> OSCON with something rather more solid ready by the end of the
>> year. Parrot continues to amaze and delight with its progress;
>> Dan tells me that he's about ready to roll out a large parrot
>> based application for his employers, so it's approaching the
>> point where people's salaries will depend on Parrot. I confess
>> I wouldn't be surprised if, by the end of the year, we haven't
>> seen the full implementation of at least one of the big
>> non-Perl scripting languages on top of Parrot.
>
> I'm confused, are you optimistic or pessimistic in that last sentence?

Optimistic. Parrot being used for other languages too is a good
thing. We're not going to see a full Perl 6 inside a year 'cos the
design won't be finished (well, not at the detailed level it needs to
be for a full implementation) and Ponie probably won't be finished
because of the complexities of getting the backwards compatibility
with XS that's Ponie's raison d'etre. But there are other languages
out there that don't have such stringent requirements.

-- 
Beware the Perl 6 early morning joggers -- Allison Randal

