RFC: Implicit threading and Implicit event-loop (Was: Re: Continuations)

2009-05-27 Thread Daniel Ruoso
On Tue, 2009-05-26 at 19:33 -0700, Jon Lang wrote:
 The exact semantics of autothreading with respect to control
 structures are subject to change over time; it is therefore erroneous
 to pass junctions to any control construct that is not implemented via
 as a normal single or multi dispatch. In particular, threading
 junctions through conditionals correctly could involve continuations,
 which are almost but not quite mandated in Perl 6.0.0.
 What is a continuation?

Continuation here is meant in the most generic sense, which is:

The rest of the thread of execution

It doesn't imply any specific API on manipulating the continuations, nor
it implies that the continuations are re-invocable, cloneable or
anything like that.

It basically means that the interpreter can choose to interrupt your
code at any point and continue it later, after running some other code.
This has the basic effect that Perl 6 points toward *implicit threading*
rather than explicit, and also that it points toward *implicit event
loop* rather than explicit.

In practical terms:

 sub baz (*@input) {
     for @input -> $element {
         say "BAZ!";
         $element + 1;
     }
 }

 sub foo (*@input) {
     for @input -> $element {
         say "FOO!";
         $element - 1;
     }
 }

 sub bar (*@input) {
     for @input -> $element {
         say "BAR!";
         $element * 2;
     }
 }

 say "BEFORE!";
 my @a <== baz <== foo <== bar <== $*IN;
 say "AFTER!";

This is going to open five implicit threads (which might be delegated to any
number of worker threads), besides the initial thread. So, at first,
you'll immediately see in the output:

 BEFORE!
 AFTER!

The implicit threads are:

  1 - read from $*IN and push into a lazy list X
  2 - read from the lazy list X, run an iteration of the for in the sub 
  bar, and push to the lazy list Y
  3 - read from the lazy list Y, run an iteration of the for in the sub
  foo, and push to the lazy list W
  4 - read from the lazy list W, run an iteration of the for in the sub
  baz, and push to the lazy list Z
  5 - read from the lazy list Z and push into the lazy list that
  happens to be stored in '@a'

That basically means that these lazy lists are attached to the
interpreter's main loop (yes, Perl 6 should implement something POE-like
in its core), which will allow IO reads to be non-blocking, so you
don't need an OS thread for that. It also means that every lazy list
should somehow be attached to that event loop.
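
For illustration only, here is a rough Python sketch (not how Rakudo, SMOP or
Parrot actually implement feeds) of the dataflow described above: one worker
thread per pipeline stage, connected by queues. The names (stage, reader,
SENTINEL) are invented for the sketch, and input lines are assumed to be
numeric so the arithmetic from the example makes sense.

    import sys
    import threading
    import queue

    SENTINEL = object()   # marks end-of-stream between stages

    def stage(label, transform, inq, outq):
        # One implicit thread: read from inq, announce itself, push to outq.
        def run():
            while True:
                item = inq.get()
                if item is SENTINEL:
                    outq.put(SENTINEL)
                    return
                print(label)
                outq.put(transform(item))
        threading.Thread(target=run, daemon=True).start()

    def reader(outq):
        # Thread 1: read lines from stdin and feed the pipeline lazily.
        def run():
            for line in sys.stdin:
                outq.put(int(line))      # assumes numeric input lines
            outq.put(SENTINEL)
        threading.Thread(target=run, daemon=True).start()

    print("BEFORE!")
    q_in, q_bar, q_foo, q_baz = (queue.Queue() for _ in range(4))
    reader(q_in)                                   # thread 1
    stage("BAR!", lambda x: x * 2, q_in,  q_bar)   # thread 2: sub bar
    stage("FOO!", lambda x: x - 1, q_bar, q_foo)   # thread 3: sub foo
    stage("BAZ!", lambda x: x + 1, q_foo, q_baz)   # thread 4: sub baz
    print("AFTER!")                                # printed before any input arrives

    results = []                                   # plays the role of @a (thread 5)
    while (item := q_baz.get()) is not SENTINEL:
        results.append(item)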

So, as you enter data in $*IN, you should get something like this:

 I entered this line!
 BAR!
 FOO!
 BAZ!
 I entered this other line!
 BAR!
 FOO!
 BAZ!

On the implementation side, I think there is going to be a
ControlExceptionWouldBlock, which is raised by every lazy object when
the data is not immediately available, allowing the interpreter to put
this continuation into a blocked state, somehow registering a listener
for the event that blocks it.

One of the attributes of the ControlExceptionWouldBlock would be an
Observable object. This Observable object is the thing that waits
for the specific event to happen and registers additional listeners for
that event. The interpreter itself will register as an Observer
on that Observable, so it can re-schedule the thread, marking it as
waiting.

That being said, I think we have a pool of continuations which are in
either running, blocked or waiting state, and a scheduler that
takes these continuations and assigns them to the worker threads, while you
can use a command-line switch to control the minimum/maximum number of
worker threads as well as the parameters for when to start a new worker
thread and when to deactivate one...
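
Purely as a sketch of that idea (not SMOP's or Parrot's actual API; WouldBlock,
Scheduler and the event names are all invented here), the continuation pool and
the would-block/observer dance can be modelled in Python, with generators
standing in for continuations:

    from collections import deque, defaultdict

    class WouldBlock:
        # Stand-in for ControlExceptionWouldBlock: a continuation yields one
        # of these (rather than raising, so the generator can be resumed later)
        # to say which observable event it is waiting on.
        def __init__(self, observable):
            self.observable = observable

    class Scheduler:
        # Keeps "continuations" (generators) in runnable or waiting state.
        def __init__(self):
            self.runnable = deque()
            self.waiting = defaultdict(list)    # observable -> parked continuations

        def spawn(self, gen):
            self.runnable.append(gen)

        def fire(self, observable):
            # The interpreter, acting as Observer, re-schedules the listeners.
            self.runnable.extend(self.waiting.pop(observable, []))

        def run(self):
            while self.runnable:
                cont = self.runnable.popleft()
                try:
                    request = next(cont)        # resume the continuation
                except StopIteration:
                    continue                    # that thread of execution finished
                if isinstance(request, WouldBlock):
                    self.waiting[request.observable].append(cont)   # park it
                else:
                    self.runnable.append(cont)  # plain yield: still runnable

    def consumer(mailbox):
        # A toy "implicit thread" that blocks until a line of input arrives.
        while True:
            while not mailbox:
                yield WouldBlock("stdin-readable")
            print("got:", mailbox.pop(0))

    sched = Scheduler()
    inbox = []
    sched.spawn(consumer(inbox))
    sched.run()                     # the consumer parks immediately
    inbox.append("a line of input")
    sched.fire("stdin-readable")    # the event loop saw data; wake the listener
    sched.run()                     # prints: got: a line of input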

Well, this is my current view of the state of affairs; it was thought out
largely in the context of SMOP, so it would be really interesting to have
some feedback from the Parrot folks...

daniel



Re: Continuations

2009-05-27 Thread Jon Lang
Andrew Whitworth wrote:
 The issue mentioned in the Synopses is that junctions autothread, and
 autothreading in a conditional could potentially create multiple
 threads of execution, all of which are taking different execution
 paths. At some point, to bring it all back together again, the various
 threads could use a continuation to return back to a single execution
 flow.

Hmm.  If that's the case, let me suggest that such an approach would
be counterintuitive, and not worth considering.  When I say "if any of
these books are out of date, review your records for inconsistencies;
otherwise, make the books available for use", I don't expect to end up
doing both tasks.  In a similar manner, I would expect junctions not
to autothread over conditionals, but to trigger at most one execution
path (continuation?).  The real issue that needs to be resolved, I
believe, is illustrated in the following statement:

"If any of these books are out of date, review them."  The question,
as I understand it, is what is meant by "them"?  Is it "these
books", or is it the ones that are out of date?  In Perl 6 terms:

   $x = any(@books);
   if $x.out-of-date { $x.review }

Is this equivalent to:

   if any(@books).out-of-date { any(@books).review }

or:

   if any(@books).out-of-date { any(@books.grep {.out-of-date} ).review }
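
For what it's worth, the two readings can be sketched operationally in Python,
with a plain list and any() standing in for the junction (book data invented):

    books = [{"title": "A", "stale": True}, {"title": "B", "stale": False}]

    # Reading 1: "them" means *these books* -- if any is out of date, review all.
    if any(b["stale"] for b in books):
        review_1 = books                                   # A and B

    # Reading 2: "them" means only the ones that made the condition true.
    if any(b["stale"] for b in books):
        review_2 = [b for b in books if b["stale"]]        # just A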

I don't mean to reopen the debate; though if we can get some
resolution on this, I won't mind.  But I _would_ at least like to see
the summary of the issue stated a bit more clearly.

-- 
Jonathan Dataweaver Lang


Re: RFC: Implicit threading and Implicit event-loop (Was: Re: Continuations)

2009-05-27 Thread John M. Dlugosz

Sounds like threads to me.

What I see that's different from common threads in other languages is 
that they are all the same, rather than one master and many new threads 
that have no context history above them.  In Perl 6, every thread sees 
the same dynamic scope as the original.  It doesn't matter which one is 
left standing to continue and eventually return up the context chain. 


--John





Continuations

2009-05-26 Thread Jon Lang
From S09, under Junctions:

The exact semantics of autothreading with respect to control
structures are subject to change over time; it is therefore erroneous
to pass junctions to any control construct that is not implemented via
as a normal single or multi dispatch. In particular, threading
junctions through conditionals correctly could involve continuations,
which are almost but not quite mandated in Perl 6.0.0.

What is a continuation?

-- 
Jonathan Dataweaver Lang


Re: Continuations

2009-05-26 Thread John M. Dlugosz

Jon Lang dataweaver-at-gmail.com |Perl 6| wrote:

From S09, under Junctions:

The exact semantics of autothreading with respect to control
structures are subject to change over time; it is therefore erroneous
to pass junctions to any control construct that is not implemented via
as a normal single or multi dispatch. In particular, threading
junctions through conditionals correctly could involve continuations,
which are almost but not quite mandated in Perl 6.0.0.

What is a continuation?

  

http://en.wikipedia.org/wiki/Continuation

Early on, Perl 6 discussion featured a lot of discussion of continuations.  Now, I 
don't see it anywhere at all, and believe that the general form is not 
required, by design.  That is, not mandated.  It's a computer science 
concept that generalizes *all* forms of flow control, including 
exceptions, co-routines, etc.  The long jump or exception is a more 
normal case of returning to something that is still in context, but 
imagine if you could go both ways:  bookmark something in the code, like 
making a closure but for the complete calling stack of activation 
complexes, and then jump back to it later.
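
Python has no first-class continuations, but the "rest of the computation" idea
can be made concrete in continuation-passing style; this is only an
illustration, with all names invented:

    # In CPS every function receives "the rest of the computation" (k)
    # explicitly, instead of returning to an implicit caller.

    def square_cps(x, k):
        return k(x * x)

    def add_cps(a, b, k):
        return k(a + b)

    def pythagoras_cps(a, b, k):
        return square_cps(a, lambda a2:
               square_cps(b, lambda b2:
               add_cps(a2, b2, k)))

    pythagoras_cps(3, 4, print)        # 25: the final continuation just prints

    # "Bookmarking": stash a continuation partway through and re-invoke it
    # later with a different value -- jumping back into the middle of the flow.
    saved = []

    def capture(value, k):
        saved.append(k)
        return k(value)

    square_cps(3, lambda nine: capture(nine, lambda n: print(n + 1)))   # 10
    saved[0](100)                                                       # 101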


Re: Blocks, continuations and eval()

2005-04-22 Thread Stéphane Payrard
Hi,

I am making a presentation about Perl 6 this weekend.  My point will
be: the next generation of applicative languages will be scripting
languages, because they have come of age.

Alternatives don't cut it anymore. Indeed, C and C++ are memory
allocation nightmares; Java and C# don't have a read-eval loop, a
necessary condition for rapid learning and development.  Functional
languages like Haskell or OCaml are very powerful but need massive
wetware reconfiguration to get used to the syntax and semantics.

So I will do a presentation of Perl 6 and Parrot features to make
my point about upcoming scripting languages.

I have a few questions inspired by my recently acquired knowledge
of functional languages. Perl 6 being the ultimate syncretist
language, I wonder if some functional features will make it
into Perl 6. I know we already got currying.

A very nice feature of Haskell and the *ML family is the ability to define
complex data structure types and the control flow that manipulates
these structures: constructors and pattern matching.  In these
languages, in a very deep sense, control flow is pattern matching. Can
we expect Perl 6 to propose something similar?
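
Not a Perl 6 feature, but to make the "control flow is pattern matching" idea
concrete, here is a small sketch using Python 3.10's structural pattern
matching (the Leaf/Node constructors are invented for the example):

    from dataclasses import dataclass

    @dataclass
    class Leaf:
        value: int

    @dataclass
    class Node:
        left: object
        right: object

    def total(tree):
        # Control flow driven by the shape of the data, ML-style.
        match tree:
            case Leaf(value=v):
                return v
            case Node(left=l, right=r):
                return total(l) + total(r)

    print(total(Node(Leaf(1), Node(Leaf(2), Leaf(3)))))   # 6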

If yes, could the matching part be folded into the rule syntax?  Rules
are about identifying structures in parsed strings and acting
accordingly.  Pattern matching is about identifying typed structures and
acting accordingly. There is a similarity there.  Also, we may want to
match both at the structural level and at the string level.  Or is
this asking too much of rules, which have already swallowed both lexing
and parsing?

The notion of data type becomes very useful in Perl 6 for people who
want it.  In fact, Perl 6 is a mix of dynamic and static types
(bindings).  I think type theory handles type inference in this kind
of language with something called dependent types.  Though I have to go
through ATTaPL to get it.

Perl, like many scripting languages, is very lax and, when needed,
implicitly converts values within expressions.  This is nice, but I
think that makes type inference impossible.  Type inference is good
because it allows generating very efficient/strict code with very
few type annotations.

Can we expect, in the distant future, a pragma or mode convention to
control automatic type conversions (if any) and the type inference
scheme chosen, when/if implemented?


-- 
  cognominal stef


Re: Blocks, continuations and eval()

2005-04-22 Thread Stéphane Payrard
On Fri, Apr 22, 2005 at 08:13:58PM +0200, Stéphane Payrard wrote:
 On Fri, Apr 22, 2005 at 09:32:55AM -0700, Larry Wall wrote:
 
 Thank you for your detailed answer. I still don't get what you mean 
 by "[] pattern matching arguments". 
 Do you mean smart pattern matching on composite values? 
 
  
  A lot of features are making it into Perl 6 that have historically been
  associated with functional programming.  Off the top of my head:
  ...
  [] pattern matching arguments

Thx to people on #perl6, I got it.  It is a form of pattern matching
on arguments. It is described in S06 under the headers "Unpacking
array parameters" and "Unpacking hash parameters".
  

 sub quicksort ([$pivot, *@tail], ?$reverse, ?$inplace) {
     ...
 }
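
For comparison only, a minimal Python sketch of the same unpack-the-pivot idea
(Python cannot destructure inside the signature, so it happens on the first
line; the ?$reverse/?$inplace options are omitted):

    def quicksort(items):
        if not items:
            return []
        pivot, *tail = items      # unpack: first element, and the rest
        smaller = [x for x in tail if x < pivot]
        larger  = [x for x in tail if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([3, 1, 2]))   # [1, 2, 3]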

So if we mix that with typing, we will end up with full-fledged unification?

-- 
  cognominal stef


Re: Blocks, continuations and eval()

2005-04-21 Thread wolverian
On Tue, Apr 12, 2005 at 04:17:56AM -0700, Larry Wall wrote:
 We'll make continuations available in Perl for people who ask for
 them specially, but we're not going to leave them sitting out in the
 open where some poor benighted pilgrim might trip over them unawares.

Sorry for replying so late, but I missed your reply somehow. I just want
to ask a little clarification on this; exactly what kind of hiding are
you considering for continuations? That is, do you just mean that there
will not be a 'call/cc' primitive by default in the global namespace?
I'm fine with that, as that's just one method of capturing the calling
continuation.

 Larry

-- 
wolverian


signature.asc
Description: Digital signature


Re: Blocks, continuations and eval()

2005-04-21 Thread Larry Wall
On Thu, Apr 21, 2005 at 04:30:07PM +0300, wolverian wrote:
: On Tue, Apr 12, 2005 at 04:17:56AM -0700, Larry Wall wrote:
:  We'll make continuations available in Perl for people who ask for
:  them specially, but we're not going to leave them sitting out in the
:  open where some poor benighted pilgrim might trip over them unawares.
: 
: Sorry for replying so late, but I missed your reply somehow. I just want
: to ask a little clarification on this; exactly what kind of hiding are
: you considering for continuations? That is, do you just mean that there
: will not be a 'call/cc' primitive by default in the global namespace?
: I'm fine with that, as that's just one method of capturing the calling
: continuation.

I suspect it's just something like

use Continuations;

at the top to enable the low-level interface.  There would be no
restriction on using continuation semantics provided by other modules,
because then the use of that other module implies whatever form
of continuation it provides.

My concern is primarily the reader of the code, who needs some kind
of warning that one can get sliced while juggling sharp knives.  If
we were willing to be a little more Ada-like, we'd make it a shouted
warning:

use CONTINUATIONS;

Hmm, maybe that's not such a bad policy.  I wonder what other dangerous
modules we might have.  Ada had UNCHECKED_TYPE_CONVERSION, for instance.

Larry


Re: Blocks, continuations and eval()

2005-04-21 Thread Nigel Sandever
On Thu, 21 Apr 2005 08:36:28 -0700, [EMAIL PROTECTED] (Larry Wall) wrote:
 
 Hmm, maybe that's not such a bad policy.  I wonder what other dangerous
 modules we might have.  Ada had UNCHECKED_TYPE_CONVERSION, for instance.
 

How about
use RE_EVAL; # or should that be REALLY_EVIL?

 Larry





Re: Blocks, continuations and eval()

2005-04-12 Thread Larry Wall
On Tue, Apr 12, 2005 at 11:36:02AM +0100, Piers Cawley wrote:
: wolverian [EMAIL PROTECTED] writes:
: 
:  On Fri, Apr 08, 2005 at 12:18:45PM -0400, MrJoltCola wrote:
:  I cannot say how much Perl6 will expose to the high level language.
: 
:  That is what I'm wondering about. I'm sorry I was so unclear.
: 
:  Can you tell me what your idea of a scope is? I'm thinking a
:  continuation, and if that is what you are thinking, I'm thinking the
:  answer to your question is yes.
: 
:  Yes. I want to know how Perl 6 exposes continuations, and how to get one
:  for, say, the current lexical scope, and if it has a method on it that
:  lets me evaluate code in that context (or some other way to do that).
: 
: As I understand what Larry's said before. Out of the box, it
: doesn't. Apparently we're going to have to descend to Parrot to write
: evalcc/letcc/your-preferred-continuation-idiom equivalent. 

We'll make continuations available in Perl for people who ask for
them specially, but we're not going to leave them sitting out in the
open where some poor benighted pilgrim might trip over them unawares.

Larry


Re: Blocks, continuations and eval()

2005-04-12 Thread Piers Cawley
Larry Wall [EMAIL PROTECTED] writes:

 On Tue, Apr 12, 2005 at 11:36:02AM +0100, Piers Cawley wrote:
 : wolverian [EMAIL PROTECTED] writes:
 : 
 :  On Fri, Apr 08, 2005 at 12:18:45PM -0400, MrJoltCola wrote:
 :  I cannot say how much Perl6 will expose to the high level language.
 : 
 :  That is what I'm wondering about. I'm sorry I was so unclear.
 : 
 :  Can you tell me what your idea of a scope is? I'm thinking a
 :  continuation, and if that is what you are thinking, I'm thinking the
 :  answer to your question is yes.
 : 
 :  Yes. I want to know how Perl 6 exposes continuations, and how to get one
 :  for, say, the current lexical scope, and if it has a method on it that
 :  lets me evaluate code in that context (or some other way to do that).
 : 
 : As I understand what Larry's said before. Out of the box, it
 : doesn't. Apparently we're going to have to descend to Parrot to write
 : evalcc/letcc/your-preferred-continuation-idiom equivalent. 

 We'll make continuations available in Perl for people who ask for
 them specially, but we're not going to leave them sitting out in the
 open where some poor benighted pilgrim might trip over them unawares.

Oh goody! Presumably we're initially talking of a simple
'call_with_current_continuation'? 


Blocks, continuations and eval()

2005-04-08 Thread wolverian
Hi,

(I'm sorry if this topic has already been discussed.)

one day a friend asked if Perl 5 had a REPL facility.
(Read-Eval-Print-Loop). I told him it has perl -de0, which is different
in that it does not preserve the lexical scope across evaluated lines.
This is because eval STRING creates its own scope, in which the string
is then evaluated.

You can hack around this with a recursive eval(), which will eventually
blow the stack. I wrote a short module to do this, but never released
it. Have others done this? :)
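
For comparison, a minimal Python sketch (variable names invented) of a REPL
that preserves its scope across evaluated lines by evaluating everything in one
shared namespace:

    import code

    # One dict is the surviving "scope": every evaluated line shares it,
    # so a variable defined on one line is still visible on the next.
    shared_scope = {}
    console = code.InteractiveConsole(locals=shared_scope)

    # console.interact() would start the loop; pushing lines by hand shows it:
    console.push("x = 41")
    console.push("print(x + 1)")   # prints 42: x survived into the next line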

To get to the real topic:

In Perl 6, the generic solution to fix this (if one wants to fix it)
seems, to me, to be to add a .eval method to objects that represent
scopes. I'm not sure if scopes are first class values in Perl 6. Are
they? How do you get the current scope as an object? Are scopes just
Code objects?

On #perl6, theorbtwo wasn't sure if .eval should be a method on coderefs
or blocks. Is there a difference between the two? I always hated this
about Ruby; there seems to be no practical value to the separation.

Also, are blocks/coderefs/scopes continuations? Should .eval be a method
in Continuation?

Thanks,

-- 
wolverian


signature.asc
Description: Digital signature


Re: Blocks, continuations and eval()

2005-04-08 Thread David Storrs
On Fri, Apr 08, 2005 at 05:03:11PM +0300, wolverian wrote:

Hi wolverian,

 one day a friend asked if Perl 5 had a REPL facility.
 (Read-Eval-Print-Loop). I told him it has perl -de0, which is different
 [...]
 In Perl 6, the generic solution to fix this (if one wants to fix it)
 seems, to me, to be to add a .eval method to objects that represent
 scopes. I'm not sure if scopes are first class values in Perl 6. Are
 they? How do you get the current scope as an object? Are scopes just
 Code objects?

I'm unclear on what you're looking for.  Are you trying to get a way
to do interactive coding in P6?  Or the ability to freeze a scope
and execute it later?  Or something else?

--Dks


-- 
[EMAIL PROTECTED]


Re: Blocks, continuations and eval()

2005-04-08 Thread MrJoltCola
At 10:03 AM 4/8/2005, wolverian wrote:
To get to the real topic:
In Perl 6, the generic solution to fix this (if one wants to fix it)
seems, to me, to be to add a .eval method to objects that represent
scopes. I'm not sure if scopes are first class values in Perl 6. Are
they? How do you get the current scope as an object? Are scopes just
Code objects?
On #perl6, theorbtwo wasn't sure if .eval should be a method on coderefs
or blocks. Is there a difference between the two? I always hated this
about Ruby; there seems to be no practical value to the separation.
Also, are blocks/coderefs/scopes continuations? Should .eval be a method
in Continuation?
I'm having a bit of trouble following you, but I can tell you that the VM 
portion
treats continuations as well as lexical scopes or pads as first class Parrot
objects (or PMCs).

I cannot say how much Perl6 will expose to the high level language.
Can you tell me what your idea of a scope is? I'm thinking a
continuation, and if that is what you are thinking, I'm thinking the
answer to your question is yes.
-Melvin



Re: Blocks, continuations and eval()

2005-04-08 Thread wolverian
On Fri, Apr 08, 2005 at 08:35:30AM -0700, David Storrs wrote:
 I'm unclear on what you're looking for.  Are you trying to get a way
 to do interactive coding in P6?  Or the ability to freeze a scope
 and execute it later?  Or something else?

Neither in itself. I'm looking for a way to refer to scopes
programmatically. I'm also asking if they are continuations, or blocks,
or coderefs, or are those all the same?

The two things you mention are effects of being able to refer to scopes
in such a fashion. I do want both, but the real question isn't if they
are possible, but about what blocks, coderefs and scopes are.

I'm sorry if I was unclear. I probably should have spent more time
writing the post. :)

 --Dks

--
wolverian


signature.asc
Description: Digital signature


Re: Blocks, continuations and eval()

2005-04-08 Thread wolverian
On Fri, Apr 08, 2005 at 12:18:45PM -0400, MrJoltCola wrote:
 I cannot say how much Perl6 will expose to the high level language.

That is what I'm wondering about. I'm sorry I was so unclear.

 Can you tell me what your idea of a scope is? I'm thinking a
 continuation, and if that is what you are thinking, I'm thinking the
 answer to your question is yes.

Yes. I want to know how Perl 6 exposes continuations, and how to get one
for, say, the current lexical scope, and if it has a method on it that
lets me evaluate code in that context (or some other way to do that).

 -Melvin

-- 
wolverian


signature.asc
Description: Digital signature


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-20 Thread Dan Sugalski
At 12:00 AM + 3/20/03, Simon Cozens wrote:
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
 OK, I suppose that works although that still means you're moving the
 complexity from the perl implementation to its usage: in this case,
 the perl 6 parser which is written in perl 6
No, I don't believe that's what's happening. My concern is that at some
point, there *will* need to be a bootstrapped parser which is written in
some low level language, outputting Parrot bytecode, and it *will* need
to be able to reconfigure itself mid-match.
I think. I can't remember why I'm so convinced of this, and I'm too tired
to think it through with examples right now, and I might be wrong anyway,
but at least I can be ready with a solution if it proves necessary. :)
You may well be right--I don't think so,  but I'm not at my clearest 
either. I don't see that it'll be needed outside the initial 
bootstrap parser if at all, so I'm not too worried. (And the 
low-level language for it will probably be perl 5, since I'd far 
rather build something with a Parse::RecDescent grammar than a 
hand-nibbler in C)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-20 Thread Austin Hastings

--- Matthijs van Duin [EMAIL PROTECTED] wrote:
 On Wed, Mar 19, 2003 at 03:46:50PM -0500, Dan Sugalski wrote:
 
 They should be though, if a variable was hypothesized when the 
 continuation was taken, then it should be hypothesized when that 
 continuation is invoked.
 
 Should they? Does hypotheticalization count as data modification (in
 
 which case it shouldn't) or control modification (in which case it 
 should),
 
 Isn't that the whole point of hypotheses in perl 6?  You talk about 
 successful and unsuccessful de-hypothesizing, and about abnormal
 exits etc..  you seem to have a much more complex model of hypotheses 
 than what's in my head.

The complex model is right -- in other words, if hypotheses are to be a
first-class part of the language then they must interoperate with all
the other language features.

So there are several ways out of a sub. What does a hypo do in each of these
cases? (A rough sketch of this lifecycle follows the list below.)

let $x   - Declares a hypothesis. I think this should be a verb.
           (Which is to say, a function instead of a storage class.)
           (This in turn suggests that primitive types can't be
           hypothesized, although arrays thereof could be.)

fail     - Pretty strongly suggests that the hypo is ignored.
           Also suggests continuation/backtracking behavior. Do we
           really know what fail does? See discard, below.

discard $x - (suggested) Obvious keyword for failing one single
           hypothesis.

keep $x  - Suggests the value is made permanent. This should be
           a verb. C<keep> with no args should just keep every
           hypo in the current (fill-in-the-blank) SCOPE.

return   - This is unclear. On the one hand, it's a transfer of
           control and I think that C<let> and C<keep> are data,
           not control, modifiers. On the other hand, it's one of
           the normal ways to leave a block, and could be argued
           either way.

           (Also: remember the bad old days of C<my> vs. C<local>.
           I propose that hypos fail by default - to keep the
           distinction clear in the minds of newbies.)

throw    - Exceptions unwind the call stack. If let is a
           control action, it should undo. If let is a data
           action, it should not. To me, C<let> is an explicit
           action taken against a variable, and should not be
           undone by this. (Of course, if unwinding the call stack
           causes the variable to go out of scope, it's not an
           issue.)

continuation: goto
         - Again: continuations are transfers of control, not
           data. If let is a control action, continuations
           will have to know whether they are transferring back up
           the stack or whether they are transferring to some new,
           never-before-seen (on the stack) place. I think that
           C<let> should be a data action, in which case this
           doesn't affect the hypothesis.

generators: yield
         - Same issues, although the argument can be made that
           since you can resume a generator, the hypo should
           be confined to the extant-but-inactive scope.
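
None of this is settled Perl 6 semantics; just to make the lifecycle above
concrete, here is a rough Python sketch in which a dict stands in for a scope
and the class and method names are invented:

    class Hypotheses:
        # Tracks 'let' bindings on a scope so they can be kept or discarded.
        def __init__(self, scope):
            self.scope = scope      # the namespace being hypothesized over
            self.saved = {}         # name -> original value, saved on first let

        def let(self, name, value):
            # Declare a hypothesis: remember the old value, install the new one.
            self.saved.setdefault(name, self.scope.get(name))
            self.scope[name] = value

        def keep(self, name=None):
            # Make one hypothesis (or, with no argument, all of them) permanent.
            for n in ([name] if name else list(self.saved)):
                self.saved.pop(n, None)

        def discard(self, name=None):
            # Fail one hypothesis (or all of them): restore the saved value(s).
            for n in ([name] if name else list(self.saved)):
                self.scope[n] = self.saved.pop(n)

    scope = {"x": 1}
    hypo = Hypotheses(scope)
    hypo.let("x", 99)
    hypo.let("y", 5)
    hypo.discard("y")    # y reverts to its old value (absent, so None here)
    hypo.keep()          # whatever is still hypothesized becomes permanent
    print(scope)         # {'x': 99, 'y': None}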


 To me, what 'let' does is temporize a variable except it doesn't get 
 restored if you leave the scope, but only if you use a continuation
 to go back to a point where it wasn't hypothesized yet.

Yes and no.

I agree it shouldn't get restored, see above. However, continuations
don't touch data. So a global variable that has been hypo'ed should
(IMO) remain so after the continuation. 

Frankly, if you mix the two, it's YOUR job to understand the
ramifications.

 When the last continuation taken *before* the hypothesis is gone, so
 is the old version and thus the hypothesized variable becomes 
 permanent.

I disagree. Proposal and acceptance should be explicit actions.

 The behavior regarding coroutines followed naturally from this, and
 so does the behavior inside regexen if they use the continuation
 semantics for  backtracking -- which is what I'm suggesting.
 
 This leave only behavior regarding preemptive threads, which is
 actually very easy to solve:  disallow hypothesizing shared 
 variables -- it simply makes no sense to do that.  Now that 
 I think of it, temporizing shared variables is equally bad news,
 so this isn't something new.

Why?

If you constrain hypotheses to the thread (making it a control action
instead of a data action) this could be a way to get cheap MUXing.
Hypothesize all the new values you wish, then pay once to get a mux,
then keep all the data values while you've got the mux. Shrinks your
critical region:

{: is synchronized($mux)
  keep all;
}

OTOH, if you are really using threads well, then your app may construct
a hypothesis based on user input, and the math and visualizer and gui
threads may all need to work in that hypothetical space.

 (Which makes continuations

Re: Rules and hypotheticals: continuations versus callbacks

2003-03-20 Thread Matthijs van Duin
On Thu, Mar 20, 2003 at 08:49:28AM -0800, Austin Hastings wrote:
--- Matthijs van Duin [EMAIL PROTECTED] wrote:
you seem to have a much complexer model of hypotheses 
than what's in my head.
The complex model is right -- in other words, if hypotheses are to be a
first-class part of the language then they must interoperate with all
the other language features.
(lots of explanation here)
You're simply expanding on the details of your complex model - not on the 
need for it in the first place.

I'll see if I can write some details/examples of my model later, and show 
how it interacts with various language features in a simple way.


This leave only behavior regarding preemptive threads, which is
actually very easy to solve:  disallow hypothesizing shared 
variables -- it simply makes no sense to do that.  Now that 
I think of it, temporizing shared variables is equally bad news,
so this isn't something new.
I just realized there's another simple alternative: make it cause the 
variable to become thread-local for that particular thread.


Hypothesize all the new values you wish, then pay once to get a mux,
then keep all the data values while you've got the mux. Shrinks your
critical region
You're introducing entirely new semantics here, and personally I think 
you're abusing hypotheses, although I admit in an interesting and 
potentially useful way. I'll spend some thought on that.


My experience has been that when anyone says I don't see why anyone
would ..., Damian immediately posts an example of why.
No problem since it works fine in my model (I had already mentioned that 
earlier) - I just said *I* don't see why anyone would.. :-)


So, stop talking about rexen. When everyone groks how continuations
should work, it'll fall out. 
rexen were the main issue: Dan was worried about performance

(And if you reimplement the rexengine using continuations and outperform 
Dan's version by 5x or better, then we'll have another Geek Cruise to 
Glacier Bay and strand Dan on an iceberg. :-)
I don't intend to outperform him.. I intend to get the same performance 
with cleaner, simpler and more generic semantics.

But as I said in my previous post.. give me some time to work out the 
details.. maybe I'll run into fatal problems making the whole issue moot :)

BTW, you say "reimplement"?  Last time I checked, hypothetical variables 
weren't implemented yet, let alone properly interacting with continuations. 
Maybe it's just sitting in someone's local version, but until I have 
something to examine, I can't really compare its performance to my system.

--
Matthijs van Duin  --  May the Forth be with you!


Re: prototype (was continuations and regexes)

2003-03-20 Thread Matthijs van Duin
On Thu, Mar 20, 2003 at 11:38:31AM -0800, Sean O'Rourke wrote:
Here's what I take to be a (scheme) prototype of Matthijs' success
continuations approach.  It actually works mostly by passing closures and
a state object, ...
Matthijs -- is this what you're describing?
It sounds like approach #2 (callback) I listed in my original post

Unfortunately, #1 is the more appealing approach of the two and is what this 
whole thread has been about so far.  I pretty much abandoned #2 early on.
I'll see if I can take a look at it later.

#2's only advantage was that - as you noted - it doesn't need continuations 
for backtracking, but uses the normal call-chain.

I've never really done anything with scheme but I know the syntax mostly, so 
I'll see if I can read it later on -- you obviously put quite some effort in 
writing it, so it deserves to be read :-)

Dan -- given that the real one could optimize simple operators by
putting a bunch of them inside a single sub, does this look too painful?
I doubt he'll like this -- while the continuations-model is still mostly 
like his model (structurally), the callback-model isn't.  I also think it 
has less opportunity for optimizations but I might be wrong about that.

--
Matthijs van Duin  --  May the Forth be with you!


Re: prototype (was continuations and regexes)

2003-03-20 Thread Matthijs van Duin
Oops, I just noticed Sean had mailed Dan and me privately, not on the list.. 
sorry for sending the reply here :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Tue, Mar 18, 2003 at 09:28:43PM -0700, Luke Palmer wrote:
Plan 1:  Pass each rule a I<success> continuation (rather than a
backtrack one), and have it just return on failure.  The big
difference between this and your example is that C<let>s are now
implemented just like C<temp>s.  Too bad C<let> needs non-regex
behavior, too.
That's mechanism #2, not #1

You probably don't mean the word continuation here though, since 
a continuation never returns, so once a rule would invoke the success 
continuation it can't regain control except via another continuation.

You probably simply mean a closure.


Plan 2:  Call subrules as plain old subs and have them throw a
backtrack exception on failure (or just return a failure-reporting
value... same difference, more or less).
But.. say you have:

<foo> <bar>

How would this be implemented?  When <bar> fails, it needs to backtrack 
into <foo>, which has already returned.  Are you saying every rule will be 
an explicit state machine?
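
To make the success-continuation flavour of this concrete, here is a small
Python sketch (closures rather than real Parrot continuations; all names
invented). Each "rule" calls succeed(new_pos) for every way it can match, so
when the second rule fails, control simply falls back into the first rule's
loop of alternatives -- no explicit state machine:

    def literal(s):
        def rule(text, pos, succeed):
            if text.startswith(s, pos):
                return succeed(pos + len(s))
            return False
        return rule

    def star(inner):
        # Greedy 'inner*': try the longest run first, give back on backtrack.
        def rule(text, pos, succeed):
            ends = [pos]
            while inner(text, ends[-1], lambda p: ends.append(p) or True):
                pass
            for end in reversed(ends):
                if succeed(end):
                    return True
            return False
        return rule

    def seq(first, second):
        # '<foo> <bar>': if second fails, we fall back *into* first, which
        # just tries its next alternative.
        def rule(text, pos, succeed):
            return first(text, pos, lambda p: second(text, p, succeed))
        return rule

    # match "aaab" against 'a* ab': the star must give back one 'a'.
    pattern = seq(star(literal("a")), literal("ab"))
    print(pattern("aaab", 0, lambda p: p == 4))   # True
    print(pattern("aaa",  0, lambda p: p == 3))   # False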


This has the advantage that C<let> behaves consistently with the 
rest of Perl.
What do you mean?


I looked around in Parrot a little, and it seems like continuations
are done pretty efficiently.
Yes, I noticed that too.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Leopold Toetsch
Matthijs van Duin wrote:

Which system is likely to run faster on parrot?


I would propose, estimate the ops you need and test it :)

E.g. call a continuation 1e6 times and communicate state with one global 
(a lexical is probably the same speed, i.e. a hash lookup)

$ cat a.pasm
new P5, .PerlInt
set P5, 100
store_global "foo", P5
new P0, .Continuation
set_addr I0, endcont
set P0, I0
endcont:
find_global P4, "foo"
unless P4, done
dec P4
# store_global "foo", P4 -- no need to store, P4 is a reflike thingy
invoke
done:
print "done\n"
end
$ time imcc -P a.pasm
done
real0m0.881s  

$ imcc -p a.pasm
done
   OPERATION PROFILE
  CODE   OP FULL NAMECALLS  TOTAL TIMEAVG TIME
  -  - ---  --  --
  0  end 10.010.01
 40  set_addr_i_ic   10.010.01
 66  set_p_i 10.010.01
 67  set_p_ic10.040.04
226  unless_p_ic   1010.5950250.01
276  dec_p 1000.5999460.01
758  store_global_sc_p   10.060.06
760  find_global_p_sc  1011.0379220.01
786  new_p_ic20.110.05
819  invoke1000.9140630.01
883  print_sc10.0052990.005299
  -  - ---  --  --
 114103.1522800.01
So you can estimate that the heavier opcodes take about 1us, and the more 
lightweight vtable functions are ~double that speed, with the slow, profiling 
core. CGP (or JIT) is 3-4 times faster.

-O3 compiled parrot, Athlon 800, i386/linux

leo



Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
Hmm, good point

Or even better.. I should just implement both examples and benchmark them; 
they're simple enough and the ops are available.

I guess it's time to familiarize myself with pasm :)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 01:01:28PM +0100, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
Hmm, good point

Or even better.. I should just implement both examples and benchmark them; 
they're simple enough and the ops are available.
except I forgot entirely about let

however the implementation of let will have an impact on the performance of both 
systems.. oh well, I'll just have to estimate like you said :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
I haven't completed testing yet, however it's becoming clear to me that 
this is likely to be a pointless effort

There are so many variables that can affect performance here that the 
results I may find in these tests are unlikely to have any relation to 
the performance of rules in practice.

1. making continuations affects the performance of *other* code (COW)
2. the let operation is missing and all attempts to fake it are silly
3. to really test it, I'd need to make subrules full subroutines, but then 
  the performance difference will probably disappear in the overhead of all 
  other stuff.  To test I'd need large, realistic patterns; and I'm 
  certainly not in the mood to write PIR for them manually.

And it appears that on my machine continuations and garbage collection have
a quarrel, which also makes testing problematic.
I guess the only way to find out is to implement both systems and compare 
them using a large test set of realistic grammars.  Or ofcourse just 
implement it using continuations (system #1), since the speed difference 
probably isn't gonna be huge anyway.

Here is my test program for continuation and the results on my machine:

# "aaab" ~~ / ^ [ a | a* ] ab <fail> /

set I5, 1000
sweepoff    # or bus error
collectoff  # or segmentation fault

begin:
set S0, "aaab"
set I0, 0
new P0, .Continuation
set_addr I1, second
set P0, I1
rx_literal S0, I0, "a", backtrack
branch third
second:
new P0, .Continuation
set_addr I1, fail
set P0, I1
deeper:
rx_literal S0, I0, "a", third
save P0 # normally hypothesize
new P0, .Continuation
set_addr I1, unwind
set P0, I1
branch deeper
unwind:
dec I0  # normally de-hypothesize
restore P0  # normally de-hypothesize
third:
rx_literal S0, I0, "ab", backtrack
sub I0, 2   # normally de-hypothesize
backtrack:
invoke
fail:
dec I5
if I5, begin
end


  OPERATION PROFILE 

 CODE   OP FULL NAME CALLS  TOTAL TIMEAVG TIME
 -     ---  --  --
 0  end  10.290.29
40  set_addr_i_ic 50000.0249280.05
46  set_i_ic  10010.0105730.11
60  set_s_sc  10000.0057170.06
66  set_p_i   50000.0162010.03
   213  if_i_ic   10000.0028480.03
   274  dec_i 40000.0113900.03
   370  sub_i_ic  10000.0042270.04
   675  save_p30000.1923090.64
   682  restore_p 30000.2464570.82
   719  branch_ic 40000.0122160.03
   770  sweepoff 10.140.14
   772  collectoff   10.030.03
   786  new_p_ic  50000.1794030.36
   819  invoke50000.0262850.05
   962  rx_literal_s_i_sc_ic 10.0542600.05
 -     ---  --  --
16   480040.7868610.16
iBook; PPC G3; 700 Mhz

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 10:05 AM +0100 3/19/03, Matthijs van Duin wrote:
But.. say you have:

<foo> <bar>

How would this be implemented?  When <bar> fails, it needs to 
backtrack into <foo>, which has already returned.  Are you saying 
every rule will be an explicit state machine?
By compile-time interpolation. foo isn't so much a subroutine as a 
macro. For this to work, if we had:

  foo: \w+?
  bar: [plugh]{2,5}
then what the regex engine *really* got to compile would be:

   (\w+?) ([plugh]{2,5})

with names attached to the two paren groups. Treating them as actual 
subroutines leads to madness, continuations don't quite work, and 
coroutines could pull it off if we could pass data back into a 
coroutine on reinvocation, but...
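
A sketch of that interpolation idea using Python's re module (obviously not the
Parrot regex engine; the rules dict and inline() helper are invented): subrule
text gets spliced in as named groups, so the engine compiles one flat pattern.

    import re

    rules = {
        "foo": r"\w+?",
        "bar": r"[plugh]{2,5}",
    }

    def inline(rule_body):
        # Replace each <name> with a named group containing that rule's text.
        return re.sub(r"<(\w+)>",
                      lambda m: f"(?P<{m.group(1)}>{rules[m.group(1)]})",
                      rule_body)

    pattern = inline(r"<foo>\s+<bar>")
    print(pattern)                          # (?P<foo>\w+?)\s+(?P<bar>[plugh]{2,5})
    m = re.match(pattern, "xy plugh")
    print(m.group("foo"), m.group("bar"))   # xy plugh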

We do, after all, want this fast, right?
--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:40:02AM -0500, Dan Sugalski wrote:
By compile-time interpolation. foo isn't so much a subroutine as a 
macro. For this to work, if we had:

  foo: \w+?
  bar: [plugh]{2,5}
then what the regex engine *really* got to compile would be:

   (\w+?) ([plugh]{2,5})

with names attached to the two paren groups. Treating them as actual 
subroutines leads to madness,
Ehm, Foo.test cannot inline Foo.foo since it may be overridden:

grammar Foo {
    rule foo { \w+? }
    rule bar { [plugh]{2,5} }
    rule test { <foo> <bar> }
}
grammar Bar is Foo {
    rule foo { <alpha>+? }
}
What you say is only allowed if I put "is inline" on foo.



continuations don't quite work
Care to elaborate on that?  I'd say they work fine

We do, after all, want this fast, right?
Ofcourse, and we should optimize as much as we can - but not optimize 
*more* than we can.  Rules need generic backtracking semantics, and that's 
what I'm talking about.  Optimizations to avoid the genericity of these 
backtracking semantics is for later.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 4:52 PM +0100 3/19/03, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 10:40:02AM -0500, Dan Sugalski wrote:
By compile-time interpolation. foo isn't so much a subroutine as 
a macro. For this to work, if we had:

  foo: \w+?
  bar: [plugh]{2,5}
then what the regex engine *really* got to compile would be:

   (\w+?) ([plugh]{2,5})

with names attached to the two paren groups. Treating them as 
actual subroutines leads to madness,
Ehm, Foo.test cannot inline Foo.foo since it may be overridden:

grammar Foo {
    rule foo { \w+? }
    rule bar { [plugh]{2,5} }
    rule test { <foo> <bar> }
}
grammar Bar is Foo {
    rule foo { <alpha>+? }
}
What you say is only allowed if I put "is inline" on foo.
At the time I run the regex, I can inline things. There's nothing 
that prevents it. Yes, at compile time it's potentially an issue, 
since things can be overridden later, but that's going to be 
relatively rare, and can be dealt with by selective recompilation.

By the time the regex is actually executed, it's fully specified. By 
definition if nothing else--you aren't allowed to selectively 
redefine rules in the middle of a regex that uses those rules. Or, 
rather, you can but the update won't take effect until after the end 
of the regex, the same way that you can't redefine a sub you're in 
the middle of executing. (And yes, I'm aware that if you do that 
you'll pick up the new version if you recursively call, but that 
won't work with regexes)

continuations don't quite work
Care to elaborate on that?  I'd say they work fine
There's issues with hypothetical variables and continuations. (And 
with coroutines as well) While this is a general issue, they come up 
most with regexes.

We do, after all, want this fast, right?
Ofcourse, and we should optimize as much as we can - but not 
optimize *more* than we can.  Rules need generic backtracking 
semantics, and that's what I'm talking about.
No. No, in fact they don't. Rules need very specific backtracking 
semantics, since rules are fairly specific. We're talking about 
backtracking in regular expressions, which is a fairly specific 
generality. If you want to talk about a more general backtracking 
that's fine, but it won't apply to how regexes backtrack.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 11:09:01AM -0500, Dan Sugalski wrote:
At the time I run the regex, I can inline things. There's nothing 
that prevents it. Yes, at compile time it's potentially an issue, 
since things can be overridden later,
OK, but that's not how you initially presented it :-)


you aren't allowed to selectively redefine rules in the middle of a regex 
that uses those rules. Or, rather, you can but the update won't take 
effect until after the end
I don't recall having seen such a restriction mentioned in Apoc 5.

While I'm a big fan of optimization, especially for something like this, 
I think we should be careful with introducing mandatory restrictions just 
to aid optimization.  ("is inline" will allow such optimizations, of course)


There's issues with hypothetical variables and continuations. (And 
with coroutines as well) While this is a general issue, they come up 
most with regexes.
I'm still curious what you're referring to exactly.  I've outlined possible 
semantics for hypothetical variables in earlier posts that should work.


We do, after all, want this fast, right?
Ofcourse, and we should optimize as much as we can - but not 
optimize *more* than we can.  Rules need generic backtracking 
semantics, and that's what I'm talking about.
No. No, in fact they don't. Rules need very specific backtracking 
semantics, since rules are fairly specific. We're talking about 
backtracking in regular expressions, which is a fairly specific 
generality. If you want to talk about a more general backtracking 
that's fine, but it won't apply to how regexes backtrack.
My impression from A5 and A6 is that rules are methods.  They're looked up 
like methods, they can be invoked like methods, etc.

I certainly want to be able to write rules myself, manually, when I think 
it's appropriate; and use these as subrules in other methods.  Generic 
backtracking semantics are needed for that, and should at least conceptually 
also apply to normal rules.

When common sub-patterns are inlined, simple regexen will not use runtime 
subrules at all, so the issue doesn't exist there - that covers everything 
you would do with regexen in perl 5 for example.

When you do use real sub-rules, you're getting into the domain previously 
held by Parse::RecDescent and the like.  While these should of course still 
be as fast as possible, a tiny bit of overhead on top of a regular regex is 
understandable.

However, such overhead might not even be needed at all:  whenever possible, 
optimizations should be applied, and rules are free to use special hacky 
but fast calling semantics for subrules if they determine that's possible. 
But I don't think a special optimization should be elevated to the official 
semantics.  I say, make the generic semantics first, and then optimize the 
heck out of it.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Leopold Toetsch
Matthijs van Duin wrote:

sweepoff# or bus error
collectoff# or segmentation fault
Please try:

/* set this to 1 for tracing the system stack and processor registers */
#define TRACE_SYSTEM_AREAS 1
in dod.c (works for me).

Though I don't know if processor registers on PPC get traced by this 
(it might not stand optimization if not).

The code hasn't been looked at deeply enough yet that we can safely turn off 
stack tracing.

leo



Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Jonathan Scott Duff
On Wed, Mar 19, 2003 at 11:09:01AM -0500, Dan Sugalski wrote:
 By the time the regex is actually executed, it's fully specified. By 
 definition if nothing else--you aren't allowed to selectively 
 redefine rules in the middle of a regex that uses those rules. Or, 
 rather, you can but the update won't take effect until after the end 
 of the regex, the same way that you can't redefine a sub you're in 
 the middle of executing. (And yes, I'm aware that if you do that 
 you'll pick up the new version if you recursively call, but that 
 won't work with regexes)

Are you implying that 

$fred = rx/fred/;
$string ~~ m:w/ $fred { $fred = rx/barney/; } rubble /

won't match "barney rubble"?

-Scott
-- 
Jonathan Scott Duff
[EMAIL PROTECTED]


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Sean O'Rourke
On Wed, 19 Mar 2003, Jonathan Scott Duff wrote:
 Are you implying that

   $fred = rx/fred/;
   $string ~~ m:w/ $fred { $fred = rx/barney/; } rubble /

 won't match "barney rubble"?

Or, worse, that

   $fred = rx/fred/;
   $string ~~ m:w/ { $fred = rx/barney/; } $fred rubble /

won't, either?

/s



Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 10:41 AM -0600 3/19/03, Jonathan Scott Duff wrote:
On Wed, Mar 19, 2003 at 11:09:01AM -0500, Dan Sugalski wrote:
 By the time the regex is actually executed, it's fully specified. By
 definition if nothing else--you aren't allowed to selectively
 redefine rules in the middle of a regex that uses those rules. Or,
 rather, you can but the update won't take effect until after the end
 of the regex, the same way that you can't redefine a sub you're in
 the middle of executing. (And yes, I'm aware that if you do that
 you'll pick up the new version if you recursively call, but that
 won't work with regexes)
Are you implying that

$fred = rx/fred/;
$string ~~ m:w/ $fred { $fred = rx/barney/; } rubble /
won't match "barney rubble"?
Potentially, no. What, then, should happen if you do:

   $barney = rx/barney/;
   $string = "barney rubble";
   $string ~~ m:w/ $barney { $barney = rx/fred/; } rubble /;
The regex shouldn't match, since you've invalidated part of the match 
in the middle.

I can potentially see constructs of the form $var being taken as 
indirect rule invocations and their dispatch left to runtime, 
complete with the potential for bizarre after-the-fact invalidations, 
but as regex rules in the regex stream rather than as generic code.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 5:38 PM +0100 3/19/03, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 11:09:01AM -0500, Dan Sugalski wrote:
At the time I run the regex, I can inline things. There's nothing 
that prevents it. Yes, at compile time it's potentially an issue, 
since things can be overridden later,
OK, but that's not how you initially presented it :-)
Then I wasn't clear enough, sorry. This is perl -- the state of 
something at compile time is just a suggestion as to how things 
ultimately work. The state at the time of execution is the only thing that 
really matters, and I shortcut.

you aren't allowed to selectively redefine rules in the middle of a 
regex that uses those rules. Or, rather, you can but the update 
won't take effect until after the end
I don't recall having seen such a restriction mentioned in Apoc 5.
I'll nudge Larry to add it explicitly, but in general redefinitions of 
code that you're in the middle of executing don't take effect 
immediately, and it's not really any different for regex rules than 
for subs.

While I'm a big fan of optimization, especially for something like 
this, I think we should be careful with introducing mandatory 
restrictions just to aid optimization.  (is inline will allow such 
optimizations ofcourse)
Actually, we should be extraordinarily liberal with the application 
of restrictions at this phase. It's far easier to lift a restriction 
later than to impose it later, and I very much want to stomp out any 
constructs that will force slow code execution. Yes, I may lose, but 
if I don't try...

My job, after all, is to make it go fast. If you want something 
that'll require things to be slow then I don't want you to have it. :)

There's issues with hypothetical variables and continuations. (And 
with coroutines as well) While this is a general issue, they come 
up most with regexes.
I'm still curious what you're referring to exactly.  I've outlined 
possible semantics for hypothetical variables in earlier posts that 
should work.
The issue of hypotheticals is complex.

We do, after all, want this fast, right?
Ofcourse, and we should optimize as much as we can - but not 
optimize *more* than we can.  Rules need generic backtracking 
semantics, and that's what I'm talking about.
No. No, in fact they don't. Rules need very specific backtracking 
semantics, since rules are fairly specific. We're talking about 
backtracking in regular expressions, which is a fairly specific 
generality. If you want to talk about a more general backtracking 
that's fine, but it won't apply to how regexes backtrack.
My impression from A5 and A6 is that rules are methods.  They're 
looked up like methods, they can be invoked like methods, etc.
They aren't methods, though. They're not code in general, they're 
regex constructions in specific. Because they live in the symbol 
table and in some cases can be invoked as subs/methods doesn't make 
them subs or methods, it makes them regex constructs with funky 
wrappers if you want to use them in a non-regex manner.

I certainly want to be able to write rules myself, manually, when I 
think it's appropriate; and use these as subrules in other methods. 
Generic backtracking semantics are needed for that, and should at 
least conceptually also apply to normal rules.
No, no it shouldn't. Rules are rules for regexes; they are *not* subs. 
If you want generic backtracking to work, then there can't be any 
difference between:

  rule foo { \w+ }

and
  sub foo { ... }
but there must be. With rules as regex constructs the semantics are 
much simpler. If we allow rules to be arbitrary code not only do we 
have to expose a fair amount of the internals of the regex engine to 
the sub so it can actually work on the stream and note its position 
(which is fine, I can do that) we also need to be able to pause foo 
in the middle and jump back in while passing in parameters of some 
sort. Neither continuations nor standard coroutines are sufficient in 
this instance, since the reinvocation must *both* preserve the state 
of the code at the time it exited *and* pass in an indication as 
to what the sub should do. For example, if the foo sub was treated as 
a rule and we backtrack, should it slurp more or less?
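
One way to model that pause-and-resume-with-a-directive in Python (purely
illustrative, not a proposal for the engine) is a generator per rule that
yields candidate end positions on demand: pulling the next value is the
"slurp less" instruction, and the rule resumes with its state intact:

    def word_chars(text, pos):
        # Like \w+ : yield candidate end positions, longest first.
        end = pos
        while end < len(text) and text[end].isalnum():
            end += 1
        for candidate in range(end, pos, -1):
            yield candidate

    def literal(s):
        def rule(text, pos):
            if text.startswith(s, pos):
                yield pos + len(s)
        return rule

    def match_sequence(text, rules, pos=0):
        # Backtrack by pulling further candidates out of earlier generators.
        if not rules:
            return pos
        for end in rules[0](text, pos):
            result = match_sequence(text, rules[1:], end)
            if result is not None:
                return result
        return None

    # \w+ followed by literal "ab": the first rule has to give back two chars.
    print(match_sequence("xyzab", [word_chars, literal("ab")]))   # 5
    print(match_sequence("xyz",   [word_chars, literal("ab")]))   # None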

If rules are just plain regex rules and not potentially arbitrary 
code, the required semantics are much simpler.
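
For readers skimming the thread, here is a rough Python sketch (not Perl 6; 
the names and the \w+ stand-in are illustrative only) of the distinction 
being drawn: a rule exposes its backtrack points, so the engine can resume 
it and ask for a shorter match, while a plain sub returns exactly once and 
cannot be told whether to slurp more or less.

    import re

    def rule_wplus(text, pos):
        """A regex-style rule: a generator yielding candidate end positions,
        longest first, so the engine can backtrack into it."""
        m = re.match(r"\w+", text[pos:])
        if m:
            for end in range(pos + m.end(), pos, -1):   # shorter slurps on demand
                yield end

    def sub_foo(text, pos):
        """An arbitrary sub: it returns one answer and cannot be re-entered."""
        m = re.match(r"\w+", text[pos:])
        return pos + m.end() if m else None

    def match_seq(text, first_rule, literal):
        """Match first_rule followed by a literal, backtracking into the rule."""
        for end in first_rule(text, 0):         # each resumption is one backtrack
            if text.startswith(literal, end):
                return end + len(literal)
        return None

    print(sub_foo("foobar!", 0))                     # 6 -- one answer, no retry
    print(match_seq("foobar!", rule_wplus, "bar!"))  # 7 -- rule backed off to 'foo'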

Then there's the issue of being able to return continuations from 
within arbitrary unnamed blocks, since the block in this:

   $foo ~~ m:w/<alpha> {...} <number>/;

should be able to participate in the backtracking activities if we're 
not drawing a distinction between rules and generic code. (Yeah, the 
syntax is wrong, but you get the point)

Ultimately the question is "How do you backtrack into arbitrary code, 
and how do we know that the arbitrary code can be backtracked into?" 
My answer is "we don't", but I'm not sure how popular that particular 
answer is.

When common sub-patterns are inlined, simple regexen will not use 
runtime subrules at all, so

Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Simon Cozens
[EMAIL PROTECTED] (Dan Sugalski) writes:
 you aren't allowed to selectively redefine
 rules in the middle of a regex that uses those rules.

This is precisely what a macro does.

-- 
How should I know if it works?  That's what beta testers are for.  I only
coded it.
(Attributed to Linus Torvalds, somewhere in a posting)


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 5:47 PM + 3/19/03, Simon Cozens wrote:
[EMAIL PROTECTED] (Dan Sugalski) writes:
 you aren't allowed to selectively redefine
 rules in the middle of a regex that uses those rules.
This is precisely what a macro does.
Not once execution starts, no.
--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Simon Cozens
[EMAIL PROTECTED] (Dan Sugalski) writes:
 At 5:47 PM + 3/19/03, Simon Cozens wrote:
 [EMAIL PROTECTED] (Dan Sugalski) writes:
   you aren't allowed to selectively redefine
   rules in the middle of a regex that uses those rules.
 
 This is precisely what a macro does.
 
 Not once execution starts, no.

Compilation's just execution of a regex, albeit the Perl6::Grammar::program
regex, and that regex will need to be modified while it's in operation in
order to pick up macro "is parsed" definitions and apply them to the rest
of what it's parsing.

-- 
* DrForr digs around for a fresh IV drip bag and proceeds to hook up.
dngor Coffee port.
DrForr Firewalled, like everything else around here.


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 5:54 PM + 3/19/03, Simon Cozens wrote:
[EMAIL PROTECTED] (Dan Sugalski) writes:
 At 5:47 PM + 3/19/03, Simon Cozens wrote:
 [EMAIL PROTECTED] (Dan Sugalski) writes:
   you aren't allowed to selectively redefine
   rules in the middle of a regex that uses those rules.
 
 This is precisely what a macro does.
 Not once execution starts, no.
Compilation's just execution of a regex, albeit the Perl6::Grammar::program
regex, and that regex will need to be modified while it's in operation in
order to pick up macro is parsed definitions and apply them to the rest
of what it's parsing.
Ah, damn, I wasn't thinking far enough out. I'm not sure it'll work 
quite like that, with a single call to the regex engine that spits 
out everything in one go. More likely it'll be a set of iterative 
calls to the engine that terminate at natural sequence points, 
potentially with recursive calls into the parsing regex.

--
Dan
--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Simon Cozens
[EMAIL PROTECTED] (Dan Sugalski) writes:
 Compilation's just execution of a regex, albeit the Perl6::Grammar::program
 regex, and that regex will need to be modified while it's in operation in
 order to pick up macro is parsed definitions and apply them to the rest
 of what it's parsing.
 
 Ah, damn, I wasn't thinking far enough out. 

This, you see, is precisely why some of us started work last year on a
regular expression engine which could handle having its expressions
rewritten during the match... ;)

-- 
Last week I forgot how to ride a bicycle.  -- Steven Wright


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 12:35:19PM -0500, Dan Sugalski wrote:
Then I wasn't clear enough, sorry. This is perl -- the state of 
something at compile time is just a suggestion as to how things 
ultimately work.
Yes, hence my surprise about actually inlining stuff, luckily that was 
just a misunderstanding :-)


I'll nudge Larry to add it explicitly, but in general redefinitons of 
code that you're in the middle of executing don't take effect 
immediately, and it's not really any different for regex rules than 
for subs.
Ah, but we're not redefining the sub that's running, but the subs it's 
about to call.  That works for subs, and Simon Cozens already pointed out 
we certainly also need it for rules :-)


Actually, we should be extraordinarily liberal with the application 
of restrictions at this phase. It's far easier to lift a restriction 
later than to impose it later,
This is perl 6, we can add a new restriction next week

and I very much want to stomp out any constructs that will force slow code 
execution. Yes, I may lose, but if I don't try...
You're absolutely right, and optimization is very important to me too.  But 
you can't *only* look at the speed of constructs, or we'll be coding in C 
or assembly :-)

We'll need to meet in the middle..


The issue of hypotheticals is complex.
Well, I'm a big boy, I'm sure I can handle it.  Are you even talking about 
semantics or implementation here?  Because I already gave my insights on 
semantics, and I have 'em in my head for implementation too but I should 
probably take those to perl6-internals instead.

Ultimately the question is How do you backtrack into arbitrary code, 
and how do we know that the arbitrary code can be backtracked into? 
My answer is we don't, but I'm not sure how popular that particular 
answer is.

I say, make generic semantics first, and then optimize the heck out of it.
That's fine. I disagree. :)
Now that Simon Cozens has established that sub-rules need to be looked up 
at runtime, I think we can both be happy:

As far as I can see, a rule will consist of two parts: The wrapper that 
will handle stuff when the rule is invoked as a normal method, perhaps 
handle modifiers, handle searches for unanchored matches, set up the state, 
etc;  and the actual body that does a match at the current position.

Now, what you want is that subrule-invocation goes directly from body to 
body, skipping the overhead of method invocation to the wrapper.  I say, 
when you look up the method for a subrule, check if it is a regular rule 
and if so call its body directly, and otherwise use the generic mechanism.

I'll get my lovely generic semantics with the direct body-body calling 
hidden away as an optimization detail, and I get the ability to write 
rule-methods in perl code.

You still get your low-overhead body-body calls and therefore the speed 
you desire (hopefully).  Since you need to fetch the rule body anyway, 
there should be no extra overhead: where you'd normally throw an error 
(non-rule invoked as subrule) you'd switch to generic invocation instead.
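
As an illustration only, a minimal Python sketch (not Perl 6) of the dispatch 
being proposed here, assuming a hypothetical attribute that marks plain rules 
and carries their low-level body matcher; anything without it goes through 
the generic mechanism.

    class PlainRule:
        def __init__(self, body):
            self.body = body              # body(text, pos) -> new pos or None

    def invoke_subrule(rule, text, pos):
        body = getattr(rule, "body", None)
        if body is not None:              # regular rule: skip the wrapper entirely
            return body(text, pos)
        return rule(text, pos)            # rule-method written in code: generic call

    word = PlainRule(lambda text, pos:
                     pos + 3 if text.startswith("foo", pos) else None)

    def custom_rule(text, pos):           # a "rule" written as ordinary code
        return pos + 1 if pos < len(text) and text[pos].isdigit() else None

    print(invoke_subrule(word, "foobar", 0))      # 3  (direct body call)
    print(invoke_subrule(custom_rule, "7up", 0))  # 1  (generic invocation)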

Sounds like a good deal? :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 8:04 PM +0100 3/19/03, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 12:35:19PM -0500, Dan Sugalski wrote:
I'll nudge Larry to add it explicitly, but in general redefinitons 
of code that you're in the middle of executing don't take effect 
immediately, and it's not really any different for regex rules than 
for subs.
Ah, but we're not redefining the sub that's running, but the subs 
it's about to call.  That works for subs, and Simon Cozens already 
pointed out we certainly also need it for rules :-)
Well, I'm not 100% sure we need it for rules. Simon's point is 
well-taken, but on further reflection what we're doing is subclassing 
the existing grammar and reinvoking the regex engine on that 
subclassed grammar, rather than redefining the grammar actually in 
use. The former doesn't require runtime redefinitions, the latter 
does, and I think we're going to use the former scheme.

Actually, we should be extraordinarily liberal with the application 
of restrictions at this phase. It's far easier to lift a 
restriction later than to impose it later,
This is perl 6, we can add a new restriction next week
We can't add them once we hit betas. I'd as soon add them now, rather 
than later.

and I very much want to stomp out any constructs that will force 
slow code execution. Yes, I may lose, but if I don't try...
You're absolutely right, and optimization is very important to me 
too.  But you can't *only* look at the speed of constructs, or we'll 
be coding in C or assembly :-)

We'll need to meet in the middle..
Well, not to be too cranky (I'm somewhat ill at the moment, so I'll 
apologize in advance) but... no. No, we don't actually have to, 
though if we could that'd be nice.

The issue of hypotheticals is complex.
Well, I'm a big boy, I'm sure I can handle it.  Are you even talking 
about semantics or implementation here?  Because I already gave my 
insights on semantics, and I have 'em in my head for implementation 
too but I should probably take those to perl6-internals instead.
Semantics. Until Larry's nailed down what he wants, there are issues 
of reestablishing hypotheticals on continuation reinvocation, 
flushing those hypotheticals multiple times, what happens to 
hypotheticals when you invoke a continuation with hypotheticals in 
effect, what happens to hypotheticals inside of coroutines when you 
establish them then yield out, and when hypotheticals are visible to 
other threads.

I read through your proposal (I'm assuming it's the one that started 
this thread) and it's not sufficient unless I missed something, which 
I may have.

Ultimately the question is How do you backtrack into arbitrary 
code, and how do we know that the arbitrary code can be backtracked 
into? My answer is we don't, but I'm not sure how popular that 
particular answer is.

I say, make generic semantics first, and then optimize the heck out of it.
That's fine. I disagree. :)
Now that Simon Cozens has established that sub-rules need to be 
looked up at runtime,
Well

Sounds like a good deal? :-)
At the moment, no. It seems like a potentially large amount of 
overhead for no particular purpose, really. I don't see any win in 
the regex case, and you're not generalizing it out to the point where 
there's a win there. (I can see where it would be useful in the 
general case, but we've come nowhere near touching that)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 02:31:58PM -0500, Dan Sugalski wrote:
Well, I'm not 100% sure we need it for rules. Simon's point is 
well-taken, but on further reflection what we're doing is subclassing 
the existing grammar and reinvoking the regex engine on that 
subclassed grammar, rather than redefining the grammar actually in 
use. The former doesn't require runtime redefinitions, the latter 
does, and I think we're going to use the former scheme.
That's not the impression I got from Simon

It would also be rather annoying.. think about balanced braces etc, take 
this rather contrived, but valid example:

$x ~~ m X {
macro ... yada yada yada;
} X;
It seems to me that you're really inside a grammar rule when that macro 
is defined.  Otherwise you'd have to keep a lot of state outside the 
parser to keep track of such things, which is exactly what perl grammars 
were supposed to avoid I think.

We can't add them once we hit betas. I'd as soon add them now, rather 
than later.
Well, I'd rather not add it at all :-)


We'll need to meet in the middle..
Well, not to be too cranky (I'm somewhat ill at the moment, so I'll 
apologize in advance) but... no. No, we don't actually have to, 
though if we could that'd be nice.
OK, strictly speaking that's true, but I think we can


Semantics. Until Larry's nailed down what he wants, there are issues 
of reestablishing hypotheticals on continuation reinvocation, 
They should be, though: if a variable was hypothesized when the continuation 
was taken, then it should be hypothesized when that continuation is invoked.

flushing those hypotheticals multiple times,
No idea what you mean

what happens to hypotheticals when you invoke a continuation with 
hypotheticals in effect, 
Basically de-hypothesize all current hypotheticals, and re-hypothesize 
the ones that were hypothesized when the continuation was taken.  You can 
of course optimize this by skipping the common ancestry, if you know 
what I mean

what happens to hypotheticals inside of coroutines when you 
establish them then yield out,
This follows directly from the implementation of coroutines: the first 
yield is a normal return, so if you hypothesize $x before that it'll stay 
hypothesized. if you then hypothesize $y outside the coroutine and call 
the coroutine again, $y will be de-hypothesized. If the coroutine then 
hypothesizes $z and yields out, $z will be de-hypothesized and $y
re-hypothesized.  $x will be unaffected by all this


and when hypotheticals are visible to other threads.
I haven't thought of that, but to be honest I'm not a big fan of preemptive 
threading anyway.  Cooperative threading using continuations is probably 
faster and has no synchronization issues.  And the behavior of hypotheticals 
follows naturally there (you can use 'let' or 'temp' to create thread-
local variables in that case)


I read through your proposal (I'm assuming it's the one that started 
this thread) and it's not sufficient unless I missed something, which 
I may have.
Also look at Sean O'Rourke's reply and my reply to that; it contains 
additional info.


Sounds like a good deal? :-)
At the moment, no. It seems like a potentially large amount of 
overhead for no particular purpose, really.
I have to admit I don't know the details of how your system works, but 
what I had in mind didn't have any extra overhead at all -- under the 
(apparently still debatable) assumption that you need to look up subrules 
at runtime anyway.

You do agree that if that is possible, it *is* a good deal?


I don't see any win in the regex case, and you're not generalizing it out 
to the point where there's a win there. (I can see where it would be 
useful in the general case, but we've come nowhere near touching that)
We have come near it.. backtracking is easy using continuations, and we can 
certainly have rules set the standard for the general case.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Dan Sugalski
At 9:14 PM +0100 3/19/03, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 02:31:58PM -0500, Dan Sugalski wrote:
Well, I'm not 100% sure we need it for rules. Simon's point is 
well-taken, but on further reflection what we're doing is 
subclassing the existing grammar and reinvoking the regex engine on 
that subclassed grammar, rather than redefining the grammar 
actually in use. The former doesn't require runtime redefinitions, 
the latter does, and I think we're going to use the former scheme.
That's not the impression I got from Simon

It would also be rather annoying.. think about balanced braces etc, 
take this rather contrived, but valid example:

$x ~~ m X {
macro ... yada yada yada;
} X;
It seems to me that you're really inside a grammar rule when that 
macro is defined.
Right. Macro definition ends, you subclass off the parser object, 
then immediately call into it, and it eats until the end of the 
regex, at which point it exits and so does the parent, for lack of 
input, and the resulting parse tree is turned to bytecode and 
executed.
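
A rough Python sketch (not Perl 6, and only one reading of the scheme 
described above): when a definition is seen mid-parse, subclass the parser, 
add the new rule to the subclass, and hand the rest of the input to an 
instance of that subclass.

    class Parser:
        keywords = {"foo"}

        def parse(self, words):
            out = []
            for i, w in enumerate(words):
                if w.startswith("def:"):             # stands in for a macro definition
                    new = w.split(":", 1)[1]
                    Sub = type("SubParser", (type(self),),
                               {"keywords": self.keywords | {new}})
                    return out + Sub().parse(words[i + 1:])   # the subclass eats the rest
                out.append(w if w in self.keywords else "?" + w)
            return out

    print(Parser().parse(["foo", "bar", "def:bar", "bar", "foo"]))
    # ['foo', '?bar', 'bar', 'foo'] -- 'bar' is only recognized after its definition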

  Otherwise you'd have to keep a lot of state outside the parser to 
keep track of such things, which is exactly what perl grammars were 
supposed to avoid I think.
You, as a user-level programmer, don't have to track the state. The 
parser code will, but that's not a big deal.

We'll need to meet in the middle..
Well, not to be too cranky (I'm somewhat ill at the moment, so I'll 
apologize in advance) but... no. No, we don't actually have to, 
though if we could that'd be nice.
OK, strictly speaking that's true, but I think we can

Semantics. Until Larry's nailed down what he wants, there are issues 
of reestablishing hypotheticals on continuation reinvocation,

They should be though, if a variable was hypothesized when the 
continuation was taken, then it should be hypothesized when that 
continuation is invoked.
Should they? Does hypotheticalization count as data modification (in 
which case it shouldn't) or control modification (in which case it 
should), and do you restore the hypothetical value at the time the 
continuation was taken or just re-hypotheticalize the variables? 
(Which makes continuations potentially more expensive as you need to 
then save off more info so on invocation you can restore the 
hypothetical state)

What about co-routines, then? And does a yield from a coroutine count 
as normal or abnormal exit for pushing of hypothetical state outward, 
or doesn't it count at all?

flushing those hypotheticals multiple times,
No idea what you mean
I hypotheticalize the variables. I then take a continuation. Flow 
continues normally, exits off the end normally, hypothetical values 
get pushed out. I invoke the continuation, flow continues, exits 
normally. Do I push the values out again?

what happens to hypotheticals when you invoke a continuation with 
hypotheticals in effect,

Basically de-hypothesize all current hypotheticals,
How? Successfully or unsuccessfully? Does it even *count* as an exit 
at all if there's a pending continuation that could potentially exit 
the hypotheticalizing block later?

what happens to hypotheticals inside of coroutines when you 
establish them then yield out,
This follows directly from the implementation of coroutines: the 
first yield is a normal return, so if you hypothesize $x before that 
it'll stay hypothesized. if you then hypothesize $y outside the 
coroutine and call the coroutine again, $y will be de-hypothesized.
Why? That doesn't make much sense, really. If a variable is 
hypotheticalized outside the coroutine when I invoke it, the 
coroutine should see the hypothetical variable. But what about yields 
from within a coroutine that's hypotheticalized a variable? That's 
neither a normal nor an abnormal return, so what happens?

If the coroutine then hypothesizes $z and yields out, $z will be 
de-hypothesized and $y
re-hypothesized.  $x will be unaffected by all this
Yech. I don't think that's the right thing to do.

and when hypotheticals are visible to other threads.
I haven't thought of that, but to be honest I'm not a big fan of 
preemptive threading anyway.
Doesn't matter whether you like it or not, they're a fact that must 
be dealt with. (And scare up a dual or better processor machine and 
I'll blow the doors off a cooperative threading scheme, 
synchronization overhead or not)

I read through your proposal (I'm assuming it's the one that started this
Sounds like a good deal? :-)
At the moment, no. It seems like a potentially large amount of 
overhead for no particular purpose, really.
I have to admit I don't know the details of how your system works, 
but what I had in mind didn't have any extra overhead at all -- 
under the (apparently still debatable) assumption that you need to 
look up subrules at runtime anyway.

You do agree that if that is possible, it *is* a good deal?
No. Honestly I still don't see the *point*, certainly not in regards 
to regular expressions

Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 03:46:50PM -0500, Dan Sugalski wrote:
Right. Macro definition ends, you subclass off the parser object, 
then immediately call into it
...
You, as a user-level programmer, don't have to track the state. The 
parser code will, but that's not a big deal.
OK, I suppose that works although that still means you're moving the 
complexity from the perl implementation to its usage: in this case, the 
perl 6 parser which is written in perl 6 -- but I can well imagine other 
people want to do the same, and they'll have to do a similar hack.

I really don't like that, perl normally moves the complexity away from the 
programmer and into perl.


They should be though, if a variable was hypothesized when the 
continuation was taken, then it should be hypothesized when that 
continuation is invoked.
Should they? Does hypotheticalization count as data modification (in 
which case it shouldn't) or control modification (in which case it 
should),
Isn't that the whole point of hypotheses in perl 6?  You talk about 
successful and unsuccessful de-hypothesizing, and about abnormal exits 
etc..  you seem to have a much more complex model of hypotheses than what's 
in my head.

I could be entirely missing things of course, but I haven't seen any 
evidence to that yet.

To me, what 'let' does is temporize a variable except it doesn't get 
restored if you leave the scope, but only if you use a continuation to 
go back to a point where it wasn't hypothesized yet.

When the last continuation taken *before* the hypothesis is gone, so is 
the old version and thus the hypothesized variable becomes permanent.

The behavior regarding coroutines followed naturally from this, and so does 
the behavior inside regexen if they use the continuation semantics for 
backtracking -- which is what I'm suggesting.

This leaves only the behavior regarding preemptive threads, which is actually 
very easy to solve:  disallow hypothesizing shared variables -- it simply 
makes no sense to do that.  Now that I think of it, temporizing shared 
variables is equally bad news, so this isn't something new.
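
For illustration, a rough Python sketch (not Perl 6) of 'let' as it is 
described here, with an explicit undo trail standing in for the chain of 
continuations: a hypothesis is undone only by going back to a point recorded 
before it was made, and it becomes permanent once no such point survives.

    env = {"x": 0}
    trail = []                        # (name, old_value) pairs, newest last

    def let(name, value):             # hypothesize: remember the old value, set the new
        trail.append((name, env[name]))
        env[name] = value

    def mark():                       # a backtrack point taken *now*
        return len(trail)

    def backtrack(point):             # de-hypothesize everything let since 'point'
        while len(trail) > point:
            name, old = trail.pop()
            env[name] = old

    before = mark()
    let("x", 1)
    inner = mark()
    let("x", 2)
    backtrack(inner)                  # only the second hypothesis is undone
    print(env["x"])                   # 1
    backtrack(before)
    print(env["x"])                   # 0
    # If 'before' were simply discarded instead, x == 1 would have become permanent.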


(Which makes continuations potentially more expensive as you need to 
then save off more info so on invocation you can restore the 
hypothetical state)
Actually, I think 'let' can handle this.. it's only invocation of 
continuations that will become more expensive because it needs to deal with 
the hypothesized variables

What about co-routines, then? And does a yield from a coroutine count 
as normal or abnormal exit for pushing of hypothetical state outward, 
or doesn't it count at all?
Your terminology gets rather foreign to me at this point.  Assuming a 
co-routine is implemented using continuations, their behavior follows 
directly from the description above, and I think the resulting behavior 
looks fine.  I don't see why people would hypothesize variables inside a 
co-routine anyway.


I hypotheticalize the variables. I then take a continuation. Flow 
continues normally, exits off the end normally, hypothetical values 
get pushed out. I invoke the continuation, flow continues, exits 
normally. Do I push the values out again?
If it ends normally, the variable isn't de-hypothesized at all.  Also, the 
continuation was created *after* you hypothesized the variable, so when you 
invoke it nothing will happen to the variable.


How? Successfully or unsuccessfully? Does it even *count* as an exit 
at all if there's a pending continuation that could potentially exit 
the hypotheticalizing block later?
You're making 0% sense to me, apparently because your mental model of 
hypothesizing differs radically from mine.

Why? That doesn't make much sense, really.
Probably the same problem in opposite direction :-)

(And scare up a dual or better processor machine and I'll blow the doors 
off a cooperative threading scheme, synchronization overhead or not)
Of course, for CPU-intensive applications that spread their computation over 
multiple threads on a multi-processor machine, you'll certainly need 
preemptive multithreading.

When exactly is the last time you wrote such an application in perl? :-)

Seriously though, I think in the common case cooperative threading is likely 
to be superior.. it has low overhead, it should have faster context switch 
time, you have no synchronization issues, and you can mostly avoid the need 
for explicit yielding: in many applications threads will regularly block on 
something anyway (which will yield to another thread)
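
As an illustration of the cooperative model being argued for (a Python 
sketch, not Perl 6): each "thread" is a generator that yields wherever it 
would block, and a trivial scheduler resumes whichever threads are runnable, 
with no locks involved.

    from collections import deque

    def worker(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield                     # the point where this thread would block

    def run(threads):
        ready = deque(threads)
        while ready:
            t = ready.popleft()
            try:
                next(t)               # resume until it blocks (yields) again
                ready.append(t)
            except StopIteration:
                pass                  # this thread has finished

    run([worker("A", 2), worker("B", 3)])
    # A step 0 / B step 0 / A step 1 / B step 1 / B step 2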

But anyway, this is getting off-topic.. I'll save it for later.  Regex first


No. Honestly I still don't see the *point*, certainly not in regards 
to regular expressions and rules. The hypothetical issues need 
dealing with in general for threads, coroutines, and continuations, 
but I don't see how any of this brings anything to rules for the 
parsing engine.

The flow control semantics the regex/parser needs to deal with are 
small and simple. I just don't see the point of trying to make

Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Simon Cozens
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
 OK, I suppose that works although that still means you're moving the
 complexity from the perl implementation to its usage: in this case,
 the perl 6 parser which is written in perl 6

No, I don't believe that's what's happening. My concern is that at some
point, there *will* need to be a bootstrapped parser which is written in
some low level language, outputting Parrot bytecode, and it *will* need
to be able to reconfigure itself mid-match.

I think. I can't remember why I'm so convinced of this, and I'm too tired
to think it through with examples right now, and I might be wrong anyway,
but at least I can be ready with a solution if it proves necessary. :)

-- 
There is no safe investment. To love at all is to be vulnerable. ... 
The only place outside Heaven where you can be perfectly safe from all the
dangers and pertubations of love is Hell.
 -CS Lewis The Four Loves


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Larry Wall
I would like to express my sincere gratitude to all of you for working
through these issues.  I bent my brain on the Perl 5 regex engine,
and that was just a simple recurse-on-success engine--and I'm not
the only person it drove mad.  I deeply appreciate that Perl 6's
regex engine may drive you even madder.  But such sacrifices are at
the heart of why people love Perl.  Thanks!

Larry


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Sean O'Rourke
On Tue, 18 Mar 2003, Matthijs van Duin wrote:
 and maybe also:
  What is the current plan?

 although I got the impression earlier that there isn't any yet for invoking
 subrules :-)

See line 1014, languages/perl6/P6C/rule.pm.  The hack I used was to call
rules like ordinary subs, and have them push marks onto the regex stack
before they return.  I'm not sure if this can be made to work with
hypotheticals, and I'm sure it won't interact kindly with
continuation-taking, but there's _something_.

As for the interaction with continuations, I was about to post some of my
concerns when I received your long and well-thought-out mail.  I need to
think about the discussion so far a bit more, but briefly:

(1) There's more than one way to go when combining dynamically-scoped
variables with continuations: for example, do you use dynamic bindings
from where the continuation was taken, or from where it's invoked?  (see
e.g. Scheme's dynamic-wind).

(2) (internals) The functional-language people have found that full
continuations are slow, and put a lot of effort into avoiding them where
possible.  Backtracking languages like Icon and Prolog are implemented by
special mechanisms rather than general continuations, probably for this
reason.  So if we're forced to do a regex engine using full continuations,
it will probably be dog-slow

(3) On the other hand, we probably want people to intermix regex
backtracking, continuation-taking, and hypothetical/dynamic variables, and
have it do "the right thing", where "right" means something like 
"mind-bendingly difficult to reason about, but consistent".  How do we
want these features to play with each other?

(4) (internals) Given that Parrot has so many different control mechanisms
(call/ret, exceptions, closures, continuations, ...), how do we maintain
consistency?  And how much of that is parrot's responsibility (versus the
perl6 compiler's)?

/s



Rules and hypotheticals: continuations versus callbacks

2003-03-18 Thread Matthijs van Duin
A quick note before I begin: the stuff in this email isn't just an 
implementation detail - it impacts the language too, that's why I'm 
posting it here.  Should I cross-post to perl6-internals ?  (I'm not 
really familiar with that list yet)

I've recently spent thought on backtracking into rules, and the exact 
nature of hypothetical variables etc.  The two systems for backtracking 
into a subrule that are described below are the best I can think of right 
now, but maybe I'm completely missing something and is something much 
simpler possible - in which case I'd love to hear about it of course. :-)

My main questions are:

Is there a simpler system I'm overlooking?
Which of the two systems would you prefer if speed isn't the issue?
Which system is likely to run faster on parrot?
and maybe also:
What is the current plan?
although I got the impression earlier that there isn't any yet for invoking 
subrules :-)

Anyway, I will use the following grammar for examples:
   rule foo { a }
   rule bar { a+ }
   rule quux { ab }
   rule test { [ <foo> | <bar> ] <quux> }
 Mechanism 1 -- Continuations 

Continuations can be used to reset the state of the world to the 
previous backtracking point.

Various ways can be imagined to pass around the continuation.  I picked 
one that seems fairly clean to me and doesn't create any more 
continuations than strictly necessary.

One thing I'll need to explain is what 'let' means in this system: it 
makes sure the variable is restored if a continuation is invoked that was 
created before the variable was hypothesized.  This generalization means 
you can hypothesize any variable you can temporize.

OK, let's look at how the rule 'test' could be implemented using 
continuations.  Note that I'm not paying attention to optimization at this 
point.

method test (Continuation ?backtrack is rw) {
backtrack or mark(backtrack) or return;
$_ = .new;
if (mark(backtrack)) {
let $.{foo} = .foo(backtrack);
} elsif (mark(backtrack)) {
let $.{bar} = .bar(backtrack);
} else {
backtrack;
}
let $.{quux} = .quux(backtrack);
return $_;
}
where mark is a utility sub defined as something like:

sub mark (Continuation backtrack is rw) {
my cc = callcc { $_ }  or return 0;
let backtrack = cc.assuming(undef);
return 1;
}
Let's see how this would match on 'aaab':

0. A new state object is created  (Ignore the first line, it's for later)

1. The first mark hypothesizes backtrack.

2. $.{foo} is hypothesized.  foo matches 'a', and since it doesn't do any 
backtracking, it leaves backtrack alone.

3. $.{quux} is hypothesized.  quux fails, it calls backtrack:
 a. $.{quux} is de-hypothesized.
 b. $.{foo} is de-hypothesized.
 c. backtrack is de-hypothesized.
 d. inside the first mark undef is returned from callcc.  mark returns false.
4. The second mark hypothesizes backtrack.

5. $.{bar} is hypothesized.  bar matches 'aaa' and hypothesizes backtrack

6. $.{quux} is hypothesized.  quux fails, it calls backtrack:
 a. $.{quux} is de-hypothesized
 b. inside bar, it backtracks to match 'aa'.  bar returns again
7. $.{quux} is hypothesized.  quux matches, leaves backtrack alone.

Note that the backtrack remains hypothesized after test completes. Let's 
say test is followed by fail and see what happens:

8. fail calls backtrack
 a. $.{quux} is de-hypothesized
 b. inside bar, it backtracks to match 'a'.  bar returns again
9. $.{quux} is hypothesized.  quux fails, it calls backtrack:
 a. $.{quux} is de-hypothesized
 b. inside bar, backtrack is de-hypothesized. bar calls backtrack
 c. $.{bar} is de-hypothesized
 d. backtrack is de-hypothesized
 e. inside the second mark undef is returned from callcc.  mark returns false.
10. backtrack is called, causing backtracking into whatever was before test.

This only leaves the issue of how to deal with the top-level, where the 
continuation is omitted.  The magical first line will in that case create 
a continuation which will simply return from the rule if the match fails.  
If the top-level match succeeds then the backtrack variable disappears 
into thin air, and with it all backtracking information (the continuations 
and de-hypotheticalization info).

Note that the user can of course choose to retain the backtracking info, 
and use it later to cause backtracking into the match after it has completed:

if Grammar.rule($string, my backtrack) {
   ...
   if i_dont_like_this_match {
   backtrack; # try another
   }
}
 Mechanism 2 -- Callbacks 

The second mechanism is a bit more mundane.  The idea is that every rule 
will get passed a closure that's called to match "whatever comes next".  If 
that "whatever comes next" fails, the rule can backtrack and call the 
closure again.

This time 'let' is exactly the same as 'temp' except hypothetical 
variables are only allowed inside the dynamic scope of a subroutine with a 
special trait
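
For illustration, a rough Python sketch (not Perl 6) of the callback 
mechanism just described, using the example grammar from the continuation 
section above: each rule takes the current position plus a closure that 
matches "whatever comes next", and retries its own alternatives when that 
closure fails.

    import re

    def foo(text, pos, k):                     # rule foo { a }
        return text.startswith("a", pos) and k(pos + 1)

    def bar(text, pos, k):                     # rule bar { a+ }, longest first
        m = re.match(r"a+", text[pos:])
        if not m:
            return False
        for end in range(pos + m.end(), pos, -1):
            if k(end):                         # "whatever comes next" succeeded
                return True
        return False                           # our alternatives are exhausted

    def quux(text, pos, k):                    # rule quux { ab }
        return text.startswith("ab", pos) and k(pos + 2)

    def test(text, pos, k):                    # [ <foo> | <bar> ] <quux>
        after = lambda p: quux(text, p, k)
        return foo(text, pos, after) or bar(text, pos, after)

    print(test("aaab", 0, lambda p: p == 4))   # True: bar backs off to 'aa', quux eats 'ab'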

Re: Rules and hypotheticals: continuations versus callbacks

2003-03-18 Thread Luke Palmer
 My main questions are:
 
  Is there a simpler system I'm overlooking?
  Which of the two systems would you prefer if speed isn't the issue?

Mechanism 1.

  Which system is likely to run faster on parrot?

They're both likely to be very slow.

 and maybe also:
  What is the current plan?

 although I got the impression earlier that there isn't any yet for invoking 
 subrules :-)

Sure there is.  It's boiling away in my brain and in my local parrot
copy.  (You haven't seen any commits because I'm overhauling it and it
doesn't... well... work yet).

My plan is to allow whatever we find is best, and swap them around
and benchmark them separately.  But the two engines I have in mind
include one very similar to your two examples, and one more classical
approach.

Plan 1:  Pass each rule a I<success> continuation (rather than a
backtrack one), and have it just return on failure.  The big
difference between this and your example is that C<let>s are now
implemented just like C<temp>s.  Too bad C<let> needs non-regex
behavior, too.

Plan 2:  Call subrules as plain old subs and have them throw a
backtrack exception on failure (or just return a failure-reporting
value... same difference, more or less).  This has the advantage that
C<let> behaves consistently with the rest of Perl.  It has the
disadvantage that we have to manually implement backtracking through
individual rules.  It has the advantage that it's easier to optimize.
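
As an illustration of Plan 2 only (a Python sketch, not Perl 6, with a 
made-up two-character alternative added so the retry is visible): subrules 
are called like plain subs and signal failure with a control exception, and 
the calling rule catches it and tries its next alternative by hand.

    class Backtrack(Exception):
        pass

    def eat(text, pos, lit):              # tiny helper: match a literal or fail
        if text.startswith(lit, pos):
            return pos + len(lit)
        raise Backtrack

    def foo(text, pos):  return eat(text, pos, "a")    # rule foo { a }
    def baz(text, pos):  return eat(text, pos, "aa")   # made-up alternative
    def quux(text, pos): return eat(text, pos, "ab")   # rule quux { ab }

    def test(text, pos):                  # [ <baz> | <foo> ] <quux>, backtracking by hand
        for alt in (baz, foo):
            try:
                return quux(text, alt(text, pos))
            except Backtrack:
                continue                  # that alternative failed; try the next one
        raise Backtrack

    print(test("aab", 0))   # 3: baz eats 'aa' but quux then fails, so foo is retried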

I looked around in Parrot a little, and it seems like continuations
are done pretty efficiently.  So, I can't really say which of these
would be faster, but I'd guess the latter.

I'll be writing them both, though, so we'll see :)

Luke


C<caller> and Continuations

2003-03-11 Thread Luke Palmer
=head1 C<caller> and Continuations

Here's another "blend known paradigms" document from Luke.  The idea
is to rethink C<caller> to provide even more information than it
already does, in an elegant way.  To get us started:

As in Perl 5, the C<caller> function will return information about
the dynamic context of the current subroutine. Rather than always
returning a list, it will return an object that represents the
selected caller's context.  (Apocalypse 6)

In brief, the semantics of C<caller> are to look up the stack until an
appropriate frame is found and return an object representing the
context at that point.  In briefer, C<caller> gives information about
the stack.

It would make more sense, from an Object Oriented point of view, for
C<caller> just to give an object that knew about the stack, and query
it with methods.  That way you could make more versatile queries (not
functionally, but more easily), and, for instance, pass it back inside
an exception object for a stack backtrace.

The magical C<caller()> function could return a handle by which
you can access the caller's C<my> variables. And in general, there
will be such a facility under the hood, because we have to be able
to construct the caller's lexical scope while it's being
compiled.  (A6)

So now it needs access to each frame's lexical variables.  That seems
reasonable, and because of object lifetime guarantees (complemented
with garbage collection), it poses no dangling reference problems.

You're probably way ahead of me, given the title of this paper.
That's right, throw in an execution point and we have ourselves all
the information that we need to go right back to where this object
refers.  Yes, it's the makin's of a continuation.

If this object has a C<call> method (aliased by the function-call
operator) which replaces the current execution stack with the one it
represents, we now have a new way to return from a function:

sub one_plus($x) {
caller.call($x);
reformat_hard_drive;   # Will never get here
}

Presumably, the argument to C<call> will be shoved in place of what
the return value was supposed to be.

So we have a way to get the caller's continuation, but a lot of
continuation-passing style is about I<current> continuations.  Do we
have to have another function in the family of C<caller>, namely
C<here>?  Sure, but it can be implemented in terms of C<caller>.

sub *here() {
return caller;
}

Snazzy, no?  

Unfortunately, this is not particularly easy to work with.  It's, in
fact, particularly hard to work with.  Here's an implementation of a
simple coroutine given just these tools:

sub fact(Int $x, Continuation $caller) {
if $x == 0 {
return 1;
}
else {
my $result = $x * fact($x-1, $caller);
given here {
when defined { $caller.($result = $_) }
default  { return $result }
}
}
}

given here {
when Continuation { fact(4, $_); }
when Pair { print .key; .value.(); }
}

The way Scheme (and various other languages) gets around this is with
a C<call-with-current-continuation> function, which resumes right
after the call.  I'm going to call it C<branch>, for lack of a better
name that's less than 30 (!) characters long.

sub *here(?$id is rw) {
$id = \my $anon;
return caller but branched $id;
}

sub *branch( code(Continuation) ) {
given here my $id {
when .branched == $id { undef .branched; code($_) }
default   { $_ }
}
}

(Get it?  This is fraud-proof, too, as long as you stay in the realm
of the Perl externals.)

Our factorial program now looks like:

sub fact(Int $x, Continuation $caller) {
if $x == 0 { return 1; }
else   { branch { $caller.($x * fact($x-1, $caller) = $_) } }
}

my $rv = branch { fact(4, $_) };
print $rv.key;
while branch $rv.value - $_ {
last when not Pair;
print $rv.key;
}

It's better, but as long as we're on the topic of adding layers of
continuation support to Perl, why not do coroutines as well? :)

sub *yield([EMAIL PROTECTED]) { die Yield while not in coroutine }

class CoroutineIterator is Iterator {
submethod BUILD(.code, @.args) { }

method next($self:) {
my @ret;
($.cc, @ret) := branch - back($,@) {
local *yield(*@) = sub ([EMAIL PROTECTED]) {
branch { back $_, @_ }
}
 
if $.cc { $cc.call }
else{ $self.code.([EMAIL PROTECTED]) }

undef;  # When the coroutine returns
}
return [EMAIL PROTECTED];
}

has $.cc;
has .code;
has @.args;
}

And now our factorial example closes as:

sub fact(Int $x

Re: Coroutines, continuations, and iterators -- oh, my! (Was: Re: Continuations elified)

2002-11-21 Thread fearcadi
Damian Conway writes:
  
  There's no second iterator. Just Cfor walking through an array.
  

( questions in the form of answers :-) 

so : 
* for imposes array context on its first argument and doesn't care about
  the nature of the array which it was given eventually as an argument .
  no multiple streams -- use parallel and friends. 
  
* parallel and friends return lazy array ( for performance
  considerations  ) _o_r_ for *notice* the parallel and ??? optimize
  it away / dont know

  for parallel(@a,@b) -> ($x,$y) { ... } 

* every object with next method can function as iterator ???
  _o_r_ we always have to inherit from iterator class . 
  what about reset/rewind  ???

* $fh = open myfile ; 
  for $fh { ... while $fh { ... } ...  }

  both $fh *ultimately* use *the same* method call $fh.next , but
  the second $fh does it explicitly , while the first -- from under the
  cloth of the lazy array returned by $fh.each and driven by for .

* what does it mean that for walks the array ( keeping in mind that
  that array may be usual or lazy and for have to not to care  )

   
   What's the difference between a lazy array and an iterator? Is there
   caching?
  
  Yes. A lazy array is a wrapper-plus-cache around an iterator.
  

$fh = open file ; 
@a := $fh ; 
print @a[3] # 4 calls to $fh.next 
print @a[0] # no calls to $fh.next 

is that the meaning of ...-plus-cache 
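
For illustration, a rough Python sketch (not Perl 6) of such a 
wrapper-plus-cache: indexing pulls just enough values from the underlying 
iterator and remembers them, which matches the call counts in the example 
above.

    class LazyArray:
        def __init__(self, it):
            self._it = iter(it)
            self._cache = []

        def __getitem__(self, i):
            while len(self._cache) <= i:          # pull only what is still missing
                self._cache.append(next(self._it))
            return self._cache[i]

    def lines():                                  # stands in for $fh.next
        n = 0
        while True:
            n += 1
            print("next() call", n)
            yield "line " + str(n)

    a = LazyArray(lines())
    print(a[3])    # four next() calls
    print(a[0])    # no further calls; served from the cache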


  
   Some of questions about iterators and stuff:
   
   1- Are iterators now considered a fundamental type? 
  
  Probably, since they're fundamental to I/O and Cfor loops.
  
  

so every class can define its own next  method or inherit from
Iterator to be used as an iterator _o_r_ it ( class ) *always* have to 
inherit from Iterator --  to be used as iterator  ??? 
Naively , it seems that this is similar to booleans in perl -- no need to
inherit from special class to behave as boolean. 

  
   2a- Is there an CIterator.toArray() or C.toList() method?
  
  Iterator::each.
  
  
   2a1- The notion that Iterator.each() returns a lazy array seems a
   little weird. Isn't a lazy array just an iterator?
  
  No. It's an array that populates itself on-demand using an iterator.
  

what is the difference between the arrays @a, @b , ... here 


$a = Iterator.new( ... )  
@a = $a.each ; 
@b := $a.each ; 
@c := $a ; 
@d is lazy = ( 1, 2, 3 ) ;
@f is lazy = $a.each ;


  Iterator: an object that returns successive values from some source
 (such as an array, a filehandle, or a coroutine)

isn't it *anything* having method next ???
why do I need a special type iterator ? 


thanks , 

arcadi . 




Re: Continuations

2002-11-20 Thread Damian Conway
Paul Johnson wrote:


Is it illegal now to use quotes in qw()?


Nope. Only as the very first character of a 

 
Paging Mr Cozens.  ;-)

It's just another instance of whitespace significance.




	print «\a b c»;

 
Presumably without the backslash here too.

Maybe. It depends on whether Larry decides to make « and << 
synonyms in all contexts (in which case: no).

Damian




Re: Continuations elified

2002-11-20 Thread Damian Conway
Arcadi wrote:


   while $iter {...}  # Iterate until $iter.each returns false?

  you mean Iterate until $iter.next returns false? 

Oops. Quite so.



what is the difference between the Iterator  and lazy array ?

am I right that it is just interface : lazy array is an iterator
object inside Array interface : 


That's one particular implementation of a lazy array, yes.

Another implementation is an array interface with an subroutine that
maps indices onto values. Perl 6 will undoubtedly need both,
and maybe others as well.

Damian





Re: Coroutines, continuations, and iterators -- oh, my! (Was: Re: Continuations elified)

2002-11-20 Thread Damian Conway
Austin Hastings wrote:


   for each $dance: {
  ^ note colon



1- Why is the colon there? Is this some sub-tile syntactical new-ance
that I missed in a prior message, or a new thing?


It's the way we mark an indirect object in Perl 6.



2- Why is the colon necessary? Isn't the each $dance just a
bassackwards method invocation (as Cclose $fh is to C$fh.close())?


Yes. The colon is needed because the colon-less Perl 5 indirect object
syntax is inherently ambiguous. Adding the colon in Perl 6 fixes the
many nasty, subtle problems that Perl 5's syntax had.



I think this is called avoiding the question. Now you've converted an
Iterator into an iterator masked behind an array, and asked the Cfor
keyword to create apparently a private iterator to traverse it.


There's no second iterator. Just C<for> walking through an array.



What's the value of Cnext(PARAM_LIST)? Is this just a shortcut for
re-initializing the iterator?


No. It uses the original coroutine but rebinds its parameters to the new
arguments passed to C<next>.
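
A rough Python sketch (not Perl 6) of that behaviour; Python has no way to 
rebind a running coroutine's parameter list directly, so generator.send() 
stands in for next(PARAM_LIST) here.

    def countdown(start):
        current = start
        while True:
            new_start = yield current        # a value passed to send() arrives here
            if new_start is not None:
                current = new_start          # rebind the parameter...
            current -= 1                     # ...and keep going from where we were

    it = countdown(10)
    print(next(it))      # 10
    print(next(it))      # 9
    print(it.send(100))  # 99 -- same running coroutine, parameter rebound to 100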



How is this going to work when the
iterator has opened files or TCP connections based on the parameter
list?


The original files or connections will be unaffected.



Furthermore, what's the syntax for including arguments to next in a
diamond operator?


I very much doubt there would be one. If you need to pass arguments,
you'd call Cnext explicitly.



What's the difference between a lazy array and an iterator? Is there
caching?


Yes. A lazy array is a wrapper-plus-cache around an iterator.



What about the interrelationships between straight iteration
and iteration interrupted by a reset of the parameter list?


Resetting the parameter list doesn't interrupt iteration.


 Or does

calling $iter.next(PARAM_LIST) create a new iterator or wipe the cache?


No.



How do multiple invocations of each() interact with each other? (E.g.,
consider parsing a file with block comment delimiters: one loop to read
lines, and an inner loop to gobble comments (or append to a delimited
string -- same idea). These two have to update the same file pointer,
or all is lost.)


So pass the file pointer to the original continuation.



Some of questions about iterators and stuff:

1- Are iterators now considered a fundamental type? 

Probably, since they're fundamental to I/O and C<for> loops.



1a- If so, are they iterators or Iterators? (See 2b1, below)


class Iterator {...}



1b- What value would iterators (small-i) have? Is it a meaningful idea?


Depends what you mean by it. ;-)



2- What is the relationship between iterators and arrays/lists?


None. Except that some arrays/lists may be implemented using Iterators.



2a- Is there an CIterator.toArray() or C.toList() method?


Iterator::each.



2a1- The notion that Iterator.each() returns a lazy array seems a
little weird. Isn't a lazy array just an iterator?


No. It's an array that populates itself on-demand using an iterator.



2b- Is there a CList.iterator() method? Or some other standard way of
iterating lists?


Probably.



2b1- Are these primitive interfaces to iteration, in fact
overridable? That is, can I override some operator-like method and
change the behavior of

while $fh { print; }


Sure. Derive a class from Iterator and change its C<next> method.



2b2- Is that what Ceach does in scalar context -- returns an
iterator?


No. C<each> returns a lazy array, so in a scalar context it returns
a reference to the lazy array.



3- What's the difference among an iterator, a coroutine, and a
continuation?


Iterator: an object that returns successive values from some source
          (such as an array, a filehandle, or a coroutine)

Coroutine: a subroutine whose state is preserved when it returns,
           such that it may be restarted from the point of previous
           return, rather than from the start of the subroutine

Continuation: a mechanism for capturing the what-happens-next
              at any point in a program's execution

BTW, there's rather a nice discussion of these three at:
http://mail.python.org/pipermail/python-dev/1999-July/000467.html
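
For illustration, a rough Python sketch (not Perl 6) of the first two 
definitions; full continuations have no direct Python counterpart, so they 
are only described in a comment.

    def squares():                       # a coroutine: its state survives between returns
        n = 0
        while True:
            n += 1
            yield n * n                  # "return" here, resume after this point later

    class Iterator:                      # an object returning successive values
        def __init__(self, source):      # source: anything iterable, e.g. a coroutine
            self._source = iter(source)
        def next(self):
            return next(self._source)

    it = Iterator(squares())
    print(it.next(), it.next(), it.next())   # 1 4 9

    # A continuation would capture "everything that happens after this point"
    # as a first-class value that can be invoked later -- strictly more general
    # than the generator above, which can only be resumed where it yielded.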



3a- Does imposing Damian's iterator-based semantics for coroutines
(and, in fact, imposing his definition of any sub-with-yield ==
coroutine) cause loss of desirable capability? 

No. Not compared to other potential coroutine semantics.


 3b- Is there a corresponding linkage between continuations and some

object, a la coroutine-iterator? 

Continuations can be used to implement virtually any control structure.
In a sense they link to everything.



3c- Is there a tie between use of continuations and use of thread or
IPC functionality? 

Hmmm. I *suppose* a continuation could continue into a different thread.
That might be something worth proscribing. ;-)




3d- Conversely, what happens when continuations, coroutines, or
iterators are used in a threaded environment? Will there need to be
locking?


Yes. Threaded environments *always* require locking at some level.



4- Given

Re: Continuations

2002-11-19 Thread Paul Johnson

Damian Conway said:

 Is it illegal now to use quotes in qw()?

 Nope. Only as the very first character of a 

Paging Mr Cozens.  ;-)

 So any of these are still fine:

   print  a b c ;
   print \a b c;
   print «\a b c»;

Presumably without the backslash here too.

   print qw/a b c/;

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net






Re: Continuations

2002-11-19 Thread Austin Hastings

--- Damian Conway [EMAIL PROTECTED] wrote:
 Iain 'Spoon' Truskett wrote:
 
@a ???+??? @b
@a ???+??? @b
  
  Y'know, for those of us who still haven't set up Unicode, they look
  remarkably similar =)
 
 Think Of It As Evolution In Action
 
 ;-)

This coming from someone whose national bird is the platypus?

=Austin


__
Do you Yahoo!?
Yahoo! Web Hosting - Let the expert host your site
http://webhosting.yahoo.com



Re: Continuations elified

2002-11-19 Thread fearcadi
Damian Conway writes:
  David Wheeler asked:
  
   How will while behave?
  
  C<while> evaluates its first argument in scalar context, so:
  
  
   while $fh {...}# Iterate until $fh.readline returns EOF?
  
  More or less. Technically: call $fh.next and execute the loop
  body if that method returns true. Whether it still has the
  automatic binding to $_ and the implicit definedness check is yet
  to be decided.
  
  
   while $iter {...}  # Iterate until $iter.each returns false?
  
  Yes.
  

you mean Iterate until $iter.next returns false? 


what is the difference between the Iterator  and lazy array ?

am I right that it is just interface : lazy array is an iterator
object inside Array interface : 

Larry Wall wrote:
 Then there's this approach to auto-iteration:
 
 my @dance := Iterator.new(@squares);
 for @dance {

but then each is a very simple method : 

class Iterator {
method each( $self:) {
my @a := $self ;
return @a ; 
}
}

but then probably we don't need two methods -- next and each . 
just like in perl5 each can determine the calling context 

class Iterator {
method each( $self:) {
when want Scalar {
...
}
when want Array {
my @a := $self ;
return @a ; 
}
}
}

and then ... could be *really* the sugar for .each ( or .next if it
will be called so ) . in these examples 


 In a scalar context:
 
   $fh   # Calls $fh.readline (or maybe that's $fh.next???
   $iter # Calls $iter.next
   fibs()  # Returns iterator object
   fibs()# Returns iterator object and calls that
   #object's Cnext method (see note below)
   
 
 In a list context:
 
   $fh   # Calls $fh.each
   $iter # Calls $iter.each
   fibs()  # Returns iterator object
   fibs()# Returns iterator object and calls object's Ceach
 

... *always* force call to .each ( which is context aware ) . 

but again all that is based on the assumption that lazy array is just
Iterator object in the cloth of array ( variable container ) . so
this is a question . 

arcadi 







Re: Coroutines, continuations, and iterators -- oh, my! (Was: Re: Continuations elified)

2002-11-19 Thread Austin Hastings
 Larry wrote:
 
 So you can do it any of these ways:
 
 for $dance {
 
 for $dance.each {
 
 for each $dance: {
^ note colon

1- Why is the colon there? Is this some sub-tile syntactical new-ance
that I missed in a prior message, or a new thing?

2- Why is the colon necessary? Isn't the each $dance just a
bassackwards method invocation (as Cclose $fh is to C$fh.close())? 

 Then there's this approach to auto-iteration:
 
 my @dance := Iterator.new(@squares);
 for @dance {

I think this is called avoiding the question. Now you've converted an
Iterator into an iterator masked behind an array, and asked the Cfor
keyword to create apparently a private iterator to traverse it. That
seems like twice as much work for the same output.

Also, I have a problem with the notion of the Iterator class being
tasked with creation of iterators -- how do you deal with objects (even
TIEd arrays) that require magic iterators? Better to ask the class to
give you one. (Of course, C<Iterator.new()> could internally ask
@squares to provide an iterator, but again that adds a layer for little
apparent gain.)

 Damian Conway wrote:
 The presence of a Cyield automatically makes a subroutine a
 coroutine:
 
   sub fibs {
   my ($a, $b) = (0, 1);
   loop {
   yield $b;
   ($a, $b) = ($b, $a+$b);
   }
   }
 
 Calling such a coroutine returns an Iterator object with (at least)
 the following methods:
 
 next()   # resumes coroutine body until next Cyield
 
 next(PARAM_LIST) # resumes coroutine body until next Cyield,
  # rebinding params to the args passed to Cnext.
  # PARAM_LIST is the same as the parameter list
  # of the coroutine that created the Iterator

What's the value of Cnext(PARAM_LIST)? Is this just a shortcut for
re-initializing the iterator? How is this going to work when the
iterator has opened files or TCP connections based on the parameter
list?

my $iter = DNS.iterator(.com.);

while $iter {
   $iter.next(.com.au.) 
if not hackable($_);
}

Furthermore, what's the syntax for including arguments to next in a
diamond operator?

while $iter($a, $b) {
  ...
  $a += 2;
}

 each()   # returns a lazy array, each element of which
  # is computed on demand by the appropriate
  # number of resumptions of the coroutine body

What's the difference between a lazy array and an iterator? Is there
caching? What about the interrelationships between straight iteration
and iteration interrupted by a reset of the parameter list? Or does
calling $iter.next(PARAM_LIST) create a new iterator or wipe the cache?
How do multiple invocations of each() interact with each other? (E.g.,
consider parsing a file with block comment delimiters: one loop to read
lines, and an inner loop to gobble comments (or append to a delimited
string -- same idea). These two have to update the same file pointer,
or all is lost.)





Some of questions about iterators and stuff:

1- Are iterators now considered a fundamental type? 

1a- If so, are they iterators or Iterators? (See 2b1, below)

1b- What value would iterators (small-i) have? Is it a meaningful idea?

2- What is the relationship between iterators and arrays/lists?

2a- Is there an CIterator.toArray() or C.toList() method?

2a1- The notion that Iterator.each() returns a lazy array seems a
little weird. Isn't a lazy array just an iterator? Why else have the
proposed syntax for Iterator.next(PARAM_LIST)? (Admittedly the
PARAM_LIST doesn't have to be a single integer, like an array.) Or is
that what a small-i iterator is?

2b- Is there a CList.iterator() method? Or some other standard way of
iterating lists?

2b1- Are these primitive interfaces to iteration, in fact
overridable? That is, can I override some operator-like method and
change the behavior of

while $fh { print; }

2b2- Is that what Ceach does in scalar context -- returns an
iterator?

my $iter = each qw(apple banana cherry);

my $junk = all qw(apple banana cherry);
my $itr2 = each $junk;  # Whoops! Wrong thread...

3- What's the difference among an iterator, a coroutine, and a
continuation?

3a- Does imposing Damian's iterator-based semantics for coroutines
(and, in fact, imposing his definition of any sub-with-yield ==
coroutine) cause loss of desirable capability? (Asked in ignorance --
the only coroutines I've ever dealt with were written in assembly
language, so I don't really know anything about what they can be used
to do.)

3b- Is there a corresponding linkage between continuations and some
object, a la coroutine-iterator? 

3c- Is there a tie between use of continuations and use of thread or
IPC functionality? Is it a prohibitive tie, one way or the other? That
is, I've been thinking that A coroutine is just a continuation. But
if A continuation implies ..., for example a semaphore or a thread,
or some

Re: Continuations

2002-11-18 Thread Ken Fox
Damian Conway wrote:

my $iter = fibses();
for  $iter  {...}

(Careful with those single angles, Eugene!)


Operator  isn't legal when the grammar is expecting an
expression, right? The  must begin the circumfix  operator.

Is the grammar being weakened so that yacc can handle it? The
rule engine is still talked about, but sometimes I get the
feeling that people don't want to depend on it.

That  $iter  syntax reminds me too much of C++.

- Ken




Re: Continuations elified

2002-11-18 Thread Austin Hastings

--- Damian Conway [EMAIL PROTECTED] wrote:
 The semantics of C<for> would simply be that if it is given an
 iterator object (rather than a list or array), then it calls 
 that object's iterator once per loop.

By extension, if it is NOT given an iterator object, will it appear to
create one?

That is, can I say 

for (@squares)
{
  ...
  if $special.instructions eq 'Advance three spaces'
  {
$_.next.next.next;
  }
  ...
}

or some other suchlike thing that will enable me to consistently
perform iterator-like things within a loop, regardless of origin?

(Oh please! Let there be one, and for love of humanity, let it be
called bork. Pleasepleaseplease!!! It's a shorthand form of bind or
kontinue, really it is.  :-) :-) :-))


 but I think that's...err...differently right. 

The verb form is euph ? Or euphemise?

 Otherwise I can't see how one can call an iterator directly in a
 for loop:
 
   for fibs() {...}

Which suggests that fibs is a coroutine, since otherwise its return
value is weird. But 

 while <fibs()> { ... }

suggests instead behavior rather like while (!eof()), although the
diamond is strange. 

So in general, diamonded-function-call implies coroutine/continuation?

 But I could certainly live with it not having that, in which case
 the preceding example would have to be:
 
   my $iter = fibs();
   for <$iter> {...}

That's not horrible, but it does even more damage to expression
folding.

 and, if your coroutine itself repeatedly yields an iterator
 then you need:
 
   my $iter = fibses();
   for < <$iter> > {...}
 
 (Careful with those single angles, Eugene!)

To disagree, vile Aussie! To be looking at perl5's adornmentless
diamond:

perlopentut sez:
POT> When you process the ARGV filehandle using <ARGV>,
POT> Perl actually does an implicit open on each file in @ARGV.
POT> Thus a program called like this:

POT>    $ myprogram file1 file2 file3

POT> Can have all its files opened and processed one at a time
POT> using a construct no more complex than:

POT>    while (<>) {
POT>        # do something with $_
POT>    }

This *ought* to work the same in p6.

Since you can modify @ARGV in a pure string context, what's the real
behavior?

If I say: while (<>) {print;} I'm asking for file-scan behavior.

If I say: for (@ARGV) { print; } I'm asking for array-scan behavior.

If I say: for (<@ARGV>) { print; } I'm asking for trouble?

Proposal:

@ARGV is a string array, but <> is topicized in :: (or whatever the
default execution context is called) to iterate over @_MAGIC_ARGV,
which is := @ARGV but of a different class whose iterate behavior
performs an open $_ in the background, and iterates serially over
each entry in @ARGV once EOF occurs.

my CoolFileType @MY_ARGV := @ARGV;  # Same data, different interface. 

for (@MY_ARGV) {
  print;
}

This is kind of neat, but there needs to be a solid, readable
typecasting mechanism to facilitate the multiple flavors of iteration
-- to convert from one iterator format to another, for example.
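A rough Python sketch of the same idea may help: the same list of strings,
wrapped in a type whose iteration opens each named file in turn (essentially
what Python's fileinput module already does; the class name below is made up):

    import sys

    class FileNameIterator:
        """Same data as a plain list of file names, different interface:
        iterating it opens each file lazily and yields its lines."""
        def __init__(self, names):
            self.names = names
        def __iter__(self):
            for name in self.names:
                with open(name) as fh:
                    yield from fh

    argv_alike = FileNameIterator(sys.argv[1:])   # bind the magic to ordinary strings
    for line in argv_alike:
        print(line, end="")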


  They elegantify stuff.
 
 tsk tsk If you're going to talk Merkin, talk it propericiously:
 
   They elegantificatorize stuff

We Merkuns don't add, we shorten. That's elify, to wit:

Z'at elify the code?
Elify - no.

Reduction of length, combined with conservation of ambiguity. Win win
win.

=Austin





Re: Continuations

2002-11-18 Thread Damian Conway
Ken Fox wrote:

Damian Conway wrote:


my $iter = fibses();
for < <$iter> > {...}

(Careful with those single angles, Eugene!)



Operator <> isn't legal when the grammar is expecting an
expression, right? 

Right.



The < must begin the circumfix <> operator.


Or the circumfix <<...>> operator. Which is the problem here.



Is the grammar being weakened so that yacc can handle it?


Heaven forbid! ;-)


 The rule engine is still talked about, but sometimes I get the

feeling that people don't want to depend on it.


That's not the issue.



That < <$iter> > syntax reminds me too much of C++.


Yes. But since iterating an iterator to get another iterator that
is immediately iterated will (I sincerely hope!) be a very rare
requirement, I doubt it will be anything like the serious inconvenience
it is in C++.

Damian




Re: Continuations elified

2002-11-18 Thread Damian Conway
Austin Hastings asked:



By extension, if it is NOT given an iterator object, will it appear to
create one?


Yep.



That is, can I say 

for (@squares)
{
  ...
  if $special.instructions eq 'Advance three spaces'
  {
$_.next.next.next;
  }
  ...
}

or some other suchlike thing that will enable me to consistently
perform iterator-like things within a loop, regardless of origin?

If, by C<$_.next.next.next;> you mean skip the next three elements
of @squares, then no. $_ isn't an alias to the implicit iterator
over @squares; it's an alias for the element of @squares currently
being iterated.

You want (in my formulation):

my $dance = Iterator.new(@squares);
for $dance {
   ...
   if $special.instructions eq 'Advance three spaces' {
  $dance.next.next.next;
   }
   ...
}
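For comparison, the same shape falls out of Python's explicit iterators,
where the loop variable and the iterator are likewise two different things
(names below are illustrative only):

    squares = [n * n for n in range(1, 11)]

    dance = iter(squares)               # the explicit iterator
    for value in dance:                 # value is the current element, not the iterator
        print("landed on", value)
        if value == 9:                  # "Advance three spaces"
            for _ in range(3):
                next(dance, None)       # advance the iterator itself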



(Oh please! Let there be one, and for love of humanity, let it be
called bork. Pleasepleaseplease!!! It's a shorthand form of bind or
kontinue, really it is.  :-) :-) :-))


Brain on raw krack more like it ;-)



So in general, diamonded-function-call implies coroutine/continuation?


That's the problem. I can't see how that works syntactically.





To disagree, vile Aussie! To be looking at perl5's adornmentless
diamond:



If I say: while (<>) {print;} I'm asking for file-scan behavior.


Yes. Special case.



If I say: for (@ARGV) { print; } I'm asking for array-scan behavior.


Yes.



If I say: for (<@ARGV>) { print; } I'm asking for trouble?


<grin> Under my proposal, you're saying:

	* Grab next element of @ARGV
	* Iterate that element.

*Unless* the elements of @ARGV in Perl 6 are actually special Iterator-ish,
filehandle-ish objects that happen to also stringify to the command-line
strings. Hm.



Proposal:


Seemed very complex to me.



That's elify, to wit:

Z'at elify the code?
Elify - no.


GROAN

Damian




Re: Continuations elified

2002-11-18 Thread Austin Hastings

--- Damian Conway [EMAIL PROTECTED] wrote:
 Austin Hastings asked:
  That is, can I say 
  
  for (@squares)
  {
...
if $special.instructions eq 'Advance three spaces'
{
  $_.next.next.next;
}
...
  }
  
  or some other suchlike thing that will enable me to consistently
  perform iterator-like things within a loop, regardless of origin?
 
 If, by C$_.next.next.next; you mean skip the next three elements
 of @squares, then no. $_ isn't an alias to the implicit iterator
 over @squares; it's an alias for the element of @squares currently
 being iterated.
 
 You want (in my formulation):
 
  my $dance = Iterator.new(@squares);
  for $dance {
 ...
 if $special.instructions eq 'Advance three spaces' {
$dance.next.next.next;
 }
 ...
  }

How'zat, again? What is the means for extracting the actual VALUE of
$dance? And why is that different for $_-as-iterator?

IOW:

my Iterator $dance = ...;
for $dance {
  print $_; # should this print the current dance, -> $_ by default?

  print $dance;
  
  # Should the above be .value() 
  # or .next()
  # or .toString() ?

  print <$dance>; # Obviously get next value and advance a la p5

}

Also, in your formulation:

  my $dance = Iterator.new(@squares);
  for $dance {

What happens when iterators require magic state info? It seems more
appropriate to define a CLASS.iterator method, which overloads a
simplistic default. (fail in scalar cases, iterative for Array and
Hash)
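That is roughly how Python settles the same question, for whatever it's
worth: each class can override a single iteration hook, with a sensible
default for the built-in containers (the class below is a made-up example):

    class Deck:
        """Supplies its own iteration behaviour by overriding the standard
        hook (__iter__) instead of relying on a one-size-fits-all default."""
        def __init__(self, cards):
            self.cards = list(cards)
        def __iter__(self):
            return reversed(self.cards)   # deal from the bottom: order is class policy

    for card in Deck(["2H", "JD", "AS"]):
        print(card)                       # AS, JD, 2H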

  (Oh please! Let there be one, and for love of humanity, let it be
  called bork. Pleasepleaseplease!!! It's a shorthand form of bind
 or
  kontinue, really it is.  :-) :-) :-))
 
 Brain on raw krack more like it ;-)

Whatever! As long as I get to say:

for my @Swedish -> $chef {
  $chef.bork.bork.bork;
}

Perlmuppets, anyone?

  So in general, diamonded-function-call implies
 coroutine/continuation?
 
 That's the problem. I can't see how that works syntactically.

I was thinking in terms of reading, not parsing -- that is, when a
coder reads diamonded-function-call, the reflex will be 'this is a
continuation' -- valuable clues.


  To disagree, vile Aussie! To be looking at perl5's adornmentless
  diamond:
  
  If I say: while (<>) {print;} I'm asking for file-scan behavior.
 
 Yes. Special case.

But a special case of WHAT?

  If I say: for (<@ARGV>) { print; } I'm asking for trouble?
 
 grin Under my proposal, you're saying:
 
   * Grab next element of @ARGV
   * Iterate that element.

That's the problem. Why would the second point * Iterate that element
happen? We've already GOT a flattener. If I want recursive iteration I
can say:

for (*@ARGV) { print; }

or maybe 

for (*<@ARGV>) { print; }

I can't be sure.

 *Unless* the elements of @ARGV in Perl 6 are actually special
 Iterator-ish,
 filehandle-ish objects that happen to also stringify to the
 command-line
 strings. Hm.

I thought of that one first, but discarded it.
  Proposal:
 
 Seemed very complex to me.

It's because I discarded the "@ARGV is magic that stringifies" option.
In other words, let @ARGV be normal, first and foremost. Then apply
magic to it. 

That way, I can argue for being able to apply the SAME magic to
something else, rather than trying to fight with having to reimplement
the @ARGV magic class.

my @file_names = ( list of strings );
my FileIterator @argv_alike := @file_names;

while (<@argv_alike>) {
 ...
}

Now the question is, how do I get it into $_/@_/whatever_, so that I
can use <> instead of <@argv_alike>? (I avoid simply slapping it into
@_ because I vaguely recall something about the inevitable demise of
@_-as-arglist etc. But that was before the weekend, when there were
more brain cells.)


 GROAN

Warning: File iterator used in void context. Possible loss of data.

=Austin






Re: Continuations elified

2002-11-18 Thread Larry Wall
On Tue, Nov 19, 2002 at 08:53:17AM +1100, Damian Conway wrote:
: my $dance = Iterator.new(@squares);
: for $dance {

Scalar variables have to stay scalar in list context, so $dance cannot
suddenly start behaving like a list.  Something must tell the scalar
to behave like a list, and I don't think I want Cfor to do that.
A Cfor should just take an ordinary list.

So you can do it any of these ways:

for <$dance> {

for $dance.each {

for each $dance: {
   ^ note colon

Then there's this approach to auto-iteration:

my @dance := Iterator.new(@squares);
for @dance {

Larry



Re: Continuations elified

2002-11-18 Thread Damian Conway
Larry wrote:


So you can do it any of these ways:

for <$dance> {

for $dance.each {

for each $dance: {
   ^ note colon

Then there's this approach to auto-iteration:

my @dance := Iterator.new(@squares);
for @dance {


Okay, so now I need to make sense of the semantics of <...> and
C<for> and coroutines and their combined use.

Is the following correct?

==

The presence of a C<yield> automatically makes a subroutine a coroutine:

	sub fibs {
		my ($a, $b) = (0, 1);
		loop {
			yield $b;
			($a, $b) = ($b, $a+$b);
		}
	}

Calling such a coroutine returns an Iterator object with (at least)
the following methods:

	next()			# resumes coroutine body until next C<yield>

	next(PARAM_LIST)	# resumes coroutine body until next C<yield>,
				# rebinding params to the args passed to C<next>.
				# PARAM_LIST is the same as the parameter list
				# of the coroutine that created the Iterator

	each()			# returns a lazy array, each element of which
				# is computed on demand by the appropriate
				# number of resumptions of the coroutine body


In a scalar context:

	<$fh>		# Calls $fh.readline (or maybe that's $fh.next???)
	<$iter>		# Calls $iter.next
	fibs()		# Returns iterator object
	<fibs()>	# Returns iterator object and calls that
			# object's C<next> method (see note below)


In a list context:

	<$fh>		# Calls $fh.each
	<$iter>		# Calls $iter.each
	fibs()		# Returns iterator object
	<fibs()>	# Returns iterator object and calls object's C<each>


So then:

	for <$fh> {...}    # Build and then iterate a lazy array (the elements
			   # of which call back to the filehandle's input
			   # retrieval coroutine)

	for <$iter> {...}  # Build and then iterate a lazy array (the elements
			   # of which call back to the iterator's coroutine)

	for fibs() {...}   # Loop once, setting $_ to the iterator object
			   # that was returned by C<fibs>

	for <fibs()> {...} # Build and then iterate a lazy array (the elements
			   # of which call back to the coroutine of the
			   # iterator returned by C<fibs>)


==

Note: this all hangs together *very* nicely, except when someone writes:

	loop {
		my $nextfib = <fibs()>;
		...
	}

In which case $nextfib is perennially 1, since every call to C<fibs>
returns a new Iterator object.

The solution is very simple, of course:

		my $nextfib = <my $iter//=fibs()>;

but we might want to contemplate issuing a warning when someone calls
an argumentless coroutine within a scalar context 
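The same trap exists with Python generators, which may be a useful data
point when weighing that warning (this is only a Python transliteration,
not the proposed semantics):

    def fibs():
        a, b = 0, 1
        while True:
            yield b
            a, b = b, a + b

    # Wrong: a brand-new generator is built (and asked for its first value)
    # every time through the loop, so nextfib is perennially 1.
    for _ in range(3):
        nextfib = next(fibs())
        print(nextfib)              # 1, 1, 1

    # Right: build the iterator once, then keep resuming it.
    it = fibs()
    for _ in range(3):
        print(next(it))             # 1, 1, 2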

Damian




Re: Continuations

2002-11-18 Thread Ken Fox
Damian Conway wrote:

Ken Fox wrote:

The < must begin the circumfix <> operator.


Or the circumfix <<...>> operator. Which is the problem here.


This is like playing poker with God. Assuming you can get over
the little hurdles of Free Will and Omniscience, there's still
the problem of Him pulling cards out of thin air.

What does the circumfix <<...>> operator do? [1]

Here docs are re-syntaxed and the << introducer was stolen
for the <<...>> operator? [2]


Yes. But since iterating an iterator to get another iterator that
is immediately iterated will (I sincerely hope!) be a very rare
requirement, I doubt it will be anything like the serious inconvenience
it is in C++.


True. I suppose even multi-dimensional data structures will
rarely be iterated over with a simple:

  for < <$array> > {
  }

Most people will probably want more control:

  for <$array> {
 for <$_> {
 }
  }

Anyways, I was wondering about the general principle of using
C++ style hacks to make yacc happy. I should have known better
coming from the author of C++ Resyntaxed. Did the immodest
proposal fixsyntax? ;)

- Ken

[1] I can't google for <<>>. Anybody know if Google can add perl6
operators to their word lists? Seriously!

[2] Hmm. Will the uproar on here docs beat string concatenation?




Re: Continuations

2002-11-18 Thread Damian Conway
Ken Fox lamented:


Or the circumfix <<...>> operator. Which is the problem here.

 
This is like playing poker with God.

I hear God prefers dice.



What does the circumfix <<...>> operator do? 

It's the ASCII synonym for the «...» operator, which is a
synonym for the qw/.../ operator.



Here docs are re-syntaxed and the << introducer was stolen
for the <<...>> operator? 


Nope. Heredocs still start with <<.



Anyways, I was wondering about the general principle of using
C++ style hacks to make yacc happy. I should have known better
coming from the author of C++ Resyntaxed. Did the immodest
proposal fixsyntax? ;)


But of course! In SPECS, type parameters live in unambiguously
nestable (and yacc parsable) [...] delimiters.

Damian




Re: Continuations

2002-11-18 Thread Ken Fox
Damian Conway wrote:

It's [...] the ASCII synonym for the «...» operator, which
is a synonym for the qw/.../ operator.



Nope. Heredocs still start with <<.


Hey! Where'd *that* card come from? ;)

Seriously, that's a good trick. How does it work? What do these
examples do?

  print <<a b c>>;

  print <<a
  b
  c>>;
  a

Is it illegal now to use quotes in qw()?

- Ken




Re: Continuations

2002-11-18 Thread Damian Conway
Seriously, that's a good trick. How does it work? What do these
examples do?

  print <<a b c>>;


Squawks about finding the string b immediately after the heredoc introducer.



  print <<a
  b
  c>>;


Likewise.



Is it illegal now to use quotes in qw()?


Nope. Only as the very first character of a <<...>>.

So any of these are still fine:

	print << a b c >>;
	print <<\a b c>>;
	print «\a b c»;
	print qw/a b c/;

Damian




Re: Continuations elified

2002-11-18 Thread David Wheeler
On Monday, November 18, 2002, at 06:51  PM, Damian Conway wrote:


	for <$fh> {...}    # Build and then iterate a lazy array (the elements
			   # of which call back to the filehandle's input
			   # retrieval coroutine)

	for <$iter> {...}  # Build and then iterate a lazy array (the elements
			   # of which call back to the iterator's coroutine)

	for fibs() {...}   # Loop once, setting $_ to the iterator object
			   # that was returned by C<fibs>

	for <fibs()> {...} # Build and then iterate a lazy array (the elements
			   # of which call back to the coroutine of the
			   # iterator returned by C<fibs>)


How will while behave?

	while <$fh> {...}    # Iterate until $fh.readline returns EOF?

	while <$iter> {...}  # Iterate until $iter.each returns false?

	while fibs() {...}   # Infinite loop -- fibs() returns an
			     # iterator every time?

	while <fibs()> {...} # I'm afraid to ask!

Best,

David

--
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: Continuations elified

2002-11-18 Thread Luke Palmer
 Date: Tue, 19 Nov 2002 13:51:56 +1100
 From: Damian Conway [EMAIL PROTECTED]
 
 Larry wrote:
 
  So you can do it any of these ways:
  
  for <$dance> {
  
  for $dance.each {
  
  for each $dance: {
 ^ note colon
  
  Then there's this approach to auto-iteration:
  
  my @dance := Iterator.new(@squares);
  for @dance {
 
 Okay, so now I need to make sense of the semantics of <...> and
 C<for> and coroutines and their combined use.
 
 Is the following correct?
 
 ==
[snip] 
 ==


I like this I<much> better than what you explained before.  Most of my
problems with two iterators to the same thing in the same scope are
gone, as well as the confusions I had about C<for>.

Luke



Re: Continuations elified

2002-11-18 Thread Damian Conway
David Wheeler asked:


How will while behave?


C<while> evaluates its first argument in scalar context, so:



while <$fh> {...}    # Iterate until $fh.readline returns EOF?


More or less. Technically: call $fh.next and execute the loop body if that method
returns true. Whether it still has the automatic binding to $_ and the implicit
definedness check is yet to be decided.



while <$iter> {...}  # Iterate until $iter.each returns false?


Yes.



while fibs() {...}   # Infinite loop -- fibs() returns an
 # iterator every time?


I suspect so.



while <fibs()> {...} # I'm afraid to ask!


Usually an infinite loop. C<fibs()> returns a new iterator every time,
which <...> then calls C<next> on.

Damian




Re: Continuations elified

2002-11-18 Thread David Wheeler
On Monday, November 18, 2002, at 08:05  PM, Damian Conway wrote:


while <$fh> {...}    # Iterate until $fh.readline returns EOF?


More or less. Technically: call $fh.next and execute the loop body
if that method returns true. Whether it still has the automatic binding
to $_ and the implicit definedness check is yet to be decided.

That's a scalar context? I assumed it was list context from your 
previous post:

In a list context:

	<$fh>		# Calls $fh.each


At any rate, I hope that it's bound to $_ -- nice conversion from Perl 
5's behavior, that.

David

--
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]



Re: Continuations

2002-11-18 Thread Luke Palmer
 Date: Tue, 19 Nov 2002 14:29:46 +1100
 From: Damian Conway [EMAIL PROTECTED]
 
 Ken Fox lamented:
 
  Or the circumfix <<...>> operator. Which is the problem here.
   
  This is like playing poker with God.
 
 I hear God prefers dice.
 
 
  What does the circumfix <<...>> operator do? 
 
 It's the ASCII synonym for the «...» operator, which is a
 synonym for the qw/.../ operator.

I did not have Unicode working during the dreaded operator thread.
What was the final syntax for vector ops?

@a ≪+≫ @b
@a ≫+≪ @b
Something else?

If the former, how does one disambiguate a qw with a unary vector op?

Luke



Re: Continuations elified

2002-11-18 Thread Damian Conway
David Wheeler asked:


while <$fh> {...}    # Iterate until $fh.readline returns EOF?



That's a scalar context? 

Sure. C<while> always evaluates its condition in a scalar context.

Damian




Re: Continuations

2002-11-18 Thread Damian Conway
Luke Palmer asked:


What was the final syntax for vector ops?

@a ≪+≫ @b
@a ≫+≪ @b


The latter (this week, at least ;-).

Damian




Re: Continuations

2002-11-18 Thread Iain 'Spoon' Truskett
* Damian Conway ([EMAIL PROTECTED]) [19 Nov 2002 15:19]:
 Luke Palmer asked:
  What was the final syntax for vector ops?
 
 @a ???+??? @b
 @a ???+??? @b

 The latter (this week, at least ;-).

Y'know, for those of us who still haven't set up Unicode, they look
remarkably similar =)


cheers,
-- 
Iain.



Re: Continuations elified

2002-11-18 Thread David Wheeler
On Monday, November 18, 2002, at 08:17  PM, Damian Conway wrote:


Sure. Cwhile always evaluates its condition in a scalar context.


Oh, duh. Thanks.

David

--
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]




Re: Continuations

2002-11-18 Thread Damian Conway
Iain 'Spoon' Truskett wrote:


  @a ???+??? @b
  @a ???+??? @b


Y'know, for those of us who still haven't set up Unicode, they look
remarkably similar =)


Think Of It As Evolution In Action

;-)

Damian




Re: Continuations

2002-11-18 Thread David Wheeler
On Monday, November 18, 2002, at 08:19  PM, Damian Conway wrote:

 What was the final syntax for vector ops?
 @a ≪+≫ @b
 @a ≫+≪ @b

 The latter (this week, at least ;-).

This reminds me: I thought of another set of bracing characters that I
don't recall anyone ever mentioning before, and that might be useful in
some context where such a thing is needed.

   C< \op/ >  or C< /op\ >

I realize that there could be issues with patterns, but the first
example ought to avoid that, I would think. If someone *has* thought of
these characters as complementary braces (why wouldn't someone have?),
well, just forget about it.

David

-- 
David Wheeler AIM: dwTheory
[EMAIL PROTECTED] ICQ: 15726394
http://david.wheeler.net/  Yahoo!: dew7e
   Jabber: [EMAIL PROTECTED]


RE: Continuations

2002-11-17 Thread Angel Faus
Damian Conway wrote:

 The formulation of coroutines I favour doesn't work like that.

 Every time you call a suspended coroutine it resumes from immediately
 after the previous C<yield> that suspended it. *And* that C<yield>
 returns the new argument list with which it was resumed.

 So you can write things like:

 sub pick_no_repeats (*@from_list) {
 my $seen;
 while (pop @from_list) {
 next when $seen;
 @from_list := yield $_;
 $seen |= $_;
 }
 }

 # and later:

 while pick_no_repeats( @values ) {
 push @values, some_calc($_);
 }

 Allowing the list of choices to change, but repetitions still to be
avoided.


I understand that this formulation is more powerful, but one thing I like
about python's way (where a coroutine is just a funny way to generate lazy
arrays) is that it lets you _use_ coroutines without even knowing what they
are about.

Such as when you say:

for $graph.nodes { ... }

.nodes may be implemented as a coroutine, but you just don't care about it.
Plus any function that previously returned an array can be reimplemented as
a coroutine at any time, without having to change the caller side.

In other words, how do you create a lazy array of dynamically generated
values in perl6?

Maybe it could be something like this:

 $foo = bar.instantiate(1, 2, 3);
 @array = $foo.as_array;
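For what it's worth, the Python idiom being alluded to is just a generator
consumed lazily; islice is only there to take a finite slice of a
potentially endless stream (the names are illustrative):

    from itertools import islice

    def nodes():
        n = 0
        while True:                 # dynamically generated, potentially endless
            yield n
            n += 1

    lazy = nodes()                          # nothing computed yet
    print(list(islice(lazy, 5)))            # [0, 1, 2, 3, 4] -- computed on demand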

-angel





Re: Continuations

2002-11-17 Thread Dan Sugalski
At 1:29 PM +1100 11/17/02, Damian Conway wrote:

The formulation of coroutines I favour doesn't work like that.

Every time you call a suspended coroutine it resumes from immediately
after the previous C<yield> that suspended it. *And* that C<yield>
returns the new argument list with which it was resumed.


Hrm. I can see the power, but there's a fair amount of internal 
complication there. I'll have to ponder that one a bit to see if 
there's something I'm missing that makes it easier than I think.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Continuations

2002-11-17 Thread Damian Conway
Angel Faus wrote:


I understand that this formulation is more powerful, but one thing I like
about python's way (where a coroutine is just a funny way to generate lazy
arrays) is that it lets you _use_ coroutines without even knowing what they
are about.

Such as when you say:

for $graph.nodes { ... }

.nodes may be implemented as a coroutine, but you just don't care about it.
Plus any function that previously returned an array can be reimplemented as
a coroutine at any time, without having to change the caller side.


Yes, it may be that Pythonic -- as opposed to Satherian/CLUic -- iterators are
a better fit for Perl 6. It rather depends on the semantics of Perl 6
iterators, which Larry hasn't promulgated fully yet.



In other words, how do you create a lazy array of dynamically generated
values in perl6?

Maybe it could be something like this:

 $foo = bar.instantiate(1, 2, 3);
 @array = $foo.as_array;


Well, I think it has to be much less ugly than that! ;-)


Damian




Re: Continuations

2002-11-17 Thread Damian Conway
Of course, apart from the call-with-new-args behaviour, having
Pythonic coroutines isn't noticeably less powerful. Given:

sub fibs ($a = 0 is copy, $b = 1 is copy) {
loop {
yield $b;
($a, $b) = ($b, $a+$b);
}
}

we still have implicit iteration:

for fibs() {
print "Now $_ rabbits\n";
}

and explicit iteration:

my $iter = fibs();
while <$iter> {
print "Now $_ rabbits\n";
}

and explicit multiple iteration:

my $rabbits = fibs(3,5);
my $foxes   = fibs(0,1);
loop {
my $r = <$rabbits>;
my $f = <$foxes>;
print "Now $r rabbits and $f foxes\n";
}

and even explicitly OO iteration:

my $iter = fibs();
while $iter.next {
print "Now $_ rabbits\n";
}
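The last two examples transliterate almost directly to Python generators,
for whatever comparative value that has (islice merely caps the otherwise
infinite loop; this is a sketch, not the proposed Perl 6 API):

    from itertools import islice

    def fibs(a=0, b=1):
        while True:
            yield b
            a, b = b, a + b

    rabbits = fibs(3, 5)        # two independent instances of the same coroutine,
    foxes   = fibs(0, 1)        # each with its own suspended state
    for r, f in islice(zip(rabbits, foxes), 5):
        print(f"Now {r} rabbits and {f} foxes")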


And there's no reason that a coroutine couldn't produce an iterator object
with *two* (overloaded) C<next> methods, one of which took no arguments
(as in the above examples), and one of which had the same parameter list
as the coroutine, and which rebound the original parameters on the next
iteration.

For example, instead of the semantics I proposed previously:

# Old proposal...

sub pick_no_repeats (*@from_list) {
my $seen;
while (pop @from_list) {
next when $seen;
@from_list := yield $_;
$seen |= $_;
}
}

# and later:

while pick_no_repeats( @values ) {
push @values, some_calc($_);
}


we could just write:

# New proposal

sub pick_no_repeats (*@from_list) {
my $seen;
while (pop @from_list) {
next when $seen;
yield $_;
$seen |= $_;
}
}

# and later:

my $pick = pick_no_repeats( @values );
while $pick.next(@values)  {
push @values, some_calc($_);
}


These semantics also rather neatly solve the problem of whether or
not to re-evaluate/re-bind the parameters each time a coroutine
is resumed. The rule becomes simple: if the iterator's C<next>
method is invoked without arguments, use the old parameters;
if it's invoked with arguments, rebind the parameters.

And the use of the <$foo> operator to mean $foo.next cleans up
the syntax nicely.
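Python generators offer a rough analogue of that rule via send(): resuming
with a value rebinds what the suspended yield evaluates to, while a plain
next() leaves things alone. The sketch below is only an approximation of
the proposed semantics, not the proposed API:

    def pick_no_repeats(from_list):
        seen = set()
        while from_list:
            item = from_list.pop()
            if item in seen:
                continue
            sent = yield item           # suspension point; send() delivers a value here
            if sent is not None:
                from_list = sent        # "rebind the parameters"
            seen.add(item)

    pick = pick_no_repeats([1, 2, 3])
    print(next(pick))                   # 3   plain resume: keep the old list
    print(pick.send([3, 4, 5]))         # 5   resume with a new argument list
    print(next(pick))                   # 4   (3 is skipped: already seen)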

I must say I rather like this formulation. :-)

Damian






Re: Continuations

2002-11-17 Thread Luke Palmer
 Date: Mon, 18 Nov 2002 09:28:59 +1100
 From: Damian Conway [EMAIL PROTECTED]

I've a couple of questions here:

 we still have implicit iteration:
 
  for fibs() {
  print Now $_ rabbits\n;
  }

Really?  What if fibs() is a coroutine that returns lists (Fibonacci
lists, no less), and you just want to iterate over one of them?  The
syntax:

 for fibs {
 print Now $_ rabbits\n;
 }

Would make more sense to me for implicit iteration.  Perhaps I'm not
looking at it right.  How could you get the semantics of iterating
over just one list of the coroutine?

 and explicit iteration:
 
  my $iter = fibs();
  while <$iter> {
  print "Now $_ rabbits\n";
  }

Ahh, so $iter is basically a structure that has a continuation and a
value.  When you call the .next method, it calls the continuation, and
delegates to the value otherwise.  Slick  (Unless the coroutine itself
is returning iterators... then... what?).

class Foo {
method next { print "Gliddy glub gloopy\n" }
}
sub goof () {
loop {
print "Nibby nabby nooby\n";
yield new Foo;
}
}

my $iter = goof;
print $iter.next;  # No.. no!  Gliddy!  Not Nibby!

How does this work, then?

 For example, instead of the semantics I proposed previously:
 
  # Old proposal...
 
  sub pick_no_repeats (*@from_list) {
  my $seen;
  while (pop @from_list) {
  next when $seen;
  @from_list := yield $_;
  $seen |= $_;
  }
  }

Hang on... is C<while> a topicalizer now?  Otherwise this code is not
making sense to me.

 These semantics also rather neatly solve the problem of whether or
 not to re-evaluate/re-bind the parameters each time a coroutine
 is resumed. The rule becomes simple: if the iterator's Cnext
 method is invoked without arguments, use the old parameters;
 if it's invoked with arguments, rebind the parameters.
 
 And the use of the <$foo> operator to mean $foo.next cleans up
 the syntax nicely.

So filehandles are just loops that read lines constantly and yield
them.  I'm really starting to like the concept of those co-routines :)
They elegantify stuff.

Luke



Re: Continuations

2002-11-17 Thread Damian Conway
Luke Palmer enquired:


we still have implicit iteration:

for fibs() {
print Now $_ rabbits\n;
}

 
Really?  What if fibs() is a coroutine that returns lists (Fibonacci
lists, no less), and you just want to iterate over one of them?  The
syntax:

 for fibs {
 print Now $_ rabbits\n;
 }

Would make more sense to me for implicit iteration.  Perhaps I'm not
looking at it right.  How could you get the semantics of iterating
over just one list of the coroutine?

The semantics of C<for> would simply be that if it is given an iterator
object (rather than a list or array), then it calls that object's iterator
once per loop.



and explicit iteration:

my $iter = fibs();
while <$iter> {
print "Now $_ rabbits\n";
}



Ahh, so $iter is basically a structure 

It's an object.



that has a continuation and a
value.  When you call the .next method, it calls the continuation, and
delegates to the value otherwise.


Err. No. Not quite. Though that would be cute too.



(Unless the coroutine itself is returning iterators... 

Yep.

The idea is that, when any subroutine (f) with a C<yield> in it is called,
it immediately returns an Iterator object (i.e. without executing its body at
all). That Iterator object has (at least) two C<next> methods:

	method next() {...}
	method next(...) {...}

where the second C<next>'s parameter list is identical to the
parameter list of the original f.

Code can then call the iterator's C<next> method, either explicitly:

	$iter.next(...);

or operationally:

	<$iter>

or implicitly (in a C<for> loop):

	for $iter {...}

Previously Larry has written that last variant as:

	for <$iter> {...}

but I think that's...err...differently right. I think the angled version
should invoke C<$iter.next> once (before the C<for> starts iterating) and
then iterate the result of that. In other words, I think that a C<for>
loop argument should always have one implicit level of iteration.

Otherwise I can't see how one can call an iterator directly in a
for loop:

	for fibs() {...}


But I could certainly live with it not having that, in which case
the preceding example would have to be:

	my $iter = fibs();
	for <$iter> {...}


and, if your coroutine itself repeatedly yields an iterator
then you need:

	my $iter = fibses();
	for < <$iter> > {...}

(Careful with those single angles, Eugene!)




class Foo {
method next { print Gliddy glub gloopy\n }
}
sub goof () {
	loop {
print Nibby nabby nooby\n;
yield new Foo;


That would have to be:

  yield new Foo:;
or:
  yield Foo.new;


}
}

my $iter = goof;
print $iter.next;  # No.. no!  Gliddy!  Not Nibby!

How does this work, then?

Calling C<goof> returns an iterator that resumes the body of C<goof>
each time the iterator's C<next> method is called.

The actual call to C<$iter.next> resumes the body of C<goof>, which runs
until the next C<yield>, which (in this case) returns an object of class
C<Foo>. So the line:

	print $iter.next;

prints "Nibby nabby nooby\n" then the serialization of the Foo object.

If you wanted to print "Nibby nabby nooby\n" and then "Gliddy glub gloopy\n"
you'd write:

	print $iter.next.next;
or:
	print <$iter>.next;
or:
	print < <$iter> >;
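Transliterated to Python generators, the behaviour comes out the same way,
which may make the sequencing easier to see (the method name gliddy is made
up here, purely to avoid clashing with Python's own next):

    class Foo:
        def gliddy(self):
            print("Gliddy glub gloopy")

    def goof():
        while True:
            print("Nibby nabby nooby")
            yield Foo()

    it = goof()
    obj = next(it)      # resumes goof's body: prints "Nibby nabby nooby", returns a Foo
    obj.gliddy()        # only now does the Foo object's own method run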



Hang on... is C<while> a topicalizer now?


That's still under consideration. I would like to see the special-case
Perl 5 topicalization of:

	while <$iter> {...}

to be preserved in Perl 6. Larry is not so sure. If I can't sway Larry, then
we'd need explicit topicalization there:

	while <$iter> -> $_ {...}

which in some ways seems like a backwards step to me.




So filehandles are just loops that read lines constantly and yield
them.


Nearly. Filehandles are just iterator objects, each attached to a
coroutine that reads lines constantly and yields them.
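That is more or less literally how Python models it, if a concrete sketch
helps (readline_coroutine is a made-up name; StringIO just stands in for a
real file):

    import io

    def readline_coroutine(fh):
        """A coroutine that reads lines constantly and yields them;
        the thing you iterate is the iterator attached to it."""
        while True:
            line = fh.readline()
            if not line:
                return
            yield line

    fh = io.StringIO("one\ntwo\nthree\n")
    for line in readline_coroutine(fh):
        print(line, end="")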



I'm really starting to like the concept of those co-routines :)


Likewise.



They elegantify stuff.


tsk tsk If you're going to talk Merkin, talk it propericiously:

	They elegantificatorize stuff

;-)

Damian




Re: Continuations

2002-11-16 Thread Damian Conway
Peter Haworth asked:


So to get the same yield context, each call to the coroutine has to be from
the same calling frame. If you want to get several values from the same
coroutine, but from different calling contexts, can you avoid the need to
wrap it in a closure?


I don't think so.

Damian




Re: Continuations

2002-11-16 Thread Dan Sugalski
At 8:31 AM +1100 11/17/02, Damian Conway wrote:

Peter Haworth asked:


So to get the same yield context, each call to the coroutine has to be from
the same calling frame. If you want to get several values from the same
coroutine, but from different calling contexts, can you avoid the need to
wrap it in a closure?


I don't think so.


I dunno. One of the things I've seen with coroutines is that as long 
as you call them with no arguments, you get another iteration of the 
coroutine--you actually had to call it with new arguments to reset 
the thing. (Which begs the question of what you do when you have a 
coroutine that doesn't take any args, but that's a separate issue)

OTOH, forcing a closure allows you to have multiple versions of the 
same coroutine instantiated simultaneously, which strikes me as a 
terribly useful thing.

Perhaps we'd be better with an explicit coroutine instantiation call, like:

   $foo = bar.instantiate(1, 2, 3);

or something. (Or not, as it is ugly)
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Re: Continuations

2002-11-16 Thread Damian Conway
Dan Sugalski wrote:


I dunno. One of the things I've seen with coroutines is that as long as 
you call them with no arguments, you get another iteration of the 
coroutine--you actually had to call it with new arguments to reset the 
thing. 

The formulation of coroutines I favour doesn't work like that.

Every time you call a suspended coroutine it resumes from immediately
after the previous C<yield> that suspended it. *And* that C<yield>
returns the new argument list with which it was resumed.

So you can write things like:

	sub pick_no_repeats (*@from_list) {
		my $seen;
		while (pop @from_list) {
			next when $seen;
			@from_list := yield $_;
			$seen |= $_;
		}
	}

	# and later:

	while pick_no_repeats( @values ) {
		push @values, some_calc($_);
	}

Allowing the list of choices to change, but repetitions still to be avoided.



OTOH, forcing a closure allows you to have multiple versions of the same 
coroutine instantiated simultaneously, which strikes me as a terribly 
useful thing.

Yep!



Perhaps we'd be better with an explicit coroutine instantiation call, like:

   $foo = bar.instantiate(1, 2, 3);

or something. 

Ew!


(Or not, as it is ugly)

That'd be my vote! ;-)


Damian




Re: Continuations

2002-11-12 Thread Peter Haworth
On Wed, 06 Nov 2002 10:38:45 +1100, Damian Conway wrote:
 Luke Palmer wrote:
  I just need a little clarification about yield().
 
 C<yield> is exactly like a C<return>, except that when you
 call the subroutine next time, it resumes from after the C<yield>.
 
  how do you tell the difference between a
  recursive call and fetching the next element?  How would you maintain
  two iterators into the same array?
 
 The re-entry point isn't stored in the subroutine itself. It's stored
 (indexed by optree node) in the current subroutine call frame. Which,
 of course, is preserved when recursive iterator invocations recursively
 yield.

So to get the same yield context, each call to the coroutine has to be from
the same calling frame. If you want to get several values from the same
coroutine, but from different calling contexts, can you avoid the need to
wrap it in a closure?

  sub iterate(@foo){
yield $_ for @foo;
undef;
  }

  # There's probably some perl5/6 confusion here
  sub consume(@bar){
my $next = sub{ iterate(@bar); };
while $_ = $next() {
  do_stuff($_,$next);
}
  }

  sub do_stuff($val,$next){
...
if $val ~~ something_or_other() {
  my $quux = $next();
  ...
}
  }


-- 
Peter Haworth   [EMAIL PROTECTED]
...I find myself wondering if Larry Ellison and Tim Curry
 were separated at birth...hmm...
-- Tom Good



Continuations

2002-11-05 Thread Luke Palmer
I just need a little clarification about yield().

consider this sub:

sub iterate(foo) {
yield for foo;
undef;
}

(Where yield defaults to the topic)  Presumably. 

a = (1, 2, 3, 4, 5);
while($_ = iterate a) {
print
}

Will print 12345.  Or is that:

for a {
print
}

?  So, does yield() build a lazy array, or does it act like an explicit
iterator?  If the latter,  how do you tell the difference between a
recursive call and fetching the next element?  How would you maintain
two iterators into the same array?

Luke



Re: Continuations for fun and profit

2002-07-09 Thread Ted Zlatanov

On Mon, 8 Jul 2002, [EMAIL PROTECTED] wrote:
 Yep. But serializing continuations is either tough, or not
 completely doable, since programs tend to have handles on things
 outside their direct control like filehandles, sockets, database
 connections, and suchlike things. Resuming a continuation that's
 been frozen but also has an open DB handle is... an interesting
 problem. :)

I've always thought that a language that implemented FREEZE() and
THAW() blocks would be very cool indeed.  Java's EJB persistence is
extremely useful, but there you always need the safety webbing of a
container.  I'm pretty sure that if we could save the state of
everything else on the interpreter level, people won't mind losing and
having to reestablish OS-level resources.  At least I wouldn't.
Currently I have to do twice as much work to resume execution anyhow.

Sorry if this has been duly dissected before, I just thought in the
context of continuations it would be a worthwhile side avenue.

Ted




Re: Continuations for fun and profit

2002-07-09 Thread Peter Haworth

On Mon, 8 Jul 2002 16:54:16 -0400, Dan Sugalski wrote:
 while ($foo) {
   $foo--;
 }
 
 Pretty simple. (For illustrative purposes) To do that with 
 continuations, it'd look like:
 
 $cont = take_continuation();
 if ($foo) {
   $foo--;
   invoke($cont);
 }
 
 When you invoke a continuation you put the call scratchpads and lexical
 scratchpads back to the state they were when you took the continuation.

If you restore the lexicals, how does this ever finish?

-- 
Peter Haworth   [EMAIL PROTECTED]
It's not a can of worms, it's a tank of shai-hulud.
-- Jarkko Hietaniemi



Re: Continuations for fun and profit

2002-07-09 Thread Peter Haworth

On Tue,  9 Jul 2002 16:42:03 +0100, Peter Haworth wrote:
  When you invoke a continuation you put the call scratchpads and lexical
  scratchpads back to the state they were when you took the continuation.
 
 If you restore the lexicals, how does this ever finish?

Never mind. It's the *access* to the lexicals, not their values.

-- 
Peter Haworth   [EMAIL PROTECTED]
Would you like ambiguity or something else?
Press any key to continue or any other key to quit



The Past, Present and Future of Continuations (was: Perl 6 Summary)

2002-07-08 Thread Andy Wardley

A short time ago, in a nearby  thread, Larry Wall wrote:
 Perhaps we should just explain continuations in terms of time travel.

Funny.  I wrote a message to this effect the other night, but decided
not to send it (too tired to decide if I was talking sense or nonsense).

I was about to propose that 'continuation' is too long a word for lazy
Perl folk to bandy around at will, and possibly too ivory tower for most 
people to grok.

 Another way of looking at it is that a continuation is a hypothesis
 about the future, and calling the continuation is a way of saying
 oops about that hypothesis.

My suggestion was along the lines of using .past, .now and .future
to reference the calling, current and future continuations, respectively.
I also wondered if .here and .there would somehow fit in to 
reference the current context, or remote context of a continuation.

I was thinking along the lines of a continuation being a here and now,
a collection of space and time (or in the context of a continuation, 
the shape and state of the program) bundled up to be transported safely 
over there to a future now where it can be unpackaged and used much
like a wormhole.

Maybe a continuation is like a nipple pierced in the fabric of space
and time through which many different threads can be strung?  Or like
a Quantum Entanglement - a Bose-Einstein Condensate spread along the
length of an camel's hair, merrily transporting perlons back and forth?

But I must admit that my understanding of continuations (and the fabric
of reality) is incomplete, and quite possibly flawed, being limited to 
what I've read on this list and read (but mostly not understood) in 
Appel's book.  I'm sure I don't yet understand how it all fits together, 
and I certainly can't see how to make the syntax fall into place.  

That's a job for a linguist and a mad scientist. :-)

 Basically, we need to find the right oversimplification to make people
 think they understand it.  

Absolutely.  But talking about time travel, particularly in the future,
half-past-imperfect, stepping-sideways-through-time tense will never 
having to had been a simple matter for us to hoov comprehended. [*]

Now I know I'm talking nonsense, so I'll stop right here and now. :-)

A

[*] said with a tip of the hat to the fond memory of Douglas Adams 




Continuations for fun and profit

2002-07-08 Thread Dan Sugalski

Okay, for those of you following along at home, here's a quick 
rundown of what a continuation is, and how it works. (This is made 
phenomenally easier by the fact that perl has continuations--try 
explaining this to someone used to allocating local variables on the 
system stack and get ready for frustration)

A continuation is a sort of super-closure. Like a closure it captures 
its lexical variables, so every time you use it, you're referring to 
the same set of variables, which live on until the continuation's 
destroyed. This works because the variables for a block are kept in a 
scratchpad--since each block has its own, and each scratchpad's 
mostly independent (mostly).

Now, imagine what would happen if the 'stack', which we track block 
entries, exits, sub calls, and so forth, was *also* done with a 
linked list of scratchpads, rather than as a real stack. You could 
have a sort of super closure that both remembered all your 
scratchpads *and* your spot in the call tree. That, essentially, is 
what a continuation is. We remember the scratchpads with variables in 
them *and* the scratchpads with stack information in them.

When we invoke a continuation, we put in place both the variables and 
call scratchpads, making it, in effect, as if we'd never really left 
the spot we took the continuation at. And, like normal closures, we 
can do this from wherever we like in the program.

The nice thing about continuations is you can do all the known 
control-flow operations (with perhaps the exception of a forward 
goto) with them, and you can use them to build new control flow 
structures. For example, let's take the while construct:

while ($foo) {
  $foo--;
}

Pretty simple. (For illustrative purposes) To do that with 
continuations, it'd look like:

$cont = take_continuation();
if ($foo) {
  $foo--;
  invoke($cont);
}

take_continuation() returns a continuation for the current point (or 
it could return one for the start of the next statement--either 
works), and invoke takes a continuation and invokes it. When you 
invoke a continuation you put the call scratchpads and lexical 
scratchpads back to the state they were when you took the 
continuation.

Presto--instant while loop. You can do for loops in a similar way, as 
well as any number of other control structures.
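Python has no first-class continuations, but a continuation-passing sketch
of that same loop may make the control flow concrete; here "invoking the
continuation" is just an ordinary function call, so keep the counts small:

    def while_via_continuation(foo):
        def cont(foo):
            # cont plays the role of $cont: "the rest of the loop from here on"
            if foo:
                foo -= 1
                return cont(foo)    # invoke($cont): jump back to the saved point
            return foo              # condition false: fall through past the loop
        return cont(foo)            # take_continuation() plus the first entry

    print(while_via_continuation(5))   # 0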
-- 
 Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
   teddy bears get drunk



Re: The Past, Present and Future of Continuations (was: Perl 6Summary)

2002-07-08 Thread Dan Sugalski

At 2:43 PM +0100 7/8/02, Andy Wardley wrote:
A short time ago, in a nearby  thread, Larry Wall wrote:
  Perhaps we should just explain continuations in terms of time travel.

Funny.  I wrote a message to this effect the other night, but decided
not to send it (too tired to decide if I was talking sense or nonsense).

I was about to propose that 'continuation' is too long a word for lazy
Perl folk to bandy around at will, and possibly too ivory tower for most
people to grok.

The one problem with using time travel is that people will expect the 
values of their variables to go back to what they were when the 
continuation is taken, which they won't.

You could natter on about variables being embedded in a separate 
n-dimensional reference frame while the control-flow is a way to 
model multidimensional cross-universe tunnelling... but that'd 
probably be a bit confusing. :)
-- 
 Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
   teddy bears get drunk



Re: Continuations for fun and profit

2002-07-08 Thread David M. Lloyd

On Mon, 8 Jul 2002, Dan Sugalski wrote:

 Pretty simple. (For illustrative purposes) To do that with
 continuations, it'd look like:

 $cont = take_continuation();
 if ($foo) {
   $foo--;
   invoke($cont);
 }

 take_continuation() returns a continuation for the current point (or it
 could return one for the start of the next statement--either works),

I think starting at the next statement would be cooler in some ways:

  $cont = take_continuation() and start_async_op($cont) and return;

  # do other stuff with results of async_op

- D

[EMAIL PROTECTED]



