Adding deref op [Was: backticks]

2004-04-21 Thread Matthijs van Duin
On Tue, Apr 20, 2004 at 10:55:51AM -0700, Larry Wall wrote:
The flip side is that, since we won't use C<`> as an operator in Perl
6, you're free to use it to introduce any user-defined operators
you like, including a bare C<`>.  All is fair if you predeclare.
Most languages won't even give you that...
I just realized there's another operator that has no infix meaning yet, and 
so is free to use (and perhaps more visually pleasing):

%hash\key   (or  $foo\bar\baz\42 )

vaguely reminiscent of DOS/Win32 paths :-D

Just curious, when adding such an operator myself, would it be possible to 
make it work on both hashes and arrays, without making the op very slow?

--
Matthijs van Duin  --  May the Forth be with you!


Re: Adding deref op [Was: backticks]

2004-04-21 Thread Matthijs van Duin
On Wed, Apr 21, 2004 at 01:02:15PM -0600, Luke Palmer wrote:
   macro infix:\ ($cont, $key)
   is parsed(/$?key := (-?letter\w* | \d+)/)
   {
   if $key ~~ /^\d+$/ {
   ($cont).[$key];
   }
   else {
   ($cont).«$key»;
   }
   }
That does all the magic at compile time.
True, but what about  $x\$y ? :-)

(which I'd want to work for consistency reasons.. so you can write 
$foo\bar\$baz\42 instead of ugly mixing like $foo\bar{$baz}\42 )

--
Matthijs van Duin  --  May the Forth be with you!


Re: Adding deref op [Was: backticks]

2004-04-21 Thread Matthijs van Duin
On Wed, Apr 21, 2004 at 03:37:23PM -0400, John Macdonald wrote:
What about $x\n?  The backslash already has meaning in strings
I use hash elements far more often outside than inside strings, so I could 
live with having to write $x«foo» for interpolated hash elements.

Anyway, you're missing the point of my question.  Since shorthand for hash 
elements has already been banned from the core by Larry, I'm now just 
exploring what is involved with adding it later on, independent of what 
actual syntax I'd use (a bashtick, backslash, or something else).

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks (or slash, maybe)

2004-04-19 Thread Matthijs van Duin
On Mon, Apr 19, 2004 at 03:34:13PM -0700, Sean O'Rourke wrote:
anything ending in a '/' is a regex, anything otherwise is a hash slice.
I don't understand. Could you give some examples? Is this in the context
of bare /path/to/foo, even?
   /foo/   # trailing slash -- so it's a regexp (m/foo/)
   /foo\/bar/  # trailing slash -- syntax error (m/foo/ bar/)
   /foo/a  # hash-path -- no trailing slash ($_.{'foo'}{'a'})
   /foo\/bar   # hash-path -- no trailing slash ($_.{'foo/bar'})
   /foo\/  # hash-path -- no trailing slash ($_.{'foo/'})
I think this is highly ambiguous.

$x = /foo * $bar/and +bar();

would that be:

$x = m/foo* $bar/ && (+bar());
 or
$x = $_.{'foo'} * $bar.{'and'} + bar();
?

As much as I see the appeal of this syntax, the / is simply too heavily used 
already.

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks

2004-04-16 Thread Matthijs van Duin
On Fri, Apr 16, 2004 at 07:12:44PM +0200, Juerd wrote:
Aaron Sherman skribis 2004-04-16  9:52 (-0400):
3. You proposed (late in the conversation) that both could co-exist, and
while that's true from a compiler point of view, it also leads to:
`stuff``stuff`stuff
Huh? No. That is a syntax error.
Actually, no, it's valid and means  qx/stuff/.{stuff}.{stuff}  which is 
of course bogus, but not a syntax error.

A slightly saner example would be:  `blah``-1  to get the last line of output 
from blah.

I agree with Aaron that it looks awful, but that simply means a programmer 
shouldn't do that.  If you try hard enough, you'll always be able to write 
horribly ugly code.


	`$a`b`c` # May or may not give an error, but shocking either way
Syntax error.
This is indeed a syntax error afaics.

Again, saying "look, you can combine things to make something ugly" is very 
poor reasoning.  Just because you can write code like

perl -e'connect$|=socket(1,2,1,$/=select+1),pack sa14,2,\nDBo\$\36;printd ! @ARGV\nq\n;print$/ +1=~/.+?^(.*?)^\./sm' perl

doesn't mean the language is bad.  It means I wrote awful code here.

So the only thing I can say in response to these convoluted examples is "don't 
do that, then".

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks

2004-04-15 Thread Matthijs van Duin
On Thu, Apr 15, 2004 at 12:27:12PM -0700, Scott Walters wrote:
Let me summarize my understanding of this (if my bozo bit isn't already 
irrevocably set):

* %hash<foo> retains the features of P5 $hash{foo} but does nothing to 
counter the damage of removal of barewords
Actually, %hash<foo> will be like p5's $hash{foo}, and more generally 
%hash<foo bar> is @hash{qw(foo bar)}, if I'm not terribly mistaken.

It's plain %hash{foo} that's affected.

So to summarize, the following would be equivalent:

%hash{foo}
%hash<foo>
%hash`foo

* %hash`foo occupies an important niche, trading features (slice, 
autovivification) to optimize for the common case, undoing the pain of the 
loss of barewords, serving as even a superior alternative
Autovivification is still possible, though the exact details would need to 
be worked out.  (Either always autovivify as a hash, or make it dependent on 
whether the key matches /^-?\d+\z/ )
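
A rough Perl 5 sketch of that second option -- deciding by the shape of the key.  The subscript() helper and its slot-reference calling convention are invented for illustration only; they're not what the real operator would compile to:

    use strict;
    use warnings;

    sub subscript {
        my ($slot, $key) = @_;            # $slot is a reference to the container slot
        if (ref $$slot eq 'ARRAY'
            or (!defined $$slot and $key =~ /^-?\d+\z/)) {
            return \$$slot->[$key];       # existing array, or autovivify as one
        }
        return \$$slot->{$key};           # existing hash, or autovivify as one
    }

    my $data;
    ${ subscript(\$data, 'foo') } = 42;       # non-numeric key: autovivifies a hash
    ${ subscript(\$data->{list}, 0) } = 'x';  # numeric key: autovivifies an array
    print "$data->{foo} $data->{list}[0]\n";  # prints "42 x"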

But indeed, this is my best argument too:  hashes are one of perl's top 
core features, and indexing with constant words or simple scalar variables 
is the most common way of using them.  It's used so much that by the 
Huffman principle it deserves very short and convenient notation.


* %hash`foo can be added by the user, but users are seldom aware of even a 
small fraction of the things on CPAN and there is a stigma against writing 
non-standard code
I fear that very much too.  I'd probably not use the syntax either in public 
code (like CPAN modules) if it required a non-core module, since it would be 
silly to require an external module just for syntax sugar.

Instead, I'd just be annoyed at it being non-core.


* %hash`foo and %hash ~ `ls` can coexist without breaking anything as this 
is currently illegal, unused syntax

* %hash`s is an example of a small thing that would be easy to implement in 
core but would be used constantly, giving a lot of bang for the buck

* Rather than eliciting public comment on %hash`foo the proposal is being 
rejected out of hand
Exactly.  Juerd may have accidentally aggravated the situation by implying in 
his original post that %hash`key requires the removal of ``.  It's clear now 
that the two issues are separate and should be discussed separately.

In case it's not obvious, I'm very much in favor of %hash`foo.  (I'm not 
entirely sure yet how I feel about removing ``... maybe just leave it until 
a better application for those ticks can be found)

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks

2004-04-14 Thread Matthijs van Duin
On Wed, Apr 14, 2004 at 02:18:48PM +0200, Juerd wrote:
I propose to use ` as a simple hash subscriptor, as an alternative to {}
and <>. It would only be useable for \w+ keys or perhaps -?\w+. As
with methods, a simple atomic (term exists only in perlreftut, afaik,
but I don't know another word to describe a simple scalar variable)
scalar should be usable too.
   %hash`key
   $hashref`foo`bar`baz`quux   
   %hash`$key
   $object.method`key
I absolutely love it.  Since hashes are used so much, I think it deserves 
this short syntax.  Note btw that it's not even mutually exclusive with qx's 
use of backticks.  To illustrate that, see:

http://www.math.leidenuniv.nl/~xmath/perl/5.8.3-patches/tick-deref.patch

It's a quick patch I made that adds the `-operator to perl 5.8.3, so you 
can try out how it feels.


With some imagination, this can also be used for arrays.
I like that too.  (though not (yet) implemented in my patch)

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks

2004-04-14 Thread Matthijs van Duin
On Wed, Apr 14, 2004 at 01:56:35PM -0700, Randal L. Schwartz wrote:
That's because they aren't particularly interesting in modules, but
in 10 line scripts, they show up quite frequently.
This undermines the rest of your request.
No, actually, it doesn't.  Juerd doesn't seem to like ``, but that point is 
entirely orthogonal to the introduction of the ` dereferencing operator.

The two uses don't conflict.  (which is why I was able to make a patch that 
adds the `-operator to perl 5.8.3)

--
Matthijs van Duin  --  May the Forth be with you!


Re: backticks

2004-04-14 Thread Matthijs van Duin
On Wed, Apr 14, 2004 at 01:36:21PM -0600, John Williams wrote:
%hash`$key
oops, you contradicted yourself here. "only be useable for \w+ keys"
I guess you disliked his idea so much you didn't bother to read what exactly 
he said, right?

As with methods, a simple [...] scalar should be usable too

This is of course natural.. many places in perl accept either a bareword or 
simple scalar, at least in p5.


You are repeating the errors of javascript.  $0[15] != $0{15}
No, he spotted the issue in advance and suggested a solution already.

--
Matthijs van Duin  --  May the Forth be with you!


Re: object property syntax [OT]

2003-09-25 Thread Matthijs van Duin
On Wed, Sep 24, 2003 at 07:53:39PM -0400, Todd W. wrote:
I posted a question to CLPM on how to do this with perl5 and we decided to 
use an 'lvalue' attribute on the subroutine and then make the returned 
lvalue in the sub a tied variable to intercept read/writes:

http://groups.google.com/groups?th=5489106ee6715c8e
Note that this has already kinda been done:

http://search.cpan.org/~juerd/Attribute-Property-1.04/Property.pm

This only does setter-code (the getter always uses an element of the object) 
but it's perhaps interesting to look at.

It works, but it's obviously slow
A::P uses the 'Want' module (if installed) to speed up the common cases.
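
The tie-plus-lvalue technique quoted above would look roughly like this in Perl 5.  This is a minimal sketch of the general trick (and of why it's slow), not how Attribute::Property is implemented; the class and field names are made up:

    use strict;
    use warnings;

    package FieldProxy;
    sub TIESCALAR { my ($class, $obj, $name) = @_; bless { obj => $obj, name => $name }, $class }
    sub FETCH     { my $self = shift; $self->{obj}{ $self->{name} } }
    sub STORE     {
        my ($self, $value) = @_;
        die "age must be non-negative\n" if $self->{name} eq 'age' && $value < 0;  # setter code
        $self->{obj}{ $self->{name} } = $value;
    }

    package Person;
    sub new { bless {}, shift }
    sub age : lvalue {
        my $self = shift;
        tie my $proxy, 'FieldProxy', $self, 'age';
        $proxy;                    # the tied proxy is returned as the lvalue
    }

    package main;
    my $p = Person->new;
    $p->age = 30;                  # goes through STORE
    print $p->age, "\n";           # goes through FETCH; prints 30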

--
Matthijs van Duin  --  May the Forth be with you!


Re: == vs. eq

2003-04-06 Thread Matthijs van Duin
On Sat, Apr 05, 2003 at 05:28:16PM -0700, Tom Christiansen wrote:
Is it possible that finite internal representations will differ in
internal representation yet produce identical series?
..[snip]..
Those define identical list, for any natural numbers X and Y, even as 
compile-time constants.  However, save for special case of X==Y, I do
not expect their internal representation to be the same.  
I just said you can *compare* them, I didn't say test whether they're 
identical.  Obviously comparing internal representations is a tricky 
business, and may have three results:  "yes, the lists they generate are 
equal", "no, the lists they generate are not equal", or "I have no clue".

In the last case, per-element comparison will need to be done.

   for ($i = 1; $i <= fn(); $i++)  
   for $i ( 1 .. fn() ) 
and making instead a list or array whose members are 
   ( 1 .. fn() )
However, do you evaluate fn() only once or repeatedly?
If a compiler does such an optimization, it needs to make sure fn() is 
evaluated repeatedly unless it knows for sure fn() will return the same 
value for each iteration and has no side-effects.  I think this means that 
if fn() does *not* meet those conditions, the compiler can not use a list, 
because it would risk changing the semantics.
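
For example (plain Perl 5), a fn() with a side effect makes that rewrite change behaviour:

    use strict;
    use warnings;

    my $limit = 5;
    sub fn { return $limit-- }            # side effect: each call lowers the limit

    my @c_style;
    for (my $i = 1; $i <= fn(); $i++) {   # fn() is re-evaluated every iteration
        push @c_style, $i;
    }

    $limit = 5;
    my @range = (1 .. fn());              # fn() is evaluated only once

    print "@c_style\n";                   # 1 2 3
    print "@range\n";                     # 1 2 3 4 5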


I believe we are indeed trying to define what we want it to do, no?

So sure, you can create a new infinite set by conjoining some new elements
to an existing one.  That's what all the numeric sets are, pretty much.
Do be careful that the result has consistent properties, though.
Well, in perl's tradition, I think it's more important for Inf to do what 
we mean than to be consistent.

IEEE Inf behavior is useful enough, with some constructions (such as the 
range operator) behaving specially when it occurs as one of the operands.


But most of the Laws of Arithmetic as you and I know them do not apply 
to these values.  (One could say as much for floating-point numbers, I 
suspect.)
Yes you can.  But since the p in perl stands for practical, I don't see 
that as a problem.

If we really want a mathematically pure system, it could probably be 
implemented using a module.

--
Matthijs van Duin  --  May the Forth be with you!


Re: == vs. eq

2003-04-06 Thread Matthijs van Duin
On Sun, Apr 06, 2003 at 05:52:30AM -0700, Paul wrote:
I just said you can *compare* them, I didn't say test whether they're 
identical.  Obviously comparing internal representations is a tricky 
business, and may have three results:  yes, the lists they generate 
are equal, no, the lists they generate are not equal, I have no 
clue.
In the last case, per-element comparison will need to be done.
Unless the compiler already knows it's from a generator.  
Then it only has to compare the existing elements, and if they compare, 
then the generator itself.
That falls in the first case:  the internal representation would consist of 
the already-generated elements and a generator for the rest.  Comparison 
of the internal states will yield "yep, they're equal", so no (potentially 
infinite) per-element comparison is necessary.

 ( 1 .. fn() )
 However, do you evaluate fn() only once or repeatedly?
If a compiler does such an optimization, it needs to make sure fn() 
is evaluated repeatedly unless it knows for sure fn() will return the 
same value for each iteration and has no side-effects.  I think this 
means that if fn() does *not* meet those conditions, the compiler can 
not use a list, because it would risk changing the semantics.
I must've missed something. Why would the compiler consider the value of 
fn() any of its business?
What you've missed is that the context was the optimization of:

for ($i = 1; $i < fn(); $i++)

Semantically this code is required to evaluate fn() for each iteration, so 
the optimizer can't just replace it with for ( 1 .. fn() ) in the general 
case; it can do so only if it can ascertain that fn() will return the same 
value for each iteration and has no side effects.

Perl needs to be able to handle complicated situations, but how often 
will you need imaginary numbers? Python handles complex numbers 
out-of-the-box; I've *never* seen a use for it, other than saying oh, 
that's cool. But then, I don't do deep math at my job. I just shovel 
bits. :)
Complex numbers can be useful, although not for typical applications in 
perl's target audience.

It seems to me they can however be easily added as a class.

Inf is going to be supported in the core, but we need to keep in mind 
that it doesn't represent INFINITY in any literal sense [...] and could 
easily hold a digital image of a tuna
I vote yes on that, if it doesn't add too much to the size of the runtime 
library :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: == vs. eq

2003-04-05 Thread Matthijs van Duin
On Sat, Apr 05, 2003 at 03:22:17PM -0700, Tom Christiansen wrote:
When you write:

   (1..Inf) equal (0..Inf)

I'd like Perl to consider that false rather than having a blank look
on its face for a long time.  
The price of that consideration would be to give the Mathematicians
blank looks on *their* faces for a very long time instead.  Certainly,
they'll be quick to tell you there are just as many whole numbers
as naturals.  So they won't know what you mean by equal up there.
Based on his description, he meant element-wise equality.  And since the 
first element of (1..Inf) is 1, and the first element of (0..Inf) is 0, 
I agree with the result being false.

So no blank stare from me (student of mathematics)

The length of both will be Inf of course (meaning countably infinite; I 
don't think we have a need for working with uncountably infinite sets in 
perl ;-)

Practically speaking, I'm not sure how--even whether--you *could*
define it.
You can define it very easily:  two lists are equal if the ith element of 
one list is equal to the ith element of the other list, for all valid 
indices i.

As for whether you can *evaluate* this test in bounded time, that depends. 
Computers are incapable of storing truly infinite lists, so the lists will 
have finite internal representations which you can compare.

As for two dynamically generated infinite lists (which you can't easily 
compare, for example if they're based on external input)... it will either 
return false in finite time, or spend infinite time on determining they're 
indeed equal.
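
As a Perl 5 sketch of that element-wise definition, with each list represented as an iterator sub (an assumed convention here: it returns undef once the list is exhausted):

    use strict;
    use warnings;

    sub lists_equal {
        my ($next_a, $next_b) = @_;
        while (1) {
            my ($x, $y) = ($next_a->(), $next_b->());
            return 1 if !defined $x && !defined $y;   # both ended: equal
            return 0 if !defined $x || !defined $y;   # different lengths: not equal
            return 0 if $x != $y;                     # differing element: not equal
        }
    }

    # (1..Inf) versus (0..Inf): the very first elements differ, so this answers
    # "not equal" immediately instead of looping forever.
    my ($i, $j) = (1, 0);
    print lists_equal(sub { $i++ }, sub { $j++ }), "\n";   # prints 0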

In other words, if you treat Inf as any particular number (which Mr
Mathematician stridently yet somewhat ineffectually reminds you that you are
*not* allowed to do!), then you may get peculiar results.
There is no problem with doing that, as long as you define what you want 
it to do.

Remember, most of mathematics is just an invention of humans :)

(crap about testing first/last N elements)
testing the first/last N elements is not the same as testing the whole list

for all N  :)


Mr Mathematician, purist that he is, has of course long ago thrown up his
hands in disgust, contempt, or both, and stormed out of the room
If he has, he's a very narrow-minded Mr Mathematician


how can you say 1+Inf?
Unless you're speech-impaired, it's not too hard

and 1..Inf will be problematic, too, since to say 1..Inf is also to say 
there must exist some number whose successor is Inf.
*cough*bullshit*cough* -- writing the interval 1..infinity is very common

I've skipped the rest.. not in the mood... but you make many references 
to a Mr Mathematician I don't think I want to work with... luckily I 
haven't seen him around here at the maths faculty

--
Matthijs van Duin  --  May the Forth be with you!


Re: == vs. eq

2003-04-05 Thread Matthijs van Duin
On Sun, Apr 06, 2003 at 12:38:29AM +0200, Matthijs van Duin wrote:
In other words, if you treat Inf as any particular number (which Mr
Mathematician stridently yet somewhat ineffectually reminds you that you are
*not* allowed to do!), then you may get peculiar results.
There is no problem with doing that, as long as you define what you want 
it to do.
Actually, if you really want to do infinities the math-way, then just 
grab a book on set theory.

One thing you might not like, however, is that when you go beyond the finite, 
it becomes necessary to differentiate between cardinal and ordinal numbers.

That's probably something you don't want to do in perl

The IEEE-float-style infinities are quite sufficient for most purposes

One thing I agree is that writing  1..Inf  is a *bit* sloppy since the 
range operator  n..m  normally produces the numbers i for which 
n <= i <= m  while  n..Inf  gives  n <= i < Inf

but I can live with it

--
Matthijs van Duin  --  May the Forth be with you!


Re: Ruminating RFC 93- alphabet-blind pattern matching

2003-04-03 Thread Matthijs van Duin
On Thu, Apr 03, 2003 at 07:29:37AM -0800, Austin Hastings wrote:
This has been alluded to before.

What would /A*B*/ produce?

Because if you were just processing the rex, I think you'd have to
finish generating all possibilities of A* before you began iterating
over B*...
The proper way would be to first produce all possibilities of length n 
before giving any possibility of length n+1.

''
'A'
'B'
'AA'
'AB'
'BB'
'AAA'
'AAB'
...
I haven't spent a millisecond working out whether that's feasible to 
implement, but from a theoretical POV it seems like the solution.
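
For /A*B*/ specifically the ordering is easy to produce, since every match of length n is some number of 'A's followed by 'B's; a quick Perl 5 sketch:

    use strict;
    use warnings;

    sub matches_up_to {
        my ($max_len) = @_;
        my @out;
        for my $n (0 .. $max_len) {
            for my $a_count (reverse 0 .. $n) {        # 'AA' before 'AB' before 'BB'
                push @out, ('A' x $a_count) . ('B' x ($n - $a_count));
            }
        }
        return @out;
    }

    print join(", ", map { "'$_'" } matches_up_to(2)), "\n";
    # '', 'A', 'B', 'AA', 'AB', 'BB'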

--
Matthijs van Duin  --  May the Forth be with you!


Re: Short-circuiting user-defined operators

2003-04-02 Thread Matthijs van Duin
Is there any specific reason this was a reply to Michael Lazzaro's Re: 
== vs. eq dated Tue, 1 Apr 2003 16:30:00 -0800 ?

(What I mean is, PLEASE don't use reply when you're not replying at all)

--
Matthijs van Duin  --  May the Forth be with you!


Re: temporization

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 10:56:14AM +0200, Matthijs van Duin wrote:
   temp $foo := $bar;   # temporarily bind $foo to $bar
   temp $foo = $bar;# temporarily assign the value of $bar to $foo
I just realized 'temp $foo = 3' might just as well mean "bind $foo to a new 
scalar and initialize it to 3" as "temporarily assign 3 to $foo".

I guess it wasn't a nice idea anyway considering it'd change the meaning of 
temp based on which operator you let loose on it, which is rather evil and 
rude now that I think of it.

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 07:45:30AM -0800, Austin Hastings wrote:
I've been thinking about closures, continuations, and coroutines, and
one of the interfering points has been threads.
What's the P6 thread model going to be?

As I see it, parrot gives us the opportunity to implement preemptive
threading at the VM level, even if it's not available via the OS.
I think we should consider cooperative threading, implemented using 
continuations.  Yielding to another thread would automatically happen when 
a thread blocks, or upon explicit request by the programmer.

It has many advantages:
1. fast: low task switching overhead and no superfluous task switching
2. no synchronization problems.  locking not needed in most common cases
3. thanks to (2), shared state by default without issues
4. most code will not need any special design to be thread-safe, even when 
it uses globals shared by all threads.
5. no interference with continuations etc., since cooperative threads are built on them
6. less VM code since an existing mechanism is used, which also means 
less code over which to spread optimization efforts

And optionally, if round-robin scheduling is really desired for some odd 
reason (it's not easy to think of a situation), then that can be easily added 
by using a timer of some kind that does a context switch - but you'd regain 
the synchronization problems you have with preemptive threading.

One problem with this threading model is that code that runs a long time 
without blocking or yielding will hold up other threads.  Preventing rude 
code from affecting the system is one of the reasons modern OSes use 
preemptive scheduling.  This problem is obviously much smaller in perl 
scripts however since all of the code is under control of the programmer. 
And if a CPAN module contains rude code, this would be known soon enough. 
(the benefits of Open Source :-)

Another problem is the inability to easily take advantage of symmetrical 
multiprocessing, but this basically only applies to code that does heavy 
computation.

I think if we apply the Huffman principle here by optimizing for the most 
common case, cooperative threading wins over preemptive threading.

People who really want to do SMP should just fork() and use IPC, or use 
the Thread::Preemptive module which *someone* will no doubt write :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 10:50:59AM -0800, Michael G Schwern wrote:
I must write my code so each operation only takes a small fraction of time
or I must try to predict when an operation will take a long time and yield
periodically.
Really.. why?  When you still have computation to be done before you can 
produce your output, why yield?  There are certainly scenarios where you'd 
want each thread to get a fair share of computation time, but if the 
output from all threads is desired, whoever is waiting for them probably 
won't care who gets to do computation first.

Worse, I must trust that everyone else has written their code to the above
spec and has accurately predicted when their code will take a long time.
Both this and the above can be easily solved by a timer event that forces 
a yield.  Most synchronization issues this would introduce can probably be 
avoided by deferring the yield until the next checkpoint determined by 
the compiler (say, the loop iteration)
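
A rough Perl 5 sketch of that idea: the timer only sets a flag, and the actual switch happens at a safe checkpoint (here, the top of the loop).  thread_yield() is just a stub standing in for a real cooperative scheduler:

    use strict;
    use warnings;

    my $yield_asap = 0;
    $SIG{ALRM} = sub { $yield_asap = 1; alarm 1 };   # the timer only sets a flag (and re-arms)
    alarm 1;

    sub thread_yield { print "yielding at a checkpoint\n" }   # stub for a real scheduler

    for (1 .. 5) {
        select(undef, undef, undef, 0.4);    # pretend to do 0.4s of uninterruptible work
        if ($yield_asap) {                   # checkpoint: a safe place to switch threads
            $yield_asap = 0;
            thread_yield();
        }
    }
    alarm 0;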

I think this is a minor problem compared to the hurdles (and overhead!) of 
synchronization.

Cooperative multitasking is essentially syntax sugar for an event loop.
No, since all thread state is saved.  In syntax and semantics they're much 
closer to preemptive threads than to event loops.

We need good support at the very core of the language for preemptive 
threads.  perl5 has shown what happens when you bolt them on both 
internally and externally.  It is not something we can leave for later.
I think perl 6 will actually make it rather easy to bolt it on later.  You 
can use fork(), let the OS handle the details, and use tied variables for 
sharing.  I believe something already exists for this in p5 and is apparently 
faster than ithreads.  I haven't dug into that thing though, maybe it has 
other problems again.  No doubt you'll point 'em out for me ;-)

Cooperative multitasking, if you really want it, can be bolted on later or
provided as an alternative backend to a real threading system.
I agree it can be bolted on later, but so can preemptive threads, probably.  
As Simon pointed out, optimizing for the common case means skipping threads 
altogether for now.

And I resent how you talk about non-preemptive threading as not being real 
threading.  Most embedded systems use tasking/threading models without 
round-robin scheduling, and people who try to move applications that perform 
real-time tasks from MacOS 9 to MacOS X curse the preemptive multitasking 
the latter has.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Conditional C<return>s?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 11:04:35AM -0800, Michael Lazzaro wrote:
my bool $x = result_of_some_big_long_calculation(...args...);
return $x if $x;
Is there a way that doesn't require the named variable?
$_ and return  given big_calculation();

or:

given big_calculation() {
return when true;
}
--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 01:58:19PM -0500, Dan Sugalski wrote:
Dan doesn't like it. :)

Well, there are actually a lot of disadvantages, but that's the only 
important one, so it's probably not worth much thought over alternate 
threading schemes for Parrot at least--it's going with an OS-level 
preemptive threading model.
If you can assure me that the hooks will be available in critical routines 
(blocking operations) to allow proper implementation of cooperative threads 
in a perl module, then that's all the support from the parrot VM I need :-)

I just hope you won't make my non-preemptive-threaded applications slower 
with your built-in support for preemptive threads :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 11:58:01AM -0800, Michael G Schwern wrote:
Off-list since this tastes like it will rapidly spin out of control.
On-list since this is relevant for others participating in the discussion


Classic scenario for threading: GUI.  GUI uses my module which hasn't been
carefully written to be cooperative.  The entire GUI pauses while it waits
for my code to do its thing.  No window updates, no button pushes, no
way to *cancel the operation that's taking too long*.
OK, very true, I was more thinking of something like a server that uses 
a thread for each connection.

Luckily I already mentioned that automatic yielding is not too hard.  A 
timer that sets a "yield asap" flag that's tested every iteration of a loop 
should work - maybe something with even less overhead can be cooked up.


I hope this is not a serious suggestion to implement preemptive threads
using fork() and tied vars.  That way ithreads lie.
Actually, ithreads are slower because they don't do copy-on-write while 
the OS usually does.

fork() moves the problem to the OS, where bright people have already spent 
a lot of time optimizing things, I hope at least ;)

I suppose how much faster it is to do things within the VM rather than 
using forked processes depends on how much IPC happens.  In your GUI 
example, the answer is: very little, only status updates.

The existing system you probably mean is POE
No, I wasn't.  I looked it up, it's called forks.

Besides, it would be silly as Dan has already said Parrot will support
preemptive multitasking and that's half the hard work done.  The other 
half is designing a good language gestalt around them.
OK, as long as it doesn't hurt performance of non-threaded apps I 
obviously have no problem with *supporting* preemptive threading, since 
they're certainly useful for some applications.  But coop threads are 
more useful in the general case - especially since they're simpler to use 
thanks to the near-lack of synchronization problems.  Simplicity is good, 
especially in a language like perl.


And I resent how you talk about non-preemptive threading as not being 
real threading.  
My biases come from being a MacOS user since System 6.  MultiFinder
nightmares.
Valid point (I'm also a long-time MacOS user), but cooperative multitasking 
isn't the same as cooperative threading.  We're talking about the scheduling 
between threads inside one process; and we can avoid the lockup problem in 
the VM with automatic yielding.

This makes most of the problems of cooperative threading disappear, while 
leaving the advantages intact.

If we want to support real-time programming in Perl
No, I was merely pointing out that it's not always a step forward for all 
applications.  Some people made good use of the ability to grab all the 
CPU time they needed on old MacOS.

None of this precludes having a cooperative threading system, but we
*must* have a preemptive one.
"must" is a big word; people happily used computers a long time before any 
threading was used ;-)

It looks like we could use both very well though

--
Matthijs van Duin  --  May the Forth be with you!


Re: Conditional C<return>s?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 12:12:54PM -0800, Michael Lazzaro wrote:
On Monday, March 31, 2003, at 11:18  AM, Matthijs van Duin wrote:
Don't those return C<undef>, as opposed to the value of C<$_>?  I.e. 
wouldn't it be:

$_ and return $_ given big_calculation();
-or-
given big_calculation() {
return $_ when true;
}
Oops, yes, you're right of course

Sorry :)

--
Matthijs van Duin  --  May the Forth be with you!


Re: How shall threads work in P6?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 07:21:03PM +0100, Simon Cozens wrote:
[EMAIL PROTECTED] (Matthijs Van Duin) writes:
I think if we apply the Huffman principle here by optimizing for the
most common case, cooperative threading wins from preemptive threading.
Well, if you optimize for the most common case, throw out threads altogether.
Well, I would almost agree with you, since cooperative threading can almost 
entirely be done in perl code, given that it is built on continuations.  I 
actually gave an example of that earlier.

The only thing is that blocking system operations like file-descriptor 
operations need some fiddling.  (first try a non-blocking fd operation; if 
that fails, block the thread and yield to another; if all threads are 
blocked, do a select() or kqueue() or something similar over all fds on 
which threads are waiting)
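
Roughly, in Perl 5 terms -- current_thread(), thread_yield() and thread_resume() are hypothetical hooks of the cooperative scheduler, not a real API; the IO::Select part is ordinary Perl:

    use strict;
    use warnings;
    use IO::Select;
    use POSIX qw(EAGAIN EWOULDBLOCK);

    my %waiting;    # fileno => [handle, parked thread]

    sub thread_read {
        my ($fh, $len) = @_;
        while (1) {
            my $n = sysread($fh, my $buf, $len);
            return $buf if defined $n;                          # data (or EOF) without blocking
            die "read: $!" unless $! == EAGAIN || $! == EWOULDBLOCK;
            $waiting{fileno $fh} = [$fh, current_thread()];     # park this thread on its fd
            thread_yield();                                     # let another thread run
        }
    }

    # Called by the scheduler once every thread is parked on an fd:
    sub wait_for_io {
        my $sel = IO::Select->new(map { $_->[0] } values %waiting);
        for my $ready ($sel->can_read) {                        # one select() serves all threads
            my $parked = delete $waiting{fileno $ready};
            thread_resume($parked->[1]);                        # wake the thread waiting on it
        }
    }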

If the hooks exist to handle this in a perl module, then I think we can 
skip the issue mostly, except maybe the question what to include with perl 
in the default installation.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Conditional C<return>s?

2003-03-31 Thread Matthijs van Duin
On Mon, Mar 31, 2003 at 02:58:12PM -0800, Michael Lazzaro wrote:
my $x = baz(...args...);
return $x if $x;
I'm looking for a Perl6 way to say that oft-repeated, oft-chained 
two-line snippet up there without declaring the temporary variable.  
Using C<given> or C<when>, maybe?
$_ and return $_ given baz(...args...);

note that putting & in front of a sub call won't work in perl 6 (that syntax is 
used to actually refer to the right sub var itself, iirc)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Conditional C<return>s?

2003-03-31 Thread Matthijs van Duin
On Tue, Apr 01, 2003 at 09:25:39AM +1000, Damian Conway wrote:
&baz does refer to the Code object itself, as you say.

However, the &bar(...) syntax *will* DWYM too. That's because:

	&baz(@args);

is just a shorthand for:

	&baz.(@args);
That's not what page 5 of Apoc 6 says:

<quote>
Note that when following a name like &factorial, parentheses do not 
automatically mean to make a call to the subroutine. (This Apocalypse 
contradicts earlier Apocalypses. Guess which one is right...)

   $val = &factorial($x);  # illegal, must use either
   $val = factorial($x);   #   this or
   $val = &factorial.($x); #   maybe this.
In general, don't use the & form when you really want to call something.
</quote>
--
Matthijs van Duin  --  May the Forth be with you!


Re: This week's Summary

2003-03-26 Thread Matthijs van Duin
Apologies for nitpicking, but you misspelled my name as Mattijs 4 times 
in the summary.  The right spelling is Matthijs :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: A6: argument initializations via //=, ||=, ::=

2003-03-25 Thread Matthijs van Duin
On Wed, Mar 26, 2003 at 09:19:42AM +1100, Damian Conway wrote:
my $x = 1;    # initialization
   $x = 1;    # assignment
Woo, C++   :-)

Considering 'our' merely declares a lexical alias to a package var, how 
do we initialize package vars?

--
Matthijs van Duin  --  May the Forth be with you!


Re: is static? -- Question

2003-03-24 Thread Matthijs van Duin
On Mon, Mar 24, 2003 at 01:37:01PM -0500, Dan Sugalski wrote:
Since I'd as soon not encourage this, how about INSTANTIATE? Nice and 
long and therefore discouraging. :)
Nothing a macro can't fix :-D

--
Matthijs van Duin  --  May the Forth be with you!


Re: is static? -- Question

2003-03-22 Thread Matthijs van Duin
On Sat, Mar 22, 2003 at 10:24:09PM +0200, arcadi shehter wrote:
 sub a {
 state $x;
 my $y;
 my sub b { state $z ; return $x++ + $y++ + $z++ ; }
 return b;   # is a \ before b needed?
 }
will all b refer to  the same $z ?
yes, they will

does it mean that this is legitimate 

 sub a {
 state $x;
 my $y;
 state sub b { state $z ; return $x++ + $y++ + $z++ ; }
 return b;   # is a \ before b needed?
 }
No, since you can't refer to $y in that sub (perl 5 actually allows you to 
do that but gives a warning 'Variable %s will not stay shared' - but I 
hope perl 6 will simply give a compile-time error)
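
For reference, the Perl 5 behaviour in question (a minimal example):

    use strict;
    use warnings;   # emits: Variable "$y" will not stay shared

    sub a {
        my $y = shift;
        sub b { return $y }   # named inner sub: captures only the first $y
        return b();
    }

    print a(1), "\n";   # 1
    print a(2), "\n";   # still 1 -- b() kept the $y from the first call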

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-20 Thread Matthijs van Duin
On Thu, Mar 20, 2003 at 08:49:28AM -0800, Austin Hastings wrote:
--- Matthijs van Duin [EMAIL PROTECTED] wrote:
you seem to have a much more complex model of hypotheses 
than what's in my head.
The complex model is right -- in other words, if hypotheses are to be a
first-class part of the language then they must interoperate with all
the other language features.
(lots of explanation here)
You're simply expanding on the details of your complex model - not on the 
need for it in the first place.

I'll see if I can write some details/examples of my model later, and show 
how it interacts with various language features in a simple way.


This leaves only behavior regarding preemptive threads, which is
actually very easy to solve:  disallow hypothesizing shared 
variables -- it simply makes no sense to do that.  Now that 
I think of it, temporizing shared variables is equally bad news,
so this isn't something new.
I just realized there's another simple alternative: make it cause the 
variable to become thread-local for that particular thread.


Hypothesize all the new values you wish, then pay once to get a mux,
then keep all the data values while you've got the mux. Shrinks your
critical region
You're introducing entirely new semantics here, and personally I think 
you're abusing hypotheses, although I admit in an interesting and 
potentially useful way. I'll spend some thought on that.


My experience has been that when anyone says I don't see why anyone
would ..., Damian immediately posts an example of why.
No problem since it works fine in my model (I had already mentioned that 
earlier) - I just said *I* don't see why anyone would.. :-)


So, stop talking about rexen. When everyone groks how continuations
should work, it'll fall out. 
rexen were the main issue: Dan was worried about performance

(And if you reimplement the rexengine using continuations and outperform 
Dan's version by 5x or better, then we'll have another Geek Cruise to 
Glacier Bay and strand Dan on an iceberg. :-)
I don't intend to outperform him.. I intend to get the same performance 
with cleaner, simpler and more generic semantics.

But as I said in my previous post.. give me some time to work out the 
details.. maybe I'll run into fatal problems making the whole issue moot :)

BTW, you say "reimplement"?  Last time I checked, hypothetical variables 
weren't implemented yet, let alone properly interact with continuations. 
Maybe it's just sitting in someone's local version, but until I have 
something to examine, I can't really compare its performance to my system.

--
Matthijs van Duin  --  May the Forth be with you!


Re: prototype (was continuations and regexes)

2003-03-20 Thread Matthijs van Duin
On Thu, Mar 20, 2003 at 11:38:31AM -0800, Sean O'Rourke wrote:
Here's what I take to be a (scheme) prototype of Matthijs' success
continuations approach.  It actually works mostly by passing closures and
a state object, ...
Matthijs -- is this what you're describing?
It sounds like approach #2 (callback) I listed in my original post

Unfortunately, #1 is the more appealing approach of the two and is what this 
whole thread has been about so far.  I pretty much abandoned #2 early on.
I'll see if I can take a look at it later.

#2's only advantage was that - as you noted - it doesn't need continuations 
for backtracking, but uses the normal call-chain.

I've never really done anything with scheme but I know the syntax mostly, so 
I'll see if I can read it later on -- you obviously put quite some effort in 
writing it, so it deserves to be read :-)

Dan -- given that the real one could optimize simple operators by
putting a bunch of them inside a single sub, does this look too painful?
I doubt he'll like this -- while the continuations-model is still mostly 
like his model (structurally), the callback-model isn't.  I also think it 
has less opportunity for optimizations but I might be wrong about that.

--
Matthijs van Duin  --  May the Forth be with you!


Re: prototype (was continuations and regexes)

2003-03-20 Thread Matthijs van Duin
Oops, I just noticed Sean had mailed Dan and me privately, not on the list.. 
sorry for sending the reply here :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Tue, Mar 18, 2003 at 09:28:43PM -0700, Luke Palmer wrote:
Plan 1:  Pass each rule a Isuccess continuation (rather than a
backtrack one), and have it just return on failure.  The big
difference between this and your example is that Clets are now
implemented just like Ctemps.  Too bad Clet needs non-regex
behavior, too.
That's mechanism #2, not #1

You probably don't mean the word continuation here though, since 
a continuation never returns; once a rule invokes the success 
continuation it can't regain control except via another continuation.

You probably simply mean a closure.


Plan 2:  Call subrules as plain old subs and have them throw a
backtrack exception on failure (or just return a failure-reporting
value... same difference, more or less).
But.. say you have:

<foo> <bar>

How would this be implemented?  When <bar> fails, it needs to backtrack 
into <foo>, which has already returned.  Are you saying every rule will be 
an explicit state machine?


This has the advantage that C<let> behaves consistently with the 
rest of Perl
What do you mean?


I looked around in Parrot a little, and it seems like continuations
are done pretty efficiently.
Yes, I noticed that too

--
Matthijs van Duin  --  May the Forth be with you!


Re: A6: Quick questions as I work on Perl6::Parameters

2003-03-19 Thread Matthijs van Duin
On Tue, Mar 18, 2003 at 11:36:13PM +, Simon Cozens wrote:
Seriously, someone on IRC the other day was claiming that they already
had a P6RE-in-P5 implementation, and did show me some code, but I've
forgotten where it lives or their real name.
http://www.liacs.nl/~mavduin/P6P5_0.00_01.tar.gz
That someone would be me :-)

I should note that this version has wrong semantics: it is incapable of 
doing backtracking in various cases, including subrules.  It only does 
backtracking within subexpressions that use just simple regex syntax (because 
I translate those to a single perl 5 regex match)

My attempt at getting its semantics right is what triggered all my recent 
backtracking-related posts.  (since continuations are unavailable, it looks 
like I'll have to settle for the callback system)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Statement modifiers (yes, again)

2003-03-19 Thread Matthijs van Duin
On Tue, Mar 18, 2003 at 08:53:23PM -0700, Luke Palmer wrote:
How is a left-associative operator less special than a non-associative 
one?
Ehm, most operators in perl are left-associative, so you probably mean R2L 
short-circuiting, but even then I'm not sure what you're trying to say here.


And you speak of consistency, but wouldn't it be better to have C<if>
be consistent with C<for> and C<while> rather than C<and> and C<or>?
(Seeing as C<if> is explicitly a control-flow construct)
'and' is a flow-control construct too..  "foo if bar"  and  "bar and foo"  
work identically.  Behaviorally, 'if' is grouped with 'and'.

But I suppose based on the name people will group the 'if' modifier with 
'for' rather than with 'and'..

   Then they'll assume they can do:
   FOO for @BAR while $BAZ;
dunno.. people try all sorts of things that can't actually be done, but I 
suppose in this case it's a plausible extrapolation.

I guess to be honestly consistent all modifiers would have to become 
operators, which would bring us back to the multiple statement modifiers 
to which Larry said no..

I'll rest my case

--
Matthijs van Duin  --  May the Forth be with you!


random code snippet (cooperative threading)

2003-03-19 Thread Matthijs van Duin
I recently fiddled around a bit with how one might implement cooperative 
threading in perl 6 using call-with-current-continuation (callcc), so I 
thought I'd share it with you

(since continuations are often poorly understood, more examples is always 
better :-)

This is assuming callcc exists and that the continuation is topic inside it; 
if that's not a built-in then callcc can no doubt be defined using whatever 
is available to make continuations.

Enjoy!

my @queue;

sub thread_exit () {
    shift(@queue)(undef);    # pass control to the next queued thread; it receives undef
    exit;
}
sub thread_new (&code) {
    push @queue, callcc {
        unshift @queue, $_;  # remember how to return from thread_new
        code();              # run the new thread's body immediately
        thread_exit;
    }
}
sub thread_yield () {
    if @queue {
        my $prev = callcc shift @queue;  # resume the next thread, handing it our continuation
        push @queue, $prev  if $prev;    # re-queue whoever just yielded to us
    }
}


thread_new {
    for 1..10 -> $i {
        print "foo: $i\n";
        thread_yield;
    }
}
thread_new {
    for 1..10 -> $i {
        print "bar: $i\n";
        thread_yield;
    }
}
thread_new {
    for 1..10 -> $i {
        print "baz: $i\n";
        thread_yield;
    }
}
thread_exit;

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
Hmm, good point

Or even better.. I should just implement both examples and benchmark them; 
they're simple enough and the ops are available.

I guess it's time to familiarize myself with pasm :)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 01:01:28PM +0100, Matthijs van Duin wrote:
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
Hmm, good point

Or even better.. I should just implement both examples and benchmark them; 
they're simple enough and the ops are available.
except I forgot entirely about let

however the implementation of let will have an impact on the performance of both 
systems.. oh well, I'll just have to estimate like you said :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:38:54AM +0100, Leopold Toetsch wrote:
I would propose, estimate the ops you need and test it :)
I haven't completed testing yet, however it's becoming clear to me that 
this is likely to be a pointless effort

There are so many variables that can affect performance here that the 
results I may find in these tests are unlikely to have any relation to 
the performance of rules in practice.

1. making continuations affects the performance of *other* code (COW)
2. the let operation is missing and all attempts to fake it are silly
3. to really test it, I'd need to make subrules full subroutines, but then 
  the performance difference will probably disappear in the overhead of all 
  other stuff.  To test I'd need large, realistic patterns; and I'm 
  certainly not in the mood to write PIR for them manually.

And it appears that on my machine continuations and garbage collection have
a quarrel, which also makes testing problematic.
I guess the only way to find out is to implement both systems and compare 
them using a large test set of realistic grammars.  Or of course just 
implement it using continuations (system #1), since the speed difference 
probably isn't gonna be huge anyway.

Here is my test program for continuation and the results on my machine:

# aaab ~~ / ^ [ a | a* ] ab fail /

set I5, 1000
sweepoff# or bus error
collectoff  # or segmentation fault

begin:
set S0, "aaab"
set I0, 0
new P0, .Continuation
set_addr I1, second
set P0, I1
rx_literal S0, I0, "a", backtrack
branch third
second:
new P0, .Continuation
set_addr I1, fail
set P0, I1
deeper:
rx_literal S0, I0, "a", third
save P0 # normally hypothesize
new P0, .Continuation
set_addr I1, unwind
set P0, I1
branch deeper
unwind:
dec I0  # normally de-hypothesize
restore P0  # normally de-hypothesize
third:
rx_literal S0, I0, "ab", backtrack
sub I0, 2   # normally de-hypothesize
backtrack:
invoke
fail:
dec I5
if I5, begin
end


  OPERATION PROFILE 

 CODE   OP FULL NAME CALLS  TOTAL TIMEAVG TIME
 -     ---  --  --
 0  end  10.290.29
40  set_addr_i_ic 50000.0249280.05
46  set_i_ic  10010.0105730.11
60  set_s_sc  10000.0057170.06
66  set_p_i   50000.0162010.03
   213  if_i_ic   10000.0028480.03
   274  dec_i 40000.0113900.03
   370  sub_i_ic  10000.0042270.04
   675  save_p30000.1923090.64
   682  restore_p 30000.2464570.82
   719  branch_ic 40000.0122160.03
   770  sweepoff 10.140.14
   772  collectoff   10.030.03
   786  new_p_ic  50000.1794030.36
   819  invoke50000.0262850.05
   962  rx_literal_s_i_sc_ic 10.0542600.05
 -     ---  --  --
16   480040.7868610.16
iBook; PPC G3; 700 Mhz

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 10:40:02AM -0500, Dan Sugalski wrote:
By compile-time interpolation. foo isn't so much a subroutine as a 
macro. For this to work, if we had:

  foo: \w+?
  bar: [plugh]{2,5}
then what the regex engine *really* got to compile would be:

   (\w+?) ([plugh]{2,5})

with names attached to the two paren groups. Treating them as actual 
subroutines leads to madness,
Ehm, Foo.test cannot inline Foo.foo since it may be overridden:

grammar Foo {
    rule foo { \w+? }
    rule bar { [plugh]{2,5} }
    rule test { <foo> <bar> }
}
grammar Bar is Foo {
    rule foo { <alpha>+? }
}
What you say is only allowed if I put "is inline" on foo.



continuations don't quite work
Care to elaborate on that?  I'd say they work fine

We do, after all, want this fast, right?
Of course, and we should optimize as much as we can - but not optimize 
*more* than we can.  Rules need generic backtracking semantics, and that's 
what I'm talking about.  Optimizations to avoid the genericity of these 
backtracking semantics are for later.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 11:09:01AM -0500, Dan Sugalski wrote:
At the time I run the regex, I can inline things. There's nothing 
that prevents it. Yes, at compile time it's potentially an issue, 
since things can be overridden later,
OK, but that's not how you initially presented it :-)


you aren't allowed to selectively redefine rules in the middle of a regex 
that uses those rules. Or, rather, you can but the update won't take 
effect until after the end
I don't recall having seen such a restriction mentioned in Apoc 5.

While I'm a big fan of optimization, especially for something like this, 
I think we should be careful about introducing mandatory restrictions just 
to aid optimization.  ("is inline" will allow such optimizations, of course)


There's issues with hypothetical variables and continuations. (And 
with coroutines as well) While this is a general issue, they come up 
most with regexes.
I'm still curious what you're referring to exactly.  I've outlined possible 
semantics for hypothetical variables in earlier posts that should work.


We do, after all, want this fast, right?
Of course, and we should optimize as much as we can - but not 
optimize *more* than we can.  Rules need generic backtracking 
semantics, and that's what I'm talking about.
No. No, in fact they don't. Rules need very specific backtracking 
semantics, since rules are fairly specific. We're talking about 
backtracking in regular expressions, which is a fairly specific 
generality. If you want to talk about a more general backtracking 
that's fine, but it won't apply to how regexes backtrack.
My impression from A5 and A6 is that rules are methods.  They're looked up 
like methods, they can be invoked like methods, etc.

I certainly want to be able to write rules myself, manually, when I think 
it's appropriate; and use these as subrules in other methods.  Generic 
backtracking semantics are needed for that, and should at least conceptually 
also apply to normal rules.

When common sub-patterns are inlined, simple regexen will not use runtime 
subrules at all, so the issue doesn't exist there - that covers everything 
you would do with regexen in perl 5 for example.

When you do use real sub-rules, you're getting into the domain previously 
held by Parse::RecDescent and the like.  While these should of course still 
be as fast as possible, a tiny bit of overhead on top of regular regex is 
understandable.

However, such overhead might not even be needed at all:  whenever possible 
optimizations should be applied, and rules are free to use special hacky 
but fast calling semantics to subrules if they determine that's possible. 
But I don't think a special optimization should be elevated to the official 
semantics.  I say, make generic semantics first, and then optimize the heck 
out of it.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 12:35:19PM -0500, Dan Sugalski wrote:
Then I wasn't clear enough, sorry. This is perl -- the state of 
something at compile time is just a suggestion as to how things 
ultimately work.
Yes, hence my surprise about actually inlining stuff, luckily that was 
just a misunderstanding :-)


I'll nudge Larry to add it explicitly, but in general redefinitons of 
code that you're in the middle of executing don't take effect 
immediately, and it's not really any different for regex rules than 
for subs.
Ah, but we're not redefining the sub that's running, but the subs it's 
about to call.  That works for subs, and Simon Cozens already pointed out 
we certainly also need it for rules :-)


Actually, we should be extraordinarily liberal with the application 
of restrictions at this phase. It's far easier to lift a restriction 
later than to impose it later,
This is perl 6, we can add a new restriction next week

and I very much want to stomp out any constructs that will force slow code 
execution. Yes, I may lose, but if I don't try...
You're absolutely right, and optimization is very important to me too.  But 
you can't *only* look at the speed of constructs, or we'll be coding in C 
or assembly :-)

We'll need to meet in the middle..


The issue of hypotheticals is complex.
Well, I'm a big boy, I'm sure I can handle it.  Are you even talking about 
semantics or implementation here?  Because I already gave my insights on 
semantics, and I have 'em in my head for implementation too but I should 
probably take those to perl6-internals instead.

Ultimately the question is "How do you backtrack into arbitrary code, 
and how do we know that the arbitrary code can be backtracked into?" 
My answer is "we don't", but I'm not sure how popular that particular 
answer is.

I say, make generic semantics first, and then optimize the heck out of it.
That's fine. I disagree. :)
Now that Simon Cozens has established that sub-rules need to be looked up 
at runtime, I think we can both be happy:

As far as I can see, a rule will consist of two parts: The wrapper that 
will handle stuff when the rule is invoked as a normal method, perhaps 
handle modifiers, handle searches for unanchored matches, setup the state, 
etc;  and the actual body that does a match at the current position.

Now, what you want is that subrule-invocation goes directly from body to 
body, skipping the overhead of method invocation to the wrapper.  I say, 
when you look up the method for a subrule, check if it is a regular rule 
and if so call its body directly, and otherwise use the generic mechanism.

I'll get my lovely generic semantics with the direct body-body calling 
hidden away as an optimization details, and I get the ability to write 
rule-methods in perl code.

You still get your low-overhead body-body calls and therefore the speed 
you desire (hopefully).  Since you need to fetch the rule body anyway, 
there should be no extra overhead: where you'd normally throw an error 
(non-rule invoked as subrule) you'd switch to generic invocation instead.
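
A rough Perl 5 sketch of that wrapper/body split and the proposed dispatch; every name here is illustrative, not any real engine's API:

    use strict;
    use warnings;

    package Rule;

    sub new { my ($class, $body) = @_; bless { body => $body }, $class }

    # Wrapper: invoked when the rule is used as a normal method; handles the
    # search for an unanchored match and any per-match setup.
    sub match {
        my ($self, $str) = @_;
        for my $pos (0 .. length $str) {
            my $end = $self->body($str, $pos);
            return [$pos, $end] if defined $end;
        }
        return undef;
    }

    # Body: try to match at exactly $pos; return the end position or undef.
    sub body { my ($self, $str, $pos) = @_; $self->{body}->($str, $pos) }

    # Subrule invocation: call the body directly when the subrule is an ordinary
    # compiled rule; fall back to the generic call for a hand-written rule-method
    # (match_at() here is a hypothetical name for that generic entry point).
    sub call_subrule {
        my ($rule, $str, $pos) = @_;
        return $rule->body($str, $pos) if ref($rule) eq 'Rule';   # fast body-to-body path
        return $rule->match_at($str, $pos);                       # generic path
    }

    package main;
    # e.g. a rule body that matches one or more 'a's at the given position:
    my $as = Rule->new(sub {
        my ($str, $pos) = @_;
        my ($run) = substr($str, $pos) =~ /^(a+)/ or return undef;
        return $pos + length $run;
    });
    print "@{ $as->match('xxaaab') }\n";   # prints "2 5"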

Sounds like a good deal? :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
On Wed, Mar 19, 2003 at 02:31:58PM -0500, Dan Sugalski wrote:
Well, I'm not 100% sure we need it for rules. Simon's point is 
well-taken, but on further reflection what we're doing is subclassing 
the existing grammar and reinvoking the regex engine on that 
subclassed grammar, rather than redefining the grammar actually in 
use. The former doesn't require runtime redefinitions, the latter 
does, and I think we're going to use the former scheme.
That's not the impression I got from Simon

It would also be rather annoying.. think about balanced braces etc, take 
this rather contrived, but valid example:

$x ~~ m X {
macro ... yada yada yada;
} X;
It seems to me that you're really inside a grammar rule when that macro 
is defined.  Otherwise you'd have to keep a lot of state outside the 
parser to keep track of such things, which is exactly what perl grammars 
were supposed to avoid, I think.

We can't add them once we hit betas. I'd as soon add them now, rather 
than later.
Well, I'd rather not add it at all :-)


We'll need to meet in the middle..
Well, not to be too cranky (I'm somewhat ill at the moment, so I'll 
apologize in advance) but... no. No, we don't actually have to, 
though if we could that'd be nice.
OK, strictly speaking that's true, but I think we can


Semantics. Until Larry's nailed down what he wants, there are issues 
of reestablishing hypotheticals on continuation reinvocation, 
They should be reestablished, though: if a variable was hypothesized when the continuation 
was taken, then it should be hypothesized when that continuation is invoked.

flushing those hypotheticals multiple times,
No idea what you mean

what happens to hypotheticals when you invoke a continuation with 
hypotheticals in effect, 
Basically de-hypothesize all current hypotheticals, and re-hypothesize 
the ones that were hypothesized when the continuation was taken.  You can 
of course optimize this by skipping the common ancestry, if you know 
what I mean.
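
A toy Perl 5 sketch of that bookkeeping, where a hypothesis chain is a list of [\$var, $old, $new] entries and switching chains skips the common ancestry; all names are made up for illustration:

    use strict;
    use warnings;

    sub switch_hypotheses {
        my ($current, $target) = @_;
        my $i = 0;
        $i++ while $i < @$current && $i < @$target && $current->[$i] == $target->[$i];
        for my $h (reverse @{$current}[$i .. $#$current]) {   # de-hypothesize past the common part
            ${ $h->[0] } = $h->[1];
        }
        for my $h (@{$target}[$i .. $#$target]) {             # re-hypothesize the target's tail
            ${ $h->[0] } = $h->[2];
        }
    }

    my $x = 'orig';
    my @outer = ( [\$x, 'orig', 'hypothesis A'] );
    my @inner = ( @outer, [\$x, 'hypothesis A', 'hypothesis B'] );

    $x = 'hypothesis B';                    # as if both hypotheses are in effect
    switch_hypotheses(\@inner, \@outer);    # "invoke" a continuation taken earlier
    print "$x\n";                           # hypothesis A
    switch_hypotheses(\@outer, []);         # leave normally: restore the original value
    print "$x\n";                           # orig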

what happens to hypotheticals inside of coroutines when you 
establish them then yield out,
This follows directly from the implementation of coroutines: the first 
yield is a normal return, so if you hypothesize $x before that it'll stay 
hypothesized. if you then hypothesize $y outside the coroutine and call 
the coroutine again, $y will be de-hypothesized. If the coroutine then 
hypothesizes $z and yields out, $z will be de-hypothesized and $y
re-hypothesized.  $x will be unaffected by all this


and when hypotheticals are visible to other threads.
I haven't thought of that, but to be honest I'm not a big fan of preemptive 
threading anyway.  Cooperative threading using continuations is probably 
faster and has no synchronization issues.  And the behavior of hypotheticals 
follows naturally there (you can use 'let' or 'temp' to create thread-local 
variables in that case).


I read through your proposal (I'm assuming it's the one that started 
this thread) and it's not sufficient unless I missed something, which 
I may have.
Also look at Sean O'Rourke's reply and my reply to that; it contains 
additional info.


Sounds like a good deal? :-)
At the moment, no. It seems like a potentially large amount of 
overhead for no particular purpose, really.
I have to admit I don't know the details of how your system works, but 
what I had in mind didn't have any extra overhead at all -- under the 
(apparently still debatable) assumption that you need to look up subrules 
at runtime anyway.

You do agree that if that is possible, it *is* a good deal?


I don't see any win in the regex case, and you're not generalizing it out 
to the point where there's a win there. (I can see where it would be 
useful in the general case, but we've come nowhere near touching that)
We have come near it.. backtracking is easy using continuations, and we can 
certainly have rules set the standard for the general case.

--
Matthijs van Duin  --  May the Forth be with you!


Re: Rules and hypotheticals: continuations versus callbacks

2003-03-19 Thread Matthijs van Duin
 it 
more complex.
More complex ?!

What I'm suggesting is a simple definition of hypothetical variables which 
makes backtracking easy to do using continuations in the general case, and 
therefore automatically also by rules.

Rules can then probably be optimized to *avoid* explicit use of continuations, 
to the point where they have the speed you demand, while still keeping up 
appearances of the simple continuation-based backtracking semantics.


We're not backtracking with continuations, though.
I'm suggesting you do officially, but optimize it away behind the scenes. 
This leaves nice and simple semantics for backtracking in general, while 
in fact their implementation inside rules is simple and efficient.

I have to admit I'm not 100% sure this is possible, but give me some time 
to try to work out the details :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: is static?

2003-03-18 Thread Matthijs van Duin
On Tue, Mar 18, 2003 at 01:53:59PM +1100, Damian Conway wrote:
	Function	Assign unless...

	true		||=
	defined		//=
	exists		h
One is almost tempted by something like C<??=>. Well, almost.
Nonono.. ??= is already for conditionals, of course :-)

$a ??= $b :: $c;

--
Matthijs van Duin  --  May the Forth be with you!


Rules and hypotheticals: continuations versus callbacks

2003-03-18 Thread Matthijs van Duin
 (I've called it hypothetical here), and they're restored 
only if that sub is left *normally*, not via an exception.

method test ($result is rw, Code &continue0) is hypothetical {
    given (let $result = .new) {
        my &continue1 = {
            .quux($.{quux}, &continue0);
        }
        my &continue2 = {
            .foo($.{foo}, &continue1);
            .bar($.{bar}, &continue1);
        }
        &continue2;
    }
}
You might say this is exactly opposite to using continuations: the passed 
argument is used when the match is *successful*, and a return is a failure.  
Other things become opposite as well: alternations can be put right under 
each other while it's concatenation that needs extra work by chaining 
closures, in opposite order!

Note that the optimizer might see that &continue2 isn't needed as a variable 
in this case; however, it would still need it if the [ foo | bar ] were 
preceded by something, so I'm showing it here in non-optimized form.  
(&continue1 can also be set to .quux.assuming($.{quux}, &continue0), but I 
don't know if that's faster.)

Let's see how this would match on 'aaab':

1. enter hypothetical scope of test
2. hypothesize the result as a new state object
3. prepare the chain of closures
4. enter hypothetical scope of foo, hypothesize $.{foo}
5. foo: match failed, exit hypothetical scope (restore $.{foo})
6. enter hypothetical scope of bar, hypothesize $.{bar}
7. bar: match 'aaa', do callback to continue1
8. enter hypothetical scope of quux, hypothesize $.{quux}
9. quux: match failed, exit hypothetical scope (restore $.{quux})
10. bar: match 'aa', do callback to continue1
11. enter hypothetical scope of quux, hypothesize $.{quux}
12. quux: match 'ab', do callback to continue0
Let's again look at what happens if test is followed by fail:

14. bar: continue1 failed, match 'a', do callback to continue1
15. enter hypothetical scope of quux, hypothesize $.{quux}
16. quux: match failed, exit hypothetical scope (restore $.{quux})
17. bar: continue1 failed, exit hypothetical scope (restore $.{bar})
18. continue2 failed, exit hypothetical scope of test (restore $result)
But what if the match succeeds?  The top-level requires much more work 
here: the final continue needs to throw some "match succeeded" exception, 
which needs to be caught by the top-level match and cause it to return 
the state object, with all hypothetical vars intact of course.

Having code that does the match cause explicit backtracking is possible 
here too, but not as easy: it'll be necessary to become part of the call 
chain, so effectively you'll need a temporary rule to do it.  I certainly 
can't think of any clean syntax to accomplish it.
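For what it's worth, the call/return shape above can be played with in plain 
Perl 5 today, with ordinary closures standing in for the continue blocks; 
foo, bar and quux below are just stand-ins chosen to reproduce the 'aaab' 
trace, and hypothetical variables are left out entirely:

    use strict;
    use warnings;

    my $str = 'aaab';

    # Each "rule" gets a start position and a continuation; it returns the
    # final end position on overall success, or undef so the caller can retry.
    my %rule;

    $rule{foo} = sub {                      # foo: a literal 'b' (fails at pos 0)
        my ($pos, $k) = @_;
        return unless substr($str, $pos, 1) eq 'b';
        return $k->($pos + 1);
    };

    $rule{bar} = sub {                      # bar: greedy a+, retried shorter on failure
        my ($pos, $k) = @_;
        my ($as) = substr($str, $pos) =~ /^(a+)/ or return;
        for (my $len = length $as; $len >= 1; $len--) {
            my $r = $k->($pos + $len);      # "backtracking" is just returning here
            return $r if defined $r;
        }
        return;
    };

    $rule{quux} = sub {                     # quux: a literal 'ab'
        my ($pos, $k) = @_;
        return unless substr($str, $pos, 2) eq 'ab';
        return $k->($pos + 2);
    };

    # test { [ foo | bar ] quux }, written callback-style as above:
    my $continue1 = sub { $rule{quux}->($_[0], sub { $_[0] }) };
    my $continue2 = sub {
        my $r = $rule{foo}->($_[0], $continue1);
        return defined $r ? $r : $rule{bar}->($_[0], $continue1);
    };

    my $end = $continue2->(0);
    print defined $end ? "matched, ended at $end\n" : "no match\n";   # ends at 4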

 Let's wrap it up

The solution using continuations has more inherent beauty: the rule-methods 
have a logical structure that you can even explain without needing 
to understand continuations.  The inter-rule calling conventions are also 
cleaner.  The top-level handling is so trivial that a single statement at the 
beginning of each rule can handle it, because of which { .subrule } works 
inside rules without having to do anything ugly behind the scenes (which 
*will* be necessary in the callback system).

The callback system requires the rule-methods to have a weird internal 
structure, which makes the rule compiler harder to write, and also makes 
rules harder to debug (considering how large and involved grammars can 
be, I expect that debugging a rule in Perl 6 will be a much more common 
occurrence than debugging a regex is in Perl 5).  It also requires 
special handling at the start and completion of the match.

The callback system also uses heaps of closures.. that might negatively 
affect speed.

Finally, the continuation system also gives 'let' interesting semantics 
which may be useful outside of rules.

Basically, the continuation system has only one big drawback: it uses 
continuations.  I really have no idea how efficient those will be in 
Parrot.  If using them makes rules significantly slower, then speed will 
probably have to win over cleanness and the callback system should be used.

Or, as I mentioned at the top, maybe I'm just thinking way too complex 
and overlooking a simple and obvious system for backtracking into subrules.

So, comments?  (see also the questions at the top of the email)

--
Matthijs van Duin  --  May the Forth be with you!


Statement modifiers (yes, again)

2003-03-18 Thread Matthijs van Duin
I just read Piers' summary:
Matthijs van Duin wondered if the issue of multiple statement modifiers
has been settled. The thread is long, and the answer is essentially (and
authoritatively) Yes, it's settled. No, you can't do it. So, unless
Larry changes his mind the point is moot.
So apparently I haven't presented my point in that thread very well

Don't get me wrong, if the answer is no then that settles it for me, but 
most of the thread was not about multiple statement modifiers at all.  So 
it's the wrong question that has been answered.

To summarize what I said:
1. Has the issue of multiple modifiers been settled?  (answer: yes it has, 
  by Larry, and the answer is no)
2. If multiple modifiers aren't done, how about <insert proposal>?

It's point 2, that proposal I'd like feedback on:  to replace the 
conditional statement modifiers (if, unless, when) by lowest-precedence 
left-associative operators, leaving only loops and topicalizers (for, 
while, given) as statement modifiers.

To save people from having to re-read the thread, here is the actual 
proposal in detail again:

PROPOSAL
 Replace the 'if', 'unless', 'when' statement modifiers by identically 
 named lowest-precedence left-associative operators that short-circuit 
 from right to left.

 This means 'FOO if BAR' is identical to 'BAR and FOO', except it has a 
 lower precedence, and 'FOO unless BAR' is identical to 'BAR or FOO', 
 except it has a lower precedence. FOO and BAR are arbitrary expressions.
 Because of left-associativity, 'FOO if BAR if BAZ' is identical to
 'BAZ and BAR and FOO'.

 'FOO when BAR' is similar to 'FOO if BAR' except BAR is matched magically 
 like the rhs of the ~~ operator and an implicit 'break' occurs if true.

RATIONALE
 1. it doesn't hurt anything: existing use of the modifiers (now operators) 
remains functionally the same.
 2. it allows new useful expressions
 3. it is more consistent: 'if' has no reason to be more special than 'and'
 4. it shouldn't make parsing more difficult
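To illustrate points 1 and 2 with something runnable: the chained form below 
is of course hypothetical, but what it would mean is already expressible 
with today's operators.

    use strict;
    use warnings;

    my $x = 3;

    # Proposed (not legal today):    print "ok\n" if $x > 0 if defined $x;
    # Its meaning under the proposal, written with today's operators:
    defined $x and $x > 0 and print "ok\n";

    # And the single-modifier case stays exactly what it always was:
    print "still ok\n" if defined $x && $x > 0;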

--
Matthijs van Duin  --  May the Forth be with you!


Apoc 5 - some issues

2003-03-17 Thread Matthijs van Duin
OK, I've recently spent some intimate time with Apocalypse 5 and it has 
left me with a few issues and questions.

If any of this has already been discussed, I'd appreciate some links (I've 
searched google groups but haven't found anything applicable)

1. Sub-rules and backtracking

   <name(expr)>        # call rule, passing Perl args
   <{ .name(expr) }>   # same thing.

   <name pat>          # call rule, passing regex arg
   <{ .name(/pat/) }>  # same thing.
Considering Perl can't sanely know how to backtrack into a closure, wouldn't  
<{ .name(expr) }>  be equal to  <name(expr)>:  instead?  (note the colon)

It seems to me that for a rule to be able to backtrack, you would need to 
pass a closure as arg that represents the rest of the match:  the rule 
matches, calls the closure, and if the closure returns tries to backtrack 
and calls it again, or returns if all possibilities are exhausted.

Or will a rule store all of its state into hypothetical variables?  It 
seems to me that would make the possibility of backtracking into closures 
even more problematic, but maybe i'm just missing something...

Related to this: what is the prototype for rules (in case you want to 
manually write or invoke them) ?

2. Rules with custom parsing

As mentioned in a previous Apocalypse, the \L, \U, and \Q sequences no longer
use \E to terminate--they now require bracketing characters of some sort.
(much later)
In addition to normal subrules, we allow some funny looking method names like:
rule \a { ... }
Can I conclude from this that you can use "is parsed" on a rule to be able to 
grab the bracketed expression it's followed by?

3. Negated assertions

any assertion that begins with ! is simply negated.

\P{prop}        <!prop>
(?!...)         <!before ...>   # negative lookahead
[^[:alpha:]]    <-alpha>
Considering <prop> means "matches a character with property prop", it 
seems to me <!prop> would mean the ZERO-WIDTH assertion "does not match a 
character with property prop", rather than "match a character without 
property prop".

Shouldn't it be <-prop> instead?  (see also point 5)
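The difference between the two readings is easy to see with today's Perl 5 
regexes, using \d as a stand-in property; only the zero-width reading can 
succeed at the very end of a string:

    use strict;
    use warnings;

    my $s = "x";    # 'x' is the last character of the string

    # zero-width: "no character with the property follows here" -- matches at end of string
    print $s =~ /x(?!\d)/  ? "lookahead form: match\n"     : "lookahead form: no match\n";

    # consuming: "one character that lacks the property" -- needs a real character, so it fails
    print $s =~ /x[^0-9]/  ? "negated class form: match\n" : "negated class form: no match\n";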

4. Character class syntax

predefined character classes are just considered intrinsic grammar rules

[[:alpha:][:digit]] alphadigit
<[_]+<alpha>+<digit>-<Swedish>>
Can I conclude from this that the + to add character classes is optional?  
What about -foobar, is that the inversion of foobar?  (I do 
hope so)  But -foo+bar will be the inversion of foo-bar, right?

Also, what exactly is allowed inside a character class?  Apparently 
character sets like [a-z_] and subrules like <alpha>.  What can I put into 
a set?  Single characters and ranges, obviously; but what about interpolated 
variables?  I assume I also can't put \w inside [] anymore since it's a 
subrule, so [\w.;] would become \w[.;] ?

5. Character class semantics

predefined character classes are just considered intrinsic grammar rules
This means you can place arbitrary rules inside a character class.  What 
if the rule has a width unequal to 1 or even variable-width?  I can think 
of a few possibilities:

a. Require subrules inside a character class to have a fixed width of 1 
char. (requires a run-time check since the rule might be redefined.. ick)

b. Rules inside a character class are ORed together; an inverted subrule 
is interpreted as [ <!before subrule> . ]

c. The whole character class is a zero-width assertion followed by the 
traversal of a single char.

My personal preference is (c), which also means \N is equivalent to <-\n>
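Reading (c) can be mimicked with a Perl 5 regex today: assert the whole class 
as one zero-width lookahead, then traverse a single character (the POSIX 
classes below merely stand in for the subrules):

    use strict;
    use warnings;

    # A class built from the subrules alpha and digit, under reading (c):
    # one zero-width assertion, then step over exactly one character.
    my $class_c = qr/(?=[[:alpha:][:digit:]])(?s:.)/;

    print "a" =~ /\A$class_c\z/ ? "'a' is in the class\n" : "'a' is not in the class\n";
    print "-" =~ /\A$class_c\z/ ? "'-' is in the class\n" : "'-' is not in the class\n";

    # An inverted member like \N (i.e. <-\n>) comes out the same way: (?!\n)(?s:.)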

6. Null pattern

That won't work because it'll look for the :wfoo modifier. However, there
are several ways to get the effect you want:
/[:w()foo bar]/ 
/[:w[]foo bar]/
Tsk tsk Larry, those look like null patterns to me :-)

While I'm on the subject.. why not allow <> as the match-always assertion?  
It might conflict with Huffman encoding, but I certainly don't think <> 
could ethically mean anything other than this.  And <!> would of course be 
the match-never assertion.

7. The :: operator

::# fail all |'s when backtracking

If you backtrack across it, it fails all the way out of the current
list of alternatives.
This suggests that if you do:
[ foo [ bar :: ]? | foo ( \w+ ) ]
that if it backtracks over the :: it will break out of the outermost [], 
since the innermost isn't a list of alternatives.

Or does it simply break out of the innermost group, and are the 
descriptions chosen a bit poorly?

That's it for now I think.. maybe I'll find more later :)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Apoc 5 - some issues

2003-03-17 Thread Matthijs van Duin
On Mon, Mar 17, 2003 at 11:17:21AM -0700, Luke Palmer wrote:
<name(expr)>        # call rule, passing Perl args
<{ .name(expr) }>   # same thing.
Considering Perl can't sanely know how to backtrack into a closure, 
wouldn't  <{ .name(expr) }>  be equal to  <name(expr)>:  instead?
Nope.  <name(expr)>: is equivalent to <{ .name{expr} }>: .  It does know
how to backtrack into a closure:  it skips right by it (or throws an
exception through it... not sure which) and tries again.
Hypotheticals make this function properly.
That sounds very unlikely, and is also contradicted by earlier messages, 
like: "closures don't normally get control back when backtracked over" 
-- Larry Wall in http://nntp.x.perl.org/group/perl.perl6.language/10781

Hypothetical variables make things work right when backtracking *over* a 
closure, but certainly not *into* one.

I'm talking about cases like:

rule foo { a+ }
rule bar { { .foo } ab }
my intuition says this equals { [a+]: ab } and hence never matches


Sounds like continuation-passing style.  Yes, you can backtrack
through code with continuation-passing style.  Continuations have yet
to be introduced into the language.
You don't need continuations to do this though, you can do it in plain 
perl code too.  For example:

rule test { <foo> <bar> }
-->
method test (&cont) { .foo({ .bar(&cont) }) }
foo gets a closure that represents the rest of the match (bar followed by 
whatever comes after test) and, if it succeeds, invokes the closure, hence 
calling bar.  If bar fails, it returns to foo, which can then try a different 
match and call the closure again.  If all parts match, then the final closure 
will be called (passed by the match function to the original rule), which 
does something to return the final version of the state object to the 
original caller -- for example using an exception.

I'm not saying rules will be implemented in such a way, but it's the first 
thing that comes to mind.
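The "return the final state to the original caller using an exception" part 
is also easy to mock up in today's Perl; run_match and the literal rule below 
are invented for the sketch, and the state object is reduced to a bare position:

    use strict;
    use warnings;

    my $str = "abc";

    # One rule in the callback style: match a literal, then hand the new
    # position to the continuation.
    my $lit_ab = sub {
        my ($pos, $k) = @_;
        return unless substr($str, $pos, 2) eq 'ab';
        return $k->($pos + 2);
    };

    # Top-level driver: the final closure throws, so a success deep inside
    # the call chain unwinds straight back to the original caller.
    sub run_match {
        my ($rule) = @_;
        eval { $rule->(0, sub { die { end => $_[0] } }); 1 };
        return $@->{end} if ref $@ eq 'HASH';
        die $@ if $@;
        return undef;                      # every alternative returned: no match
    }

    my $end = run_match($lit_ab);
    print defined $end ? "matched, ended at $end\n" : "no match\n";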


   rule somerule($0) {}
I meant of course as a method (since rules are just methods, if I understood 
correctly); to do the matching yourself rather than with a Perl 6 regex.


Considering <prop> means "matches a character with property prop", it 
seems to me <!prop> would mean the ZERO-WIDTH assertion "does not match a 
character with property prop", rather than "match a character without 
property prop".
Right.  It has to be.  There is no way to implement it in a
sufficiently general way otherwise.
Hence the example of saying \P{prop} becomes <!prop> is wrong; it actually 
becomes <-prop>, right?


While I'm on the subject.. why not allow <> as the match-always assertion?  
It might conflict with Huffman encoding, but I certainly don't think <> 
could ethically mean anything other than this.  And <!> would of course be 
the match-never assertion.
You could always use <(1)> and <(0)>, which are more SWIMmy :)
Ick, ugly; I'd rather use <null> and <!null> than those, but <> and <!> 
are shorter, and have (to me) fairly obvious meanings.  But it was just a 
random suggestion; I'm not going to actively try to advocate them if 
they're not liked :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: A6 questions

2003-03-17 Thread Matthijs van Duin
On Mon, Mar 17, 2003 at 10:52:26AM -0800, David Storrs wrote:
sub identifier {m{ [\w]-[\d] \w+ }}
rule identifier { [\w]-[\d] \w }
I personally don't see a lot of difference between those two, but I'll
go with you on the "helps people know that $match should be a regex"
point.  Good enough.
Ehhh, I think rules need more magic than just m{} inside a sub to allow 
proper backtracking semantics when they're used as subrules.

So that's another very good reason to make them different :-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Apoc 5 - some issues

2003-03-17 Thread Matthijs van Duin
On Mon, Mar 17, 2003 at 07:49:36PM +0100, Matthijs van Duin wrote:
(blah blah I wrote on closures and rule-invocation)

I'm not saying rules will be implemented in such a way, but it's the first 
thing that comes to mind.
Before anyone replies, I just realized I should probably just first browse 
around in parrot since regex is already implemented ;-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Apoc 5 - some issues

2003-03-17 Thread Matthijs van Duin
On Mon, Mar 17, 2003 at 12:14:00PM -0700, Luke Palmer wrote:
Before anyone replies, I just realized I should probably just first browse 
around in parrot since regex is already implemented ;-)
No---you shouldn't do that.  Regex (in languages/perl6) is naive and
is due for a rewrite.
And I just realized the issue of subrules hasn't even been touched yet.

If regular subroutine invocation were used, then everything would be 
cleaned up once the rule matches, and therefore there would be no way 
to backtrack into the subrule.

The subrule *has* to do a callback into the enclosing rule to match the 
remainder.  That way backtracking into the subrule is simply a return.  
It's a simple and neat solution.  (note that this isn't a continuation - 
when you invoke a continuation, it actually never returns)

This may start to sound a bit like perl6 implementation rather than 
language, but it most certainly *does* have impact on the language; 
specifically on the calling conventions of rules.

I have seen the following two ways to invoke a rule (in A5 etc):

Grammar.rule($matchstring)  # $matchstring =~ /<Grammar::rule>/
$state.rule()               # <rule>: inside another rule
Note how it's <rule>: (with colon) because without any closure passed, it 
can never backtrack into the subrule.

So $state.rule() must have an optional closure parameter, and Grammar.rule() 
probably also needs an additional optional parameter: the modifiers.

This makes the class method behave rather differently from the object 
method.. can this be consolidated?


I've volunteered to do that (after I do the type system (no, I'm not 
being ambitious or anything :-P )).  I'd appreciate some help in the 
design and implementation, so feel free to jump in!
I dunno, I'm already quite busy... but I'd say I'm helping with the design 
as we speak :-)

--
Matthijs van Duin  --  May the Forth be with you!


Infix macros?

2003-03-11 Thread Matthijs van Duin
Will infix operators be allowed to be macros instead of subs?

They'd be kinda necessary for infix operators for which standard sub 
conventions are insufficient and which need to diddle the parse tree instead, 
such as short-circuiting operators.

It would also allow Damian's original ~ operator (R2L method call) to be 
implemented with something like this...

(warning: I had to invent a lot here, but it should hopefully make at least 
a little bit sense  :-)

macro infix:~ (Perl6::Node $x, Perl6::Node $y) is tighter(infix:=) {
    # regular case, turn  foo $x, $y ~ $obj  into  $obj.foo($x, $y)
    if ($x ~~ Perl6::SubCall) {
        return new Perl6::MethodCall: $x.name, $y, $x.args;
    }
    # ugly hack in case  foo $x, $y ~ $o  was parsed as  foo($x), $y ~ $o
    # however  foo, $x ~ $obj  should still give an error, so beware
    if ($x ~~ Perl6::List && !$x.parens && $x.items > 0) {
        my $z = $x.items[0];
        if ($z ~~ Perl6::SubCall && !$z.parens && $z.args > 0) {
            my @args = $z.args, $x.items[1...];
            return new Perl6::MethodCall: $z.name, $y, @args;
        }
    }
    # helpful error message ;-)
    croak "Can't deal with the stuff left of ~ operator";
}
--
Matthijs van Duin  --  May the Forth be with you!


Statement modifiers

2003-03-10 Thread Matthijs van Duin
Hi all, just dropping in with some thoughts I had while reading the 
archive of this list.  I've tried to search the list, but it's difficult 
with Perl keywords being common English words and Google being unable to 
search for punctuation; if the stuff below has already been fully resolved, 
I'd appreciate some pointers to the corresponding messages. :-)

Anyway, let me start by adding to the statistics: I very much like  
method ~ $obj  and  $arg ~ sub , and I like support of Unicode aliases 
for operators as long as plain ASCII versions remain available too.

Now the real subject.. has the issue of multiple statement modifiers 
already been settled?  I saw some mention it wasn't going to be supported, 
but also mentions of how it would be useful;  I can think of such a 
situation myself:

.method when MyClass given $obj;
   as an alternative to:
$obj.method if $obj.isa(MyClass);
except without duplicating $obj, which may be worthwhile if it's a longer 
expression.  If multiple modifiers are really a no-no, then I think at 
least the conditionals (if, unless, when) could be made lowest-precedence 
right-associative infix operators, leaving the status of statement 
modifier to loops and topicalizers.

This would mean that the above would be valid, and also things like:
.. if .. if .. for ..;  But that multiple nested loops would be illegal 
using modifiers and would require a real block.  (which makes some sense, 
since it's hard to think of a construction where multiple loop-modifiers 
would be useful: if you have  ... for @a for @b  then you'd be unable to 
use the @b-element since $_ would be the loop var of the inner loop)

I also think this gives a nice symmetry of various operators that only 
differ in L2R/R2L and precedence (plus the ability to overload, of course):

$x and $y       $y if $x
$x or $y        $y unless $x
$x . $y         $y ~ $x
$x ( $y )       $y ~ $x
Which I personally think is a Good Thing: I like to structure my code to 
put the most important part of a statement on the left and exceptional 
cases and details on the right.  Having multiple operators with different 
precedence (&&, and, if) also helps avoid lots of parentheses, which I 
think is another Good Thing because they make code look cluttered.  When 
I want visual grouping I prefer to use extra whitespace.  Perhaps it's 
not as maintainable, but it is more readable imho.

Hmmm.. and I just realized.. is something like 'print while <>;' still 
available in Perl 6?  And if so, does that mean the while-loop would 
topicalize in this case?  What would the criterion be for this case?  (I 
hope not the kludge it is right now in Perl 5 ;-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Statement modifiers

2003-03-10 Thread Matthijs van Duin
On Mon, Mar 10, 2003 at 08:20:39AM -0800, Paul wrote:
The real nightmare tends to show up when you duplicate a modifier.
What does
 .method given $x given $y; # which object's .method is called?

mean? It gets worse below
I made a mistake in my original post: they definitely need to be left-
associative.  Your example should obviously be interpreted as:
(.method given $x) given $y;  # calls $x.method

I think this is similar to how I mentioned that a duplicate 'for' is 
pointless.  Just because pointless modifier combinations exist doesn't 
mean multiple modifiers in general are a problem.


lowest? why lowest?
Ehm, because that is consistent with current behavior?

Careful with that If you make it a lowest
precedence operator,
 $x = $y if $z; # = is higher precedence

Does it get assigned if $z is undefined?
since 'if' has a lower precedence than '=', this is:
 ($x = $y) if $z;
or equivalently
 $z and ($x = $y)
In either case, the assignment is done if $z is true
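This is easy to verify with today's perl, since the modifier already binds 
this loosely:

    use strict;
    use warnings;

    my ($x, $y, $z) = (0, 42, 0);

    $x = $y if $z;          # parsed as ($x = $y) if $z
    print "$x\n";           # prints 0: $z was false, so no assignment happened

    $z = 1;
    $z and ($x = $y);       # the equivalent and-form
    print "$x\n";           # prints 42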

I may be missing something, but

 print if $x if $y; # ??

Are you saying test $y, and if it's true, test $x, and if it's true
then print?
Yes

I suppose that might work... but that still makes it high
priority, doesn't it?
It means the left side is not always evaluated; that's short-circuiting 
and has nothing to do with precedence.  Notice how in Perl 5 the 'or' 
operator is in the lowest precedence class, but certainly short-circuits 
(think 'foo or die').


 print "$x,$y\n" for $x -> @x for $y -> @y; # is that approximate?
Syntax error.  The -> operator doesn't make sense without a block.
See http://www.perl.com/pub/a/2002/10/30/topic.html
Still,

 print for @x for @y; # @y's topic masked

would probably make no sense unless ...
Note that I actually *said* it makes no sense.  I have to admit that 
if the conditionals (if, unless, when) were operators, I'd have 
trouble thinking of a situation where multiple modifiers are useful at 
all; which is why I said making the conditionals infix operators would 
probably suffice.

Then again, I just thought up (perl 5 style):

  print for split while <>;

but I have to admit I can probably live without the ability to write 
something like that ;-)

--
Matthijs van Duin  --  May the Forth be with you!


Re: Statement modifiers

2003-03-10 Thread Matthijs van Duin
On Mon, Mar 10, 2003 at 01:14:05PM -0700, Luke Palmer wrote:
It is nice to see someone who puts as much thought into posting as you
do.  Unfortunately, your proposal is moot, as we have a definitive
"No, still can't chain them" from Larry.

 http://archive.develooper.com/perl6-language%40perl.org/msg09331.html
Thanks for the reference

However, I'm pretty sure he's talking about disallowing multiple modifiers 
there.  My if/unless/when as operator proposal is exactly to avoid having 
to support multiple modifiers.  So I think discussion on this might still 
be fruitful.

I'm actually a bit surprised no one had the idea earlier; to me the if-
modifier is so similar to the 'and' operator that it has no reason to be 
a modifier (it's actually implemented using 'and' internally in p5).

Contrast that to the 'for'-modifier, which really has an impact on the 
statement unlike any operator could achieve, and hence really needs to 
be a statement modifier.

--
Matthijs van Duin  --  May the Forth be with you!