Re: Perl 6 Summary for 2004-04-26 through 2005-05-03

2005-05-04 Thread Michele Dondi
On Tue, 3 May 2005, Matt Fowles wrote:
Perl 6 Summary for 2004-04-26 through 2005-05-03
 ^^
 ^^
Wow!
Michele
--
Why should I read the fucking manual? I know how to fuck!
In fact the problem is that the fucking manual only gives you 
theoretical knowledge which is useless in practice ;)
- Giuseppe Oblomov Bilotta in a bunch of usenet groups.


Re: Junctions of classes, roles, etc.

2005-05-04 Thread Thomas Sandlaß
Abhijit Mahabal wrote:
When you dispatch, what happens would depend upon WALKMETH (according to 
the pseudocode for CALLONE in A12). Usually the first inherited method 
would get called.
Ohh, yes, that thing. I forgot about it. And actually I hope that
there's a version among the standard pragmas that gives an error.
Or actually this option should go to the typechecker and then the
WALKMETH would nicely find a single, most specific method to call :)
But the important thing for me in this thread is that there are no
junctive bark methods in the alien beast classes! Well, unless the
WALKMETH of choice implements them =:)
--
TSa (Thomas Sandlaß)



Signatures and option specs [was: Open and pipe]

2005-05-04 Thread Gaal Yahas
On Mon, May 02, 2005 at 09:52:35PM +0200, Juerd wrote:
 I already suggested a syntax like '+$write|w' for having multiple
 ways to say the same thing. I don't like an explicit :mode. Let
 Perl figure that out based on passed named arguments.

I'd like to see this specced. What you're suggesting is that sub
signatures have something as powerful as a good getopt library; things
that spring to mind at this prospect are:

* canonical representations (eg, :w in your example should probably set
  $write)

* mutually exclusive options (for open modes, :write should exclude
  :append)

* computed options (if by "let Perl figure that out" $mode you didn't mean
  "let perl's open figure it out with explicit code").

How would you suggest formalizing these?
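[Editorial aside: the three bullets above can be modeled outside of Perl 6. The following is a hypothetical Python sketch, not proposed syntax; the function name and flags are invented for illustration.]

```python
def open_mode(write=False, w=False, append=False):
    """Model Gaal's three behaviors for named option specs."""
    write = write or w            # canonical alias: :w sets $write
    if write and append:          # mutual exclusion
        raise ValueError(":write and :append are mutually exclusive")
    # computed option: derive the low-level mode from the named flags
    return 'a' if append else ('w' if write else 'r')
```

A signature-level spec would do the same checks declaratively, before the body runs.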

(By the way, we need a spec for perl's command line, too. So far we've
been emulating perl5's in an ad-hoc manner.)

-- 
Gaal Yahas [EMAIL PROTECTED]
http://gaal.livejournal.com/


Re: Signatures and option specs [was: Open and pipe]

2005-05-04 Thread Juerd
Gaal Yahas skribis 2005-05-04 13:48 (+0300):
 * canonical representations (eg, :w in your example should probably set
   $write)

Or, possibly, $w := $write.

 * mutually exclusive options (for open modes, :write should exclude
   :append)

I don't really care if this goes in the signature. Runtime checking is
good enough for this.

sub foo (+$write, +$append) {
fail ... if $write and $append;
...
}

Although I don't think this particular check is needed at all, as append
isn't quite opening the file read-only. Just ignore that :write was also
given.

 * computed options (if by "let Perl figure that out" $mode you didn't mean
   "let perl's open figure it out with explicit code").

I did mean "let open figure it out". That is, let open decide, based on
the named arguments it gets, how to open something. Let *it* find a way
to specify 'a+' and stuff like that, so we can write it readably.


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


reduce metaoperator

2005-05-04 Thread Larry Wall
I propose that reduce become a metaoperator that can be applied to
any binary operator and turns it syntactically into a list operator.
I am currently thinking that the metaoperator is a prefix spelled \\
(though there are certainly lots of other possibilities that I've laid
awake all night thinking about).  There are the usual suspects like:

$sum = \\+ @array;
$fact = \\* 1..$num;

Then there are some rather interesting ones like:

$firsttrue = \\|| @args;
$firstdef = \\// @args;
@sumrows := \\+« @rows;
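[Editorial aside: the proposed reduce is an ordinary left fold. A rough Python analogue of the examples above, with invented variable names; note Python's reduce is eager, whereas a `||` reduction could short-circuit.]

```python
from functools import reduce
import operator

array = [1, 2, 3, 4]
total = reduce(operator.add, array)              # like $sum = \\+ @array
fact = reduce(operator.mul, range(1, 5))         # like $fact = \\* 1..4

args = [0, '', 'found', 'later']
firsttrue = reduce(lambda a, b: a or b, args)    # like \\|| @args: first true value
firstdef = reduce(lambda a, b: a if a is not None else b, args)  # like \\// @args
```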

I particularly want this to replace the semantically ugly dim operator
(used to be semi) in multidimensional subscripts:

@foo[0..9; \\;(@dims); 0..9]

Also, by the way, infix:<;> now creates arrays of lists only in
contexts bound to something like

Array [EMAIL PROTECTED]
Lazy [EMAIL PROTECTED]
Eager [EMAIL PROTECTED]

This is how the parameters to a subscript are declared, so they
automatically get a list of lists, even if there are no semicolons
in the subscript.  This gets rid of the retroactive semicolon
problem once and for all.

In ordinary list context, infix:<;> is just a list-op-ending big comma,
but is otherwise treated like an ordinary comma (but only if the
list is in some kind of brackets, of course).

Now here's the interesting part.  The same criteria apply to extra
lists added with ==> or <==.  In other words, a function may be
declared to recognize multiple input pipes as separate lists just
like a subscript recognizes multiple dimensions of slices.  But the
default is to flatten all input pipes into a single input stream.

The new semicolon semantics are relatively non-negotiable, but feel
free to hash out the reduce metaoperator.  Perhaps \\ is the ASCII
version, and people will prefer to write something with less visual
confusion and more mnemonic power:

$sum = ®+ @array;
$fact = ®* 1..$num;
$firsttrue = ®|| @args;
$firstdef = ®// @args;
@sumrows := ®+« @rows;
@foo[0..9; ®;(@dims); 0..9]

Hmm, that kind of says that the ASCII workaround should be:

$sum = (R)+ @array;
$fact = (R)* 1..$num;
$firsttrue = (R)|| @args;
$firstdef = (R)// @args;
@sumrows := (R)+« @rows;
@foo[0..9; (R);(@dims); 0..9]

Which does have the benefit of not letting people confuse \\ with //
semantically or visually.  I guess (R) would be a Texas reduce.  :-)

Larry


Re: Open and pipe

2005-05-04 Thread Aaron Sherman
On Mon, 2005-05-02 at 22:51, Uri Guttman wrote:
  LW == Larry Wall [EMAIL PROTECTED] writes:
 
   LW  multi sub opensocket (
   LW Str +$mode = 'rw',
   LW Str +$encoding = 'auto',
   LW Str [EMAIL PROTECTED]) returns IO;
 
 and how will that support async (non-blocking) connects? or listen
 sockets?

This is why named aliases for constructors are a bad idea. Nice theory,
but bad idea.

Unless the language allows us to specify that a sub IS an alias for a
constructor, e.g.:

sub opensocket := IO::Socket.new;

Why? Because IO::Socket.new takes parameters that are built out of its
entire inheritance tree, so a change to IO::Handle might radically
modify the signature of the constructor.

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: Open and pipe

2005-05-04 Thread Aaron Sherman
On Mon, 2005-05-02 at 16:13, Mark Reed wrote:
 On 2005-05-02 15:52, Juerd [EMAIL PROTECTED] wrote:
 
  Gaal Yahas skribis 2005-05-02 22:25 (+0300):
open 'ls', '|-'; # or even
open 'ls', :pipe => 'from'
  
  I dislike the hard-to-tell-apart symbols '<' and '>' for modes. 'r' and
  'w' are much easier, and get rid of the awful left/right mnemonic that
  fails to make sense to GUI users.
  
 Holy matter of opinion, Batman.  '<' and '>' are much easier to tell apart
 than 'r' and 'w';

As far as matter of opinion... yeah, I'm seeing a lot of that recently.

I would expect open to be a bit of an anachronism in P6, but still
used fairly often. For the most part, I would expect that:

my IO $read_fh = '/some/path' => 'r'; # Get an IO::File (is IO)
my IO $write_fh = '/other/path' => '>'; # Get IO::File
my IO $pipe_fh = 'program args' => $IO::ReadPipe; # Get IO::Pipe
my IO $sock_fh = 'http://www.perl.org/' => $IO::URI; # Get IO::Socket

would just DWIM. But, perhaps I'm expecting too much...

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: reduce metaoperator

2005-05-04 Thread Juerd
Are these equivalent? (Assuming reduce isn't going away)

Larry Wall skribis 2005-05-04  5:36 (-0700):
 $sum = \\+ @array;
 $fact = \\* 1..$num;

$sum  = reduce infix:<+>, @array;
$fact = reduce infix:<*>, 1..$num;

 $firsttrue = \\|| @args;
 $firstdef = \\// @args;
 @sumrows := \\+« @rows;

$firsttrue = reduce infix:<||>, @args;
$firstdef  = reduce infix:<//>, @args;
@sumrows  := map { reduce infix:<+>, @$_ }, @rows;
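[Editorial aside: that last equivalence, reducing each row independently, has a direct Python sketch; the data is invented for illustration.]

```python
from functools import reduce
import operator

rows = [[1, 2, 3], [4, 5], [6]]
# like the hyped reduce over @rows: apply the reduction to each row
sumrows = [reduce(operator.add, row) for row in rows]
```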

 Now here's the interesting part.  The same criteria apply to extra
 lists added with ==> or <==.  In other words, a function may be
 declared to recognize multiple input pipes as separate lists just
 like a subscript recognizes multiple dimensions of slices.  But the
 default is to flatten all input pipes into a single input stream.

Hm, if ==> and <== are made special syntax, maybe this would be
possible?

@foo ==> zip <== @bar

 $sum = (R)+ @array;

I like that.


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: Open and pipe

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 08:47:17AM -0400, Aaron Sherman wrote:
: I would expect open to be a bit of an anachronism in P6, but still
: used fairly often. For the most part, I would expect that:
: 
:   my IO $read_fh = '/some/path' => 'r'; # Get an IO::File (is IO)
:   my IO $write_fh = '/other/path' => '>'; # Get IO::File
:   my IO $pipe_fh = 'program args' => $IO::ReadPipe; # Get IO::Pipe
:   my IO $sock_fh = 'http://www.perl.org/' => $IO::URI; # Get IO::Socket
: 
: would just DWIM. But, perhaps I'm expecting too much...

Um, yes.

Larry


Re: reduce metaoperator

2005-05-04 Thread Juerd
Juerd skribis 2005-05-04 14:53 (+0200):
 @foo ==> zip <== @bar

H...

   @quux
     ||
     ||
     \/
@foo ==> zip <== @bar
     /\ 
     ||   
     ||
   @xyzzy

:)


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Rob Kinyon
This may be a naive question, but what's wrong with just having a
keyword called reduce()? Why do we need an operator for everything?
I'm worried that the list of P6 operators is going to be as long as
the list of P5 keywords, with a lot of them looking something like:

<verb><direct object><indirect object>

I propose that if you're thinking of using 3+ ASCII characters for an
operator that it should just become a keyword. For one thing, it's
more maintainable that way. I can remap ^K in vi to be 'perldoc -f'
instead of 'man' and it will work for keywords, but not for Unicode
operators.

Rob

On 5/4/05, Larry Wall [EMAIL PROTECTED] wrote:
 I propose that reduce become a metaoperator that can be applied to
 any binary operator and turns it syntactically into a list operator.
 I am currently thinking that the metaoperator is a prefix spelled \\
 (though there are certainly lots of other possibilities that I've laid
 awake all night thinking about).  There are the usual suspects like:
 
 $sum = \\+ @array;
 $fact = \\* 1..$num;
 
 [...]
 Larry



Re: reduce metaoperator

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 08:36, Larry Wall wrote:
 I propose that reduce become a metaoperator that can be applied to
 any binary operator and turns it syntactically into a list operator.

Sounds very cool! I like it... but...

 $sum = ®+ @array;

I don't think you can do that workably. In the font I use, I was
scratching my head asking how does @ work there?! Yep, I can't tell ®
and @ apart without getting REAL close to the screen.

I also deeply dislike \\//, but I can't tell you why rationally. :-/

That said, let me try to be helpful, and not just complain:

$sum = (+) @array;

I've not thought through all of the implications to the parser, but I
think this works, and it certainly ends up looking very mnemonic for
what you're trying to do!

Are there any infix:<op> where there is also a standalone op that would
confuse usage inside ()? If so, would it be too big a deal to
special-case those?

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 02:53:54PM +0200, Juerd wrote:
: Hm, if ==> and <== are made special syntax, maybe this would be
: possible?
: 
: @foo ==> zip <== @bar

It's already the case that <== binds tighter, so it should work the
same as

@foo ==> (zip <== @bar)

or

zip <== @bar <== @foo

or

zip(@bar; @foo)

Which, considering zip *is* going to care about multidimensional slices,
should do the right thing, presuming you really wanted @bar in front.
Ordinarily I'd write it

@foo ==>
zip <== @bar

to make it a little clearer.

You'll note that <== is really just a version of ; that doesn't
require brackets.
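[Editorial aside: in loose Python terms, all three spellings deliver the two lists to zip as separate arguments; the analogy is rough, since Perl 6 feeds are lazy pipelines. Data here is invented.]

```python
bar = [1, 2, 3]
foo = ['a', 'b', 'c']
# zip(@bar; @foo): each feed arrives as its own positional list
pairs = list(zip(bar, foo))
```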

Larry


Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:00:46AM -0400, Aaron Sherman wrote:
: That said, let me try to be helpful, and not just complain:
: 
:   $sum = (+) @array;
: 
: I've not thought through all of the implications to the parser, but I
: think this works, and it certainly ends up looking very mnemonic for
: what you're trying to do!

It's certainly one of the ones I considered, along with all the other
brackets, and |+|, plus variants.  It just seemed like it'd be a little
visually confusing where you want to turn the list op into a function:

$sum = (+)(@array);

But maybe that's not a problem.  But there are other potential
ambiguities besides visual, I suspect.

Larry


Re: reduce metaoperator

2005-05-04 Thread Juerd
Aaron Sherman skribis 2005-05-04  9:00 (-0400):
  $sum = ®+ @array;
 I don't think you can do that workably. In the font I use, I was
 scratching my head asking how does @ work there?! Yep, I can't tell ®
 and @ apart without getting REAL close to the screen.

Perhaps this just means that the texas reduce is written @ instead.

 Are there any infix:<op> where there is also a standalone op that would
 confuse usage inside ()? If so, would it be too big a deal to
 special-case those?

Maybe x?


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 08:59:04AM -0400, Rob Kinyon wrote:
: This may be a naive question, but what's wrong with just having a
: keyword called reduce()? Why do we need an operator for everything?

Because it's an operator/macro in any event, with weird unary or
listop parsing:

reduce(+) @array

Larry


Re: reduce metaoperator

2005-05-04 Thread Michele Dondi
On Wed, 4 May 2005, Larry Wall wrote:
I propose that reduce become a metaoperator that can be applied to
any binary operator and turns it syntactically into a list operator.
I second that. By all means! (But I think it would be desirable to have a 
'plain' reduce operator as well.)

Michele
--
The reason I want to do this is that I am drawing up legal documents,
and sometimes when I send out to a client I want to have a footnote to
explain some point of godawful convoluted legal prose (why don't I
just write it clearly in the first place??! Different topic)
- DrMemory in comp.text.tex, Re: Footnotes: turning them on/off


Re: Open and pipe

2005-05-04 Thread Uri Guttman
 AS == Aaron Sherman [EMAIL PROTECTED] writes:

  AS On Mon, 2005-05-02 at 22:51, Uri Guttman wrote:
LW == Larry Wall [EMAIL PROTECTED] writes:
   
  LW multi sub opensocket (
  LW Str +$mode = 'rw',
  LW Str +$encoding = 'auto',
  LW Str [EMAIL PROTECTED]) returns IO;
   
   and how will that support async (non-blocking) connects? or listen
   sockets?

  AS This is why named aliases for constructors are a bad idea. Nice theory,
  AS but bad idea.

i am in agreement there. but i am advocating for a proper set of args
for socket connections regardless of the name of the sub/method. it
could even be in io() provided there is a way to note it is a socket
connect/listen and also pass it named args. larry already agreed with
the named arguments point. 

  AS Unless the language allows us to specify that a sub IS an alias for a
  AS constructor, e.g.:

  AS   sub opensocket := IO::Socket.new;

  AS Why? Because IO::Socket.new takes parameters that are built out of its
  AS entire inheritance tree, so a change to IO::Handle might radically
  AS modify the signature of the constructor.

makes sense. we should look at the p5 IO:: tree and see what we want to
salvage/savage from it as well as io::all. each has its good and bad
points and hopefully we can figure out which is which. :)

uri

-- 
Uri Guttman  --  [EMAIL PROTECTED]   http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs    http://jobs.perl.org


Circular dereference?

2005-05-04 Thread Autrijus Tang
What should this do, if not infinite loop?

my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;

Thanks,
/Autrijus/




Re: reduce metaoperator

2005-05-04 Thread Juerd
Larry Wall skribis 2005-05-04  6:10 (-0700):
 On Wed, May 04, 2005 at 08:59:04AM -0400, Rob Kinyon wrote:
 : This may be a naive question, but what's wrong with just having a
 : keyword called reduce()? Why do we need an operator for everything?
 Because it's an operator/macro in any event, with weird unary or
 listop parsing:
 reduce(+) @array

That's ugly, but there's also the map-ish form, and I'd like that to
still be available.

reduce { $^a + $^b }, @array;
reduce infix:<+>, @array;


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Rob Kinyon
Using that argument, every keyword is really an operator/macro.
Instead of sub/method/multimethod, we could use a special character.

sub foo { ... }

becomes

 foo { ... }

A method is , a multimethod is *, and so on. (I don't have a
Unicode mail client or I'd look for a Unicode character.)

What I'm saying is that humans still have to read this language.
Humans read English, not a stream of characters that have modifying
effects on both their left and right hand sides. Streams of characters
that will parse different depending on what's happening around them.

Keywords are just a lot easier to work with.

Now, here's another possibility - have these operators live in
modules. If you really really want the (R) operator, then use
Reduce::Operator; and now I know when I read your code what
(R)[EMAIL PROTECTED] means, because it's documented somewhere.

But, don't put it in the core. I thought the core was supposed to be
sparse with modules to add the richness.

Rob

On 5/4/05, Larry Wall [EMAIL PROTECTED] wrote:
 On Wed, May 04, 2005 at 08:59:04AM -0400, Rob Kinyon wrote:
 : This may be a naive question, but what's wrong with just having a
 : keyword called reduce()? Why do we need an operator for everything?
 
 Because it's an operator/macro in any event, with weird unary or
 listop parsing:
 
 reduce(+) @array
 
 Larry



Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 02:58:14PM +0200, Juerd wrote:
: Juerd skribis 2005-05-04 14:53 (+0200):
:  @foo ==> zip <== @bar
: 
: H...
: 
:@quux
:  ||
:  ||
:  \/
: @foo ==> zip <== @bar
:  /\ 
:  ||   
:  ||
:@xyzzy
: 
: :)

That's actually...er, doable...in Perl 6...er...if you install a token
parse rule with whitespace in it to disambiguate from the shorter ops...
But perhaps Unicode operators would be more toward.  Besides, then
you can have the diagonal arrows as well.

Larry


Re: Circular dereference?

2005-05-04 Thread Juerd
Autrijus Tang skribis 2005-05-04 21:13 (+0800):
 What should this do, if not infinite loop?
 my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;

I'm still against any explicit scalar dereferencing, so: fail,
complaining about $x not being an array reference (not knowing how
to handle postcircumfix:<[ ]>).


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Uri Guttman
 LW == Larry Wall [EMAIL PROTECTED] writes:

  LW I propose that reduce become a metaoperator that can be applied to
  LW any binary operator and turns it syntactically into a list operator.
  LW I am currently thinking that the metaoperator is a prefix spelled \\
  LW (though there are certainly lots of other possibilities that I've laid
  LW awake all night thinking about).  There are the usual suspects like:

  LW $sum = \\+ @array;
  LW $fact = \\* 1..$num;

shouldn't that be s/fact/prod/ ? sure the input makes it a factorial but
the general case would be a product. not that what var names you choose
matters but i think it would be clearer if used as a real example in
some future docs.

uri

-- 
Uri Guttman  --  [EMAIL PROTECTED]   http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs    http://jobs.perl.org


Re: Circular dereference?

2005-05-04 Thread Juerd
Juerd skribis 2005-05-04 15:18 (+0200):
 I'm still against any explicit scalar dereferencing, so: fail,
 complaining about $x not being an array reference (not knowing how
 to handle postcircumfix:<[ ]>).

Ehm :)

s/explicit/implicit/


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Uri Guttman
 J == Juerd  [EMAIL PROTECTED] writes:

  J Juerd skribis 2005-05-04 14:53 (+0200):
   @foo ==> zip <== @bar

  J H...

  J@quux
  J  ||
  J  ||
  J  \/
  J @foo ==> zip <== @bar
  J  /\ 
  J  ||   
  J  ||
  J@xyzzy

you are brainfucking me! stop it now!!

:)

uri

-- 
Uri Guttman  --  [EMAIL PROTECTED]   http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs    http://jobs.perl.org


Re: Circular dereference?

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 03:18:29PM +0200, Juerd wrote:
: Autrijus Tang skribis 2005-05-04 21:13 (+0800):
:  What should this do, if not infinite loop?
:  my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;
: 
: I'm still against any explicit scalar dereferencing, so: fail,
: complaining about $x not being an array reference (not knowing how
: to handle postcircumfix:<[ ]>).

Yes, it doesn't immediately deref as an array, so it fails.

Now @$x would infinite loop according to what I said a couple weeks
ago, but maybe that's just the "go down one level" form that was
requested at the time, and @$$x is the "go however many it takes" form.

Larry


Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:18:46AM -0400, Uri Guttman wrote:
:  LW == Larry Wall [EMAIL PROTECTED] writes:
:   LW $fact = \\* 1..$num;
: 
: shouldn't that be s/fact/prod/ ? sure the input makes it a factorial but
: the general case would be a product. not that what var names you choose
: matters but i think it would be clearer if used as a real example in
: some future docs.

If it made you think, maybe it was a good example.  :-)

Larry


Re: reduce metaoperator

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 09:06, Larry Wall wrote:
 On Wed, May 04, 2005 at 09:00:46AM -0400, Aaron Sherman wrote:
 : That said, let me try to be helpful, and not just complain:
 : 
 : $sum = (+) @array;

 It's certainly one of the ones I considered, along with all the other
 brackets, and |+|, plus variants.

I could see [], but the others don't have any mnemonic for
list-opification.

Hmmm...

$sum = [+] @array

Nice.

 It just seemed like it'd be a little
 visually confusing where you want to turn the list op into a function
 
 $sum = (+)(@array);

$sum = [+](1,2,3);

Not bad, not bad.

 But maybe that's not a problem.  But there are other potential
 ambiguities besides visual, I suspect.

Juerd mentioned x, which is certainly one such. Any \w+ keyword is going
to be a problem, but I think you can make the case for any:

sub infix:<foo>($a, $b) {...}

enforcing correct handling of:

sub list:[foo]([EMAIL PROTECTED]) {...}

Oh hey, I just made up list:... There's nothing for that listed in
A12, and that handy table from A12 doesn't show up in S12 or S13... is
that an oversight? Have new categories been added?

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 03:15:09PM +0200, Juerd wrote:
: Larry Wall skribis 2005-05-04  6:10 (-0700):
:  On Wed, May 04, 2005 at 08:59:04AM -0400, Rob Kinyon wrote:
:  : This may be a naive question, but what's wrong with just having a
:  : keyword called reduce()? Why do we need an operator for everything?
:  Because it's an operator/macro in any event, with weird unary or
:  listop parsing:
:  reduce(+) @array
: 
: That's ugly, but there's also the map-ish form, and I'd like that to
: still be available.
: 
: reduce { $^a + $^b }, @array;
: reduce infix:+,@array;

Yes, we'll certainly have that form too.  It's just a little cumbersome
to use that to interpolate multiple slices into a multidimensional
subscript.

@foo[0..9; reduce infix:<;>, @array; 0..9];

But it can probably be made to work nonetheless, presuming the list
interpolator respects interpolated multidimensional lists somehow.
Perhaps the dimensions are separated internally by some kind of
() xx Omega value to turn them surreal for contexts that care.

Larry


Re: reduce metaoperator

2005-05-04 Thread Juerd
Uri Guttman skribis 2005-05-04  9:23 (-0400):
 you are brainfucking me! stop it now!!

+++[++-]+++.[-].
[-]-.---.+++[+++-].+
++[---]+..+++[+++-].+..+++[---]
.-.+...[+-]++.+++[--]
.+++[---]-.[-]+++..+.--
.++..+++[]+...+++[
-]+.+++[].

Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:34:28AM -0400, Aaron Sherman wrote:
: On Wed, 2005-05-04 at 09:06, Larry Wall wrote:
:  On Wed, May 04, 2005 at 09:00:46AM -0400, Aaron Sherman wrote:
:  : That said, let me try to be helpful, and not just complain:
:  : 
:  :   $sum = (+) @array;
: 
:  It's certainly one of the ones I considered, along with all the other
:  brackets, and |+|, plus variants  
: 
: I could see [], but the others don't have any mnemonic for
: list-opification.
: 
: Hmmm...
: 
:   $sum = [+] @array
: 
: Nice.

I just thought that'd be visually confusing in a subscript:

@foo[0..9; [;[EMAIL PROTECTED]; 0..9]

But maybe it's not so bad.

:  It just seemed like it'd be a little
:  visually confusing where you want to turn the list op into a function
:  
:  $sum = (+)(@array);
: 
:   $sum = [+](1,2,3);
: 
: Not bad, not bad.
: 
:  But maybe that's not a problem.  But there are other potential
:  ambiguities besides visual, I suspect.
: 
: Juerd mentioned x, which is certainly one such. Any \w+ keyword is going
: to be a problem, but I think you can make the case for any:
: 
:   sub infix:<foo>($a, $b) {...}
: 
: enforcing correct handling of:
: 
:   sub list:[foo]([EMAIL PROTECTED]) {...}

Yes, but it is rather ambiguous for any unary that can take $_, like

[int]

but that's not an infix, and there isn't that much overlap, so it could
probably be made to work, and not be too confusing, at least until
someone adds an infix:<int> operator...

Larry


Re: Circular dereference?

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:38:58PM +0800, Autrijus Tang wrote:
: On Wed, May 04, 2005 at 06:24:34AM -0700, Larry Wall wrote:
:  Yes, it doesn't immediately deref as an array, so it fails.
: 
: Oh.  So autodereference is only one level?  I got it all wrong
: in Pugs, then.  I wonder where I got that impression...

Oh, you probably got that impression from me.  I probably even believe
it now and then.  Maybe I'll believe it again tomorrow...

:  Now @$x would infinite loop according to what I said a couple weeks
:  ago, but maybe that's just the go down one level form that was
:  requested at the time, and @$$x is the go however many it takes form.
: 
: Err, wait.  So @$$x is different from @{${$x}}?

Nah, those should be equivalent, whatever semantics we come up with.

Larry


Re: reduce metaoperator

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 09:23, Uri Guttman wrote:
  J == Juerd  [EMAIL PROTECTED] writes:
 
   J Juerd skribis 2005-05-04 14:53 (+0200):
@foo ==> zip <== @bar
 
   J H...
 
   J@quux
   J  ||
   J  ||
   J  \/
   J @foo ==> zip <== @bar
   J  /\ 
   J  ||   
   J  ||
   J@xyzzy
 
 you are brainfucking me! stop it now!!

Well, Befunge (http://en.wikipedia.org/wiki/Befunge) predates the very
poorly named brainfuck (what do you have to be on to think that's a good
name for even a joke language?!)

So I guess he's befunging you!

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: reduce metaoperator

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 09:45, Larry Wall wrote:
 On Wed, May 04, 2005 at 09:34:28AM -0400, Aaron Sherman wrote:

 : Hmmm...
 : 
 : $sum = [+] @array
 : 
 : Nice.
 
 I just thought that'd be visually confusing in a subscript:
 
 @foo[0..9; [;[EMAIL PROTECTED]; 0..9]

Now, why did I think you wouldn't have already considered every
permutation? ;-)

 But maybe it's not so bad.

Yeah, it's confusing in some places, but I think they're mostly edges.

 Yes, but it is rather ambiguous for any unary that can take $_, like
 
 [int]
 
 but that's not an infix, and there isn't that much overlap, so it could
 probably be made to work, and not be too confusing, at least until
 someone adds an infix:<int> operator...

And they could get a warning that tells them of the potential conflict
when they do.

I don't think there's a perfect solution for what you want, but this is
pretty darned close.

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: Coroutine Question

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 09:47, Joshua Gatcomb wrote:

 So without asking for S17 in its entirety to be written, is it
 possible to get a synopsis of how p6 will do coroutines?

A coroutine is just a functional unit that can be re-started after a
previous return, so I would expect that in Perl, a coroutine would be
defined by the use of a variant of return, such as:

sub generate_this() {
for 1..10 -> $_ {
coreturn $_;
}
}

Of course, I'm pulling that out of my @ss, so YMMV. ;-)

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: Coroutine Question

2005-05-04 Thread Joshua Gatcomb
On 5/4/05, Luke Palmer [EMAIL PROTECTED] wrote:
 On 5/4/05, Joshua Gatcomb [EMAIL PROTECTED] wrote:
  So without asking for S17 in its entirety to be written, is it
  possible to get a synopsis of how p6 will do coroutines?  I ask
  because after reading Dan's What the heck is:  a coroutine, it is
  clear there is more than one way to dictate behavior.

 Well, one way is to use generating functions:

 my @a = gather {
 for 1..10 {
 say;
 take;
 }
 }

 (Where that = might be spelled :=).  Here, the code is not executed
 until an element of @a is demanded.  It is executed as many times as
 necessary to fetch that element, and then stopped in its tracks.

Ok - this isn't what I was expecting at all.  That doesn't make it a
bad thing.  Given something that looks a lot more like a typical
coroutine:

sub example is coroutine {
   yield 1;
   yield 2;
   yield 3;
}

I would expect
for 1 .. 5 { say example() } to print 1\n2\n3\n1\n2

If I got fancy and added a parameter

sub example ( $num ) is coroutine {
   yield $num;
   yield $num + 1;
   yield $num - 2;
}

I would expect
for 1 .. 5 { say example( 7 ) } to print 7\n8\n6\n7\n8;

The questions I am really asking is:
1.  Will coroutines be supported at all?
2.  If yes, will they look like coroutines in other languages?
3.  If yes, in what ways will they be behave (handling of parameters
for instance)?
4.  Finally, what is the proper syntax for declaring and calling coroutines?

I am fine with a "we haven't got that far yet" answer.  I was just
hoping to write some tests to drive features.


 Luke

Cheers,
Joshua Gatcomb
a.k.a. L~R
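
As a point of comparison only (this is Python, not Perl 6): the lazy,
run-as-far-as-demanded behaviour Luke describes for gather/take matches
generator semantics:

```python
def gather_example():
    # corresponds roughly to: gather { for 1..10 { take } }
    for i in range(1, 11):
        yield i              # "take": hand back one value on demand

a = gather_example()         # binding: nothing has executed yet
first = next(a)              # body runs just far enough to produce 1
second = next(a)             # resumes in its tracks, produces 2
```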


Re: Composition binop

2005-05-04 Thread Stuart Cook
 What I refer to now is something that takes two {coderefs,anonymous
 subs,closures} and returns (an object that behaves like) another anonymous
 sub, precisely the one that acts like the former followed by the latter
 (or vice versa!).

Do you mean like the mathematical 'f o g'?

i.e. (f o g)($x) == f(g($x))

Maybe we could just use 'o'.

(Too bad we can't use Haskell's 'f . g'...)


Stuart


Re: Composition binop

2005-05-04 Thread Rob Kinyon
What about the function compose() that would live in the module
"keyword", imported by the incantation "use keyword qw( compose );"?

(NB: My P6-fu sucks right now)

multimethod compose (@*List) {
return {
$_() for @List;
};
}

On 5/4/05, Michele Dondi [EMAIL PROTECTED] wrote:
 I had implicitly touched on this in the past, but speaking of binops - and
 of functional features in Perl6, is there any provision of a (list
 associative?) composition binop?
 
 I had naively thought about ==> and/or <==, but that's somewhat on a
 different level.
 
 What I refer to now is something that takes two {coderefs,anonymous
 subs,closures} and returns (an object that behaves like) another anonymous
 sub, precisely the one that acts like the former followed by the latter
 (or vice versa!).
 
 Optionally, (ab)using C<< ==> >> for it, if we have
 
 my $composed = sub_1 ==> sub_2;
 
 and sub_1 has a signature, and sub_2 has a returns trait, $composed should
 have the same signature as sub_1 and the same returns trait as sub_2. Also
 it should complain if sub_2 has a signature and sub_1 a returns trait and
 these do not match.
 
 How 'bout this idea?
 
 Michele
 --
 L'amava come si ama qualcosa che e' necessario alla vita.
 La odiava come si odia chi tradisce.
 - Laura Mancinelli, Gli occhi dell'imperatore



Coroutine Question

2005-05-04 Thread Luke Palmer
On 5/4/05, Joshua Gatcomb [EMAIL PROTECTED] wrote:
 Ok - this isn't what I was expecting at all.  That doesn't make it a
 bad thing.  Given something that looks a lot more like a typical
 coroutine:

 sub example is coroutine {
 yield 1;
 yield 2;
 yield 3;
 }

 I would expect
 for 1 .. 5 { say example() } to print 1\n2\n3\n1\n2

Ding!  You just hit the disagreement point.  You're on Damian's side.
(But don't take the argument below to be against you; I'm just
stating it for the list, assuming I haven't already (which may be a
poor assumption)).

Here's a short form of my side, then:  once you start a coroutine,
you've given it state.  Subroutines, according to most modern
programming practices, shouldn't have state (well, they can manipulate
globals and such, but static variables in subroutines is looked down
upon... it's one of those shuns that I actually agree with).  So, in
my proposal, when you call a coroutine, it returns an iterator (and
doesn't call anything):

my $example = example();
=$example;  # 1
=$example;  # 2

The thing this buys over the traditional (which I refer to as the
stupid) way, is that you can do this:

my $example = example();
my $example2 = example();
=$example; # 1
=$example; # 2
=$example2; # 1
=$example; # 3

There is _no way_ to do that using the stupid way.

Sorry for my language... it's just that if I were dropped into a
project that invented that abstraction for something it was doing, it
would be one of the first things I'd change.  Totally unscalable
design.

 If I got fancy and added a parameter

 sub example ( $num ) is coroutine {
 yield $num;
 yield $num + 1;
 yield $num - 2;
 }

 I would expect
 for 1 .. 5 { say example( 7 ) } to print 7\n8\n6\n7\n8;

And here is where it gets trickier:

say example(7);   # 7
say example(7);   # 8
say example(8);   # 8?  6?

Luke
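
Luke's point about each call returning an independent iterator is
exactly how generators behave in Python, shown here for comparison
(illustrative only, not Perl 6 syntax):

```python
def example():
    yield 1
    yield 2
    yield 3

a = example()        # each call produces a fresh iterator...
b = example()        # ...with its own private state
assert next(a) == 1
assert next(a) == 2
assert next(b) == 1  # b is unaffected by a's progress
assert next(a) == 3
```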


Re: Open and pipe

2005-05-04 Thread Gaal Yahas
On Wed, May 04, 2005 at 08:47:17AM -0400, Aaron Sherman wrote:
 I would expect open to be a bit of an anachronism in P6, but still
 used fairly often. For the most part, I would expect that:
 
   my IO $read_fh = '/some/path' => 'r'; # Get an IO::File (is IO)
   my IO $write_fh = '/other/path' => '>'; # Get IO::File
   my IO $pipe_fh = 'program args' => $IO::ReadPipe; # Get IO::Pipe
   my IO $sock_fh = 'http://www.perl.org/' => $IO::URI; # Get IO::Socket
 
 would just DWIM. But, perhaps I'm expecting too much...

Ah yes, that's another thing I was wondering about: what does opening a
pipe return. If it's a one-way pipe, okay, this may be a single handle;
but for bidirectional opens, we need $in, $out, and $err handles; and
even in the simple unidirectional case, where does the Process handle
go?[1]

In the glorious OOP version of these builtins, IO can encapsulate the
three handles, plus the process handles. But if we are to provide a
pipe1 builtin, it can return a single handle (and not the pid, unless
we return a list or use OUT variables (yech)). Should we provide further
procedural interfaces for pipe2 and pipe3?

[1] Process handle encapsulates unix pid / win32 process handle.

-- 
Gaal Yahas [EMAIL PROTECTED]
http://gaal.livejournal.com/


Re: Coroutine Question

2005-05-04 Thread Ingo Blechschmidt
Hi,

Joshua Gatcomb wrote:
 On 5/4/05, Luke Palmer [EMAIL PROTECTED] wrote:
 On 5/4/05, Joshua Gatcomb [EMAIL PROTECTED] wrote:
  So without asking for S17 in its entirety to be written, is it
  possible to get a synopsis of how p6 will do coroutines?  I ask
  because after reading Dan's What the heck is:  a coroutine, it is
  clear there is more than one way to dictate behavior.

 Well, one way is to use generating functions:

 my @a = gather {
 for 1..10 {
 say;
 take;
 }
 }
 
 Ok - this isn't what I was expecting at all.  That doesn't make it a
 bad thing.  Given something that looks a lot more like a typical
 coroutine:

  sub example(...) {
my $index = 0;
my @a := gather {
  ...coroutine code here, use take to yield a result...;
};

return { @a[$index++] };
  }

  my $gen = example(...);
  say $gen();
  say $gen();
  say $gen();

(FWIW, I like something along the lines of is coroutine and then
yield to yield a result, too, but I don't have a strong opinion on
this.)

--Ingo

-- 
Linux, the choice of a GNU | Row, row, row your bits, gently down the
generation on a dual AMD   | stream...  
Athlon!| 



Re: Coroutine Question

2005-05-04 Thread Joshua Gatcomb
On 5/4/05, Luke Palmer [EMAIL PROTECTED] wrote:
 On 5/4/05, Joshua Gatcomb [EMAIL PROTECTED] wrote:
  Ok - this isn't what I was expecting at all.  That doesn't make it a
  bad thing.  Given something that looks a lot more like a typical
  coroutine:
 
  sub example is coroutine {
  yield 1;
  yield 2;
  yield 3;
  }
 
  I would expect
  for 1 .. 5 { say example() } to print 1\n2\n3\n1\n2
 
 Ding!  You just hit the disagreement point.  You're on Damian's side.

Not exactly.  I am basing my expectation on the following link and
then asking with all the different ways to define behavior - has a
decision WRT p6 been made.

http://www.sidhe.org/~dan/blog/archives/000178.html

 Sorry for my language... it's just that if I were dropped into a
 project that invented that abstraction for something it was doing, it
 would be one of the first things I'd change.  Totally unscalable
 design.
 
  If I got fancy and added a parameter
 
  sub example ( $num ) is coroutine {
  yield $num;
  yield $num + 1;
  yield $num - 2;
  }
 
  I would expect
  for 1 .. 5 { say example( 7 ) } to print 7\n8\n6\n7\n8;
 
 And here is where it gets trickier:
 
 say example(7);   # 7
 say example(7);   # 8
 say example(8);   # 8?  6?

Exactly my point.  There is more than one way to define the behavior
(especially with parameter handling).  That is why I included a
reference and asked how p6 was going to do it.  Personally, I don't
care because I know people like you, Larry, Damian, et al. will make a
sound decision.  What I want to know is if you have a decision already
that isn't published in its entirety so I can start writing tests.

Ok, I do care.  Regardless of behavior, I would hope the syntax would
somewhat resemble that of other languages.
 
 Luke
 
Cheers,
Joshua Gatcomb
a.k.a. L~R


Re: Coroutine Question

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 10:07, Aaron Sherman wrote:
 On Wed, 2005-05-04 at 09:47, Joshua Gatcomb wrote:
 
  So without asking for S17 in its entirety to be written, is it
  possible to get a synopsis of how p6 will do coroutines?
 
 A coroutine is just a functional unit that can be re-started after a
 previous return, so I would expect that in Perl, a coroutine would be
 defined by the use of a variant of return

Oh, I failed to comment on a few things (and given Luke's responses,
this answer can be seen to dove-tail into what he said, though I don't
think you need a coroutine trait most of the time).

Here they are as questions with my expected default answers:

Q: What does a coroutine return when exhausted?
A: It either explicitly returns something or falls off the end. This
allows you to:

sub ten() { for 1..10 -> $_ { coreturn $_ } return undef; }

correctly terminating ten() when called in a loop.

If you ever call coreturn, then dropping off the end of the routine
probably implicitly returns undef rather than the last statement
executed as normal. Why? As a default way to signal the caller that
we're done. More sophisticated means (adding traits to the undef) might
be employed.

Q: What happens if you change parameters?
A: Nothing. You would have to store the information about what
parameters were active somewhere, and no matter where you choose
(current lexical pad, parameters, coroutine itself, etc.), there are
many cases that are non-obvious to the programmer, and this gets
strange:

while someco(@x) -> $_ {
while someco(@x) -> $_ {...}
}

If you want to modify behavior based on parameter, return a closure that
is a coroutine:

sub someco(@p) { return -> { for @p -> $_ { coreturn $_ } } }

my $co1 = someco(@x);
while $co1.() -> $_ {
my $co2 = someco(@x);
while $co2.() -> $_ {...}
}

-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback
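
Aaron's exhaustion question has a close parallel in languages with
generators; in Python, for instance, falling off the end raises
StopIteration, which plays the role of his implicit "return undef"
(shown only as an analogy, not a claim about Perl 6):

```python
def ten():
    # analogue of: sub ten() { for 1..10 -> $_ { coreturn $_ } }
    for i in range(1, 11):
        yield i
    # falling off the end signals exhaustion (StopIteration),
    # playing the role of the implicit "return undef"

g = ten()
values = list(g)          # a loop over g terminates cleanly at exhaustion
leftover = next(g, None)  # a default value stands in for the undef sentinel
```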




Re: Composition binop

2005-05-04 Thread Ingo Blechschmidt
Hi,

Rob Kinyon wrote:
 What about the function compose() that would live in the module
 "keyword", imported by the incantation "use keyword qw( compose );"?

FWIW, I like o better -- function composing is very often used in FP,
and should therefore have a short name.

Luckily, it's very easy to define that:
  sub *infix:<o> (Code $a, Code $b) {...}

 multimethod compose (@*List) {
 return {
 $_() for @List;
 };
 }

I don't think that will work for functions that take arguments.

My take at it:
  sub compose (Code *@fs) {
# Note: We're losing compile-time type checking here.
return -> *@args is copy {
  @args = $_(*@args) for reverse @fs;
  return @args;
};
  }

  sub f(Int $x) { 100 + $x }
  sub g(Int $x) {   2 * $x }
  my $f_o_g = compose f, g;
  say $f_o_g(42); # 184


Is there a way to not lose compile-time type checking, so that the
following will barf at compile time?
  sub f(Int $x) returns Int { 100 + $x }
  sub g(Int $a, Int $b) returns Int {...}
  my $f_o_g = compose f, g; # should die


--Ingo

-- 
Linux, the choice of a GNU | Life would be so much easier if we could
generation on a dual AMD   | just look at the source code.
Athlon!| -- Dave Olson
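
For comparison, Ingo's right-to-left compose can be written in a few
lines of Python (the functions f and g mirror his example; everything
else here is illustrative, not any proposed API):

```python
from functools import reduce

def compose(*fs):
    # (f o g)(x) == f(g(x)): apply the functions right-to-left
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fs), x)

f = lambda x: 100 + x
g = lambda x: 2 * x

f_o_g = compose(f, g)
result = f_o_g(42)   # g first (84), then f: 184, as in the post
```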



Plethora of operators

2005-05-04 Thread Rob Kinyon
I just started following the list again after a few months (though I
have been skimming the bi-weekly summaries) and I'm a little alarmed
at what seems to be a trend towards operaterizing everything in sight
and putting those operators in the core.

My understanding of P6 after the reading the AES was that the core was
supposed to be very small, robust, and reusable. The biggest feature
was supposed to be the PGE, from which you could define anything you
wanted as an add-on module. Kinda like Lisp, but Perlishly dwimming.

What happened to the idea of having modules that define syntax? Did I
miss a change in focus over the past few months?

Rob


Re: Open and pipe

2005-05-04 Thread Juerd
Gaal Yahas skribis 2005-05-04 17:24 (+0300):
 Ah yes, that's another thing I was wondering about: what does opening a
 pipe return. If it's a one-way pipe, okay, this may be a single handle;
 but for bidirectional opens, we need $in, $out, and $err handles; and

That'd be tridirectional, then.

A normal filehandle can already handle bidirection.

I think the following solution suffices in a clean way:

$h = open a pipe;

Now,
$h.in;
$h.out;
$h.err;

$h.print("foo");   # use $h.out
$l = $h.readline;  # use $h.in


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html
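
Juerd's one-object-with-three-handles design resembles how Python's
subprocess module (to take one existing example) exposes a child
process: a single object whose stdin, stdout and stderr attributes are
the three pipe ends. A small sketch, purely for comparison:

```python
import subprocess
import sys

# One handle object; .stdin / .stdout / .stderr are the three pipe ends,
# analogous to $h.in, $h.out and $h.err above.
p = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.PIPE, text=True,
)
out, err = p.communicate("foo\n")   # write to .stdin, read .stdout/.stderr
```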


Re: Open and pipe

2005-05-04 Thread Rob Kinyon
Would that mean that a filehandle opened readonly would throw an
exception if you attempted to either print or warn on it?

On 5/4/05, Juerd [EMAIL PROTECTED] wrote:
 Gaal Yahas skribis 2005-05-04 17:24 (+0300):
  Ah yes, that's another thing I was wondering about: what does opening a
  pipe return. If it's a one-way pipe, okay, this may be a single handle;
  but for bidirectional opens, we need $in, $out, and $err handles; and
 
 That'd be tridirectional, then.
 
 A normal filehandle can already handle bidirection.
 
 I think the following solution suffices in a clean way:
 
 $h = open a pipe;
 
 Now,
 $h.in;
 $h.out;
 $h.err;
 
 $h.print("foo");   # use $h.out
 $l = $h.readline;  # use $h.in
 
 
 Juerd
 --
 http://convolution.nl/maak_juerd_blij.html
 http://convolution.nl/make_juerd_happy.html
 http://convolution.nl/gajigu_juerd_n.html



Re: Open and pipe

2005-05-04 Thread Gaal Yahas
On Wed, May 04, 2005 at 04:59:21PM +0200, Juerd wrote:
  Ah yes, that's another thing I was wondering about: what does opening a
  pipe return. If it's a one-way pipe, okay, this may be a single handle;
  but for bidirectional opens, we need $in, $out, and $err handles; and
 
 That'd be tridirectional, then.

Bi and up.

 A normal filehandle can already handle bidirection.

Yes, but to exploit that would be something of a perversion: the in and
out handles are very separate as far as everybody else is concerned.

 I think the following solution suffices in a clean way:
 
 $h = open a pipe;
 
 Now,
 $h.in;
 $h.out;
 $h.err;
 
 $h.print("foo");   # use $h.out
 $l = $h.readline;  # use $h.in

Yes, if $h is the not-very-primitive version of IO. Surely the type of
$h.in is not the same as $h itself?

-- 
Gaal Yahas [EMAIL PROTECTED]
http://gaal.livejournal.com/


Re: Open and pipe

2005-05-04 Thread Juerd
Rob Kinyon skribis 2005-05-04 11:02 (-0400):
 Would that mean that a filehandle opened readonly would throw an
 exception if you attempted to either print or warn on it?

I don't know what warning on a filehandle should be or do, but ignoring
that bit, yes, an exception would be the right thing to have.

Compare this to Perl 5 and see how similar it is.


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: Open and pipe

2005-05-04 Thread Juerd
Gaal Yahas skribis 2005-05-04 18:15 (+0300):
 Yes, if $h is the not-very-primitive version of IO. Surely the type of
 $h.in is not the same as $h itself?

Why not? $h does IO::Handle::Tridirectional, and $h.in does not, even though
$h and $h.in are-a IO::Handle.

Or whatever the classes will be, of course.


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: Open and pipe

2005-05-04 Thread Rob Kinyon
 Rob Kinyon skribis 2005-05-04 11:02 (-0400):
  Would that mean that a filehandle opened readonly would throw an
  exception if you attempted to either print or warn on it?
 
 I don't know what warning on a filehandle should be or do, but ignoring
 that bit, yes, an exception would be the right thing to have.
 
 Compare this to Perl 5 and see how similar it is.

The idea is that
$h.print() goes to $h.out
$h.readline() goes to $h.in
$h.warn() goes to $h.err

Making the tri-directional trifecta complete.

Rob


Re: Open and pipe

2005-05-04 Thread Juerd
Rob Kinyon skribis 2005-05-04 11:20 (-0400):
 $h.print() goes to $h.out
 $h.readline() goes to $h.in
 $h.warn() goes to $h.err
 Making the tri-directional trifecta complete.

It's sort-of consistent, but I don't like it, because warnings are much
more complicated than just things that are printed to stderr.


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html


Re: Circular dereference?

2005-05-04 Thread Thomas Sandlaß
Autrijus Tang wrote:
What should this do, if not infinite loop?
my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;
Hmm, after the my both $x and $y store an undef.
Then $x stores a ref to undef. Then $y stores
a ref to ref of undef. I see no circle.
Now let's look at $x = 1. I think it goes down
to the ref and let's it reference the value 1.
This is actually necessary because the ref that $x
contains has got other referees---that is the one
stored in $y. Thereafter $y sees the same value 1
through a chain of two references.
Graphically this looks as follows:
$y --> ref
        \
$x --> ref --> 1
So I think even $x = \$x should just do the right thing.
And that is not in any way different from $x = $x + 5.
The RHS is evaluated and the resulting value is stored
in $x.
--
TSa (Thomas Sandlaß)


Re: Circular dereference?

2005-05-04 Thread Autrijus Tang
On Wed, May 04, 2005 at 05:30:48PM +0200, Thomas Sandlaß wrote:
 Autrijus Tang wrote:
 What should this do, if not infinite loop?
 
 my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;
 
 Hmm, after the my both $x and $y store an undef.
 Then $x stores a ref to undef. Then $y stores
 a ref to ref of undef. I see no circle.

Note that your explanation is completely different
from the Perl 5 semantics, which my impression was
that the same model is followed by Perl 6.  To wit:

# Perl 5 code
my ($x, $y); $x = \$y; $y = \$x; print $$$x;

If the reference semantics changed drastically, please
reflect it prominently in the relevant Synopsis. :)

Thanks,
/Autrijus/




Re: Circular dereference?

2005-05-04 Thread Thomas Sandlaß
Autrijus Tang wrote:
If the reference semantics changed drastically, please
reflect it prominiently in the relevant Synopsis. :)
Unfortunately I don't feel entitled to do so. I'm
just an interested bystander, not a member of the
design team.
Sorry.
--
TSa (Thomas Sandlaß)



Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:55:57AM -0400, Aaron Sherman wrote:
: I don't think there's a perfect solution for what you want, but this is
: pretty darned close.

Yes, and I was always a little fond of the bracket solution since it
lets you visually distinguish

$x = [»+«] @foo;

$x = [+]« @foo;

In interests of simplification, it's vaguely possible we should
replace unary * with [,], which would let *x always mean ::*x,
but probably not.

Larry


Re: Circular dereference?

2005-05-04 Thread Juerd
Thomas Sandlaß skribis 2005-05-04 17:30 (+0200):
 my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;
 Hmm, after the my both $x and $y store an undef.
 Then $x stores a ref to undef. Then $y stores
 a ref to ref of undef. I see no circle.

No, again, please do not make the mistake of thinking VALUES have
identity. VARIABLES (containers) do. A reference points to a container,
never to a value directly. The undef in question is the undef value, not
the undef variable (which exists too).

This is something that doesn't change from Perl 5 to Perl 6.

Try in perl (5) the following code:

my ($x, $y);
my ($xx, $yy) = \($x, $y);

They're both undef, but their references are different.

There are names, containers and values.

Declaration and := bind names to containers.

Assignment copies values.

Containers aren't copied.

A reference points to a container.

A name points to a container.

A reference is a value.

The reference isn't to $y's container's value, but to the container.
Regardless of its value. It is also not to the name $y.

 So I think even $x = \$x should just do the right thing.

The right thing is the most vague way to describe semantics.

 And that is not in any way different from $x = $x + 5.
 The RHS is evaluated and the resulting value is stored
 in $x.

$x = $x + 5 overwrites the value of $x's container with the old value
plus five. \$x is the same before and after. The value isn't stored in
$x, but in its container, to which $x is merely a name (implicit
reference).


Juerd
-- 
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html 
http://convolution.nl/gajigu_juerd_n.html
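
Juerd's names/containers/values model can be mimicked with an explicit
one-slot cell; this is plain Python used as a thought experiment, not
anything Perl-specific:

```python
class Cell:
    """A container: it holds one value; names and refs point at cells."""
    def __init__(self, value=None):
        self.value = value   # the contained value (a ref is itself a value)

x = Cell()             # my $x;  the name x is bound to a fresh container
y = Cell()             # my $y;
x.value = y            # $x = \$y;  the ref points at y's CONTAINER
y.value = 7
deref = x.value.value  # $$x: through the container, sees the current value
y.value = x            # $y = \$x;  overwrites the value, not the container
```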


Re: Circular dereference?

2005-05-04 Thread Aaron Sherman
On Wed, 2005-05-04 at 11:30, Thomas Sandlaß wrote:
 Autrijus Tang wrote:
  What should this do, if not infinite loop?
  
  my ($x, $y); $x = \$y; $y = \$x; $x[0] = 1;
 
 Hmm, after the my both $x and $y store an undef.
 Then $x stores a ref to undef. Then $y stores
 a ref to ref of undef. I see no circle.

Squint harder ;-)

Your mistake was here: Then $x stores a ref to undef.

It does not. It stores a ref to $y, and I can prove it:

my($x,$y);
$x = \$y;
$y = 7;
say $$x;
$y = \$x;
say $$y; # ?!

If we agree that the first say should print 7, then we must conclude
that either we've changed the value of undef to 7, or we've created a
circular reference.

If we do not agree that the first say prints 7, then we have more
fundamental differences of understanding about how P5 works to figure
out before we can agree on how P6 should work.


-- 
Aaron Sherman [EMAIL PROTECTED]
Senior Systems Engineer and Toolsmith
It's the sound of a satellite saying, 'get me down!' -Shriekback




Re: Plethora of operators

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 10:58:22AM -0400, Rob Kinyon wrote:
: I just started following the list again after a few months (though I
: have been skimming the bi-weekly summaries) and I'm a little alarmed
: at what seems to be a trend towards operaterizing everything in sight
: and putting those operators in the core.

I think your concern is overblown here.  Yes, it's a slippery slope.
No, we are not sliding all the way down it.  And it's just as easy
to slide up this slope as well as down, and end up with Lisp rather
than APL.  Neither extreme is healthy.

Are there any particular other operators you're worried about?
I think the current design does a pretty good job of factoring out the
metaoperators so that the actual set of underlying basic operators *is*
relatively small.  Yes, you can now say something like

$x = [»+^=«] @foo;

but the basic operator there is just ^, with a + modifier to indicate
numeric XOR, = to indicate an assignment operator, »« to indicate
explicit parallelism, and now [] to indicate reduction, all in a nice
visual pill so you can think of it as a single operator when you want
to.  But I didn't even think about adding a reduction metaoperator till
I wanted it for something else in the design that had been bugging me
for a long, long time.  Almost nothing in the design of Perl 6 is there
for a single purpose.

: My understanding of P6 after the reading the AES was that the core was
: supposed to be very small, robust, and reusable.

Eh, I don't think that was ever a major goal after the RFCs came out.
After the RFCs it soon became apparent that we had to figure out at
least one obvious way to do most of these things, or people would just
reinvent several incompatible ways.  The main design goal is to find
an impedance match between the problem space and the solution space,
and that means some kind of middling approach to complexity.

: The biggest feature
: was supposed to be the PGE, from which you could define anything you
: wanted as an add-on module. Kinda like Lisp, but Perlishly dwimming.

Again, even with rules we're aiming at a combination of simplicity
and power.  We have not hesitated to add notation where it clarifies.

: What happened to the idea of having modules that define syntax? Did I
: miss a change in focus over the past few months?

Nope.  You can still warp syntax as much as you like.  But we'd like
to discourage people from doing that by default merely because the
core neglects to provide a standard default solution.

That was the big problem with Perl 5's OO design.  It was too minimal.
It didn't specify an obvious way to do it, so everybody rolled their own
in an incompatible fashion.  We're not going so far as Python philosophy,
where if there's an obvious way to do it, you disallow any other solutions.
But if you oversimplify the core, you force the complexity elsewhere.
It's just the Waterbed Theory of linguistic complexity.  Push down
here, it goes up there.

In short, it's still the very same old Easy things should be easy,
and hard things should be possible.  It's just that with Perl 6, we're
rethinking what should be easy, and what should be merely possible.
Most things are nailed down by now to one side or the other, but
now and then something flips over to the other side.  And last night
I decided that reduce should flip to easy, especially since it's a
really easy metaoperator to explain to a newbie.  (Much easier than
the +, ~, and ? bitop prefixes that people nonetheless seem to like,
for instance.)

Larry


Re: Open and pipe

2005-05-04 Thread Uri Guttman
 J == Juerd  [EMAIL PROTECTED] writes:

  J Rob Kinyon skribis 2005-05-04 11:20 (-0400):
   $h.print() goes to $h.out
   $h.readline() goes to $h.in
   $h.warn() goes to $h.err
   Making the tri-directional trifecta complete.

  J It's sort-of consistent, but I don't like it, because warnings are much
  J more complicated than just things that are printed to stderr.

also you have the .err backwards. in the case of running a subprocess,
the err handle is readonly (the subprocess writes to ITS stderr with
warn). so you need another method to make an easy read of the err handle.

and you need ways to get at the handles themselves so you can use
sysread/write, select/poll and event loops. but i like the idea of
returning a single smart object which returns reasonable things in
appropriate contexts.

remember at all times when discussing i/o for p6 that the goals are to
support all forms of i/o in a consistent manner. it is much more than
just a polymorphic open or io::all. they only address one aspect of i/o
(initiating an i/o request) and that skips read/write/fcntl/event loops
and more.

uri

-- 
Uri Guttman  --  [EMAIL PROTECTED]   http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs    http://jobs.perl.org


Re: Plethora of operators

2005-05-04 Thread Rob Kinyon
 Are there any particular other operators you're worried about?
 I think the current design does a pretty good job of factoring out the
 metaoperators so that the actual set of underlying basic operators *is*
 relatively small.  Yes, you can now say something like
 
 $x = [»+^=«] @foo;
 
 but the basic operator there is just ^, with a + modifier to indicate
 numeric XOR, = to indicate an assignment operator, »« to indicate
 explicit parallelism, and now [] to indicate reduction, all in a nice
 visual pill so you can think of it as a single operator when you want
 to.  But I didn't even think about adding a reduction metaoperator till
 I wanted it for something else in the design that had been bugging me
 for a long, long time.  Almost nothing in the design of Perl 6 is there
 for a single purpose.

The basic operator is ^. I've been programming for a while,
following P6 pretty heavily, and I would not have been able to parse
that out of the 6 characters.

My basic concern is that [»+^=«] looks like line-noise. Yes, I can
parse it out, given time and understanding of the various operators,
but that's starting to smack of golf in production code, even though
it's not.

 : What happened to the idea of having modules that define syntax? Did I
 : miss a change in focus over the past few months?
 
 Nope.  You can still warp syntax as much as you like.  But we'd like
 to discourage people from doing that by default merely because the
 core neglects to provide a standard default solution.
 
 That was the big problem with Perl 5's OO design.  It was too minimal.
 It didn't specify an obvious way to do it, so everybody rolled their own
 in an incompatible fashion.

No-one came up with an incompatible way to do CGI or to handle
filenames, yet neither is within the language. If p5p had provided a
Class::* module within the core, that would have been the standard.
Now, this wouldn't have prevented others from providing alternatives
(such as CGI::Simple for CGI), but there would have been something
people could reach for if they needed it that would be installed. (I
actually think this was a mistake p5p made.)

Operators like [] and  can be provided for in a standard way, yet
not be in the core language. I'm not arguing against the operator
itself - I like [] as a reduce() circumfix operator modifier and wish
I had a way of putting it into Perl5. But, I would love to see it as:

use operator::reduce;
use keyword::flarg;

That way, you have the ability to document the usage of some of the
weirder operators.

Here's the base concern - I program Perl for a living as a contractor.
Every site I go to, I'm told "Don't use those -weird- features." The
features they're referring to? map/grep, closures, CODErefs, symbol
table manipulation ... the standard basics.

If the feature was in a module, kinda like a source filter (but not as
sucky), then the feature is more palatable because everyone has a
chance to agree that it should be added. It's stupid, but it's easier
to get everyone to agree to add the use of a module than to use a
builtin feature. I don't understand why, but that's my experience
across 4 states. *shrugs*

*thinks for a minute*

[»+^=«] reminds me of a P5 regex that has a comment saying "This is
black magic. Don't touch!". --That's-- my complaint.

Rob


Re: Coroutine Question

2005-05-04 Thread John Macdonald
On Wed, May 04, 2005 at 10:43:22AM -0400, Aaron Sherman wrote:
 On Wed, 2005-05-04 at 10:07, Aaron Sherman wrote:
  On Wed, 2005-05-04 at 09:47, Joshua Gatcomb wrote:
  
   So without asking for S17 in its entirety to be written, is it
   possible to get a synopsis of how p6 will do coroutines?
  
  A coroutine is just a functional unit that can be re-started after a
  previous return, so I would expect that in Perl, a coroutine would be
  defined by the use of a variant of return

A co(operating) routine is similar to a sub(ordinate) routine.
They are both a contained unit of code that can be invoked.

A subroutine carries out its entire functionality completely
and then terminates before returning control back to the caller.

A coroutine can break its functionality into many chunks. After
completing a chunk, it returns control back to the caller
(or passes control to a different coroutine instead) without
terminating.  At some later point, the caller (or some other
coroutine) can resume this coroutine and it will continue
on from where it left off.  From the point of view of this
coroutine, it just executed a subroutine call in the middle
of its execution.  When used to its full limit, each coroutine
treats the other(s) as a subroutine; each thinks it is the
master and can run its code as it pleases, and call the other
whenever and from wherever it likes.

This can be used for a large variety of functions.

The most common (and what people sometimes believe the
*only* usage) is as a generator - a coroutine which creates a
sequence of values as its chunk and always returns control
to its caller.  (This retains part of the subordinate aspect
of a subroutine.  While it has the ability to resume operation
from where it left off and so doesn't terminate as soon as it
has a partial result to pass on, it has the subordinate trait
of not caring who called it and not trying to exert any control
over which coroutine is next given control after completing a
chunk).
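In present-day Python terms (used here purely as an illustrative stand-in, since the Perl 6 being designed in this thread was not yet runnable), a generator is exactly this restricted kind of coroutine:

```python
# A generator-style coroutine: it produces one value per "chunk" and
# suspends, resuming from the same point on the next request.
def counter(start, stop):
    n = start
    while n < stop:
        yield n   # hand control back to the caller without terminating
        n += 1

gen = counter(1, 4)
print(list(gen))  # [1, 2, 3]
```

The generator keeps its own state (`n`) between resumptions, and exerts no control over who resumes it next - the "subordinate trait" described above.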

The mirror image simple case is a consumer - which accepts a
sequence of values from another coroutine and processes them.
(From the viewpoint of a generator coroutine, the mainline
that invokes it acts as a consumer coroutine.)

A read handle and a write handle are generator and consumer data
constructs - they aren't coroutines because they don't have any
code that thinks it has just called a subroutine to get
the next data to write (or to process the previous data that
was read).  However, read and write coroutines are perfectly
reasonable - a macro processor is a generator coroutine that
uses an input file rather than a mathematical sequence as
one facet of how it decides the next chunk to be generated
and passed on; a code generator is a consumer coroutine that
accepts parsed chunks and creates (and writes to a file perhaps)
code from it.

Controlling the flow of control is powerful - a bunch of
coroutines can be set up to act like a Unix pipeline.  The first
coroutine will read and process input.  Occasionally it
will determine that it has a useful chunk to pass on to its
successor, and pass control to that successor.  The middle
coroutines on the chain will pass control to their predecessor
whenever they need new input and to their successor when
they have transformed that input into a unit of output.
The last coroutine will accept its input and process it,
probably writing to the outside world in some way.
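The pipeline-of-coroutines idea above can be sketched with chained Python generators (again only an illustrative stand-in; the stage names are invented), where each stage pulls from its predecessor on demand:

```python
# A Unix-pipeline-like chain built from generators: each stage is written
# as its own "mainline" and treats its predecessor as a data source.
def numbers(src):
    for line in src:
        yield int(line)          # head of the pipeline: parse raw input

def double(stage):
    for n in stage:
        yield n * 2              # middle stage: transform each unit

def keep_small(stage, limit):
    for n in stage:
        if n < limit:
            yield n              # tail stage: filter

pipeline = keep_small(double(numbers(["1", "5", "3"])), limit=8)
print(list(pipeline))  # [2, 6]
```

Control flows back and forth between the stages exactly as described: a stage passes control to its predecessor whenever it needs new input.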

Coroutines can permit even wider range in the control flow.
Coroutines were used in Simula, which was a simulation and
modelling language, and it allowed independent coroutines to
each model a portion of the simulation and they would each be
resumed at appropriate times, sometimes by a master scheduler
which determined which major coroutine needed to be resumed
next, but also sometimes by each other.

A coroutine declaration should essentially declare 2
subroutine interfaces, describing the parameter and return
value information.  One is the function that gets called to
create a new instance of the coroutine; the other defines
the interface that is used to resume execution of an existing
instance of that coroutine.  The resume interface will look
like a definition of a subroutine - describing the argument
list and return values for the interface.

Having special purpose coroutine declarations for simple
generators and consumers would be possible and could hide the
need (in more general cases) for the full double interface.

The creation interface should (IMHO) return an object that can
be resumed (using the resumption interface), could be tested
for various aspects of its state - .isalive (has it terminated
by returning from the function), .caller (which coroutine last
resumed it), probably others.
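The two-interface shape proposed here can be sketched as a thin Python wrapper (all names - `Coro`, `.resume`, `.is_alive` - are illustrative inventions, not any spec):

```python
# Hypothetical two-interface wrapper: construction creates an instance,
# .resume() is the resumption interface, and .is_alive reports whether
# the coroutine has terminated by returning.
class Coro:
    def __init__(self, genfunc, *args):
        self._gen = genfunc(*args)      # creation interface
        self.is_alive = True

    def resume(self, value=None):       # resumption interface
        try:
            if value is None:
                return next(self._gen)
            return self._gen.send(value)
        except StopIteration:
            self.is_alive = False       # it returned from the function
            return None

def ticker():
    yield "a"
    yield "b"

c = Coro(ticker)
print(c.resume(), c.resume(), c.resume(), c.is_alive)  # a b None False
```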

The act of resuming another coroutine is simply calling its
second interface with an appropriate set of arguments and
expecting that resumption to return an appropriate set of
values (when this coroutine is next resumed).  The resume
operation for 

Re: Circular dereference?

2005-05-04 Thread Thomas Sandlaß
Aaron Sherman wrote:
Squint harder ;-)
I'm trying!

If we agree that the first say should print 7, then we must conclude
that either we've changed the value of undef to 7, or we've created a
circular reference.
In my view of refs 7 is printed, indeed. But I have difficulty understanding
what you mean by "the value of undef". Undefinedness to me is the absence
of value. But as Larry argues, actual values that are some kind of soft or
weak exception are very useful because they just travel along the flow of
control instead of disrupting it. Both exceptions and undef carry information
about what could not be achieved. I like that view, too. In that sense I'm
speaking of undef values the way I speak of an integer value.
I think we agree that references are a level of indirection.
We also agree that variables are names that allow us to get
at---and here I think our views diverge---a box, cell or container
for values. So after resolving a name, we always have one level of
indirection from the cell to the value. To me a referential value
is just such a thing without an entry in a namespace or symbol table.
And yes, Juerd and I have fundamentally different opinions of what
has got identity. To me only values can be identical. Cells are an
implementation vehicle to handle values.

If we do not agree that the first say prints 7, then we have more
fundamental differences of understanding about how P5 works to figure
out before we can agree on how P6 should work.
I'm not arguing about how Perl 5 works. My concern is about defining practical
and useful semantics for Perl 6. But as I said, I don't consider myself
authoritative. And I'm sorry that I said "the right thing" when I meant
"avoid circular refs".
Regards,
--
TSa (Thomas Sandlaß)


Re: Coroutine Question

2005-05-04 Thread Patrick R. Michaud
On Wed, May 04, 2005 at 02:22:43PM -0400, John Macdonald wrote:
 On Wed, May 04, 2005 at 10:43:22AM -0400, Aaron Sherman wrote:
  On Wed, 2005-05-04 at 10:07, Aaron Sherman wrote:
   A coroutine is just a functional unit that can be re-started after a
   previous return, so I would expect that in Perl, a coroutine would be
   defined by the use of a variant of return
 
 A co(operating) routine is similar to a sub(ordinate) routine.
 They are both a contained unit of code that can be invoked.
 
 A subroutine carries out its entire functionality completely
 and then terminates before returning control back to the caller.
 [...]
 This can be used for a large variety of functions.
 The most common (and what people sometimes believe the
 *only* usage) is as a generator - a coroutine which creates a
 sequence of values as its chunk and always returns control
 to its caller.  ...

Notably, the grammar engine for Perl 6 rules is taking full
advantage of Parrot coroutines.  Invoking a rule (coroutine)
causes the rule to continue until it completes a match, at which
point it returns the results of the match to the caller.  But
the caller can later return control back to the rule/match so
that the matching (and backtracking) continues from where it
last finished.

Pm


Re: Type system questions.

2005-05-04 Thread Larry Wall
On Tue, May 03, 2005 at 09:53:59PM +0800, Autrijus Tang wrote:
: On Tue, May 03, 2005 at 05:32:44AM -0700, Larry Wall wrote:
:  : # Type Instantiation?
:  : sub apply (fun::a returns ::b, ::a $arg) returns ::b {
:  :   fun($arg);
:  : }
:  
:  The first parameter would be fun:(::a) these days, but yes.
:  (Stylistically, I'd leave the  off the call.)
: 
: So, the returns trait is not part of a function's long name, but
: it can still be matched against (and instantiated)?

Well, we've made some rumblings about the return type being a
tiebreaker on MMD after all other invocants have been considered,
but other than that, I don't consider it a part of the long name,
at least for standard Perl 6 as currently defined.  An MLish version
of Perl might.  But maybe installing the return type inside the
signature is a way to indicate that you want the return type considered.

: Hrm, is it specific to the returns trait, or we can match against 
: any traits at all?
: 
: sub foo ($thingy does Pet[::food]) { ... }

Which is basically what

sub foo (Pet[::food] $thingy) { ... }

is doing already, unless you consider that what $thingy returns is
different from what it actually is, which would be odd.

: Also, S12 talks about is context:
: 
: method wag (*@args is context(Lazy)) { $:tail.wag(*@args) }

That is probably misspecified slightly, insofar as the context is
already lazy by default.  The problem is that the incoming Lazy
is being bound as the .specs of an Array, and I was thinking the
flattening aspects of an array might cause a double flattening
when we pass it on to the delegated method.  But I suppose if we
work it right, a simple

method wag (*@args) { $:tail.wag(*@args) }

should coalesce the two abstract flattenings of the two calls into one
actual lazy flattening.

: What is the type of Lazy here?

Intended to be an ordinary lazy list such as you always get in for
the list unless you force it to be eager.  It basically is what is
bound to the .specs of the Array for its generators, if I'm thinking
about it right.

: Also, is context a role or a class?

I'd call it a trait in this case since it's apparently doing heavy duty
context magic to the parse (which you can't do with a method anyway...)

: If it is a class, how does it parameterize over Lazy ?

It's mostly there to allow is context(Scalar) so we can have
a subroutine call with a list of scalars.  But we also want is
context(Eager) functionality to force eager flattening of the list.
The latter currently has a shorthand of

sub foo (*@args) {...}

to get a P5 style immediate flattening.

:  : # Single colon as tuple composer?
:  : my $Triple::= :(Bool, Int, Str);
:  : my $TripleTuple   ::= :($Triple, $Triple);
:  : my pair_with_int ::= - Type ::t { :(::t, Int) };
:  
:  I think of it as the signature composer.  Does Haskell call them
:  tuples?  That word tends to mean any immutable list in other circles,
:  though I suppose :(4, 2, foo) could be construed as a type whose
:  values are constrained to single values.
: 
: Err, sorry, I meant tuple type composer.  In haskell:
: 
: -- Value --   -- Type --
: (True, 1, Hello) :: (Bool, Int, Str)
: 
: This was because I couldn't think of other operators other than ,
: to operate on types inside :().  Come to think of it, there are lots
: of other things possible:
: 
: my $SomeType ::= :((rand < 0.5) ?? Bool :: Int);

I don't think that amount of generality should be allowed in :(), at least,
not without something explicitly interpolative, like

my $SomeType ::= :( ::( (rand < 0.5) ?? Bool :: Int ) );

: Also, is it correct that (4, 2) can be typed as both a :(Eager) list and a
: :(Any, Any) tuple?  Is it generally the case that we can freely typecast
: unbounded list of values from (and to) bounded ones?

I don't offhand know what restrictions we might want to put on it.

:  But it makes :(int $x) terribly ambiguous if it's an evaluative
:  context.  (Which is why we wanted :() in the first place, to avoid
:  such evaluation.)
: 
: Evaluative context?
: 
: However, I'm a bit confused now.  Does a :(...) expression stand for a
: type, a variable binding, or both?
: 
: :($x) # variable binding or deref $x into a type?
: :(Int)# a type?
: :(Int $x) # bind a type into $x?

I was thinking of it only as signature syntax, so it would always be
a variable binding, or pseudo-binding where it doesn't make sense to
actually create a variable.  I was thinking that the syntax might even
just allow you to drop the identifier, so :($:?%,*@) is a valid type,
identical to :($x: ?%y, *@z) when the names don't matter.

By the way, the problem with just throwing a returns in the sig is
that

:($x: ?%y, *@z returns Int)

would be taken as

:($x: ?%y, Int *@z)

So maybe it would have to be written

:($x: ?%y, 

Re: Coroutine Question

2005-05-04 Thread Rod Adams
John Macdonald wrote:
The most common (and what people sometimes believe the
*only* usage) is as a generator - a coroutine which creates a
sequence of values as its chunk and always returns control
to its caller.  (This retains part of the subordinate aspect
of a subroutine.  While it has the ability to resume operation
from where it left off and so doesn't terminate as soon as it
has a partial result to pass on, it has the subordinate trait
of not caring who called it and not trying to exert any control
over which coroutine is next given control after completing a
chunk).
 

[Rest of lengthy, but good explanation of coroutines omitted]
Question:
Do we not get all of this loveliness from lazy lists and the given/take 
syntax? Seems like that makes a pretty straightforward generator, and 
even wraps it into a nice, complete object that people can play with.

Now, I'm all in favor of TMTOWTDI, but in this case, if there are no 
other decent uses of co-routines, I don't see the need for AWTDI. 
Given/Take _does_ create a coroutine.

If there are good uses for coroutines that given/take does not address, 
I'll gladly change my opinion. But I'd like to see some examples.
FWIW, I believe that Patrick's example of the PGE returning matches 
could be written with given/take (if it was being written in P6).

-- Rod Adams


Re: Coroutine Question

2005-05-04 Thread Damian Conway
[Not back, just sufficiently irritated...]
Luke Palmer wrote:
in my proposal, when you call a coroutine, it returns an iterator (and
doesn't call anything):
my $example = example();
=$example;  # 1
=$example;  # 2
The thing this buys over the traditional (which I refer to as the
stupid) way, is that you can do this:
my $example = example();
my $example2 = example();
=$example; # 1
=$example; # 2
=$example2; # 1
=$example; # 3
There is _no way_ to do that using the stupid way.
Nonsense. You just return an anonymous coroutine:
sub example { coro {...} }
my $example  = example();
my $example2 = example();
$example();  # 1
$example();  # 2
$example2(); # 1
$example();  # 3
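The factory-returning-an-anonymous-coroutine pattern Damian sketches has a direct Python analogue (Python used only as an illustrative stand-in): each call to the factory builds an independent generator with its own state.

```python
# Each call to example() returns a fresh, independent coroutine instance;
# resuming one does not disturb the others.
def example():
    def coro():
        n = 0
        while True:
            n += 1
            yield n
    return coro()

a = example()
b = example()
print(next(a), next(a), next(b), next(a))  # 1 2 1 3
```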
Damian


Re: Coroutine Question

2005-05-04 Thread John Macdonald
On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
 John Macdonald wrote:
 
 The most common (and what people sometimes believe the
 *only* usage) is as a generator - a coroutine which creates a
 sequence of values as its chunk and always returns control
 to its caller.  (This retains part of the subordinate aspect
 of a subroutine.  While it has the ability to resume operation
 from where it left off and so doesn't terminate as soon as it
 has a partial result to pass on, it has the subordinate trait
 of not caring who called it and not trying to exert any control
 over which coroutine is next given control after completing a
 chunk).
  
 
 [Rest of lengthy, but good explanation of coroutines omitted]
 
 Question:
 
 Do we not get all of this loveliness from lazy lists and the given/take 
 syntax? Seems like that makes a pretty straightforward generator, and 
 even wraps it into a nice, complete object that people can play with.
 
 Now, I'm all in favor of TMTOWTDI, but in this case, if there are no 
 other decent uses of co-routines, I don't see the need for AWTDI. 
 Given/Take _does_ create a coroutine.
 
 
 If there are good uses for coroutines that given/take does not address, 
 I'll gladly change my opinion. But I'd like to see some examples.
 FWIW, I believe that Patrick's example of the PGE returning matches 
 could be written with given/take (if it was being written in P6).

Um, I don't recall what given/take provides, so I may be only
addressing the limitations of lazy lists...

I mentioned Unix pipelines as an example.  The same concept of
a series of programs that treat each other as a data stream
translates to coroutines: each is a mainline routine that
treats the others as subroutines.  Take a simple pipeline
component, like say tr.  When it is used in the middle
of a pipeline, it has command line arguments that specify
how it is to transform its data and stdin and stdout are
connected to other parts of the pipeline.  It reads some
data, transforms it, and then writes the result.  Lather,
rinse, repeat.  A pipeline component program can be written
easily because it keeps its own state for the entire run but
doesn't have to worry about keeping track of any state for
the other parts of a pipeline.  This is like a coroutine -
since a coroutine does not return at each step it keeps its
state and since it simply resumes other coroutines it does not
need to keep track of their state at all.  To change a coroutine
into a subroutine means that the replacement subroutine has to
be able, on each invocation, to recreate its state to match
where it left off; either by using private state variables
or by having the routine that calls it take over the task of
managing its state.  If pipeline components were instead like
subroutines rather than coroutines, then whenever a process
had computed some output data, instead of using a write
to pass the data on to an existing coroutine-like process,
it would have to create a new process to process this bit of
data.  Using coroutines allows you to create the same sort of
pipelines within a single process; having each one written as
its own mainline and thinking of the others as data sources
and sinks that it reads from and writes to is very powerful.
Lazy lists are similar to redirection of stdin from a file at
the head of a pipeline.  Its fine if you already have that data
well specified.  Try writing a perl shell program that uses
coroutines instead of separate processes to handle pipelines
and has a coroutine library to compose the pipelines; this
would be a much more complicated programming task to write
using subroutines instead of coroutines.

The example of a compiler was also given - the lexer runs over
the input and turns it into tokens, the parser takes tokens and
parses them into an internal pseudo code form, the optimizer
takes the pseudo code and shuffles it around into pseudocode
that (one hopes) is better, the code generator takes the
pseudocode and transforms it into Parrot machine code, the
interpreter takes the Parrot machine code and executes it.
They mostly connect together in a kind of pipeline; but
there can be dynamic patches to that pipeline (a BEGIN block,
for example, causes the interpreter to be pulled in as soon
as that chunk is complete, and if that code includes a use
it might cause a new pipeline of lexer/parser/etc to be set up
to process an extra file right now, while keeping the original
pipeline intact to be resumed in due course).  (This example
also fits with Luke's reservations about failing to distinguish
clearly between creating and resuming a coroutine - how are you
going to start a new parser if calling the parse subroutine
will just resume the instance that is already running instead
of creating a separate coroutine.)

For many simple uses generators are exactly what you need,
but they have limits.  A more powerful coroutine mechanism can
easily provide the simple forms (and, I would expect, without
any serious loss 

Re: Coroutine Question

2005-05-04 Thread Damian Conway
John Macdonald wrote a lovely summary of coroutines [omitted]. Then added:
 I'd use resume instead of coreturn
We've generally said we'd be using yield.
 and the interface for resume would allow values to be sent
 in as well as out.
Indeed. As John suggested, the yield keyword (or whatever we call it) will 
evaluate to the argument list that is passed when the coroutine is resumed.

 sub byn(@base) is coroutine($count,@add) {
 while(1) {
 push(@base,@add);
 return undef unless @base;
 ($count, @add) = resume .call (splice @base, 0, $count );
 }
 }
Under the stupid proposal, that would be:
  sub byn(@base) {
  return coro ($count, *@add is copy) {  # Anonymous coroutine
  while (1) {
  push @base, @add;
  return undef unless @base;
  ($count, @add) = yield splice @base, 0, $count;
  }
  }
  }
 my $co = byn(1..10);

 print $co.resume(3);  # 1 2 3
 print $co.resume(2,50..56)# 4 5
 print $co.resume(10); # 6 7 8 9 10 50 51 52 53 54
 print $co.resume(10); # 55 56
 print $co.resume(10); # (undef)
 print $co.resume(10); # exception or undef
Which would become:
  my $co = byn(1..10);
  print $co(3); # 1 2 3
  print $co(2,50..56)   # 4 5
  print $co(10);# 6 7 8 9 10 50 51 52 53 54
  print $co(10);# 55 56
  print $co(10);# undef
  print $co(10);# exception
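Damian's resumable byn example maps fairly directly onto Python's generator `send()` protocol. The sketch below is only an illustrative stand-in for the proposed Perl 6 semantics; passing the arguments as a `(count, add)` tuple is my own convention:

```python
# byn as a Python coroutine: each resume passes (count, add) in and
# gets the next chunk of the list back.
def byn(base):
    def gen():
        count, add = yield                  # wait for the first resume
        while True:
            base.extend(add)                # push @base, @add
            if not base:
                return                      # "return undef unless @base"
            # splice @base, 0, $count
            chunk, base[:] = base[:count], base[count:]
            count, add = yield chunk
    g = gen()
    next(g)          # prime the generator to its first yield
    return g

co = byn(list(range(1, 11)))
print(co.send((3, [])))                    # [1, 2, 3]
print(co.send((2, list(range(50, 57)))))   # [4, 5]
print(co.send((10, [])))                   # [6, 7, 8, 9, 10, 50, 51, 52, 53, 54]
print(co.send((10, [])))                   # [55, 56]
```

A further `send` after the list is exhausted raises `StopIteration`, matching the "exception or undef" case in the thread.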
Damian



Re: Coroutine Question

2005-05-04 Thread Rod Adams
John Macdonald wrote:
On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
 

If there are good uses for coroutines that given/take does not address, 
I'll gladly change my opinion. But I'd like to see some examples.
FWIW, I believe that Patrick's example of the PGE returning matches 
could be written with given/take (if it was being written in P6).
   

Um, I don't recall what given/take provides, so I may be only
addressing the limitations of lazy lists...
 

First off, it's gather/take, not given/take. My mistake. Oops.
Here goes my understanding of gather/take:
gather takes a block as an argument, and returns a lazy list object. 
Inside that block, one can issue take commands, which push one or 
more values onto the list. However, since it's all lazy, it only 
executes the block when someone needs something off the list that 
currently isn't there. And even then, it only executes the block up 
until enough takes have happened to get the requested value. If the 
block exits, the list is ended.

A simple example:
 sub grep ($test, *@values) {
   gather {
     for @values -> $x {
       take $x if $x ~~ $test;
     }
   }
 }
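The same lazy-grep shape exists in Python as a generator (an illustrative stand-in only; `lazy_grep` is an invented name), where the body likewise runs only far enough to produce each requested value:

```python
# A lazy grep: the loop body executes only up to the next "take".
def lazy_grep(test, values):
    for x in values:
        if test(x):
            yield x   # "take": emit one value and suspend

evens = lazy_grep(lambda n: n % 2 == 0, range(10))
print(next(evens), next(evens))  # 0 2
```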
Since the block in question is in effect a closure, and gets called 
whenever a new value to the lazy list is requested, I believe it 
provides all of the generator aspects of coroutines. It could access 
various up-scoped/global variables, thereby changing its behavior 
midcourse, if needed. You can create several versions of the same 
generator, all distinct, and with separate states, and easily keep 
them separate. To create a new list, you call the function. To resume a 
list, you ask for a value from the list it hasn't already created.

Once you have some of these lazy list functions made, pipelining them 
together is trivial via ==> or <==.

For many simple uses generators are exactly what you need,
but they have limits.  A more powerful coroutine mechanism can
easily provide the simple forms (and, I would expect, without
any serious loss of performance).
I'll ask again for a couple examples of the non-generator uses, mostly 
out of curiosity, but also to better evaluate the proposals being kicked 
around in this thread.

-- Rod Adams



Re: reduce metaoperator

2005-05-04 Thread John Williams
   $sum = reduce(+) @array; # macro
   $sum = reduce infix:+ @array; # regular sub
   $sum = [+] @array;   # meta operator
   ($sum = 0) »+=« @array;# hyper tricks

   use My::Cool::Reduce::Mixin; # unless in core
   $sum = @array.reduce(+); # macro-ey method

There can never be too many ways to do it, but as one who hasn't been able
to keep very current with perl6l lately, I think a particularly relevant
question we should be asking a lot is, How long will it take someone
reading that code to figure out what it means?

The nice thing about reduce is that it is easy to find in the index.
[+] will not be in the index, although I'm sure it would be the first
thing one would look for.  I'm not sure how many other things a newbie
will try before looking up metacircumfix:[ ] or however that is
spelled.

Also in the same light, I am not convinced that reduce is used often
enough to deserve a shorter huffman encoding.  Sure it makes that fancy
slice shorter, but how much headscratching is required for a human to
parse that shortness?

Personally, I think this would be more readable, in part because the
object paradigm is widely understood.

@foo[0..9; @array.reduce(infix;) ; 0..9];
@foo[0..9; @array.reduce(;) ; 0..9];

I actually kinda like the idea, in spite of coming across as a naysayer
here.  It just seems like an idea that should be in a module instead of in
core, which seems like an awful strange thing to say to $Larry.

~ John Williams

P.S.  I like [+] better than \\+ because I cannot figure out what \\ would
be mnemonic for.

P.P.S.  We can't use (R)+ ... it's already trademarked!  :)



Re: Coroutine Question

2005-05-04 Thread John Macdonald
On May 4, 2005 06:22 pm, Rod Adams wrote:
 John Macdonald wrote:
 
 On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
   
 
 If there are good uses for coroutines that given/take does not address, 
 I'll gladly change my opinion. But I'd like to see some examples.
 FWIW, I believe that Patrick's example of the PGE returning matches 
 could be written with given/take (if it was being written in P6).
 
 
 
 Um, I don't recall what given/take provides, so I may be only
 addressing the limitations of lazy lists...
   
 
 First off, it's gather/take, not given/take. My mistake. Oops.
 
 Here goes my understanding of gather/take:
 
 gather takes a block as an argument, and returns a lazy list object. 
 Inside that block, one can issue take commands, which push one or 
 more values onto the list. However, since it's all lazy, it only 
 executes the block when someone needs something off the list that 
 currently isn't there. And even then, it only executes the block up 
 until enough takes have happened to get the requested value. If the 
 block exits, the list is ended.

Strange.  The names gather/take suggest accepting values rather than
generating them (yet generating them onto the lazy list is what you
describe this code as doing).  I don't like the name yield either, for the
same reason - it suggests that the data only goes one way, while the
operation of transferring control from one coroutine to another is a pair:
the first produces a value for the other (but is also ready to
accept a value in return when it gets resumed in turn).  Whether a
particular resume is passing data out, or accepting data in, or both,
is a matter of what your code happens to need at that moment.

 A simple example:
 
   sub grep ($test, *@values) {
     gather {
       for @values -> $x {
         take $x if $x ~~ $test;
       }
     }
   }
 
 Since the block in question is in effect a closure, and gets called 
 whenever a new value to the lazy list is requested, I believe it 
 provides all of the generator aspects of coroutines. It could access 
 various up-scoped/global variables, thereby changing its behavior 
 midcourse, if needed. You can create several versions of the same 
 generator, all distinct, and with separate states, and easily keep 
 them separate. To create a new list, you call the function. To resume a 
 list, you ask for a value from the list it hasn't already created.

Asking for a value by scanning a lazy list provides no mechanism for
sending information to the routine that will provide that value.

For example, the parser for perl5 is written with contorted code because
it needs just such a feedback mechanism - the parser has to turn
characters into tokens differently depending upon the context.  If it was
written as a lazy list of tokens, there would have to be this feedback
done somehow.  (Is \1 one token or two?  In a string, it is one token for
the character with octal value 001 (or perhaps part of a token that
is the entire string containing that character as only a portion); in a
substitution, it is one token that refers back to the first match; in open
expression code, it is two tokens representing a reference operator and
the numeric value 1.)  (POD and here-strings are other forms that
require feedback.)
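The feedback problem can be sketched with Python's `send()` (illustrative only; the function name, modes, and token spellings are all invented): the caller tells the token generator, on each resume, which context to lex `\1` in.

```python
# A token generator whose behaviour depends on context sent back by the
# caller - something a plain lazy list of tokens cannot express.
def lex_backslash_one():
    mode = yield None                        # first resume supplies a mode
    while True:
        if mode == "string":
            mode = yield ["\x01"]            # one token: the chr(1) character
        elif mode == "substitution":
            mode = yield ["BACKREF(1)"]      # one token: a backreference
        else:
            mode = yield ["REF-OP", "1"]     # two tokens: ref op, then 1

lexer = lex_backslash_one()
next(lexer)                                  # prime the coroutine
print(lexer.send("code"))  # ['REF-OP', '1']
```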

 Once you have some of these lazy list functions made, pipelining them 
 together is trivial via ==> or <==.
 
 For many simple uses generators are exactly what you need,
 but they have limits.  A more powerful coroutine mechanism can
 easily provide the simple forms (and, I would expect, without
 any serious loss of performance).
 
 I'll ask again for a couple examples of the non-generator uses, mostly 
 out of curiosity, but also to better evaluate the proposals being kicked 
 around in this thread.
 
 -- Rod Adams
 
 
 


When scoping

2005-05-04 Thread Luke Palmer
What should the output of this be:

given "hello" {
    when /hello/ {
        say "One";
        when /hello/ { say "Two"; }
        when /hello/ { say "Three"; }
        continue;
    }
    say "Four";
}

I think:

One
Two
Three
Four

But pugs thinks:

One
Two

The trouble is that I can't seem to come up with an example to test
whether the nested whens are breaking out of the outer when or the
given.  Anyway, I'm more curious what Perl 6 thinks it should do.

Luke


stdio

2005-05-04 Thread Gaal Yahas
How do I open a file named "-"? How do I open stdout (and the other
standard handles)?

-- 
Gaal Yahas [EMAIL PROTECTED]
http://gaal.livejournal.com/


Re: reduce metaoperator

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 09:55:31PM -0600, John Williams wrote:
:$sum = reduce(+) @array;   # macro

That one suffers the operator/term confusion I mentioned earlier.

:$sum = reduce infix:+ @array;   # regular sub

That one's complicated enough that you actually installed a syntax error.

:$sum = [+] @array; # meta operator

That's rather pretty.  :-)

:($sum = 0) »+=« @array;  # hyper tricks

That's rather ugly.

:use My::Cool::Reduce::Mixin;   # unless in core
:$sum = @array.reduce(+);   # macro-ey method

Macro methods are probably not going to be in anyone's Best Practices book.

: There can never be too many ways to do it, but as one who hasn't been able
: to keep very current with perl6l lately, I think a particularly relevant
: question we should be asking a lot is, How long will it take someone
: reading that code to figure out what it means?

It would be nice to have an easy-to-access What's this? interface
that could be stitched into your favorite editor to identify what's
under the cursor, or at least a command like:

p6explain '[+]'

: The nice thing about reduce is that it is easy to find in the index.
: [+] will not be in the index, although I'm sure it would be the first
: thing one would look for.  I'm not sure how many other things a newbie
: will try before looking up metacircumfix:[ ] or however that is
: spelled.

I think a p6explain would be a rather popular program.

: Also in the same light, I am not convinced that reduce is used often
: enough to deserve a shorter huffman encoding.  Sure it makes that fancy
: slice shorter, but how much headscratching is required for a human to
: parse that shortness?

That has to be balanced out against the probability that it will soon
be learned by the newbie, especially if it gets into popular idioms.
For instance, since [EMAIL PROTECTED] is going to install spaces between
elements just like @foo, we might see frequently [EMAIL PROTECTED] to do join('',@foo).
Sum of squares is now [+](@foo »*« @foo) or [+](@foo »**« 2).
We'll probably see [//] and [||] now and then.  Lisp hackers will
probably like [=>] to turn a flat list into a car/cdr list.  [<]
could mean monotonically increasing.
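For readers who want a concrete analogue of these reduction idioms, they map naturally onto Python's functools.reduce; the correspondences in the comments are a hedged sketch, not part of the original mail:

```python
from functools import reduce
from operator import add

foo = [1, 2, 3, 4]

total   = reduce(add, foo)                         # [+] @foo             -> 10
joined  = reduce(add, map(str, foo))               # join('', @foo)       -> "1234"
sum_sq  = reduce(add, (x * x for x in foo))        # [+](@foo >>*<< @foo) -> 30
first_t = reduce(lambda a, b: a or b, [0, "", 3])  # [||]-style: first true value
```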

: Personally, I think this would be more readable, in part because the
: object paradigm is widely understood.
: 
: @foo[0..9; @array.reduce(infix:<;>) ; 0..9];
: @foo[0..9; @array.reduce(<;>) ; 0..9];

Well, we're also trying to give the mathematicians various kinds of
built-in syntactic relief where it doesn't greatly interfere with
one's sanity.  The verbose approach would really turn off the PDLers.

: I actually kinda like the idea, in spite of coming across as a naysayer
: here.  It just seems like an idea that should be in a module instead of in
: core, which seems like an awful strange thing to say to $Larry.

As I said in another message, I think this one is kind of a no-brainer
to teach to newbies, since like += it can be taught as a rewrite rule.
I would like it to be as much a part of core Perl as += is.

It also might be slightly easier to optimize something like [+] than
its less direct counterparts.

Larry


Re: When scoping

2005-05-04 Thread Larry Wall
On Wed, May 04, 2005 at 11:00:31PM -0600, Luke Palmer wrote:
: What should the output of this be:
: 
: given "hello" {
: when /hello/ {
: say "One";
: when /hello/ { say "Two"; }
: when /hello/ { say "Three"; }
: continue;
: }
: say "Four";
: }
: 
: I think:
: 
: One
: Two
: Three
: Four
: 
: But pugs thinks:
: 
: One
: Two

I'm inclined to think Pugs has it right here.

: The trouble is that I can't seem to come up with an example to test
: whether the nested whens are breaking out of the outer when or the
: given.  Anyway, I'm more curious what Perl 6 thinks it should do.

Perl 6 thinks :-) that break (either implicit or explicit) should
break out of the topicalizer block and ignore non-topicalizer
intermediate blocks, just as next and last ignore any non-loop
blocks to find the innermost enclosing loop, and return ignores
inner blocks to find the true sub block.  (And the inner when
isn't considered a retopicalization.)
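This rule — break unwinds to the enclosing topicalizer, skipping intermediate when blocks — can be modeled in a language without topicalizers using an exception, much as next/last are often modeled. An illustrative Python analogue, not Perl 6 semantics verbatim; `given`, `demo`, and `Break` are invented names:

```python
import re

class Break(Exception):
    """Models the implicit break at the end of a successful `when`."""

def given(topic, body):
    try:
        body(topic)
    except Break:
        pass  # break unwinds all the way to the topicalizer block

def demo(topic):
    if re.search("hello", topic):        # outer when
        print("One")
        if re.search("hello", topic):    # inner when
            print("Two")
            raise Break                  # implicit break: past the outer when too
        print("Three")                   # never reached
    print("Four")                        # skipped: break already left the given

given("hello", demo)  # prints One then Two, matching the Pugs behavior
```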

To get the other behavior, you have to say one of:

given "hello" {
when /hello/ {
say "One";
when /hello/ { say "Two"; continue; }
when /hello/ { say "Three"; continue; }
continue;
}
say "Four";
}

given "hello" {
when /hello/ {
say "One";
say "Two" when /hello/;
say "Three" when /hello/;
continue;
}
say "Four";
}

given "hello" {
when /hello/ {
say "One";
if /hello/ { say "Two"; }
if /hello/ { say "Three"; }
continue;
}
say "Four";
}

That seems like enough WTDI.  (Though arguably, the continue version
could be defined to skip "Three".  But it seems more straightforward
to define it as leaving the current when block.)

Larry


Re: reduce metaoperator

2005-05-04 Thread Luke Palmer
On 5/4/05, Larry Wall [EMAIL PROTECTED] wrote:
> [<] could mean monotonically increasing.

Not unless we make boolean operators magic.  There are arguments for
doing that, but I don't really want to think about how that would be
done at the moment.  Reduce over a straight-up (or left) boolean
operator doesn't make much sense...
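Luke's point can be seen concretely: a plain left fold over a comparison operator compares a boolean with a number after the first step, so a monotonicity test needs pairwise chaining rather than reduction. A Python analogue (invented example data, not from the thread):

```python
from functools import reduce
from operator import lt

xs = [1, 5, 2]

# Naive left fold: (1 < 5) < 2  ->  True < 2  ->  True, which is wrong.
naive = reduce(lt, xs)

# "Monotonically increasing" needs pairwise chaining, not a fold:
mono = all(a < b for a, b in zip(xs, xs[1:]))

print(naive, mono)  # True False
```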

Luke