Re: XOR does not work that way.

2009-06-24 Thread John Macdonald
On Tue, Jun 23, 2009 at 07:51:45AM +1000, Damian Conway wrote:
 Perl 6's approach to xor is consistent with the linguistic sense of
 'xor' (You may have a soup (x)or a salad (x)or a cocktail), [ ... ]

That choice tends to mean exactly one, rather than the first one
the waiter hears.  (A good waiter will explain the choice limitation
at the time the order is made rather than having to deal with it
being escalated to a complaint when the missing item is demanded.)

Which means that short-circuiting is not right here - it must
go through the entire list to determine whether there are zero
true selections, find the value if there is exactly one true
selection, or die if there is more than one true selection.  The
only valid short-circuiting would be to die at the second true
value without needing to check whether there are any more - it is
already an invalid response and there is no need to figure out just
how badly invalid it is.  But for any non-error response, no
short-circuiting is possible for (brace yourselves) the "one true"
response style any more than it is for the "odd count" response style.
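
A minimal Raku sketch of that "exactly one" scan (one-true is a
hypothetical helper, not Perl 6's actual xor; the only valid
short-circuit is the early death on a second true value):

    sub one-true(@candidates) {
        my $found;
        my $seen = False;
        for @candidates -> $c {
            if $c {
                die "more than one true value" if $seen;  # the only valid short-circuit
                $found = $c;
                $seen  = True;
            }
        }
        return $seen ?? $found !! False;   # zero trues yields a false result
    }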


Re: XOR does not work that way.

2009-06-24 Thread John Macdonald
On Wed, Jun 24, 2009 at 01:35:25PM -0400, John Macdonald wrote:
 On Tue, Jun 23, 2009 at 07:51:45AM +1000, Damian Conway wrote:
  Perl 6's approach to xor is consistent with the linguistic sense of
  'xor' (You may have a soup (x)or a salad (x)or a cocktail), [ ... ]
 
 That choice tends to mean exactly one, rather than the first one
 the waiter hears.  (A good waiter will explain the choice limitation
 at the time the order is made rather than having to deal with it
 being escalated to a complaint when the missing item is demanded.)
 
 Which means that short-circuiting is not right here - it must
 go through the entire list to determine whether there are zero
 true selections, find the value if there is exactly one true
 selection, or die if there is more than one true selection.  The
 only valid short-circuiting would be to die at the second true
 value without needing to check whether there are any more - it is
 already an invalid response and there is no need to figure out just
 how badly invalid it is.  But for any non-error response, no
 short-circuiting is possible for (brace yourselves) the "one true"
 response style any more than it is for the "odd count" response style.

And when or does not mean exactly one it means any subset you
wish, which again doesn't provide a lot of use for short circuiting.
The only case where short circuiting has any utility is for testing
whether any of the alternatives were selected while not caring
which other might have been also selected.

I'm not too concerned about what meaning is chosen for xor
(although if it is anything other than "an odd number of trues"
I'll probably use it wrong a bunch of times, but not often enough
to learn better - I'm a computer scientist and I've known what xor
means for long enough not to think of it as meaning something else,
but I don't use it very often outside of job interviews :-).


Re: XOR does not work that way.

2009-06-24 Thread John Macdonald
On Wed, Jun 24, 2009 at 11:10:39AM -0700, Jon Lang wrote:
 On Wed, Jun 24, 2009 at 10:35 AM, John Macdonaldj...@perlwolf.com wrote:
  On Tue, Jun 23, 2009 at 07:51:45AM +1000, Damian Conway wrote:
  Perl 6's approach to xor is consistent with the linguistic sense of
  'xor' (You may have a soup (x)or a salad (x)or a cocktail), [ ... ]
 
  That choice tends to mean exactly one, rather than the first one
  the waiter hears.  (A good waiter will explain the choice limitation
  at the time the order is made rather than having to deal with it
  being escalated to a complaint when the missing item is demanded.)
 
  Which means that short-circuiting is not right here - it must
  go through the entire list to determine whether there are zero
  true selections, find the first of exactly one true selections,
  or die if there are more than one true selections.
 
 Which, I believe, is exactly how XOR short-circuiting currently works:
 it short-circuits to false if both sides are true; otherwise, it
 returns true or false as usual for XOR and continues on down the
 chain.

Failing to distinguish zero from more than one makes the cases
where xor has any utility even rarer, it would seem to me
(and they're already quite rare).


Re: Amazing Perl 6

2009-05-29 Thread John Macdonald
On Thu, May 28, 2009 at 08:10:41PM -0500, John M. Dlugosz wrote:
 John Macdonald john-at-perlwolf.com |Perl 6| wrote:
 However, the assumption fails if "process" is supposed to mean that
 everyone is capable of generating Unicode in the messages that they
 are writing.  I don't create non-English text often enough to have
 it yet be useful to learn how.  (I'd just forget faster than I'd use
 it and have to learn it again each time - but Perl 6 might just be
 the tipping point to make learning Unicode composition worthwhile.)
   


 Just copy/paste from another message or a web page.  Perhaps a web page  
 designed for that purpose...

Yep, I've done that.

But comparing the difference in effort between:

- press a key
- Google for a web page that has the right character set, cut, refocus, paste

means that I don't bother for the one or two weird characters
every few months that make up my current Unicode usage.  If I were
working with Unicode frequently, it would be worth setting up
links and mechanisms, or learning the keyboard compose sequences
for frequently used characters.  I'm sure that there are many
people in a similar situation.


Re: New CPAN

2009-05-29 Thread John Macdonald
On Fri, May 29, 2009 at 04:23:56PM +0200, Mark Overmeer wrote:
 What's in a name.
 Is it also
   CPAN is the Comprehensive Parrot Archive Network
   CPAN is the Comprehensive Pieton Archive Network
   CPAN is the Comprehensive Pony   Archive Network
   CPAN is the Comprehensive PHP    Archive Network
   CPAN is the Comprehensive PRuby  Archive Network
 
 So, where do you stop?
 Perl6 and Perl5 have some things in common, just like PHP and Perl5.
 Some people say that Perl6 is a different language, not a next
 generation of Perl5.

With parrot supporting many scripting languages and making them
accessible from perl, there is huge value in having cpan.pm
(whatever it ends up being called) provide transparent access
to all of those and more - regardless of whether they are all in
a single archive or (more likely) a network of archives, each
providing some sub-set of the possibilities.

Taking a valuable module written in another language and rewriting
it in perl has been done many times in the past, but will be less
necessary in the future.  A perl6 sub-class of the original module
will be a much easier task and often provide the specific addition
that is required.

 Do we need to install Perl5 on our system to get access to the
 install tools to install Perl6 modules?

Certainly we need to install Perl5 *modules* until they are *all*
superseded by Perl6 (or Ruby or Python) replacements.


Re: New CPAN

2009-05-29 Thread John Macdonald
On Fri, May 29, 2009 at 07:26:11PM +0200, Daniel Carrera wrote:
 Btw, if the majority wants to start uploading Ruby, Python and Lua  
 modules to CPAN, we can rename CPAN so that the P stands for something  
 else that doesn't mean anything. Comprehensive Peacock Archive  
 Network? Comprehensive Platypus Archive Network?

Comprehensive Programming Archive Network.

(I started with Pscripting, but decided that was too restrictive.)


Re: Amazing Perl 6

2009-05-28 Thread John Macdonald
On Wed, May 27, 2009 at 05:42:58PM -0500, John M. Dlugosz wrote:
 Mark J. Reed markjreed-at-gmail.com |Perl 6| wrote:
 On Wed, May 27, 2009 at 6:05 PM, John M. Dlugosz
 2nb81l...@sneakemail.com wrote:
   
 And APL calls it ¨ (two little dots high up)
 

 Mr. MacDonald just said upthread that the APL reduce metaoperator was
 spelled /.  As in:

  +/1 2 3
  6

 So how does ¨ differ?


   

 Sorry, the two dots is APL's equivilent of the hyper operators, not the  
 reduction operators.  Easy to get those mixed up!

 For example, 1 2 3 ⍴¨ 10 would natively be written in Perl 6 as
 10 »xx« (1,2,3).

 --John

Yes.  The full expression in raw APL for n! is:

*/⍳n

(where ⍳ is the Greek letter iota - ⍳n is Perl's 1..$n).

Like many things in APL, having a 3-character combination of raw
operators to provide a function makes creating a special operator
unnecessary.  (Although, if I recall correctly, there were also raw
operators for combinatorial pick and choose, and factorial(n) is
the same as choose(n,n) [or pick(n,n), whichever is the one that
considers the order of the returned list to be significant], and
factorial was actually returned if there was only one operand
provided.)
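
For comparison, current Perl 6 spells the same computation with the
[*] reduce metaoperator over a range:

    my $n = 5;
    say [*] 1..$n;    # 120, i.e. 5!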


Re: Amazing Perl 6

2009-05-28 Thread John Macdonald
On Thu, May 28, 2009 at 09:30:25AM -0700, Larry Wall wrote:
 On Thu, May 28, 2009 at 10:51:33AM -0400, John Macdonald wrote:
 : Yes.  The full expression in raw APL for n! is:
 : 
 : */in
 : 
 : (where i is the Greek letter iota - iotan is Perl's 1..$n).
 
 Only if the origin is 1.  This breaks under )ORIGIN 0.  <cough> $[ </cough>

Yep.  That was why I was comfortable playing with $[ when it first came
along in early perl (wow, you can set the origin to any value, not just
0 or 1 - now that's progress).

 By the way, the assumption here is that everyone can process Unicode,
 so it's fine to write */⍳n.  :)

That's correct (in my case at least) if "process" means "accept and
display properly".

However, the assumption fails if "process" is supposed to mean that
everyone is capable of generating Unicode in the messages that they
are writing.  I don't create non-English text often enough to have
it yet be useful to learn how.  (I'd just forget faster than I'd use
it and have to learn it again each time - but Perl 6 might just be
the tipping point to make learning Unicode composition worthwhile.)

 Note that that's the APL iota U+2373, not any of the other 30 or so
 iotas in Unicode.  :/  Well, okay, half of those have precomposed
 accents, but still...
 
 Larry


Re: Unexpected behaviour with @foo.elems

2009-05-27 Thread John Macdonald
On Tue, May 26, 2009 at 04:38:21PM -0700, yary wrote:
 perl4-perl5.8 or so had a variable that let you change the starting
 index for arrays, so you could actually make the above work. But then
 everyone who'd re-arranged their brains to start counting at 0, and
 written code that has a starting index of 0, would have problems.

That was $[ and it goes back to perl 1 or so.  I recall
experimenting with it a couple of times.  Using it, though,
means that you have to use $[ as the lower range limit for
*every* array everywhere.

That gets stale very quickly, and I then decided that I would
just never change the setting of $[ and would remove such a
change from any code that I called.

This single global value for the initial index of all arrays
was one of the things that led to the greater understanding
that action at a distance is hazardous to your sanity as a
code maintainer.
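
For the record, a sketch of the hazard under an old perl5 (assignment
to $[ was deprecated in 5.12 and removed entirely in 5.30):

    # historical perl5 only - do not use
    $[ = 1;             # re-indexes arrays at a distance
    my @a = qw(a b c);
    print $a[1];        # prints "a", not "b"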


Re: Amazing Perl 6

2009-05-27 Thread John Macdonald
On Wed, May 27, 2009 at 02:21:40PM -0400, Mark J. Reed wrote:
 On Wed, May 27, 2009 at 1:59 PM, Daniel Carrera 
 daniel.carr...@theingots.org wrote:
  Wow... That's a foldl! In a functional language, that would be called a
  fold.
 
 In Haskell it may be called fold (well, foldl and foldr), but the concept
 has has a variety of names.  Two of the more common ones are reduce and
 inject; I believe Perl6 chose reduce for consistency with the Perl5
 List::Util module.  Common Lisp and Python also call it reduce:
 
 (defun ! (n)
  (reduce #'* (loop for i from 1 to n collecting i)))
 
 
 def fact(n):
  return reduce(lambda x,y: x*y, range(1,n+1))
 
 
 While Ruby calls it inject.
 
 
 def fact(n)
(1..n).inject { |x,y| x*y }
 end
 
 
 Perl 6 has a lot of functional features.  IMO the nice thing about its
 version of reduce is the way it's incorporated into the syntax as a
 metaoperator.

Historically, the name reduce was used (first?) in APL, which also
provided it as a meta-operator.  op/ would use op to reduce the array
on the right of the meta-operator.  (Although, in APL, it could be an
n-dimensional object, not necessarily a 2-dimensional array - the
reduce would compute an (n-1)-dimensional object from it.  This could
be used to generate row sums and column sums.  APL was extremely terse;
you could compute almost anything in a single line - Perl golfing
aficionados have never really caught up, although with the addition
of Unicode operators Perl 6 could now take the lead.)
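
A short modern-Raku illustration of the metaoperator, including the
row-sum use mentioned above:

    say [+] 1, 2, 3, 4;                  # 10
    my @m = (1, 2, 3), (4, 5, 6);
    say @m.map(-> @row { [+] @row });    # (6 15) - per-row sums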


Re: simultaneous conditions in junctions

2009-04-01 Thread John Macdonald
On Wed, Apr 01, 2009 at 09:44:43AM -0400, Mark J. Reed wrote:
 The idea is that junctions should usually be invisible to the code,
 and autothreading handles them behind the scenes.  [ ... ]

If I understand correctly, (which is by no means assured) a function
call with a junction as an argument generally acts as if it were
autothreaded.  So:

$x = any(1,2,3);
$y = f($x);

should work like:

$y = any( f(1), f(2), f(3) );

Right?

sub f($x) {
    return $x < 1.5 < $x ?? $x :: undef;
}

$y = f($x);
$z = $x < 1.5 < $x ?? $x :: undef;

Unless autothreading is also implied by conditionals, $y
and $z would have significantly different results; $y ===
any(undef,undef,undef) while $z === any(1,2,3).  But, if
autothreading *is* implied by conditionals, where do the
threads get joined?  I have a feeling that the autothreading
has to happen essentially at the point of the creation of the
junction to avoid getting a result from a junction that none
of the joined quantities is capable of justifying (such as
the one described earlier of (-4|4) matching the criteria to be
in the range 0..1).  I suspect that junctions will be perl6's
action-at-a-distance hazard.  (With quantum entanglement,
you get action-at-a-distance effects in the real world.
When we import them into a computer language, the resulting
action-at-a-distance should come as no surprise - except for
its inherent surprise/hazard nature.)  Now, if it is acceptable
for -4|4 to return true for 0 <= $x <= 1 when tested directly
in an expression, but to return false if it is tested in a
subroutine, then perl6 junctions are not really modelling
quantum superpositions and I, at least, will need to find a
different metaphor for what they actually do (and for what
they can be used for and when).


Re: On Sets (Was: Re: On Junctions)

2009-03-29 Thread John Macdonald
On Sat, Mar 28, 2009 at 10:39:01AM -0300, Daniel Ruoso wrote:
 That happens because $pa and $pb are a singular value, and that's how
 junctions work... The blackjack program is an example for sets, not
 junctions.
 
 Now, what are junctions good for? They're good for situation where it's
 collapsed nearby, which means, it is used in boolean context soon
 enough. Or where you know it's not going to cause the confusion as in
 the above code snippet.

Unfortunately, it is extremely common to follow up a boolean "is this
true" with either "if so, how" and/or "if not, why not".  A boolean test
is almost always the first step toward dealing with the consequences,
and that almost always requires knowing not only what the result of the
boolean test was, but which factors caused it to have that result.

The canonical example of quantum computing is using it to factor huge
numbers to break an encryption system.  There you divide the huge number
by the superposition of all of the possible factors, and then take the
eigenstate of the factors that divide evenly to eliminate all of the
huge pile of potential factors that did not divide evenly.  Without
being able to take the eigenstate, the boolean answer yes, any(1..n-1)
divides n is of very little value.
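
In Raku terms the junction yields only the boolean; grep is what
recovers the "eigenstates" (a sketch; %% is the divisibility operator):

    my $n = 91;
    say so $n %% any(2 .. $n - 1);        # True - something divides $n, but what?
    say grep { $n %% $_ }, 2 .. $n - 1;   # (7 13) - the factors themselves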


Re: Logo considerations

2009-03-24 Thread John Macdonald
On Tue, Mar 24, 2009 at 09:17:15AM -0400, Mark J. Reed wrote:
 Are we seeking a logo for Perl 6 in general or Rakudo in particular?
 It seems like the latter should be derived from the former, perhaps
 with the Parrot logo mixed in.

The graphene logo inspires me to suggest that a carbon
ring be used as the logo for Parrot.  Languages based
on Parrot could then use a tiny carbon ring attached to
their own logo (such as graphene for Rakudo).  Carbon does
connect well to many other chemical combinations, including
joining together things that don't otherwise bond directly
to each other.  (The duct tape of the microverse, bringing
carbon-based program forms to the world. :-)  A neat thing
that could come out of this would be that there would be
a convenient logo for a module that made use of multiple
languages - the carbon ring with an appropriate number of
language logos attached to it.

In keeping with the tradition that carbon rings often
have symbols inside the ring - I'd put a parrot inside a
hexagonal birdcage as the full-sized Parrot logo, and
only reduce it to just the small hexagon ring when it is
being used in a connected fashion, attached to other logos.

(Of course, this is not the proper forum for discussing
changing the Parrot logo to a carbon ring.)


Re: Logo considerations

2009-03-24 Thread John Macdonald
On Tue, Mar 24, 2009 at 11:56:46AM -0400, Guy Hulbert wrote:
 On Tue, 2009-24-03 at 08:42 -0700, Paul Hodges wrote:
  --- On Tue, 3/24/09, John Macdonald j...@perlwolf.com wrote:
  
   The graphene logo inspires me to suggest that a carbon
   ring be used as the logo for Parrot...
 
 Did you mean Rakudo here ?
 
 Parrot seems to have a logo already.

Well, it may have been removed from Paul's quote, but I mentioned
in my original message that this was the wrong forum to be
suggesting a new logo for Parrot; but yes, Parrot is what I
was referring to.

I just realized one more connotation of using the carbon ring
for Parrot - since it provides a platform for both building
and connecting a wide variety of languages, this is the:

one ring to bind them


Re: Logo considerations

2009-03-24 Thread John Macdonald
On Tue, Mar 24, 2009 at 09:49:42AM -0700, Jon Lang wrote:
 2009/3/24 Larry Wall la...@wall.org:
  http://www.wall.org/~larry/camelia.pdf
 
 Cute.  I do like the hyper-operated smiley-face.
 
 What I'd really like to see, though, is a logo that speaks to Perl's
 linguistic roots.  That, more than anything else I can think of, is
 _the_ defining feature of Perl.

Maybe that's the quotes above and below the smiley face.  This
has a pure ASCII rendition:

  ``:-)''

(although the second pair of quotes should be tilted right)

or maybe that is:

  v
  v
=:-)=
  ^
  ^

to be a full 90 degree rotation as all ascii smileys ought


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-06 Thread John Macdonald
On Thu, Dec 04, 2008 at 04:40:32PM +0100, Aristotle Pagaltzis wrote:
 * [EMAIL PROTECTED] [EMAIL PROTECTED] [2008-12-03 21:45]:
  loop {
  doSomething();
  next if someCondition();
  doSomethingElse();
  }
 
 I specifically said that I was aware of this solution and that I
 am dissatisfied with it. Did you read my mail?

While this is still the same solution that you dislike, how about
recasting it a bit:

loop {
PRE_CONDITION: {
doSomething();
}

last unless someCondition();

BODY: {
doSomethingElse();
}
}

That uses additional indenting and labelling to identify the
iteration-setup and actual loop-body parts, while keeping the
termination condition easily visible by not indenting it.


Re: MAIN conflict in S06?

2008-11-14 Thread John Macdonald
On Fri, Nov 14, 2008 at 01:50:59PM -0500, Brandon S. Allbery KF8NH wrote:
 WHat *is* the outermost scope in that case?  When is code in that scope 
 executed?  I could see this as being a hack to allow a module to be used 
 either directly as a main, or used; the former ignoring top level scope 
 code, the latter ignoring MAIN.  I think Python has something similar.

Python sets the special variable __name__ to __main__ when the file
is directly interpreted, but to a name related to the filename when
it is used as a module.  That outermost code is always executed, but
the standard idiom for code that should run only when the file is
executed directly is to wrap it in an if test comparing __name__
against "__main__".


Re: interpolating complex closures

2008-02-23 Thread John Macdonald
On Fri, Feb 15, 2008 at 03:12:20PM -0800, Larry Wall wrote:
 No, there's no problem with that.  This is Perl 6, which is full of
 wonderfulness, not Perl 5, which was written by a person of minimal clue. :)
 
 That's part of what S02 means right at the top where it's talking
 about a one-pass parser.  There's no lookahead to find the end of a
 construct.  You just come to it when you come to it, and the parser
 has to be smart enough to know which terminators mean what in each
 context.
 
 Larry

Hmm, just when editors have gotten smart enough about parsing to often 
get the colouring right for perl 5...


Re: Generalizing ?? !!

2007-06-12 Thread John Macdonald
On Mon, Jun 11, 2007 at 01:43:40AM -, NeonGraal wrote:
 Surely if you defined !! to return undef but true and both operators
 to be left associative then it all works.
 
 1==0 ?? True !! False -> (undef) !! False which seems right to
 me.
 
 1==1 !! False ?? True -> (undef but true) ?? True also good.
 
 TTFN, Struan

Nope.

$a = $b ?? $c !! $c;

If $b is true, you want $a to have the value of $c, NOT "$c but true".
Later code may want to use the value of $a in a boolean context.
When the "but true" is added to make the short-circuiting work, it
can have longer-lasting effects that are not desired.

(I know Larry already agreed with the principle of not mucking
around with simple stuff, but I just wanted to provide a bit
more detail of how said mucking might be a problem.  We don't
need no stinking epicycles upon epicycles. :-)

-- 


Re: propose renaming Hash to Dict

2007-06-01 Thread John Macdonald
On Fri, Jun 01, 2007 at 07:07:06AM -0400, Brandon S. Allbery KF8NH wrote:
 
 On Jun 1, 2007, at 5:44 , Thomas Wittek wrote:
 
 Larry Wall:
 Nope.  Hash is mostly about meaning, and very little about  
 implementation.
 Please don't assume that I name things according to Standard Names in
 Computer Science.  I name things in English.  Hash is just something
 that is disordered, which describes the associative array interface
 rather nicely, distinguishing it from the ordered Array interface.
 
 Hm, but with which would you explain a hash in plain english?
 What would be the closest equivalents in the real world?
 
 ...make a hash of things (meaning, a mess)
 corned beef hash

That's two people who have given the same list, but both
have omitted the more common (in modern times) phrase "hash
browned potatoes", which is a hash of chopped potato, onion,
and sometimes other things fried brown.  I'll ignore the
McDonald's version, which hashes together just the potatoes,
since a collective of a single element is still a collective
mathematically, but not usually considered so linguistically
unless you've got a big enough advertising budget to pull it
off.

-- 


Re: Is Perl 6 too late?

2007-05-14 Thread John Macdonald
On Mon, May 14, 2007 at 02:36:10PM +0200, Thomas Wittek wrote:
 Andy Armstrong schrieb:
 On 14 May 2007, at 12:31, Thomas Wittek wrote:
 How did C, C#, Java, Ruby, Python, Lua, JavaScript, Visual Basic, etc. 
 know?
 They didn't.
 If there is a new release, you always have to check if your code still 
 runs.
 
 I think that may be the point I'm making.
 
 Your point is that you don't have one?
 Do you believe, that new keywords are the only cause of breaking 
 backwards compatibility? I don't think so.
 So you rely on testing your code anyway. Sigils won't save you from that.

Back in the 90's I was with a company that had a 20K line
perl program.  We would provide a copy of perl as part of the
program suite, so that we could control which version was
being used for our software and when it was upgraded while
still allowing the customer to have their own version of perl
that they could upgrade on their own schedule.  Before any perl
upgrade was built into our suite, we would of course test it
extensively to ensure that all of the code was still compatible.
Until the perl4-perl5 change, there was never any problem -
Larry is a wizard at adding totally new concepts and features
in a way that just happens to include all of the old usage
bits as a special case that falls magically out of the new,
enhanced, more coherent whole.  But there is no way that this
would have been possible without the distinction between named
operators and variables provided by sigils.  Removing the sigil
on a function call (it used to always be written &sub(args...))
did, I think, lead to the difficulty in perl5 where it became
difficult to add new keyword operators to the language - because
they could conflict with subroutine names in existing code.

Needless to say, that level of dependable upgradability without
requiring code rewrites was considered to be a huge benefit of
using perl for our company.

(For the record, we delayed converting from perl4 to perl5 for
many years, worried about the possibility of subtle problems
arising from the massive changes that had been made to the
language.  When I finally tried it out, there were only a few
changes that really affected us.  I had the code converted in
about two weeks, although we then ran it in parallel with the
old code for about two months before accepting that nothing
tricky had snuck in.)

-- 


Re: explicit line termination with ;: why?

2007-05-14 Thread John Macdonald
On Tue, May 15, 2007 at 01:22:48AM +0200, Thomas Wittek wrote:
 Andrew Shitov:
  If the line of code is not ended with ';' the parser tries first
  to assume [..]
 
 Wouldn't that be unambigous?
 
  foo = 23
  bar = \
42
 
 ?
 
 I think there would be no ambiguities and you only had to add additional
 syntax for the rare cases instead of the common cases.

Without explicit \ to join unterminated lines you get:

  foo = 23
  if x == 7
  { y = 5; z = 6 }

Is that:

  foo = 23
  if x == 7;
  { y = 5; z = 6 }

or:

  foo = 23;
  if x == 7
  { y = 5; z = 6 }

?

With explicit \ to join unterminated lines you just get more
ugliness than having semicolons.  It's also, in many cases,
harder to edit - that's why a trailing comma in a list that
is surrounded by parens, or a trailing semicolon in a block
surrounded by braces, is easier to manage.  The syntax of
the last element is the same as the rest so you can shuffle
the order around easily without having to add a separator to
the end of what used to be the final element and remove the
separator on what is now the final element.

Having punctuation where there is a stop is more natural than
having an explicit marker for "don't stop here, keep going".

-- 


Re: explicit line termination with ;: why?

2007-05-14 Thread John Macdonald
On Tue, May 15, 2007 at 02:02:06AM +0200, Thomas Wittek wrote:
 John Macdonald schrieb:
  It's also, in many cases,
  harder to edit - that's why a trailing comma in a list that
  is surrounded by parens, or a trailing semicolon in a block
  surrounded by braces, is easier to manage.
 
 Now that the list is surrounded by parens makes clear that it ends with
 the closing paren and not with a line break. So you could still use
 commas (without backslashes) to separate the items over multiple lines.
 See e.g. http://docs.python.org/ref/implicit-joining.html

I was actually talking about existing perl5 here.  I write:

my %h = (
    x => 100,
    y => 75,
    z => 99,
);

explicitly writing the unrequired comma on the last element
(z => 99).  That way, if I add another element to the hash there's
no danger that I will forget to go back and add the comma to
the line above.  Alternately, if I reorder the hash elements
(maybe sorting on the value instead of the key) I don't have
to check whether there is now a commaless line in the middle
of the reordered bunch.

-- 


Re: [svn:perl6-synopsis] r14385 - doc/trunk/design/syn

2007-04-27 Thread John Macdonald
On Fri, Apr 27, 2007 at 08:46:04AM -0700, [EMAIL PROTECTED] wrote:
 +The matches are guaranteed to be returned in left-to-right order with
 +respect to the starting positions.  The order within each starting
 +position is not guaranteed and may depend on the nature of both the
 +pattern and the matching engine.  (Conjecture: or we could enforce
 +backtracking engine semantics.  Or we could guarantee no order at all
 +unless the pattern starts with :: or some such to suppress DFAish
 +solutions.)

Are you sure you want to guarantee left-to-right starting
position order?  If there are multiple processors available
in a lazy context, it may be preferable not to guarantee any
order.  Then, if one processor starts at a later position
and finds a match quickly while another starts earlier
but needs much longer to find its first match, the lazy
processing can start working on the first match found
at the earliest possible time.

-- 


Re: [svn:perl6-synopsis] r14376 - doc/trunk/design/syn

2007-04-17 Thread John Macdonald
On Tue, Apr 17, 2007 at 11:22:39AM -0700, [EMAIL PROTECTED] wrote:
 Note that unless no longer allows an else

I'm sorry to see this.

This is one item from PBP that I don't really agree with.
Personally, I find I am at least as likely to make mistakes
about the double negative in if (!cond) ... else  as I am
for unless (cond) ... else .  Since that tends to be a
minority viewpoint, I only use unless/else for code that
will not be maintained by anyone other than me; but for my
own code I'd rather keep the (to me) better readability.

-- 


Re: Should a dirhandle be a filehandle-like iterator?

2007-04-15 Thread John Macdonald
On Fri, Apr 13, 2007 at 08:14:42PM -0700, Geoffrey Broadwell wrote:
 [...] -- so non-dwimmy open
 variants are a good idea to keep around.
 
 This could be as simple as 'open(:!dwim)' I guess, or whatever the
 negated boolean adverb syntax is these days 

open(:file), open(:dir), open(:url), ... could be the non-dwimmy
versions.  If you don't specify an explicit non-dwimmy base
variant, the dwim magic makes a (preferably appropriate) choice.

-- 


Re: What should file test operators return?

2007-04-13 Thread John Macdonald
On Fri, Apr 13, 2007 at 10:29:43AM +0100, Moritz Lenz wrote:
 Hi,
 
 brian d foy wrote:
  At the moment the file test operators that I expect to return true or
  false do, but the true is the filename.
 
 that helps chaining of file test:
 
 $fn ~~ :t ~~ :x
 or something.
 If you want a boolean, use
 ? $fn ~~ :x
 or something.

It might also be useful when the test is being applied to a
junction - it gives the effect of grep.

-- 


Re: [svn:perl6-synopsis] r14325 - doc/trunk/design/syn

2007-03-16 Thread John Macdonald
On Fri, Mar 16, 2007 at 05:29:21PM -0700, Larry Wall wrote:
 On Thu, Mar 15, 2007 at 05:14:01PM -0400, Zev Benjamin wrote:
 : If the idea of having an author attribute is to allow multiple
 : implementations of a module, why not add an API version attribute?  The
 : idea would be to detach the module version number from the module API
 : version number.
 
 I was thinking that emulates encompasses that notion, but maybe we
 haven't got the name quite right.  And maybe we need an API naming
 convention.

I was thinking an "extends version" attribute would mean that any
program that depended upon the named version would still work,
although this version extends the interface, so that programs that
fully use the extended interface would not work with the older version.
An "equivalent version", on the other hand, implies that the interface
is forward AND backward compatible (presumably the underlying
implementation has changed for some good reason).

-- 


Re: Bit shifts on low-level types

2007-02-27 Thread John Macdonald
On Tue, Feb 27, 2007 at 06:31:31PM +, Smylers wrote:
 Geoffrey Broadwell writes:
 
  Perhaps having both +> and ?> operators?  Since coerce to boolean and
  then right shift is meaningless, ...
 
 It's useless, rather than meaningless; you've neatly defined what the
 meaning of that (useless) operator would be.
 
[ ... ]
 
  this seems ripe to DWIM.
 
 But DWIM is the meaning you previously defined, surely?
 
  (For me, DWIM here means +> does high bit extension, ?> does zero
  fill.)
 
 Why?  You think that somebody not knowing about this operator would
 correctly infer its existence from other operators?  Even if somebody
 guessed that both operators exist it looks pretty arbitrary which is
 which.

While I tend somewhat to agree that this level of bit
manipulation is not common enough to justify warping the
language, I disagree that the choice of meaning between +>
and ?> is arbitrary and not subject to inference.  The normal
assembler opcodes for the two forms of right shift are LSR
(logical shift right) and ASR (arithmetic shift right), with some
variation in spelling for different hardware architectures.
The arithmetic variant propagates the sign bit; the boolean
variant inserts zeros.  A sign bit is an integer property
that has no meaning in boolean context.  It would be hard to
find any rationale for reversing the meaning of the two.

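A sketch of the distinction in modern Raku, which kept only the
numeric +> (the ?> variant was, as far as I know, never adopted);
zero fill can be had by masking first:

    say -8 +> 1;              # -4 : arithmetic shift, the sign bit propagates
    say (-8 +& 0xFF) +> 1;    # 124: mask to 8 bits first and the fill is zeros
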
-- 


Re: [svn:perl6-synopsis] r13540 - doc/trunk/design/syn

2007-01-27 Thread John Macdonald
On Sat, Jan 27, 2007 at 06:18:50PM +, Nicholas Clark wrote:
 Is it defined that $a + $b evaluates the arguments in any particular order?
 Even guaranteeing that either the left or the right gets completely evaluated
 first would be better than C :-)

In C, that is deliberately left undefined to allow the code
generator to have more flexibility in optimizing the code
it generates.  It's always easy to separate side-effects into
multiple statements that are executed in the desired order
if you need a specific order.  If everything in the language
implies a specific order, no opportunity for optimization
remains - even if there is no actual necessity for the
particular order to be followed.

-- 


Re: renaming grep to where

2006-09-20 Thread John Macdonald
On Wed, Sep 20, 2006 at 07:11:42PM +0100, Andy Armstrong wrote:
 On 20 Sep 2006, at 19:05, Larry Wall wrote:
 Let it be.  :)
 
 I could just as easily have called for a revolution :)

No, you should have quoted differently:

 On 20 Sep 2006, at 19:05, Larry Wall whispered words of wisdom:
 Let it be.  :)

-- 


Re: renaming grep to where

2006-09-19 Thread John Macdonald
On Tue, Sep 19, 2006 at 04:39:35PM -0700, Jonathan Lang wrote:
 Anyway, it's not clear to me that grep always has an exact opposite.
 
 I don't see why it ever wouldn't: you test each item in the list, and
 the item either passes or fails.  'select' would filter out the items
 that fail the test, while 'reject' would filter out the ones that pass
 it.

If grep is being kept, and an inverse is also desired,
it could be called perg (grep backwards, pronounced
purge :-)

I'm not serious.  Really.

-- 


Re: renaming grep to where

2006-09-19 Thread John Macdonald
On Tue, Sep 19, 2006 at 07:56:44PM -0400, [EMAIL PROTECTED] wrote:
 I envision a select, reject, and partition, where
 
 @a.partition($foo)
 
 Returns the logical equivalent of
 
 [@a.reject($foo), @a.select($foo)]
 
 But only executes $foo once per item.  In fact. I'd expect partition
 to be the base op and select and reject to be defined as
 partition()[1] and partition()[0] respectively...

Hmm, that has appeal.  If you assign a partition to a list of
arrays, the 0/false-selected items go into the first, a number n
sends the item into the n'th, with the last also getting numbers
that are too big and strings that are true.  But it could instead
be assigned to pairs, where the partition block selects a key (or
a default) which chooses the target.
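
Raku eventually grew .classify, which is close to the pairs idea - the
block picks a key that chooses the target bucket (a sketch):

    my %parts = (1..10).classify({ $_ %% 2 ?? 'even' !! 'odd' });
    say %parts<even>;   # [2 4 6 8 10]
    say %parts<odd>;    # [1 3 5 7 9]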

-- 


Re: My first functional perl6 program

2006-08-25 Thread John Macdonald
On Wed, Aug 23, 2006 at 04:10:32PM -0700, Larry Wall wrote:
 Yes, that should work eventually, given that hypers are supposed to stop
 after the longest *finite* sequence.  In theory you could even say
 
 my %trans = ('a'..*) »=>« ('?' xx *);
 
 but we haven't tried to define what the semantics of a lazy hash would be...

That's a rather large hazard.  %trans{$key} would run until
the lazy hash is extended far enough to determine whether
$key is ever matched as a key - and that could easily mean
trying forever.  So, it is only safe to use the hash for
keys that you *know* are in the domain; or, if the lazy
key generation is simple enough, keys that you *know* can
be excluded in a finite amount of time.  (I'm assuming that
1 ~~ any( 5..* ) will be smart enough to return false.)

-- 


Re: $a.foo() moved?

2006-04-06 Thread John Macdonald
On Thu, Apr 06, 2006 at 12:10:18PM -0700, Larry Wall wrote:
 The current consensus on #perl6 is that, in postfix position only (that
 is, with no leading whitespace), m:p/\.+ \sws before \./ lets you embed
 arbitrary whitespace, comments, pod, etc, within the postfix operator.
 
 This allows both the short
 
 :foo. .()
 
 as well as the longer
 
 $x...
 .foo()

The one quibble I see with this is that postfix "one or more
dots, including 3" might be a touch confusing with infix "exactly
3 dots" (i.e. the yada operator).  Depending upon context,
... can thus be either an error (code not yet written)
or layout control and valid to "execute" (I put execute in
quotes because by the time you get around to executing the
code the ... will have served its purpose of controlling the
parsing and be gone).

(This is just the one-shot I'm not used to it yet vote. :-)

-- 


Re: $a.foo() moved?

2006-04-06 Thread John Macdonald
On Thu, Apr 06, 2006 at 02:49:33PM -0500, Patrick R. Michaud wrote:
 On Thu, Apr 06, 2006 at 03:38:59PM -0400, John Macdonald wrote:
  On Thu, Apr 06, 2006 at 12:10:18PM -0700, Larry Wall wrote:
   The current consensus on #perl6 is that, in postfix position only (that
   is, with no leading whitespace), m:p/\.+ \sws before \./ lets you 
   embed
   arbitrary whitespace, comments, pod, etc, within the postfix operator.
   
  
  The one quibble I see with this is that postfix one or more
  dots, including 3 might be a touch confusing with infixexactly
  3 dots (i.e. the yada operator).  
 
 There isn't an infix:<...> operator.  There's
 term:<...> (yada yada yada), and there's
 postfix:<...> ($x..Inf).

Hmm, yep I got the terminology wrong, but my point remains -
one operator that is ..., exactly 3 dots, and another that
can be ... but can be spelled with a different number of
dots if you feel like it, is somewhat confusing.

-- 


Re: handling undef better

2005-12-21 Thread John Macdonald
On Wed, Dec 21, 2005 at 10:25:09AM -0800, Randal L. Schwartz wrote:
  Uri == Uri Guttman [EMAIL PROTECTED] writes:
 
 Uri i will let damian handle this one (if he sees it). but an idea would be
 Uri to allow some form of key extraction via a closure with lazy evaluation
 Uri of the secondary (and slower) key.
 
 I still don't see that.  I understand about the lazy key evaluation.
 However, the sort block in Perl5 contains more than just two key
 computations: it also contains the logic to decide *how* to compare
 the keys, and *when* more information is needed (a secondary key step,
 for example).  Not sure how you're going to replace that with just
 information about how to compute a key.  I think you've had your head
 inside GRT for too long. :)
 
 So, for the simple case (string sort against some function of each item),
 I can see the need for a good shortcut.  However, the general case (let
 me tell you how to sort two items), you'll still need a very perl5-ish
 interface.

If I understand the p6 way correctly (which is not necessarily
true :-) it provides the key computation function in addition to
the comparison function.  So, the key computation can return a
list of keys for each value (possibly in lazy not-yet-evaluated
form, so that the computation is only incurred the first time
that key component is actually used).  The comparison function
is often simpler than a p5 comparison function (because of the
existence of the key function and because of the smarter match
capabilities) but could still be as complicated as a p5 sort
comparison function for those rare cases that really need it.

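That is roughly where Raku's sort landed: a one-argument callable is
treated as a key extractor (applied once per element), a returned list
of keys gives multi-level comparison, and the two-argument comparator
form remains for the rare hard cases.  A sketch, assuming @people
holds hashes with surname/firstname/age keys:

    my @by-name = @people.sort({ .<surname>, .<firstname> });       # key extraction
    my @by-age  = @people.sort(-> $a, $b { $a<age> <=> $b<age> });  # raw comparator
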
-- 


Re: Transliteration preferring longest match

2005-12-15 Thread John Macdonald
On Thu, Dec 15, 2005 at 09:56:09PM +, Luke Palmer wrote:
 On 12/15/05, Brad Bowman [EMAIL PROTECTED] wrote:
  Why does the longest input sequence win?
 Is it for some consistency that that I'm not seeing? Some exceedingly
  common use case?  The rule seems unnecessarily restrictive.
 
 Hmm.  Good point.  You see, the longest token wins because that's an
 exceedingly common rule in lexers, and you can't sort regular
 expressions the way you can sort strings, so there needs to be special
 machinery in there.
 
 There are two rather weak arguments to keep the longest token rule:
 
 * We could compile the transliteration into a DFA and make it
 fast.  Premature optimization.
 * We could generalize transliteration to work on rules as well.
 
 In fact, I think the first Perl module I ever wrote was
 Regexp::Subst::Parallel, which did precisely the second of these. 
 That's one of the easy things that was hard in Perl (but I guess
 that's what CPAN is for).  Hmm.. none of these is really a compelling
 argument either way.

If a shorter rule is allowed to match first, then the longer
rule can be removed from the match set, at least for constant
string matches.  If, for example, '=' can match without
preferring to try first for '==', then you'll never match '=='
without syntactic help to force a backtracking retry.
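
Modern Raku's .trans works this way for literal patterns - the longest
match wins, so '==' stays reachable even with '=' in the set (a sketch):

    say 'a = b == c'.trans([ '=', '==' ] => [ 'ASSIGN', 'EQ' ]);
    # a ASSIGN b EQ c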

-- 


Re: Chained buts optimizations?

2005-11-15 Thread John Macdonald
On Tue, Nov 15, 2005 at 11:23:49AM -0800, Larry Wall wrote:
 On Tue, Nov 15, 2005 at 02:11:03PM -0500, Aaron Sherman wrote:
 : All of that is fine, as far as I'm concerned, as long as we give the
 : user the proviso that chained buts might be optimized down into a single
 : cloning operation or not at the compiler's whim, but it could be a nasty
 : shock if it's not documented, and it's a rather ugly amount of overhead
 : if we don't allow for the optimization.
 
 The situation will probably not arise frequently if we just give people
 the opportunity to write
 
 my $a = $b but C | D | E | F;
 
 instead, or whatever our type set notation turns out to be.

If adding a but involves calling some code to initialize the
but-iness (I don't know if it does myself), that code might
inspect or operate on the item that is being modified in a way
that would be changed if a previous but had not yet been fully
initialized.  So, the initialization code for each of the buts
(if any) should be called in order.  Reblessing for each one
would only matter if the subsequent but code used introspection
and varied its actions depending on the blessed state.

The choice between:

my $a = $b but C | D | E | F;

and:

my $a = $b but C but D but E but F;

might be used to control the short-cut initialization (which
would have to be an explicit definition rather than an
optimization, since it could have different meaning).

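For what it's worth, in modern Raku each but is an ordinary mixin and
chained buts simply apply in order (a sketch):

    my $b = 0 but True;                               # changes boolean behaviour only
    my $a = $b but role { method Str { 'zero' } };    # a second, chained mixin
    say ?$a;   # True
    say ~$a;   # zero
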
-- 


Re: Perl 6 fears

2005-10-24 Thread John Macdonald
On Mon, Oct 24, 2005 at 02:47:58PM +0100, Alberto Manuel Brandão Simões wrote:
 Another is because it will take too long to port all CPAN modules to 
 Perl 6 (for this I suggest a Porters force-task to interact with current 
 CPAN module owners and help and/or port their modules).

I think Autrijus has the right idea that Perl 5 CPAN modules
should just work in Perl 6 without change.

Just as Perl 6 is the community rewrite of Perl 5, moving
a module from CPAN to 6PAN should not be done as a hasty
"make sure everything is moved over" kind of event, but rather
should be done as a way of choosing the best features out of
the various Perl 5 modules on CPAN and merging in the
experience people have gained from using them, as well as the
experience people gain from *using* Perl 6.

So, that is both good news and bad news - the good news is that
CPAN will be just as useful for Perl 6 right from the start;
the bad news is that the community rewrite of CPAN won't
happen overnight but could take as long for CPAN as it did
for Perl 6.  (In fact, depending upon how you measure it, it
will take longer, because some modules will never be rewritten;
but of course, those will be the modules that no-one feels a
sufficiently pressing need to rewrite; while for other modules,
the Perl 5 CPAN version will fit in well enough that there
is no urgency to convert it, even though it is being used,
until a few years down the road a rewrite allows additional
Perl 6/6PAN capabilities to be merged in.)

-- 


Re: new sigil

2005-10-22 Thread John Macdonald
On Fri, Oct 21, 2005 at 09:35:12AM -0400, Rob Kinyon wrote:
 On 10/21/05, Steve Peters [EMAIL PROTECTED] wrote:
  On Fri, Oct 21, 2005 at 02:37:09PM +0200, Juerd wrote:
   Steve Peters skribis 2005-10-21  6:07 (-0500):
Older versions of Eclipse are not able to enter these characters.  
That's
where the copy and paste comes in.
  
   That's where upgrades come in.
  
  That's where lots of money to update to the next version of WSAD becomes the
  limiting factor.
 
 So, you are proposing that the Perl of the Unicode era be limited to
 ASCII because a 15 year old editor cannot handle the charset? That's
 like suggesting that operating systems should all be bootable from a
 single floppy because not everyone has access to a CD drive.

Um, that's not what I'm hearing.

To type in a Unicode character requires machinations beyond just
hitting a labelled key on the keyboard.  There are no standards
for these machinations - what must be done is different for
Windows vs. Linux, and different for specific applications
(text-mode mutt vs. xvi vs. Eclipse vs. ...).

So, a book can't just show code and expect the reader to be
able to use it, and no book is going to be able to tell all
of its users how to type the characters because there are so
many different ways.

Any serious programmer will be able to sort out how to do
things, but casual programmers won't be typing the extended
characters enough to learn how to do it without looking it
up each time.  Programmers that use many different computers
and applications will have difficulty remembering which of
the various incantations happen to work on the system they're
currently using.  People who do sort out a good working
environment will be at a loss when they occasionally have to do
something on a different system and no longer know how to type
basic characters.  (But since in their normal environment they
do know how, they may never have known the ASCII workarounds,
so they'll have to look them up.)  I've gotten away from
programming enough that I often have to look up a function
or operator definition to check on details; but that is much
less disruptive to the thought process than having to look up
how to type a character.

I think that the reasons for using Unicode characters are good
ones and that there is no good alternative.  However, doing
so does make Perl less accessible for casual programmers.
(While we may deride the Learn to Web Program in 5 Minutes
crowd, that did get many people involved with Perl, and I'm
sure some of them evolved beyond those limited roots, just
as an earlier generation of programmers had some who evolved
beyond their having started with Basic into nonetheless becoming
competent and knowledgeable craftsmen.)

We need to have a "Why Unicode is the lesser of evils" document
to refer to whenever this issue arises again.  The genuine
problems involved ensure that the issue will continue to arise,
so we can't just get mad at the people who raise it.

-- 


Re: Closed Classes Polemic (was Re: What the heck is a submethod (good for))

2005-10-13 Thread John Macdonald
On Thu, Oct 13, 2005 at 03:01:29PM -0400, Rob Kinyon wrote:
  I think this is an opportune time for me to express that I think the
  ability to close-source a module is important.  I love open source,
  and I couldn't imagine writing anything by myself that I wouldn't
  share.  But in order for Perl to be taken seriously as a commercial
  client-side language, it must be possible to close the source.  I
  started writing a game with a few friends last year, and as we were
  picking our implementation strategy, using Perl as the primary
  sequencing engine for non-time-critical tasks was immediately
  discounted when I commented that anybody can look at your perl source
  if they want to.
 
 I'd be interested in finding out how this is reasonably feasible for,
 given that you just said a disassembler for Parrot is going to be
 relatively simple due to the level of introspection Perl is going to
 require.

When I added the original encryption mechanism for perl (in
early perl 3 days) I knew that it would not be an absolute ban
to stop people from recreating my company's original source
(and that was long before B::Deparse came along, of course).
I certainly knew how to beat the encryption; and anyone with
half a clue would know that it could be beaten.  (Clue: perl
has to be able to read the unencrypted code.  It wasn't hard
to find the right place in the perl source to insert a print
statement that would dump out code that had been decrypted.)

However, anyone who took the effort to recreate that source
from encrypted form would *know* that any use of that decrypted
source was not authorized by the copyright owners.  This was
considered by that company to be an adequate protection.
Any infringement that did occur would clearly be a deliberate
misuse and could be prosecuted with a reasonable assurance
of success.  (No such infringement was ever found - either
anyone who considered it decided it would be too much work, or
no-one was interested enough to consider it, or quite possibly
there were one or more instances where it was decrypted for the
challenge of beating the encryption but not in any way that
led to an obvious competitive misuse.)

Just because you can't make locking perfect does not mean it
has no value.

-- 


Re: Look-ahead arguments in for loops

2005-10-01 Thread John Macdonald
On Fri, Sep 30, 2005 at 08:39:58PM -0600, Luke Palmer wrote:
 Incidentally, the undef problem just vanishes here (being replaced by
 another problem).

Which reminds me that this same issue came up a while ago in a
different guise.  There was a long discussion about the reduce
functionality that takes an array and applies an operator to
each value and the previously collected result.  (Much of the
discussion was on determining what the identity value for an
operator was, to initialize the previous result.)  Most of
the time that you want a loop that remembers the previous
value, it can be equally well expressed as a reduction of the
series of values using a custom-defined operator.

I forget what the final choice was for syntax for the reduce
operator (it was probably even a different name from reduce -
that's the APL name), but it would be given a list and an
operator and run as:

my $running = op.identity;
$running = $running op $_ for @list;

So, to get a loop body that knows the previous value, you
define an operator whose identity is the initial value of the
list and reduce the rest of the list.


-- 


Re: Look-ahead arguments in for loops

2005-10-01 Thread John Macdonald
On Sat, Oct 01, 2005 at 02:22:01PM -0600, Luke Palmer wrote:
 And the more general form was:
 
 $sum = reduce { $^a + $^b } @items;
 
 Yes, it is called reduce, because foldl is a miserable name.

So, the target of running a loop with both the current
and previous elements accessible could be written as either:

reduce :identity undef
{ code using $^prev and $^cur ... ; $^cur }
@items;

or:

reduce :identity @items[0]
{ code using $^prev and $^cur ... ; $^cur }
@items[1...];
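
In modern Raku, .rotor with a negative gap gives the same
previous/current pairing directly:

    my @items = 1, 3, 6, 10;
    for @items.rotor(2 => -1) -> ($prev, $cur) {
        say "$prev -> $cur";   # 1 -> 3, then 3 -> 6, then 6 -> 10
    }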

-- 


Re: conditional wrapper blocks

2005-09-20 Thread John Macdonald
On Tue, Sep 20, 2005 at 08:58:41PM +0200, Juerd wrote:
 Yuval Kogman skribis 2005-09-20 20:33 (+0300):
  Today on #perl6 I complained about the fact that this is always
  inelegant:
  if ($condition) { pre }
  unconditional midsection;
  if ($condition) { post }
 
 I believe it's not inelegant enough to do something about.
 
 The unconditional part is easily copied if it's a single line, and can
 easily be factored to a method or sub if it's not. Especially with
 lexical subs.

There's a middle range where the midsection is complicated enough
that copying it is a bad idea, but it is still sufficiently
short that factoring it into a sub causes more clutter and
interrupted thought patterns.

I've half-heartedly wished for this sort of linguistic construct
many times over the years, but it comes up rarely enough that
it's never been a burning desire.  Furthermore, any language
construct I've considered seems worse to me than just using
statement modifiers.  (So, this desire became far less of a
burning issue when I began programming in perl instead of C
and other languages.)

I don't like copying code of even a small amount of complexity -
sooner or later a change will happen to one copy that doesn't
get made to the other copy of the code, and a bug has been
inserted.

So, I just assign the condition to a variable and use:

{
my $cond = ... condition ...;

pre if $cond;

...
mid
...

post if $cond;
}

When I read this sort of code to check it for problems, I
want clarity.  I'm going to have to read it twice to ensure
that it is correct for both of the ways it will be used
(when $cond is true and when it is false).  Moving mid into
a subroutine makes that reading harder.  Even the syntax that
Yuval suggests is more cluttered for reading the straight-line
code for the $cond is true case.

Reading the code for the case when $cond is false is easy no
matter which coding method you use - all of the midsection
code is in one place with any technique.

But when reading the code for the case when $cond is true,
I can read the code as if it were:

pre

...
mid
...

post

by simply ignoring the trailing " if $cond;" as I read.
Ignoring the text off to the right is easy; much easier than
ignoring syntax that is mixed into the code, or than reading
code that has been broken into 3 parts and the middle part is
somewhere on the page before or after the other two parts.

-- 


Re: Demagicalizing pairs

2005-08-24 Thread John Macdonald
On Wed, Aug 24, 2005 at 04:27:03PM +1000, Damian Conway wrote:
 Larry wrote:
 
 Plus I still think it's a really bad idea to allow intermixing of
 positionals and named.  We could allow named at the beginning or end
 but still keep a constraint that all positionals must occur together
 in one zone.
 
 If losing the magic from =>'d pairs isn't buying us named args wherever we 
 like, why are we contemplating it?

When calling a function, I would like to be able to have a
mixture of named and positional arguments. The named argument
acts as a tab into the argument list and subsequent unnamed
arguments continue on.  That allows you to use a name for a
group of arguments:

move( from => $x, $y, delta => $up, $right );

In this case, there could even be an optional z-coordinate
argument for each of the from and delta groups.

The named group concept works well for interfaces that use the
same groups in many different functions.  It is especially
powerful in languages which do not have structured types,
which means it is not so necessary in Perl, but even here,
you often are computing the components (like $up and $right
above) separately, rather than always computing a single
structured value (which would mean writing delta => (x => $up,
y => $right) instead).
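
The nearest spelling in Raku as released passes each group as a single
named, structured argument (a sketch; move, from and delta are
illustrative names):

    sub move(:@from, :@delta) {
        say "from (@from[]) by (@delta[])";
    }
    move(from => (1, 2), delta => (3, 4));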

-- 


Re: Demagicalizing pairs

2005-08-24 Thread John Macdonald
On Wed, Aug 24, 2005 at 10:12:39AM -0700, Chip Salzenberg wrote:
 On Wed, Aug 24, 2005 at 08:38:39AM -0400, John Macdonald wrote:
  When calling a function, I would like to be able to have a
  mixture of named and positional arguments. The named argument
  acts as a tab into the argument list and subsequent unnamed
  arguments continue on.
 
 I see a main point of named parameters to free the caller from the
 tyranny of argument order (and vice versa).  It seems to me you're
 asking for the worst of both worlds.

Perhaps I didn't make it clear in my original message -
I agree that arbitrary mixing of named and positional is
usually a bad thing.

The only place where I find it useful is with a group of
arguments that are always provided in the same order, used one
or more times each by a number of functions, with additional
arguments for some/all of those functions.

So, a function that takes position and/or vector values would
provide a name for each vector/position, but expect each to have
an x, a y, and (possibly) a z argument following the name.

I saw this in the DO system - a shell written at CDC back in
the late 70's.  The provided scripts were designed so that
all programming scripts used the same sequence of arguments
after the OPT keyword, the LINK keyword, etc.

As I said originally, the value is diluted in a language
with structured data types - you can use a single argument
for a position that is a hash or array which contains the
x/y/z components within it.

The named group helps especially if you generally want to
provide separate-but-related arguments.  This tends to be
things like an optional sub-action that requires multiple
parameters if it is used at all.

So, I'm mostly saying that a mixture of named and positional
arguments is not ALWAYS bad, and that there may be some
value in permitting such a mixture in certain circumstances.

-- 


Re: AUTOLOAD and $_

2005-06-20 Thread John Macdonald
On Mon, Jun 20, 2005 at 04:37:31PM -0600, Luke Palmer wrote:
 On 6/20/05, chromatic [EMAIL PROTECTED] wrote:
  On Mon, 2005-06-20 at 12:11 +0200, Juerd wrote:
  
   I think there exists an even simpler way to avoid any mess involved.
   Instead of letting AUTOLOAD receive and pass on arguments, and instead
   of letting AUTOLOAD call the loaded sub, why not have AUTOLOAD do its
   thing, and then have *perl* call the sub?
  
  Who says AUTOLOAD will always either call a loaded sub or fail?
 
 Uh, what else can it do?  It doesn't have to load a sub to return a
 code reference.
 
 Luke

I recall Damian using AUTOLOAD (in perl5) to evaluate the
result of the function call without loading a function with the
given name.  This was to allow arbitrary names to be invoked,
when the same name is unlikely to be used again.  This was
basically a method that took a string constant argument, but
it used the method name as the constant and didn't need to
specify a name for the actual common method.
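
I don't have Damian's code to hand, but a minimal Perl 5 sketch of
the general trick (all names here are invented) would be:

    package Speaker;
    our $AUTOLOAD;
    sub new { bless {}, shift }
    sub AUTOLOAD {
        my $self = shift;
        (my $word = $AUTOLOAD) =~ s/.*:://;   # the method name is the data
        return if $word eq 'DESTROY';         # don't treat DESTROY as data
        print "$word\n";                      # one common method does the work
    }

    package main;
    Speaker->new->hello;   # prints "hello"; no sub hello ever exists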

I'm not certain that this is actually worth supporting; it's
more of a golf/obfuscation technique than a significant tool,
unless there are additional clever uses of the technique that
go beyond this basic trick.

-- 


Re: reduce metaoperator on an empty list

2005-06-09 Thread John Macdonald
On Thu, Jun 09, 2005 at 06:41:55PM +0200, TSa (Thomas Sandlaß) wrote:
 Edward Cherlin wrote:
 That means that we have to straighten out the functions that can 
 return either a Boolean or an item of the argument type. 
 Comparison functions < > <= >= == != should return only Booleans,
 
 I'm not sure but Perl6 could do better or at least trickier ;)
 Let's assume that < > <= >= when chained return an accumulated
 boolean and the least or greatest value where the condition was
 true. E.g.
 
   0 < 2 < 3   returns  0 but true
 
   1 < 2 < 1   returns  1 but false
 
   4 < 5 < 2   returns  2 but false
 
 Then the reduce versions [<] and [<=] naturally come out as min
 and strict min respectively.
 
 Is it correct that [min] won't parse unless min is declared
 as an infix op, which looks a bit strange?
 
 if 3 min 4 { ... }

The natural method of implementation would imply that the
final value is returned:

0 < 2 < 3   returns  3 but true

1 < 2 < 1   returns  1 but false

4 < 5 < 2   returns  2 but false

The application of each stage of the chain has to remember
the right hand value (for the next stage of the comparison)
as well as the accumulated boolean result.  When the boolean
result is true, that has < and <= returning the max, and > and
>= returning the min - the opposite of what you asked above.
When the numbers are not in the desired order, it would be
nice to short-circuit and not continue on with the meaningless
comparisons as soon as one fails - which means that the max
or min value could not be known.

Whatever is chosen, though, still has to make sense for other
chained comparisons:

$v != $w < $x > $z == $z

cannot sensibly return either the max or the min (which would
it choose?).

I'd be inclined to have the result be "val but true/false"
where val is the right hand operand of the final comparison
actually tested.  When a consistent set of operators is used
(a mixture of <, <=, and ==; or a mixture of >, >=, and ==)
- then a true boolean result also provides the max (or min
respectively) value, while a false boolean result provides
the value of the first element that was out of order.
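
A rough Perl 5 sketch of that reading for a chain of < comparisons
(the helper name is invented, and since Perl 5 has no "but", the
value and the boolean come back as a pair):

    sub chain_lt {
        my @v = @_;
        for my $i ( 0 .. $#v - 1 ) {
            # first comparison that fails: its right operand, and false
            return ( $v[$i+1], 0 ) if $v[$i] >= $v[$i+1];
        }
        # every comparison held: the final right operand, and true
        return ( $v[-1], 1 );
    }

    chain_lt( 0, 2, 3 );   # (3, true)  - the max
    chain_lt( 1, 2, 1 );   # (1, false) - the first out-of-order element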

-- 


Re: reduce metaoperator on an empty list

2005-05-24 Thread John Macdonald
On Fri, May 20, 2005 at 10:14:26PM +, [EMAIL PROTECTED] wrote:
 
  Mark A. Biggar wrote:
   Well the identity of % is +inf (also right side only).
  
  I read $n % any( $n..Inf ) == $n. The point is there's no
  unique right identity and thus (Num,%) disqualifies for a
  Monoid. BTW, the above is a nice example where a junction
  needn't be preserved :)
 
 If as usual the definition of a right identity value e is that a op e = a for 
 all a,
 then only +inf works.  Besides, your example should have been:
 $n % any( ($n+1)..Inf ), as $n % $n == 0. 
 
   E.g. if X<Y is left associative and < returns Y when true then ...
  
  Sorry, is it the case that $x = $y < $z might put something else
  but 0 or 1 into $x depending on the order relation between $y and $z?
 
 Which is one reason why I said that it might not make sense to define the 
 chaining ops in terms of the associativity of the binary ops.  But as we are 
 interested in what [<] over the empty list should return, the identity (left 
 or right) of '<' is unimportant as I think that should return false as there 
 is nothing to be less than anything else.  Note that defaulting to undef 
 therefore works in that case.

The identity operand is -inf for < and <=, and +inf for >
and >=.  A chained relation < (>, <=, >=) is then taken to
mean monotonically increasing (decreasing, non-decreasing,
non-increasing), and an empty list, like a one element list,
is always in order.
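
A sketch of that reading, with an invented helper standing in
for [<] (Perl 5):

    sub reduce_lt {
        my @v = @_;
        $v[$_-1] < $v[$_] or return 0 for 1 .. $#v;
        return 1;   # empty and one-element lists are vacuously in order
    }

    reduce_lt();          # true
    reduce_lt( 5 );       # true
    reduce_lt( 1, 2 );    # true
    reduce_lt( 2, 2 );    # false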

-- 


Re: Perl development server

2005-05-24 Thread John Macdonald
On Tue, May 24, 2005 at 12:12:57PM +0200, Juerd wrote:
 Unfortunately, onion is already taken by another important Perl server:
 onion.perl.org.
 
 I'm currently considering 'ui', which is Dutch for 'onion'. I bet almost
 nobody here knows how to pronounce ui ;)

For a development machine, the Yiddish pronunciation would
work well.  That would make it sound like OY! :-)

-- 


Re: Coroutine Question

2005-05-04 Thread John Macdonald
On Wed, May 04, 2005 at 10:43:22AM -0400, Aaron Sherman wrote:
 On Wed, 2005-05-04 at 10:07, Aaron Sherman wrote:
  On Wed, 2005-05-04 at 09:47, Joshua Gatcomb wrote:
  
   So without asking for S17 in its entirety to be written, is it
   possible to get a synopsis of how p6 will do coroutines?
  
  A coroutine is just a functional unit that can be re-started after a
  previous return, so I would expect that in Perl, a coroutine would be
  defined by the use of a variant of return

A co(operating) routine is similar to a sub(ordinate) routine.
They are both a contained unit of code that can be invoked.

A subroutine carries out its entire functionality completely
and then terminates before returning control back to the caller.

A coroutine can break its functionality into many chunks. After
completing a chunk, it returns control back to the caller
(or passes control to a different coroutine instead) without
terminating.  At some later point, the caller (or some other
coroutine) can resume this coroutine and it will continue
on from where it left off.  From the point of view of this
coroutine, it just executed a subroutine call in the middle
of its execution.  When used to its full limit, each coroutine
treats the other(s) as a subroutine; each thinks it is the
master and can run its code as it pleases, and call the other
whenever and from wherever it likes.

This can be used for a large variety of functions.

The most common (and what people sometimes believe the
*only* usage) is as a generator - a coroutine which creates a
sequence of values as its chunk and always returns control
to its caller.  (This retains part of the subordinate aspect
of a subroutine.  While it has the ability to resume operation
from where it left off and so doesn't terminate as soon as it
has a partial result to pass on, it has the subordinate trait
of not caring who called it and not trying to exert any control
over which coroutine is next given control after completing a
chunk).
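
For flavour, a rough Perl 5 approximation of a generator, using a
closure - a real coroutine would keep its entire call stack alive
rather than a few explicit state variables:

    sub make_counter {
        my ($n) = @_;
        return sub { return $n++ };   # each call resumes where it left off
    }

    my $next = make_counter(1);
    print $next->(), $next->(), $next->();   # prints 123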

The mirror image simple case is a consumer - which accepts a
sequence of values from another coroutine and processes them.
(From the viewpoint of a generator coroutine, the mainline
that invokes it acts as a consumer coroutine.)

A read handle and a write handle are generator and consumer data
constructs - they aren't coroutines because they don't have any
code that thinks it has just called a subroutine to get
the next data to write (or to process the previous data that
was read).  However, read and write coroutines are perfectly
reasonable - a macro processor is a generator coroutine that
uses an input file rather than a mathematical sequence as
one facet of how it decides the next chunk to be generated
and passed on; a code generator is a consumer coroutine that
accepts parsed chunks and creates (and writes to a file perhaps)
code from it.

Controlling the flow of control is powerful - a bunch of
coroutines can be set up to act like a Unix pipeline.  The first
coroutine will read and process input.  Occasionally it
will determine that it has a useful chunk to pass on to its
successor, and pass control to that successor.  The middle
coroutines on the chain will pass control to their predecessor
whenever they need new input and to their successor when
they have transformed that input into a unit of output.
The last coroutine will accept its input and process it,
probably writing to the outside world in some way.

Coroutines can permit even wider range in the control flow.
Coroutines were used in Simula, which was a simulation and
modelling language, and it allowed independent coroutines to
each model a portion of the simulation and they would each be
resumed at appropriate times, sometimes by a master scheduler
which determined which major coroutine needed to be resumed
next, but also sometimes by each other.

A coroutine declaration should essentially declare 2
subroutine interfaces, describing the parameter and return
value information.  One is the function that gets called to
create a new instance of the coroutine; the other defines
the interface that is used to resume execution of an existing
instance of that coroutine.  The resume interface will look
like a definition of a subroutine - describing the argument
list and return values for the interface.

Having special purpose coroutine declarations for simple
generators and consumers would be possible and could hide the
need (in more general cases) for the full double interface.

The creation interface should (IMHO) return an object that can
be resumed (using the resumption interface), could be tested
for various aspects of its state - .isalive (has it terminated
by returning from the function), .caller (which coroutine last
resumed it), probably others.

The act of resuming another coroutine is simply calling its
second interface with an appropriate set of arguments and
expecting that resumption to return an appropriate set of
values (when this coroutine is next resumed).  The resume
operation for 

Re: Coroutine Question

2005-05-04 Thread John Macdonald
On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
 John Macdonald wrote:
 
 The most common (and what people sometimes believe the
 *only* usage) is as a generator - a coroutime which creates a
 sequence of values as its chunk and always returns control
 to its caller.  (This retains part of the subordinate aspect
 of a subroutine.  While it has the ability to resume operation
 from where it left off and so doesn't terminate as soon as it
 has a partial result to pass on, it has the subordinate trait
 of not caring who called it and not trying to exert any control
 over which coroutine is next given control after completing a
 chunk).
  
 
 [Rest of lengthy, but good explanation of coroutines omitted]
 
 Question:
 
 Do we not get all of this loveliness from lazy lists and the given/take 
 syntax? Seems like that makes a pretty straightforward generator, and 
 even wraps it into a nice, complete object that people can play with.
 
 Now, I'm all in favor of TMTOWTDI, but in this case, if there are no 
 other decent uses of co-routines, I don't see the need for AWTDI. 
 Given/Take _does_ create a coroutine.
 
 
 If there are good uses for coroutines that given/take does not address, 
 I'll gladly change my opinion. But I'd like to see some examples.
 FWIW, I believe that Patrick's example of the PGE returning matches 
 could be written with given/take (if it was being written in P6).

Um, I don't recall what given/take provides, so I may be only
addressing the limitations of lazy lists...

I mentioned Unix pipelines as an example.  The same concept of
a series of programs that treat each other as a data stream
translates to coroutines: each is a mainline routine that
treats the others as subroutines.  Take a simple pipeline
component, like say tr.  When it is used in the middle
of a pipeline, it has command line arguments that specify
how it is to transform its data and stdin and stdout are
connected to other parts of the pipeline.  It reads some
data, transforms it, and then writes the result.  Lather,
rinse, repeat.  A pipeline component program can be written
easily because it keeps its own state for the entire run but
doesn't have to worry about keeping track of any state for
the other parts of a pipeline.  This is like a coroutine -
since a coroutine does not return at each step it keeps its
state and since it simply resumes other coroutines it does not
need to keep track of their state at all.  To change a coroutine
into a subroutine means that the replacement subroutine has to
be able, on each invocation, to recreate its state to match
where it left off; either by using private state variables
or by having the routine that calls it take over the task of
managing its state.  If pipeline components were instead like
subroutines rather than coroutines, then whenever a process
had computed some output data, instead of using a write
to pass the data on to an existing coroutine-like process,
it would have to create a new process to process this bit of
data.  Using coroutines allows you to create the same sort of
pipelines within a single process; having each one written as
its own mainline and thinking of the others as data sources
and sinks that it reads from and writes to is very powerful.
Lazy lists are similar to redirection of stdin from a file at
the head of a pipeline.  It's fine if you already have that data
well specified.  Try writing a perl shell program that uses
coroutines instead of separate processes to handle pipelines
and has a coroutine library to compose the pipelines; this
would be a much more complicated programming task to write
using subroutines instead of coroutines.

The example of a compiler was also given - the lexer runs over
the input and turns it into tokens, the parser takes tokens and
assembles them into an internal pseudo code form, the optimizer
takes the pseudo code and shuffles it around into pseudocode
that (one hopes) is better, the code generator takes the
pseudocode and transforms it into Parrot machine code, the
interpreter takes the Parrot machine code and executes it.
They mostly connect together in a kind of pipeline; but
there can be dynamic patches to that pipeline (a BEGIN block,
for example, causes the interpreter to be pulled in as soon
as that chunk is complete, and if that code includes a use
it might cause a new pipeline of lexer/parser/etc to be set up
to process an extra file right now, while keeping the original
pipeline intact to be resumed in due course).  (This example
also fits with Luke's reservations about failing to distinguish
clearly between creating and resuming a coroutine - how are you
going to start a new parser if calling the parse subroutine
will just resume the instance that is already running instead
of creating a separate coroutine.)

For many simple uses generators are exactly what you need,
but they have limits.  A more powerful coroutine mechanism can
easily provide the simple forms (and, I would expect, without
any serious loss of performance).

Re: Coroutine Question

2005-05-04 Thread John Macdonald
On May 4, 2005 06:22 pm, Rod Adams wrote:
 John Macdonald wrote:
 
 On Wed, May 04, 2005 at 03:02:41PM -0500, Rod Adams wrote:
   
 
 If there are good uses for coroutines that given/take does not address, 
 I'll gladly change my opinion. But I'd like to see some examples.
 FWIW, I believe that Patrick's example of the PGE returning matches 
 could be written with given/take (if it was being written in P6).
 
 
 
 Um, I don't recall what given/take provides, so I may be only
 addressing the limitations of lazy lists...
   
 
 First off, it's gather/take, not given/take. My mistake. Oops.
 
 Here goes my understanding of gather/take:
 
 gather takes a block as an argument, and returns a lazy list object. 
 Inside that block, one can issue take commands, which push one or 
 more values onto the list. However, since it's all lazy, it only 
 executes the block when someone needs something off the list that 
 currently isn't there. And even then, it only executes the block up 
 until enough takes have happened to get the requested value. If the 
 block exits, the list is ended.

Strange.  The names gather/take suggest accepting values rather than
generating them (yet generating them onto the lazy list is what you
describe this code as doing).  I don't like the name yield either for the
same reason - it suggests that the data only goes one way, while the
operation of transferring control from one coroutine to another is a
pairing that has the first producing a value for the other (but also being
ready to accept a value in return when it gets resumed in turn).  Whether a
particular resume is passing data out, or accepting data in, or both,
is a matter of what your code happens to need at that moment.

 A simple example:
 
   sub grep ($test, *@values) {
 gather {
   for @values -> $x {
 take $x if $x ~~ $test;
   }
 }
   }
 
 Since the block in question is in effect a closure, and gets called 
 whenever a new value to the lazy list is requested, I believe it 
 provides all of the generator aspects of coroutines. It could access 
 various up-scoped/global variables, thereby changing its behavior 
 midcourse, if needed. You can create several versions of the same 
 generator, all distinct, and with separate states, and easily keep 
 them separate. To create a new list, you call the function. To resume a 
 list, you ask for a value from the list it hasn't already created.

Asking for a value by scanning a lazy list provides no mechanism for
sending information to the routine that will provide that value.

For example, the parser for perl5 is written with contorted code because
it needs just such a feedback mechanism - the parser has to turn
characters into tokens differently depending upon the context.  If it was
written as a lazy list of tokens, there would have to be this feedback
done somehow.  (Is \1 one token or two?  In a string, it is one token for
the character with octal value 001 (or perhaps a part of the token that
is the entire string containing that character as only a portion); in a
substitution, it is one token that refers back to the first match; in open
expression code, it is two tokens representing a reference operator and
the numeric value 1.)  (POD and here-strings are other forms that
require feedback.)
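
For illustration, the same two characters in three Perl 5 contexts:

    "a\1b"           # \1 is the character with ordinal 1
    s/(x)(y)/\1/     # \1 is a backreference to the first capture
    $r = \1;         # \ and 1 are a reference operator and the number 1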

 Once you have some of these lazy list functions made, pipelining them 
 together is trivial via ==> or <==.
 
 For many simple uses generators are exactly what you need,
 but they have limits.  A more powerful coroutine mechanism can
 easily provide the simple forms (and, I would expect, without
 any serious loss of performance).
 
 I'll ask again for a couple examples of the non-generator uses, mostly 
 out of curiosity, but also to better evaluate the proposals being kicked 
 around in this thread.
 
 -- Rod Adams
 
 
 


Re: -X's auto-(un)quoting?

2005-04-24 Thread John Macdonald
On Saturday 23 April 2005 14:19, Juerd wrote:
 Mark A. Biggar skribis 2005-04-23 10:55 (-0700):
  After some further thought (and a phone talk with Larry), I now think
  that all of these counted-level solutions (even my proposal of _2.foo(),
  etc.) are a bad idea.
 
 In that case, why even have OUTER::?

Referring to something by relative position is great when refactoring
will not change the relationship.

If you refactor the enclosing context, whatever context is wrapped
around me is changed by refactoring in the right way; while that
specific thing that is 2 levels out (at the time I wrote this code) is
changed in the wrong way, because the specific context you want to
refer to may now be 1 or 3 or 50 levels out.


Re: Unify cwd() [was: Re: $*CWD instead of chdir() and cwd()]

2005-04-16 Thread John Macdonald
On Saturday 16 April 2005 01:53, Michael G Schwern wrote:
 How cwd() is implemented is not so important as what happens when it hits
 an edge case.  So maybe we can try to come up with a best fit cwd().  I'd 
 start by listing out the edge cases and what the possible behaviors are.  
 Maybe we can choose a set of behaviors which is most sensible across all 
 scenarios and define cwd() to act that way.  Or maybe even just define what 
 various cwd variations currently do.
 
 Here's the ones I know of off the top of my head.  You probably know more.
 
 * The cwd is deleted
 * A parent directory is renamed
 * A parent directory is a symlink

There is also the possibility for permissions issues:

* You don't have permissions to determine cwd as an absolute pathname
* You are in a directory that you couldn't have chdir'ed into (that makes the
   localized $CWD fail to return you to the original location when it goes out
   of scope).

It's not hard to run a program that is setuid (to a non-root account) from 
within
a directory that is owner-only accessible.


Re: identity tests and comparing two references

2005-04-06 Thread John Macdonald
On Wed, Apr 06, 2005 at 11:30:35AM -0700, Larry Wall wrote:
 If you want to help, earn a billion dollars and write me into your
 will.  And then peg out.  Nothing personal.  :-)
 
 Larry

Darn.  So far, I'm 0 for 3 on that plan.

However, I promise that item two will follow very shortly in
time from item one.  No promises about the delay between items
two and three, though; nor any assurance of my ever achieving
item one (its failure, in fact, is virtually assured).

-- 


Re: New S29 draft up

2005-03-21 Thread John Macdonald
On Mon, Mar 21, 2005 at 03:31:53PM +0100, Juerd wrote:
 [...] (The symmetry is slightly broken, though, because if you push
 foo once, you have to pop three times to get it back. I don't think
 this is a problem.))

That's not a new break to the symmetry of push and pop:

@b = (1,2,3);
push( @a, @b ); # needs 3 pops to get all of 1,2,3 back

and in both the original array form and in the "treat a string
as an array" form, you can retrieve a multi-element thing that
was push'ed in a single operation by using a single invocation
of splice.
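
E.g. (Perl 5):

    my @a;
    my @b = ( 1, 2, 3 );
    push @a, @b;                  # one push of three elements
    my @back = splice @a, -3;     # one splice gets (1, 2, 3) back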

-- 


Re: New S29 draft up

2005-03-18 Thread John Macdonald
On Thu, Mar 17, 2005 at 09:18:45PM -0800, Larry Wall wrote:
 On Thu, Mar 17, 2005 at 06:11:09PM -0500, Aaron Sherman wrote:
 : Chop removes the last character from a string. Is that no longer useful,
 : or has chomp simply replaced its most common usage?
 
 I expect chop still has its uses.

I've had times when I wanted to be able to use chop at either
end of a string.

(I long ago suggested a chip operator to chop from the front of
a string.  Using chip and chop on a string is much like using
shift and pop on an array.)

Generally when I do this I am not only deleting the character
from the string, but also moving it to another scalar to use;
so substr isn't a simple replacement because you'd have to
use it twice.  For chip, I use (perl5):

# $ch = chip $str;
$str =~ s/(.)//;
$ch = $1;

# can be written as:
$ch = $1 if $str =~ s/(.)//;

If chop is removed, a similar s/// can replace it.
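
E.g. a chop replacement along the same lines:

    # $ch = chop $str;
    $ch = $1 if $str =~ s/(.)\z//s;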

With the advent of rules and grammars in p6, there will likely
be less need for chip/chop type operations, so this huffman-extended
use of substr instead of chip/chop would be ok.

-- 


Re: New S29 draft up

2005-03-18 Thread John Macdonald
On Fri, Mar 18, 2005 at 09:24:43AM -0800, Larry Wall wrote:
 [...]  And if
 chomp is chomping and returning the terminator as determined by the
 line input layer, then chimp would have to return the actual line and
 leave just the terminator. :-)

With the mnemonic Don't monkey around with my terminator.  :-)

-- 


Re: s/true/better name/

2005-03-16 Thread John Macdonald
On Wednesday 16 March 2005 15:40, Autrijus Tang wrote:
 On Wed, Mar 16, 2005 at 12:09:40PM -0800, Larry Wall wrote:
  So I'm thinking we'll just go back to true, both for that reason,
  and because it does syntactically block the naughty meaning of true as
  a term (as long as we don't default true() to $_), as Luke reminded us.
 
 But true() reads weird, and it does not read like an unary (or list)
 operator at all to me. As the bikeshedding is still going on, may I
 suggest aye()?  It is the same length as not(), both are adverbs,
 and is rare enough to not conflict with user-defined subs.

A shotgun brainstorming of possible operator names:

determine
ponder
query
consider
examine
veracity
inquire
bool
boolean
bin
binary
propriety


Re: Junction Values

2005-02-17 Thread John Macdonald
On Thu, Feb 17, 2005 at 09:06:47AM -0800, Larry Wall wrote:
 Junctions can short circuit when they feel like it, and might in some
 cases do a better job of picking the evaluation order than a human.

Hmm, yes, there is an interesting interaction with lazy
evaluation ranges here.

$x = any( 1 .. 1_000_000_000 );

if( $y == $x ) { ... }

It would be nice if the junction equality test here was much
smarter than a for loop (regardless of whether the for loop
short circuited - suppose $y happens to be -1!).

A range need not enumerate all of its components to be used
in a useful way.

-- 


Re: Perl 6 Summary for 2005-01-31 through 2004-02-8

2005-02-09 Thread John Macdonald
On Wed, Feb 09, 2005 at 11:57:17AM -0800, Ovid wrote:
 --- Matt Fowles [EMAIL PROTECTED] wrote:
 
 Logic Programming in Perl 6
  Ovid asked what logic programming in perl 6 would look like. No
  answer
  yet, but I suppose I can pick the low hanging fruit: as a
  limiting case
  you could always back out the entire perl 6 grammar and insert
  that of
  prolog.
 
 I dunno about that.  The predicate calculus doesn't exactly translate
 well to the sort of programming that Perl 6 is geared for.  I don't
 think it's a matter of redefining a grammar.  Maybe unification can be
 handled with junctions, but backtracking?  I am thinking that some
 serious work down at the Parrot level would be necessary, but I would
 be quite happy to be corrected :)
 
 Cheers,
 Ovid

This kind of ties in to the 4 < $x < 2 issue with junctions.
As long as junctions retain state to determine such relations
correctly, they should be able to be used for logic programming
too.

I'd kind of like there to be a version of junctions that acted
as a Quantum Superposition. So:

$x = ( 1|3|5 );
4 < $x < 2;

would keep track of both the truth values and the
corresponding subsets of the junction.  So, as the
interpreter evaluated 4 < $x it would give a result of
(true=>(5), false=>(1|3)); then the evaluation of $x < 2 would
modify that to (true=>(), false=>(1|3|5)).  That type of junction
would have the same result of false even if the statement was
written as:

4 < $x and $x < 2;

That would be a good thing, because I don't think that the
chain comparisons are the only place where junctions will give
ridiculous answers because mutually exclusive subsets of the
junction value are found to fulfil different conditions in a
sequence of steps.

Unfortunately, doing this properly would lead to having the
program form into multiple processes and would lead to problems
with irreversible actions that occur while the superposition is
still multi-determinate.

The basic problem is that a junction does not work well with
boolean operations, because the answer is usually sometimes
yes and sometimes no and until you resolve which of those is
the one you want, you have to proceed with both conditions.
The all/any/none/one (but we're missing not-all and all-but-one
from a full list of operators) give a local scale resolution
to this discrepancy, but sometimes you want a larger scale
selection.

If code is being evaluated tentatively (we don't know whether
the current member of the junction will be considered a true
element of the result) you really need to limit your actions
to side-effect-free operations.

I'm afraid that I have too much fluff in my very little brain
to have a good solution to this.

-- 


Re: S05 question

2004-12-10 Thread John Macdonald
On Thu, Dec 09, 2004 at 11:18:34AM -0800, Larry Wall wrote:
 On Wed, Dec 08, 2004 at 08:24:20PM -0800, Ashley Winters wrote:
 : I'm still going to prefer using :=, simply as a good programming
 : practice. My mind sees a big difference between building a parse-tree
 : object and just grepping for some word I want in a string. Within a
 : rule{} block, there is no place except the rule object to keep your
 : data (hypothetically -- haha), so it makes sense to have everything
 : capture unless otherwise specified. There's no such limitation in a
 : regular code block, so I don't see the need.
 
 Since regex results are lexically scoped in Perl 6, in a regular
 code block we can do static analysis and determine whether there's
 any possibility that $foo is referenced at all, and optimize it
 away in many cases, if it turns out to be high overhead.  But as Patrick
 points out, so far capture seems pretty cheap.

It might turn out to be worth optimizing only when ALL of the
capture blocks are unused - the saving from avoiding setup
costs together with avoiding the (too small to be a bother
by themselves) incremental costs, might be significant when
taken together.

-- 


Re: Arglist I/O [Was: Angle quotes and pointy brackets]

2004-12-04 Thread John Macdonald
On Sat, Dec 04, 2004 at 11:08:38PM +0300, Alexey Trofimenko wrote:
 On Sat, 04 Dec 2004 11:03:03 -0600, Rod Adams [EMAIL PROTECTED] wrote:
 
 Okay, this rant is more about the \s<\s than \s<=\s. To me, it is easier  
 to understand the grouping of line 1 than line 2 below:
 
 if( $a<$b && $c<$d ) {...}
 if( $a < $b && $c < $d ) {...}
 
 In line2, my mind has to stop and ask: is that ($a < $b) && ($c < $d),  
 or $a < ($b && $c) < $d. It quickly comes to the right answer, but the  
 question never comes up in the first line. If I wanted to use more  
 parens for clarity, I'd use LISP.
 
 
 I've got used to write it as
   if( $a < $b and $c < $d ) {...}
 already. if it could help.. :)

I agree with Rod - it is much more readable when there are
no blanks around the < and there are blanks around the &&.
Typing is not the problem as much as reading, however, I choose
the spacing for readability when I type it, deciding what the
base chunks are and putting blanks aound the base chunks but
not within them.  Having a few operators that require spacing
will be an extra gotcha to consider in that process, so it
will occasionally lead to syntax errors when I don't consider
the special rule; but it will still lead to less readable code
when I do remember the rule and leave the extra spaces.

-- 


Re: Angle quotes and pointy brackets

2004-11-30 Thread John Macdonald
On Tue, Nov 30, 2004 at 02:26:06PM -0800, Brent 'Dax' Royal-Gordon wrote:
: Larry Wall [EMAIL PROTECTED] wrote:
:  * Since we already stole angles from iterators, «$fh» is not
:  how you make iterators iterate.  Instead we use $fh.fetch (or
:  whatever) in scalar context, and $fh.fetch or @$fh or $fh[]
:  or *$fh in list context.
: 
: I believe you tried this one a couple years ago, and people freaked
: out.  As an alternative, could we get a different operator for this? 
: I propose one of:
: 
: $fh ->
: $fh» (and $fh>>)
: $fh>
: 
: All three have connotations of the next thing.  The first one might
: interfere with pointy subs, though, and the last two would be
: whitespace-sensitive.  (But it looks like that isn't a bad thing
: anymore...)

In line with Perl 6's other operator progressions, the
following has a nice symmetry:

$iter -->    # extract next (one) element from iterator $iter
$iter ==>    # pipeline all elements (lazy) in turn from iterator $iter

However, I haven't been paying a lot of attention, to the current state
of affairs, so it is probably broken in some way.

-- 


Re: qq:i

2004-11-30 Thread John Macdonald
On Tue, Nov 30, 2004 at 05:54:45PM -0700, Luke Palmer wrote:
 Jim Cromie writes:
  
 since the qq:X family has recently come up, I'd like to suggest another.
  
  qq:i {}  is just like qq{} except that when it interpolates variables,
  those which are undefined are preserved literally.
 
 Eeeew.  Probably going to shoot this down.  But let's see where you're
 going with it :-)

Bang! :-)

The problem with "interpolate if you can or leave it alone for
later" is that when later comes around you're in a quandary.
Is the string $var that is in the final result there because
it was $var in the original and couldn't be interpolated,
or was it a $foo that had its value of $var injected into
its place?

The "maybe do it now, finish up later what wasn't done the
first round" approach runs the risk of double interpolation.
(Or single interpolation, or non-interpolation, whichever
it happened to roll on the dice.) If you're Randal Schwartz
discovering a

s/Old Macdonald/$had a $farm/eieio

accidental feature, that can be useful; but for mere mortals,
it is just a bug waiting to surface.

-- 


Re: Angle quotes and pointy brackets

2004-11-28 Thread John Macdonald
On Sat, Nov 27, 2004 at 08:21:06PM +0100, Juerd wrote:
 James Mastros skribis 2004-11-27 11:36 (+0100):
  Much more clear, saves ` for other things
 
 I like the idea. But as a earlier thread showed, people find backticks
 ugly. Strangely enough, only when used for something other than
 readpipe.
 
 The idea of being able to write
 
 %hash{'foo'}{'bar'}{$foo}[0]{$bar}
 
 as
 
 %hash`foo`bar`$foo`0`$bar
 
 still works very well for me. At least on all keyboards that I own, it
 is easier to type. And in all fonts that I use for terminals (that'd be
 only misc-fixed and 80x24 text terminals), it improves legibility too.

Doesn't that cause ambiguity between:

 %hash{'foo'}{'bar'}{$foo}[0]{$bar}
and
 %hash{'foo'}{'bar'}{$foo}{0}{$bar}
  ^ ^   hash instead of subscript

-- 


Re: Angle quotes and pointy brackets

2004-11-28 Thread John Macdonald
On Sun, Nov 28, 2004 at 12:24:08PM -0500, John Macdonald wrote:
 On Sat, Nov 27, 2004 at 08:21:06PM +0100, Juerd wrote:
  James Mastros skribis 2004-11-27 11:36 (+0100):
   Much more clear, saves ` for other things
  
  I like the idea. But as a earlier thread showed, people find backticks
  ugly. Strangely enough, only when used for something other than
  readpipe.
  
  The idea of being able to write
  
  %hash{'foo'}{'bar'}{$foo}[0]{$bar}
  
  as
  
  %hash`foo`bar`$foo`0`$bar
  
  still works very well for me. At least on all keyboards that I own, it
  is easier to type. And in all fonts that I use for terminals (that'd be
  only misc-fixed and 80x24 text terminals), it improves legibility too.
 
 Doesn't that cause ambiguity between:
 
  %hash{'foo'}{'bar'}{$foo}[0]{$bar}
 and
  %hash{'foo'}{'bar'}{$foo}{0}{$bar}
   ^ ^ hash instead of subscript

Hmm, I guess it is usually not ambiguous, only when it is
causing auto-vivification of the hash-or-array with `0` is
there an ambiguity between whether that means [0] or {'0'}.

-- 


Re: What Requires Core Support (app packaging)

2004-09-17 Thread John Macdonald
On Fri, Sep 17, 2004 at 10:46:36AM -0400, Jonadab the Unsightly One wrote:
 Juerd [EMAIL PROTECTED] writes:
 
  Most worlds don't use file extensions, except for humans. 
 
 You exaggerate their lack of importance.  File extensions don't matter
 to most operating system *kernels*, but they are nevertheless
 important for more than just Windows:
 
  * They are of critical importance on Apache-based webservers.

  * They instruct command-line tab completion for some shells.  This
IMO is a biggie, and would be even bigger if more shells were
smarter.  (eshell has a leg up here.)

  * They matter somewhat to many *nix applications, such as Emacs and
Gimp.  When I say matter somewhat, I mean that the app
understands what the extension means, and so in the absence of the
extension you have to give the app additional information to
compensate.

make is an important example here

  * They matter to most GUI file managers in the *nix world.  I
personally don't use GUI file managers, but some people do.
 
  * They matter somewhat in the VMS world, though not as much as under
Windows I think.
  
  * They matter in the OS/2 world, if anyone is still using that.  Also
DOS, with the same caveat.

  * On Mac OS X the extension matters for files that don't have
filetype/creator codes attached to them yet (unless the file is
coming from a source that supplies content-type, such as from a web
server or as an email attachment, in which case the content-type
instructs the addition of filetype/creator codes).
 
 The only OS I know of where file extensions are *totally* not used is
 Archimedes.  It doesn't allow them at all, from what I understand.

-- 


Re: Synopsis 9 draft 1

2004-09-09 Thread John Macdonald
On Thu, Sep 09, 2004 at 03:09:47PM +0200, Michele Dondi wrote:
 On Thu, 2 Sep 2004, Larry Wall wrote:
 
 And yes, an C<int1> can store only -1 or 0.  I'm sure someone'll think of
  a use for it...
 
 Probably OT, but I've needed something like that badly today: working on
 a japh that turned out to require mostly golfing skills (and not that I
 have many, I must admit)... well, it would have been useful to have, say,
 a pack template for 2-bits unsigned integers...

As an array index -1 and 0 give you the 2 ends.  The perl5
code to alternately extract elements from the two ends of an
array can be something like:

my $end = 0;    # -1 to start with right end

while( @array ) {
my $next = splice( @array, $end, 1 );
# use the $next element
$end = -1 - $end;
}

Using int1 for $end, that last line can be changed in a variety
of ways, such as:

$end ^= 1;

(except that the p5 ^= operator is written differently in p6)

This is not a *good* use of int1 though. :-)

-- 


Re: Reverse .. operator

2004-09-07 Thread John Macdonald
Hmm, this would suggest that in P6 the comment that "unlike ++,
the -- operator is not magical" should no longer apply.

On Fri, Sep 03, 2004 at 08:09:23AM -0400, Joe Gottman wrote:
 
 
  -Original Message-
  From: Larry Wall [mailto:[EMAIL PROTECTED]
  Sent: Thursday, September 02, 2004 8:41 PM
  To: Perl6
  Subject: Re: Reverse .. operator
  
  On Thu, Sep 02, 2004 at 08:34:22PM -0400, Joe Gottman wrote:
  : Is there similar shorthand to set @foo = (5, 3, 3, 2, 1) ?  I know you
  can
  : go
  :
  : @foo = reverse (1 ..5);
  :
  : but this has the major disadvantage that it cannot be evaluated lazily;
  : reverse has to see the entire list before it can emit the first element
  of
  : the reversed list.
  
  I don't see any reason why it can't be evaluated lazily.  The .. produces
  a range object that gets shoved into the lazy list that gets bound to
  the slurp array of reverse().  If you pop that, there's no reason it
  couldn't go out and ask the end of the lazy list for its last element.
  Just have to make .. objects smart enough to deal off either end of
  the deck.
 
I get it.  One way to implement this would be to give the .. object a
 .reverse member iterator that lazily iterates from right to left, and have
 the reverse subroutine call the .reverse member on the list that was passed
 to it (if this member exists).  The advantage of this is that it can be
 extended for other types, or even to arrays returned from functions.  For
 instance,
  
 multi sub grep(Code $f, *@array does reverse)  returns (Array does
 reverse {grep $f, @array.reverse;}) #reverse input
 
 multi sub map(Code $f, *@array does reverse) returns (Array does reverse
 {map {reverse $f($_)} @array.reverse;}) #reverse input and result of each
 call to $f
 
 If it isn't possible to overload a multi sub on a .does property, we can
 achieve the same effect by creating a ReversableArray class that inherits
 from Array and overloading on that.
 
 Joe Gottman
 
 
 

-- 


Re: This fortnight's summary

2004-08-25 Thread John Macdonald
On Wed, Aug 25, 2004 at 08:19:06PM +0100, The Perl 6 Summarizer wrote:
   A small task for the interested
 Dan posted another of his small tasks for the interested (maybe we
 should start calling them STFTIs?). This time he's after source tests to
 test the embedding interface and some fixing of the auto-prefix scheme.

Hmm...  I suppose that this acronym would be pronounced stuff-it.

-- 


Re: Synopsis 2 draft 1 -- each and every

2004-08-19 Thread John Macdonald
On Thu, Aug 19, 2004 at 12:31:42PM -0700, Larry Wall wrote:
 So let's rewrite the table (assuming that all the hash methods are just
 variants of .values), where N and D are non-destructing and destructive:
 
 next D  next N  all D   all N   
 ==  ==  =   =
 $iter   $iter.read  ?1  $iter.read  ?2
 @array  @array.shift@array.for  @array.splice   @array
 $array  $array.shift$array.for  $array.splice   @$array
 %hash   ?3  %hash.values?4  %hash.values
 $hash   ?3  $hash.values?4  $hash.values
 
 Hmm.  Ignore ?1 and ?2, since it's not clear that iterators can be
 read non-destructively.

In scalar context a non-destructive read of an iterator might
be called $iter.peek and the next .read will get (and remove)
the same value that .peek returns.  Implementation would be
fairly simple - the control info for an iterator would have a
field containing the next value and a flag to specify whether
that value had been determined yet.

sub peek {
unless ($iter.flag) {
$iter.nextval = $iter.getnext();
$iter.flag = true;
}
return $iter.nextval;
}

sub read {
if ($iter.flag) {
$iter.flag = false;
return $iter.nextval;
}
return $iter.getnext();
}


-- 


Re: idiom for filling a counting hash

2004-05-18 Thread John Macdonald
On Tue, May 18, 2004 at 11:14:30PM +0200, Stéphane Payrard wrote:
 I thought overloading the += operator
 
%a += @a;

There's been lots of discussion of this, but:

 Probably that operator should be smart enough to be fed with
 a mixed list of array and hashes as well:
 
   %a += ( @a, %h);  #  would mean %a += ( @a, keys %h)

I would like this sort of addition of hashes to allow
for two meanings.  A counting hash can be used for testing
whether a key exists, but also for how many times it
has occurred.

So (in Perl 5):

add( %a, %b )

could work with:

++$a{$_} for keys %b

only if you are only interested in existence, but it should
work as:

while( my($k,$v) = each %b ) {
$a{$k} += $v;
}

so that you can merge one set of counts into another.
In Perl 6, this could be a function (perhaps wrapped in
an operator, I'm not especially concerned about that)
that runs through its list of keys and increments the
ones that are scalar, but for pairs it increments the key
by the value.
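
A Perl 5 sketch of that mixed increment (the helper name is invented,
with hash references standing in for pairs):

    sub tally {
        my ( $counts, @items ) = @_;
        for my $item ( @items ) {
            if ( ref $item eq 'HASH' ) {
                # a hash merges its counts in
                $counts->{$_} += $item->{$_} for keys %$item;
            }
            else {
                # a plain scalar counts once
                $counts->{$item}++;
            }
        }
    }

    tally( \%a, @a, \%h );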

-- 


Re: Adding deref op [Was: backticks]

2004-04-21 Thread John Macdonald
On Wed, Apr 21, 2004 at 09:19:12PM +0200, Matthijs van Duin wrote:
 On Wed, Apr 21, 2004 at 01:02:15PM -0600, Luke Palmer wrote:
macro infix:<\> ($cont, $key)
is parsed(/$?key := (-?letter\w* | \d+)/)
{
if $key ~~ /^\d+$/ {
($cont).[$key];
}
else {
($cont).«$key»;
}
}
 
 That does all the magic at compile time.
 
 True, but what about  $x\$y ? :-)

What about "$x\n"?  The backslash already has
meaning in strings, yet interpolating hashed
values is a common need.  There's a similar
problem inside a regex, where backslash would
be ambiguous.  The ambiguity can be resolved,
perhaps, but the confusion would make \ a poor
choice.

-- 


Re: backticks

2004-04-16 Thread John Macdonald
On Thu, Apr 15, 2004 at 12:27:12PM -0700, Scott Walters wrote:
 * Rather than eliciting public comment on %hash`foo (and indeed %hash<foo>)
 the proposal is being rejected out of hand (incidentally, the mantra of the Java
 community Process seems to be you don't need X, you've got Y, and it took 
 .Net before they woke up and realized that maybe they should consider their
 community in the community process - after ignoring a universal call for
 generics for over 5 years it's little wonder .Net ate their cake)

Is that:

X = `command args`
Y = qx/command args/

or:

X = %hash`foo
Y = %hash<foo>

I'm not sure which camp you consider to be the pot
and which is the kettle.  Anyhow, both are grey, not
black.  "X is useful" and "Y is an alternative to X"
lead to questions like "How useful?" and "Is there
value in having both?".  However, the first is an
argument to remove a feature that is already present
and the second is arguing to add a new feature, so
a case can be made for requiring different standards
of acceptance for the argument in the two cases.

(For the record, I find `command` extremely useful,
especially in short scripts, which is where huffman
encoding is most valuable.  I've never used qx//
at all.  Nor, in shells, have I ever used $(...) in
place of `...`.  My fingers got trained long ago and
I don't see sufficient benefit to go to the bother
of retraining them.  I can backwhack embedded `'s,
and while pulling nested command invocations out
into a separate variable assignment is necessary with
`` syntax, it is much easier to read even when $( )
syntax makes embedding possible.)

-- 


Re: backticks

2004-04-16 Thread John Macdonald
On Fri, Apr 16, 2004 at 09:16:15PM +0200, Juerd wrote:
 However, I could be guessing badly. It could be that someone who says
 Perl 6 should not have a third syntax because there are already two
 really has thought about it. We have many ways of saying foo() if not
 $bar in Perl 5 and I use most of them. I like that in Perl, and hope
 that in Perl 6 there will still be more than one way to do it.

Three variations of syntax that are used in the same
syntactical context for slightly varying meanings
suggests that at least one of them is wrong.  Of the
many variations of "foo() if not $bar", there are
block level (if-statement), statement level (statement
modifiers) and expression level ( || and or;
perhaps you can argue that ? : is also a variant to
the same extent that an if statement is).  However,
the set of characters following %foo to denote the
hash index are all happening in the same sort of
expression level context, and three variations seems
like too many.  That said, in perl5 I use the bareword
hash subscript *very* often, and having to quote them
would be a major regression to perl3.  I far less
often use a function call as a hash subscript (at
least an order of magnitude less often, maybe two).

-- 


Re: Semantics of vector operations

2004-02-02 Thread John Macdonald
On Mon, Feb 02, 2004 at 09:59:50AM +, Simon Cozens wrote:
 [EMAIL PROTECTED] (Andy Wardley) writes:
  Sure, make Perl Unicode compliant, right down to variable and operator 
  names.  But don't make people spend an afternoon messing around with mutt, 
  vim, emacs and all the other tools they use, just so that they can read, 
  write, email and print Perl programs correctly.
 
 To be honest, I don't think that'll be a problem, but only because by the
 time Perl 6 is widely deployed, people will have got themselves sorted out
 as far as Unicode's concerned. I suspect similar things were said when C
 decided to use 7 bit characters.

Don't be so sure.  I've been seeing the « and »
characters properly sometimes, as ??? sometimes,
and I think there were some other variants (maybe for
other extended characters) - depending upon whether
I'm reading the messages locally at home or remotely
through a terminal emulator.  Those emulators are
not about to be replaced for any other reason in the
near future.

I'll be able to work it out if I have to, but it'll
be an annoyance, and probably one that shows up
many times with different bits of software, and
often those bits will not be under my control and
will have to be worked around rather than fixed.
(In the canine-ical sense, it is the current software
that is fixed, i.e.  it has limited functionality.)

 That doesn't mean I think Unicode operators are a good idea, of course.

They will cause problems for sure.


Re: Semantics of vector operations

2004-01-29 Thread John Macdonald
On Thu, Jan 29, 2004 at 11:52:04AM +0100, Robin Berjon wrote:
 I have nothing against using the Unicode names for other entities for 
 instance in POD. The reason I have some reserve on using those for 
 entitised operators is that ELEFT LOOKING TRIPLE WIGGLY LONG WUNDERBAR 
 RIGHTWARDS, COMBINING isn't very readable. Or rather, it's readable 
 like a totally different plot with its own well-carved out characters, 
 intrigues, and subplots in the middle of a book.

The book of Perl with an embedded copy of the book of COBOL.


Re: Vocabulary

2003-12-16 Thread John Macdonald
On Wed, Dec 17, 2003 at 12:15:04AM +, Piers Cawley wrote:
 There's still a hell of a lot of stuff you can do with 'cached'
 optimization that can be thrown away if anything changes. What the
 'final' type declarations would do is allow the compiler to throw away
 the unoptimized paths and the checks for dynamic changes that mean the
 optimization has to be thrown out and started again.

As Luke pointed out in an earlier message,
you can encounter grave difficulty (i.e. halting
problem unsolvable sort of difficulty) in trying to
unoptimize a piece of code that is in the middle of
being executed.  Just about any subroutine call might
(but almost always won't :-) happen to execute code
that makes the current subroutine have to revert
to unoptimized (or differently optimized) form.
When that subroutine call returns after such a rare
occurrence, it can't return to the unoptimized code
(because there could be missing context because the
calling routine got this far using the optimized code
and may have skipped stuff that is (now) necessary)
and it can't return to the old optimized code (because its
optimization might now be wrong).


Re: Threads and Progress Monitors

2003-05-30 Thread John Macdonald
On Thu, May 29, 2003 at 10:47:35AM -0700, Dave Whipp wrote:
 OK, we've beaten the producer/consumer thread/coro model to death. Here's a
 different use of threads: how simple can we make this in P6:
 
   sub slow_func
   {
   my $percent_done = 0;
   my $tid = thread { slow_func_imp( \$percent_done ) };
   thread { status_monitor($percent_done) and sleep 60 until
 $tid.done };
   return wait $tid;
   }

At first glance, this doesn't need a thread - a
coroutine is sufficient.  Resume the status update
coroutine whenever there has been some progress.
It doesn't wait and poll a status variable; it just
lets the slow function work at its own speed without
interruption until there is a reason to change the
display.

In fact, it probably doesn't need to be a coroutine
either.  A subroutine - display_status( $percent ) -
shouldn't require any code state to maintain, just a
bit of data, so all it needs is a closure or an object.

At second glance, there is a reason for a higher
powered solution.  If updating the display to a new
status takes a significant amount of time, especially
I/O time, it would both block the slow function
unnecessarily and would update for every percent
point change.  Using a separate process or thread
allows the function to proceed without blocking, and
allows the next update to jump ahead to the current
actual level, skipping all of the levels that occurred
while the previous display was happening.  Instead of
sleep, though, I'd use a pipeline and read it with
a non-blocking read until there is no data.  Then,
if the status has changed since the last update, do
a display update and repeat the non-blocking read.
If the status has not changed, do a blocking read to
wait for the next status change.


Re: Cothreads

2003-05-27 Thread John Macdonald
Wow, what a flood.

The idea of keep the various degrees of code
parallelism similar in form yet distinct in detail
sounds good to me.

I would like to suggest a radically different
mechanism, that there be operators: fork, tfork, and
cfork to split off a process, thread, or coroutine
respectively.  (The first might be called pfork
if we're willing to give up history for forward
consistency.)

Each would return a reference to the original code
flow unit, the child code flow unit, and an indication
of which one you are or if there was an error.
(This is much like the current fork, except that I'd
like the child to be provided with the ID of the
parent, as well as the parent being provided with
the ID of the child.)

Fork, of course, would result in totally separate
process encapsulation (or something transparently
equivalent as is done in Perl 5 on Windows, I
believe, where they use threads to simulate fork).
All variables are independent copies after the fork,
system calls from one process should not block the
other, etc.  Switching execution from one process to
the other is at the whim of the operating system and
is not necessarily going to happen at perl operation
boundaries.  Data transfer must be done external
(either an already open pipeline or through the file
system or such).

Tfork would result in separate threads.  If the OS
provides them, they would likely be used, otherwise
the interpreter might have to simulate them.  All my
variables are independent copies after the tfork, but
our variables are shared.  Operations that access
our variables must be managed by the interpreter
so that no language level actions are seen as broken
up into separate portions by an ill-timed process
switch to another thread.  Especially if OS threads
are used, the interpreter may have to take special
action to ensure that switches do not expose an our
update that is partially complete.  While pipelines
might work (depends upon whether the OS is supporting
threads) and file system communication would work,
normally communication would be done through our
variables.

Cfork would result in separate coroutines.  They would
have separate call stacks, and a separate copy of the
current stack frame, but otherwise all my and our
variables would be fully shared.  Process switching
only occurs under control of the process themselves,
so no operation level protection must be done.

I'll assume the cfork is defined to return to the parent.

The auto forking coroutine that Damian described could
be a syntactic sugar based on this mechanism (while not
being the only way of getting coroutines) so that:

cosub func {
# func body ...
yield
# func body ...
}

is treated as a macro that gets turned into:

{
    my $parent = 0;
    my $child  = 0;

    sub func {
        unless( $child ) {
            my ( $p, $c, $which ) = cfork;
            if( $which eq IS_PARENT ) {
                $child = $c;    # remember the child so later calls resume it
                $child.next;
                return $child;
            } elsif( $which eq IS_CHILD ) {
                $parent = $p;
                @_ -> sub {
                    yield;
                    # func body
                    yield;
                    # func body
                };
                yield undef while 1;
            }
        }
        $child.next( @_ );
    }
}

which takes care of the magic "first time starts the
coroutine, subsequent times resume it from where it
left off" semantic for this special case, without
requiring that anything that is a coroutine has to
have that magic behaviour.

The next method would be code of the form:

next( $dest, @stuff ) {
    resume( $dest, @stuff );
}

But resume needs a bit of magic.  I'll write it as
a special variable $CUR_COROUTINE which always
contains that object for the currently executing
coroutine (one has to be generated for the initial
mainline, either when perl starts or the first time
that a cfork occurs).  Every coroutine object has an
attribute caller.  When resume is called, it updates
the caller attribute of the coroutine that is being
resumed with a reference to $CUR_COROUTINE.

Within a coroutine, then, you can always determine
the ID of the coroutine that last resumed you with
$CUR_COROUTINE.caller.

This means that the yield operator could be macro
that expands to:

# yield( stuff )
resume( $CUR_COROUTINE.caller, stuff )

Providing resume as the underlying mechanism for
next/yield allows for non-subordinate coroutine flow,
like a round robin if you use resume (which loses the
advantage/restriction of yield in which the coroutine
that is the target is implicitly that coroutine that
called you last); while still providing the simpler
subordinate viewpoint for the more common simple cases
(like generators).


Re: RFC 328 (v2) Single quotes don't interpolate \' and \\

2000-09-29 Thread John Macdonald

Perl6 RFC Librarian wrote :
|| =head1 TITLE
|| 
|| Single quotes don't interpolate \' and \\
|| 
|| =head1 VERSION
|| 
||   Maintainer: Nicholas Clark [EMAIL PROTECTED]
||   Date: 28 Sep 2000
||   Last Updated: 29 Sep 2000
||   Mailing List: [EMAIL PROTECTED]
||   Number: 328
||   Version: 2
||   Status: Developing

   [ ... ]

|| =head1 DISCUSSION
|| 
|| Limited discussion so far because I wrongly issued the RFC to
|| [EMAIL PROTECTED] The only responses were from two people both
|| of whom valued their ability to use single quotes to make strings including
|| making strings containing single quotes. Here docs already provide a means to
|| get "quote nothing, everything is literal". One argued that
|| 
||  Single-quoted strings need to be able to contain single quotes,
||  which means some escape mechanism is required.
|| 
|| which I did not agree with.

Being able to create strings that contain arbitrary characters is
essential for program generators.  With the current quoting method,
you can take a string, preface any ' or \ with a \, wrap the result
with '' and you're ready to insert it into the code you're creating.
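
E.g. (Perl 5):

    # quote an arbitrary string for insertion into generated code
    sub single_quote {
        my ($s) = @_;
        $s =~ s/([\\'])/\\$1/g;   # backwhack every ' and \
        return "'$s'";
    }

    print single_quote(q{it's a 50\50 split});   # 'it\'s a 50\\50 split'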

With the proposed definition, you'd have to use a q## form, after
scanning the string in hopes of finding a character, like #, that
wasn't contained in the string.  With a here document, you'd instead
have to ensure that you create an end of here doc token that doesn't
happen to be contained as a line in the string.  (Here doc strings
are also awkward for some purposes, having the actual content of the
string on a separate line interferes with the normal flow for reading
the code, unless the string is multi-line data.)

-- 
Sleep should not be used as a substitute    | John Macdonald
for high levels of caffeine  -- FPhlyer |   [EMAIL PROTECTED]