Re: Regex syntax

2008-03-18 Thread Jon Lang
Moritz Lenz wrote:
  I noticed that in larger grammars (like STD.pm and many PGE grammars in
  the parrot repo) string literals are always quoted for clarity

  regex foo {
     'literal' <subregex>
  }

  Since I think this is a good idea, we could just as well enforce that, and
  drop the <...> around method/regex calls:

  regex foo {
     'literal' subregex
  }

  This way we'll approximate normal Perl 6 syntax, and maybe even improve
  huffman coding.

  I guess such a syntax wouldn't be acceptable in short regexes, which are
  typically written as m/.../, rx/.../ or s[...][..], so we could preserve
  the old syntax for these regexes.

Nitpick: s[...][..] isn't valid syntax anymore; the correct P6
equivalent is s[...] = qq[..], for reasons that were hashed out on
this list some time ago.  However, s/.../../ is still valid.

I'm not in favor of the so-called short forms having a different
syntax from the long forms, and I personally like the current syntax
for both.  That said, "all's fair if you predeclare": I could see
someone creating a module that allows you to tweak the regex syntax in
a manner similar to what you're proposing, if there's enough of a
demand for it.

-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Larry Wall wrote:
  So here's another question in the same vein.  How would mathematicians
  read these (assuming Perl has a factorial postfix operator):

 1 + a(x)**2!
 1 + a(x)²!

The 1 + ... portion is not in dispute: in both cases, everything to
the right of the addition sign gets evaluated before the addition
does.  As such, I'll now concentrate on the right term.

As TSa pointed out, mathematicians wouldn't write the former case at
all; they'd use a superscript, and you'd be able to distinguish
between "a(x) to the two-factorial power" and "(a(x) squared)
factorial" based on whether or not the factorial is superscripted.  So
the latter would be "(a(x) squared) factorial".

OTOH, you didn't ask how mathematicians would write this; you asked
how they'd read it.  As an amateur mathematician (my formal education
includes linear algebra and basic differential equations), I read the
former as "a(x) to the two-factorial power": all unary operators, be
they prefix or postfix, should be evaluated before any binary operator
is.
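For comparison, mainstream languages split on this rule of thumb; Python, for one, deliberately makes unary minus bind looser than exponentiation (a Python illustration only, not Perl 6 semantics):

```python
# Unary minus binds looser than **, so -3**2 is read as -(3**2):
assert -3**2 == -9
# ...but tighter than infix * and +:
assert -3 * 2 == -6
# Explicit grouping restores the other reading:
assert (-3)**2 == 9
```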

-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Larry Wall wrote:
  Now, I think I know how to make the parser use precedence on either
  a prefix or a postfix to get the desired effect (but perhaps not going
  both directions simultaneously).  But that leads me to a slightly
  different parsing question, which comes from the asymmetry of postfix
  operators.  If we make postfix:<!> do the precedence trick above with
  respect to infix:<**> in order to emulate the superscript notation,
  then the next question is, are these equivalent:

 1 + a(x)**2!
 1 + a(x)**2.!

To me, both of these should raise a(x) to the 2-factorial power; so
yes, they should be equivalent.

  likewise, should these be parsed the same?

 $a**2i
 $a**2.i

In terms of form, or function?  Aesthetically, my gut reaction is to
see these parse the same way, as "raise $a to the power of 2i"; in
practice, though, "2i" on a chalkboard means "2 times i", where i is
unitary.  Hmm...

Again, though, this isn't a particularly fair example, as mathematical
notation generally uses superscripting rather than ** to denote
exponents, allowing the presence or absence of superscripting on the
i to determine when it gets evaluated.  That is, the mathematical
notation for exponents includes an implicit grouping mechanism.
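Python's imaginary literals make the same point: the `2j` below is a single token, so the literal itself supplies the implicit grouping (a Python analogy, not anything specified for Perl 6):

```python
import cmath
import math

a = math.e
# 2j is one numeric token, so a**2j parses as a ** (2j),
# not as (a**2) * j: the literal itself groups the "2 times i".
assert abs(a**2j - cmath.exp(2j)) < 1e-12
```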

  and if so, how do we rationalize a class of postfix operators that
  *look* like ordinary method calls but don't parse the same.  In the
  limit, suppose someone defines a postfix:<say> looser than comma:

 (1,2,3)say
 1,2,3say
 1,2,3.say

  Would those all do the same thing?

Gut reaction: the first applies "say" to the list (1, 2, 3); the
second and third apply "say" to 3.
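That gut reaction matches how method-call postfixes already bind in, say, Python, where an attribute call binds tighter than the comma (an illustrative analogy only):

```python
# The trailing method call applies to 3 alone, not to the whole tuple;
# the space before the dot keeps "3." from being read as a float.
t = 1, 2, 3 .bit_length()
assert t == (1, 2, 2)
```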

  Or should we maybe split postfix
  dot notation into two different characters depending on whether
  we mean normal method call or a postfix operator that needs to be
  syntactically distinguished because we can't write $a.i as $ai?

  I suppose, if we allow an unspace without any space as a degenerate
  case, we could write $a\i instead, and it would be equivalent to ($a)i.
  And it resolves the hypothetical postfix:<say> above:

 1,2,3.say   # always a method, means 1, 2, (3.say)
 1,2,3\ say  # the normal unspace, means (1, 2, 3)say
 1,2,3\say   # degenerate unspace, means (1, 2, 3)say

  This may also simplify the parsing rules inside double quoted
  strings if we don't have to figure out whether to include postfix
  operators like .++ in the interpolation.

I'm in favor of the minimalist unspace concept, independent of the
current concerns.  In effect, it lets you insert the equivalent of
whitespace anywhere you want (except in the middle of tokens), even if
whitespace would normally be forbidden; and it does so in a way that
takes up as little space as possible.

  It does risk a visual
  clash if someone defines postfix:<t>:

     $x\t    # means ($x)t
     "$x\t"  # means $x ~ "\t"

  I deem that to be an unlikely failure mode, however.

:nod: Alphanumeric postfixes ought to be rare compared to symbolic postfixes.

-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
TSa wrote:
  Jon Lang wrote:
   all unary operators, be
   they prefix or postfix, should be evaluated before any binary operator
   is.

  Note that I see ** more as a parametric postscript than a real binary.
  That is $x**$y sort of means $x(**$y).

That's where we differ, then.  I'm having trouble seeing the benefit
of that perspective, and I can clearly see a drawback to it - namely,
you have to think of infix:<**> as being a different kind of thing
than infix:«+ - * /», despite having equivalent forms.

  Note also that for certain
  operations only integer values for $y make sense. E.g. there's no
  square root of a function.

...as opposed to a square root of a function's range value.  That is,
you're talking in terms of linear algebra here, where D²(x) means
D(D(x)), as opposed to basic algebra, where f²(x) means (f(x))².
This is similar to your earlier "the other Linear" comment.

This is a case where the meaning of an operator will depend on the
system that you're dealing with.  Math is full of these, especially
when it comes to superscripts and subscripts.  I'd recommend sticking
to the basic algebraic terminology for the most part (e.g., f²(x) :=
(f(x))²), and apply "all's fair if you predeclare" if you intend to
use a more esoteric paradigm.  So if you want:

  D²(x) + 2D(x) + x

to mean:

  D(D(x)) + 2 * D(x) + x

you should say:

  use linearAlgebra;
  D²(x) + 2D(x) + x

-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
On Wed, Mar 26, 2008 at 12:03 PM, Larry Wall [EMAIL PROTECTED] wrote:
 On Wed, Mar 26, 2008 at 11:00:09AM -0700, Jon Lang wrote:

 : all unary operators, be they prefix or postfix, should be evaluated
  : before any binary operator is.

  And leaving the pool of voting mathematicians out of it for the moment,
  how would you parse these:

 sleep $then - $now
 not $a eq $b
 say $a ~ $b
 abs $x**3

Those don't strike me as being unary operators; they strike me as
being function calls that have left out the parentheses.  Probably
because they're alphanumeric rather than symbolic in form.

  These all work only because unaries can be looser than binaries.
  And Perl 5 programmers will all expect them to work, in addition to
  -$a**2 returning a negative value.

True enough.  Perhaps I should have said "as a rule of thumb"...

  And we have to deal with unary
  precedence anyway, or !%seen{$key}++ doesn't work right...

  Larry




-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Mark J. Reed wrote:
 Jon Lang wrote:
Those don't strike me as being unary operators; they strike me as
being function calls that have left out the parentheses.

  At least through Perl5, 'tain't no difference between those two in Perl land.

True enough - though the question at hand includes whether or not
there should be.

Perhaps a distinction should be made between prefix and postfix.
After all, postfix already forbids whitespace.  True, the original
reason for doing so was to distinguish between infix and postfix; but
it tends to carry the implication that postfix operators are kind of
like method calls, while prefix operators are kind of like function
calls.

Personally, I'd like to keep that parallel as much as possible.
Unless it can be shown to be unreasonable to do so. :)

  As for binary !, you could, say, posit that the second operand is the
  degree of multifactorialhood, defaulting to 1; e.g. x!2 for what
  mathematicians would write as x‼, which definitely does not mean
  (x!)!.

Wouldn't that be "x ! 2"?  Mandatory whitespace (last I checked),
since infix:<!> had better not replace postfix:<!>.
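For the curious, the multifactorial being discussed is easy to sketch; `multifact` below is a hypothetical Python helper, not a proposed Perl 6 builtin:

```python
def multifact(n, k=1):
    """n * (n-k) * (n-2k) * ... stopping once the factor reaches zero or below."""
    result = 1
    while n > 0:
        result *= n
        n -= k
    return result

assert multifact(5) == 120     # plain 5!
assert multifact(8, 2) == 384  # the double factorial 8!! = 8*6*4*2
```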

  Oh, and I've always mentally filed -x as shorthand for (-1*x); of
  course, that's an infinitely recursive definition unless -1 is its own
  token rather than '-' applied to '1'.

It's also only guaranteed to work if x is numeric; even in math,
that's not certain to be the case. :)

-- 
Jonathan Dataweaver Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Thom Boyer wrote:
  But the main point I was trying to make is just that I didn't see the
  necessity of positing

  1,2,3\say

  when (if I understand correctly) you could write that as simply as

  1,2,3 say

Nope.  This is the same situation as the aforementioned '++' example,
in that you'd get into trouble if anyone were to define an infix:<say>
operator.

  That seems better to me than saying that there's no tab character in

  say "blah $x\t blah"

Whoever said that?

  Backslashes in double-quotish contexts are already complicated enough!

...and they'd remain precisely as complicated as they are now, because
backslashes in interpolating quotes and in patterns would continue to
behave precisely as they do now.  Backslash-as-unspace would remain
unique to code context, as it is now, changing only in that it gets
followed by \s* instead of \s+.  In particular, if you were to define
a postfix:<t> operator, you'd embed it in a string as:

say "blah {$x\t} blah"
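Python's f-strings take the same explicit-braces approach to interpolation, which makes the analogy easy to demonstrate (an illustration only, not Perl 6 semantics):

```python
x = "blah"
# The braces delimit the interpolated expression, so anything
# postfix-like stays inside them rather than leaking into the literal.
assert f"say {x.upper()}!" == "say BLAH!"
```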

-- 
Jonathan Dataweaver Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
John M. Dlugosz wrote:
 Here is a first look at the ideas I've worked up concerning the Perl 6 type
 system.  It's an overview of the issues and usage of higher-order types in
 comparison with traditional subtyping subclasses.

  http://www.dlugosz.com/Perl6/

Very interesting, if puzzling, read.

I'm having some difficulty understanding the business with £.  I
_think_ that you're saying that £ sort of acts as a prefix operator
that changes the meaning of the type with which it is associated; and
the only time that a change in meaning occurs is if the type in
question makes use of ::?CLASS or a generic parameter.

You say that in Perl 6, a role normally treats ::?CLASS as referring
to the role.  Perhaps things have changed while I wasn't looking; but
I was under the impression that Perl 6 roles try to be as transparent
as possible when it comes to the class hierarchy.  As such, I'd expect
::?CLASS to refer to whatever class the role is being composed into,
rather than the role that's being composed.  If you have need to
reference the role itself, I'd expect something like ::?ROLE to be
used instead.  Of course, without something like your '£', the only
way to decide whether you actually want the class type or the role
type would be within the definition of the role itself; and without a
sort of reverse '£', there would be no way to take a role that
refers to ::?ROLE and make it refer to ::?CLASS instead.  That said,
I'm not convinced that this is a problem.

As for classes and roles that have generic parameters: here, you've
completely lost me.  How does your proposed '£' affect such classes
and roles?

-- 
Jonathan Dataweaver Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
TSa wrote:
  The use of £ in

   sub foo (£ pointlike ::PointType $p1, PointType $p2 --> PointType)

  is that of *structural* subtyping. Here FoxPoint is found to be
  pointlike. In that I would propose again to take the 'like' operator
  from JavaScript 2. Doing that the role should be better named Point
  and foo reads:

   sub foo (like Point ::PointType $p1, PointType $p2 --> PointType)

  This is very useful to interface between typed and untyped code.
  With the 'like' the role Point has to be *nominally* available
  in the argument. There's no problem with 'like'-types being more
  expensive than a nominal check.

Ah; that clears things up considerably.  If I understand you
correctly, John is using '£' to mean "use Duck Typing here".  _That_,
I can definitely see uses for.  As well, spelling it as 'like' instead
of '£' is _much_ more readable.  With this in mind, the above
signature reads as "$p1 must be like a Point, but it needn't actually
be a Point; both $p2 and the return value must be the same type of
thing that $p1 is."
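The 'like Point' reading corresponds to what structural typing looks like elsewhere; here is a rough Python sketch using `typing.Protocol`, where the names `Pointlike` and `FoxPoint` merely mirror the thread's example:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Pointlike(Protocol):
    def coords(self) -> tuple: ...

class FoxPoint:  # never declares that it does Pointlike
    def coords(self):
        return (1.0, 2.0)

# Structural check: FoxPoint matches because its shape matches,
# not because it nominally composed the role.
assert isinstance(FoxPoint(), Pointlike)
assert not isinstance(object(), Pointlike)
```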

What, if anything, is the significance of the fact that pointlike (in
John's example; 'Point' in TSa's counterexample) is generic?

-- 
Jonathan Dataweaver Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
chromatic wrote:
 Jon Lang wrote:
   Ah; that clears things up considerably.  If I understand you
   correctly, John is using '£' to mean use Duck Typing here.  _That_,
   I can definitely see uses for.  As well, spelling it as 'like' instead
   of '£' is _much_ more readable.  With this in mind, the above
   signature reads as $p1 must be like a Point, but it needn't actually
   be a Point.  Both $p2 and the return value must be the same type of
   thing that $p1 is.

  That was always my goal for roles in the first place.  I'll be a little sad
 if Perl 6 requires an explicit notation to behave correctly here -- that is,
 if the default check is for subtyping, not polymorphic equivalence.

By my reading, the default behavior is currently nominal typing, not
duck-typing.  That said, my concern isn't so much about which one is
the default as it is about ensuring that the programmer isn't stuck
with the default.  Once it's decided that Perl 6 should support both
duck-typing and nominal typing, _then_ we can argue over which
approach should be the default, and how to represent the other
approach.

-- 
Jonathan Dataweaver Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
John M. Dlugosz wrote:
 TSa wrote:
  Jon Lang wrote:
   I'm having some difficulty understanding the business with £.  I
   _think_ that you're saying that £ sort of acts as a prefix operator
   that changes the meaning of the type with which it is associated; and
   the only time that a change in meaning occurs is if the type in
   question makes use of ::?CLASS or a generic parameter.
 
  The difference seems to be the two definitions of bendit
 
   sub bendit (IBend ::T $p --> T)
   {
  IBend $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }
 
   sub bendit (£ IBend ::T $p --> T)
   {
  T $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }
 
  The interesting thing that is actually left out is the return type
  of get_something. I think in both cases it does the IBend role but
  in the second definition it is checked against the actual type T
  which is Thingie if called with a Thingie for $p. So the advantage
  of this code is that the compiler can statically complain about the
  return type of get_something. But I fail to see why we need £ in
  the signature to get that.

  In the top example, merge has to be declared with invariant parameter
 types, so the actual type passed isa IBend.  That means merge's parameter
 is IBend.  If get_something returned the proper type, it would be lost.

  In the lower example, the merge parameter is allowed to be covariant.  The
 actual type is not a subtype of IBend.  The parameter to merge is checked to
 make sure it is also T.  The £ means use the higher-order £ike-this rather
 than isa substitutability.

  The issue is how to give covariant parameter types =and= minimal type
 bounds for T at the same time.

Perhaps it would be clearer if you could illustrate the difference between

   sub bendit (£ IBend ::T $p --> T)
   {
  T $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }

and

   sub bendit (IBend ::T $p --> T)
   {
  T $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }

Or perhaps it would be clearer if I actually understood what
"covariant" means.
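As a rough gloss on "covariant", here is a Python sketch of the type-capture `::T` idea; `bendit`, `IBend`, and `Thingie` mirror the thread's names, and the bodies are my own guesswork rather than anything from the whitepaper:

```python
from typing import TypeVar

class IBend:
    def merge(self, other: "IBend") -> "IBend":
        return self.__class__()

class Thingie(IBend):
    pass

T = TypeVar("T", bound=IBend)

def bendit(p: T) -> T:
    q = p.__class__()    # plays the part of "T $q = get_something()"
    return p.merge(q)

# "Covariant" result: the return type varies along with the argument
# type, rather than being pinned to the declared bound IBend.
assert type(bendit(Thingie())) is Thingie
assert type(bendit(IBend())) is IBend
```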

  The use of £ in
 
   sub foo (£ pointlike ::PointType $p1, PointType $p2 --> PointType)
 
  is that of *structural* subtyping. Here FoxPoint is found to be
  pointlike. In that I would propose again to take the 'like' operator
  from JavaScript 2. Doing that the role should be better named Point
  and foo reads:
 
   sub foo (like Point ::PointType $p1, PointType $p2 --> PointType)
 
  This is very useful to interface between typed and untyped code.
  With the 'like' the role Point has to be *nominally* available
  in the argument. There's no problem with 'like'-types being more
  expensive than a nominal check.

  Yes, 'like Point' would work for matching as well as 'pointlike'.  When the
 covariant parameter type destroys the isa relationship between Point and
 Point3D, £ Point will still indicate conformance to the like rules.

  I like 'like' as the ASCII synonym to £, but didn't want to get into that
 in the whitepaper.  I wanted to concentrate on the need for a higher-order
 type check, not worry about how to modify the grammar.

OK; how does "higher-order type checks vs. isa relationships" differ
from "duck typing vs. nominal typing"?

-- 
Jonathan Dataweaver Lang


Re: treatment of isa and inheritance

2008-04-30 Thread Jon Lang
Brandon S. Allbery KF8NH wrote:
 TSa wrote:
  I totally agree! Using 'isa' pulls in the type checker. Do we have the
  same option for 'does' e.g. 'doesa'? Or is type checking always implied
  in role composition? Note that the class can override a role's methods
  at will.

  It occurs to me that this shouldn't be new keywords, but adverbs, i.e. ``is
 :strict Dog''.

Agreed.  I'm definitely in the category of people who find the
difference between 'is' and 'isa' to be, as Larry put it,
"eye-glazing".  I can follow it, but that's only because I've been
getting a crash course in type theory.

Brandon's alternative has the potential to be less confusing given the
right choice of adverb, and has the added bonus that the same adverb
could apply equally well to both 'is' and 'does'.

On a side note, I'd like to make a request of the Perl 6 community
with regard to coding style: could we please have adverbial names that
are, well, adverbs?  "is :strict Dog" brings to my mind the English
"Fido is a strict dog", rather than "Fido is strictly a dog".  Not
only is "is :strictly Dog" more legible, but it leaves room for the
possible future inclusion of adjective-based syntax such as "big Dog"
(which might mean the same thing as "Dog but is big" or "Dog where
.size > Average").  To misquote Einstein, things should be as simple
as is reasonable, but not simpler.

-- 
Jonathan Dataweaver Lang


Re: OK, ::?CLASS not virtual

2008-04-30 Thread Jon Lang
John M. Dlugosz wrote:
  And you can use CLASS in a role also, confident that it will be looked up
 according to the normal rules when the class is composed using that role,
 just like any other symbol that is not found when the role is defined.
 Using ::?CLASS in a role is an error (unless you mean the class surrounding
 this role's definition, in which case it is a warning).

Can a role inherit from a class?  If so, to which class does CLASS
refer: the inherited one, or the one into which the role is composed?

-- 
Jonathan Dataweaver Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-30 Thread Jon Lang
On Wed, Apr 30, 2008 at 9:58 PM, Brandon S. Allbery KF8NH
[EMAIL PROTECTED] wrote:

  On May 1, 2008, at 0:53 , chromatic wrote:


  correctness sense.  Sadly, both trees and dogs bark.)
 

  Hm, no.  One's a noun, the other's a verb.  Given the linguistic
 orientation of Perl6, it seems a bit strange that the syntax for both is the
 same:  while accessors and mutators are *implemented* as verbs, they should
 *look* like nouns.

In defense of chromatic's point, both people and syrup run.

-- 
Jonathan Dataweaver Lang


Re: treatment of isa and inheritance

2008-05-02 Thread Jon Lang
John M. Dlugosz wrote:
 Jon Lang dataweaver-at-gmail.com |Perl 6| wrote:

  IIRC, the supertyping proposal involved being able to anti-derive
  roles from existing roles or classes, working from subtypes to
  supertypes (or from derived roles to base roles) instead of the other
  way around.  The proposal got hung up on terminology issues,
  specifically a discussion involving intensional sets vs. extensional
  sets.  I find this unfortunate, as I see a lot of potential in the
  idea if only it could be properly (read: unambiguously) presented.
 
  From an intensional set perspective, a supertype would be a role
  that includes the structure common to all of the classes and/or roles
  for which it's supposed to act as a base role.  From an extensional
  set perspective, the range of values that it covers should span the
  range of values that any of its pre-established subtypes cover.  A
  proposal was put forward to use set operations to create anonymous
  supertypes, and then to provide them with names via aliasing; where it
  got hung up was whether it should be based on a union of extensional
  sets (i.e., combining the potential set of values) or on an
  intersection of intensional sets (i.e., identifying the common
  attributes and methods).

  I agree that is unfortunate.
  Perhaps, although you didn't show me that specific proposal (and reopen the
 arguments), you explained the ideas behind them enough that I see some of
 that description in the algorithm I used for the £ operator.

I just reviewed the threads in question; they're from December 2006.
Search the list archives for "supertyping" to find them.

The bottom line was that there were two competing visions of what
supertyping was supposed to do.  My own view was that supertyping
should be used as a way of extracting subsets of behavior out of a
preexisting role as new roles in such a way that the original role
would count as a subtype of the extracted roles, nominally as well as
structurally.  The other view involved using these extracted roles to
back-edit new behavior into existing roles.  Larry's final word on the
subject was to suggest a versioning mechanism whereby, instead of
mutating a role to add new behavior, one would be able to create newer
versions of the role that included the new behavior while preserving
the older version for cases where the new behavior was inappropriate.

-- 
Jonathan Dataweaver Lang


Re: What does 'eqv' do exactly?

2008-05-03 Thread Jon Lang
John M. Dlugosz wrote:
 I've searched the archives, but did not see a good explanation of what
 'eqv' does, and what is meant by "snapshotting" in the description of the
 synopses.

Try this: http://markmail.org/message/vub5hceisf6cuemk

  Can anyone explain it (with examples?) or point to an existing treatment, 
 please?

Try this: http://markmail.org/message/mincjarbbfqhblnj

-- 
Jonathan Dataweaver Lang


Re: What does 'eqv' do exactly?

2008-05-03 Thread Jon Lang
I suspect that at the core of John's question is the fact that nobody
has ever actually said what 'snapshot semantics' is: it's a term
that's been tossed around with the assumption that people already know
its meaning.

My own understanding of it is that snapshot semantics involves
looking at an immutable copy of an object (a snapshot of it) instead
of looking at the object itself.  That said, my understanding may be
flawed.
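On that reading, a snapshot comparison is value equality against a copy taken at a point in time; a minimal Python analogy:

```python
import copy

a = [1, [2, 3]]
snapshot = copy.deepcopy(a)     # an immutable-in-spirit copy of the current value
a[1].append(4)                  # later mutation of the live object...
assert snapshot == [1, [2, 3]]  # ...does not affect the snapshot
assert a != snapshot
```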

-- 
Jonathan Dataweaver Lang


Re: nested 'our' subs - senseless?

2008-05-05 Thread Jon Lang
On Mon, May 5, 2008 at 6:01 AM, John M. Dlugosz
[EMAIL PROTECTED] wrote:
 TSa Thomas.Sandlass-at-barco.com |Perl 6| wrote:

 
  No, because {...} is just a declaration. You can give a
  definition later in the surrounding module/package/class.
  Within that scope there can be only one definition, of course.
 

  I did not mean to use { ... } to mean declaration only, but to show that I
 omitted the good stuff.  In Perl 6, it is not declaration only but a body
 that doesn't complain when it is redefined, so that should not matter.

Given that Perl 6 assigns a specific meaning to '{ ... }', it's
recommended that examples that omit code instead be written as
'{ doit }' or the like.

-- 
Jonathan Dataweaver Lang


Re: my TypeName $x;

2008-05-06 Thread Jon Lang
My thoughts:
.HOW returns information concerning the implementation type; .WHAT
returns information concerning the value type.
.HOW and .WHAT stringify to something resembling the declarations that
would be used to create them.

Also bear in mind that Perl 6 uses prototype-based object orientation,
except where it doesn't.  As such, when I say below that "$x is a
Foo", what I mean is that $x is an object with Foo as its
implementation type.
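A loose Python analogy for the two axes (a familiar reference point, not Perl 6 semantics):

```python
x = 42
# Roughly .WHAT: the value's type, whose name is the "short name".
assert type(x).__name__ == "int"
# Roughly .HOW: the metaobject that implements the type itself.
assert type(type(x)) is type
```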

Jonathan Worthington wrote:
 Hi,

  I'm looking for answers/clarification on what (if taken as individual
 programs) $x is in each case.

  my Int $x; # $x is Int protoobject
  say $x;  # Int
  say $x.WHAT; # Int

.WHAT is an Int; stringifies to "Int".

  class Foo { }
  my Foo $x;   # $x is Foo protoobject
  say $x;  # Foo
  say $x.WHAT; # Foo
  # This means we can only assign to $x something that isa Foo?

.WHAT is a Foo; stringifies to "Foo".

  role Bar { }
  my Bar $x;   # $x is ???
  say $x;  # ???
  say $x.WHAT; # ???
  # This means we can only assign to $x something that does Bar?

This one's tricky: roles cannot be instantiated, so .WHAT cannot be a
Bar.  My gut reaction is that it's an anonymous class that does Bar
and stringifies to "Bar".  As long as you don't have both a role and a
class named Bar, this should not be a problem, since the value type
isn't concerned with the implementation.  If it is possible to have
both a role and a class with the same name, you might be able to look
at .WHAT.HOW (or .^WHAT) to figure out with which you're dealing.
That is, the implementation type of the value type ought to provide
more elaborate detail as to the nature of the value type.

  subset EvenInt of Int where { $_ % 2  == 0 };
  my EvenInt $x;  # $x is ???
  say $x;  # ???
  say $x.WHAT;  # Failure?
  # This means we can only assign to $x something that matches the constraint

My guess: .WHAT is an EvenInt that stringifies to "EvenInt".  If you
want to know about the structure of EvenInt (e.g., which class it is
based on, and which subset restrictions have been applied to it), you
would refer to .WHAT.HOW.
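The "base type plus constraint closure" structure of a subset is easy to model; `make_subset` below is a hypothetical Python helper, not anything from the synopses:

```python
def make_subset(base, where):
    """A subset check: membership in the base type plus a 'where' constraint."""
    def check(value):
        return isinstance(value, base) and where(value)
    return check

is_even_int = make_subset(int, lambda v: v % 2 == 0)
assert is_even_int(4)
assert not is_even_int(3)    # fails the where-clause
assert not is_even_int("4")  # fails the base-type check
```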

  class Dog { }
  class Cat { }
  my Dog|Cat $x;  # $x is ???
  say $x;   # ???
  say $x.WHAT;  # Failure?
  # This means we can only assign to $x something that isa Dog or isa Cat

My guess: .WHAT is a Junction that stringifies to "Dog|Cat".  To
access the details as to which classes are in the junction and how
they are joined, refer to appropriate methods (I believe that Junction
includes a method that returns a Set of its members) or maybe to
.WHAT.HOW.

As well:

  class Dog { }
  role Pet { }
  my Pet Dog $x;
  say $x;
  say $x.WHAT;

My guess is that .WHAT would be a (dis)Junction of a Dog and an
anonymous class; the anonymous class does Pet and stringifies to
"Pet", while the Junction stringifies to "Pet Dog".

-- 
Jonathan Dataweaver Lang


Re: my TypeName $x;

2008-05-06 Thread Jon Lang
Upon further review:

It might be possible that $x.WHAT returns a Signature object, with the
value type of $x as the invocant (or return type?), and everything
else empty.  But that merely begs the question of how to introspect a
Signature object.  If I tell Perl 6 to say a Signature, what gets
displayed?  Can I ask Perl 6 to look at a particular parameter in a
Signature?  If so, what kind of object does the Signature object
return if I ask it to give me its invocant?  Surely not another
Signature object?  Whatever it is that Perl 6 returns in that case
would probably work better as the return type of the .WHAT method,
too.

-- 
Jonathan Dataweaver Lang


Re: my TypeName $x;

2008-05-06 Thread Jon Lang
TSa wrote:
  Jon Lang wrote:

  My thoughts:
  .HOW returns information concerning the implementation type; .WHAT
  returns information concerning the value type.

BTW, S12 more or less confirms the above: .WHAT returns a prototype
object that stringifies to its short name, while .HOW allows
introspection of the attributes and methods and composed roles and
inherited classes and...

  My addition to these thoughts is that the WHAT and HOW are
  cascaded. Let's say we start at level 0 with the objects,
  thingies or however we want to name the atoms of our discourse.
  At the zero level HOW, WHAT etc. are vacuously identical with
  the object. They fall apart on level 1 from each other and among
  them selfs but they are all sets of level-0 objects. In general
  a level n concept contains level 0..^n objects and is the name
  for a something they share.

In English, please?

I _think_ that you're saying that you can use WHAT and HOW
recursively: that is, you can talk about WHAT.WHAT, WHAT.HOW,
HOW.WHAT, HOW.HOW, etc.  If so, note a few caveats:
first, if $x is a Foo, then $x.WHAT will be a Foo.  As such, there is
no difference between $x.WHAT and $x.WHAT.WHAT.  Likewise, forget
about my comments about $x.WHAT.HOW; since $x is of exactly the same
type as $x.WHAT, you can get the same information by saying $x.HOW as
you could about $x.WHAT.HOW.

Second, $x.HOW.WHAT gives you a prototype of the metaclass object.  If
you're ever doing type checking that refers to the metaclass class,
this might be useful; otherwise, don't bother.

Third, $x.HOW.HOW would let you look at the inner workings of the
metaclass class.  Going beyond that would be redundant, since you'd
be using a metaclass object to look at the metaclass class.

role Bar { }
my Bar $x;   # $x is ???
say $x;  # ???
say $x.WHAT; # ???
# This means we can only assign to $x something that does Bar?
  
 
  This one's tricky: roles cannot be instantiated, so .WHAT cannot be a
  Bar.


  What? I would say it is a level 1 WHAT. The next level concept
  $x.WHAT.WHAT is role. BTW, there are not very many levels up.

Again, English please?

As I said, roles cannot be instantiated.  $x is not a Bar; it is an
object of an anonymous class that does Bar.  $x.WHAT is a prototype
object of $x, meaning that it, too, is an object of an anonymous class
that does Bar.  The only difference between them is that $x may
eventually have a value assigned to it; $x.WHAT never will.  Well...
that, and $x.WHAT stringifies differently than $x does.

subset EvenInt of Int where { $_ % 2  == 0 };
my EvenInt $x;  # $x is ???
say $x;  # ???
say $x.WHAT;  # Failure?
# This means we can only assign to $x something that matches the
# constraint
 
  my guess: .WHAT is an EvenInt that stringifies to EvenInt.  If you
  want to know about the structure of EvenInt (e.g., which class is it
  based on, and which subset restrictions have been applied to it), you
  would refer to .WHAT.HOW.

  I would argue that $x.WHAT.WHAT is subset. I think the
  idea of introspection is to have $x.WHAT.of return Int
  as a level 1 WHAT and $x.WHAT.where return the constraint
  closure. That said I would expect $x.HOW to be the identical
  level 1 HOW object that $x.WHAT.of.HOW returns. When Int.WHAT.WHAT
  is a role then this is undef of Int.

Ah; _that's_ what you mean by cascading levels.  Ugh.

By my understanding, $x and $x.WHAT are both the same kind of thing:
they're both EvenInt.  Again, the only difference is that $x.WHAT will
always be undefined and will stringify to EvenInt.

class Dog { }
class Cat { }
my Dog|Cat $x;  # $x is ???
say $x;   # ???
say $x.WHAT;  # Failure?
# This means we can only assign to $x something that isa Dog or isa Cat
 
  my guess: .WHAT is a Junction that stringifies to Dog|Cat.  To
  access the details as to which classes are in the junction and how
  they are joined, refer to appropriate methods (I believe that Junction
  includes a method that returns a Set of its members) or maybe to
  .WHAT.HOW.

  On level 1 the WHATs of Dog and Cat are distinct. Dog|Cat is a level 2
  WHAT. The level 2 $x.WHAT.HOW is a union. $x.WHAT.WHAT is class.

Huh?

-- 
Jonathan Dataweaver Lang


Re: my TypeName $x;

2008-05-06 Thread Jon Lang
Larry Wall wrote:
 Jonathan Worthington wrote:
   role Bar { }
   my Bar $x;   # $x is ???
   say $x;  # ???
   say $x.WHAT; # ???
   # This means we can only assign to $x something that does Bar?

  Correct, and for the same reason.  The container checks the role--it
  has little to do with what's in $x currently, which *cannot* have
  the type of Bar, since you can't instantiate that.


   subset EvenInt of Int where { $_ % 2  == 0 };
   my EvenInt $x;  # $x is ???
   say $x;  # ???
   say $x.WHAT;  # Failure?
   # This means we can only assign to $x something that matches the constraint

  Yes, again, checked by the container, not by $x.  $x cannot have the
  type of a subset either!  The actual type of $x is Int.  Objects may
  only be blessed as valid storage types of some kind or other.  Subsets
  are merely extra constraints on a storage type (here Int).


   class Dog { }
   class Cat { }
   my Dog|Cat $x;  # $x is ???
   say $x;   # ???
   say $x.WHAT;  # Failure?
   # This means we can only assign to $x something that isa Dog or isa Cat

  Well, maybe $x is just Object, or whatever is the closest type that
  encompasses both Dog and Cat.  But note that Object is just the most
  generic kind of undef, so this is more or less equivalent to doing
  no initialization in p5-think.  I don't think I'm interested in making
  the run-time system calculate a Dog|Cat storage type, so it comes out
  to just a constraint.

  Really, the main reason we initialize with an undef of the correct sort
  is so that you can say

 my Dog $x .= new();

  But what would (Dog|Cat).new() do?  Constructors are not required to
  know about subset types or roles.  Constructors just create plain ordinary
  classes, I expect.  Composition and constraint checking happen
  elsewhere.

Two questions:

1. Apparently, my presumption that $x.WHAT was for retrieving the
value type was wrong; from the above, it's sounding like it is
supposed to retrieve the implementation type.  Is this correct?  If
so, for what purpose does $x.WHAT exist that you can't do just as well
with $x itself?  If it's for the stringification of the container's
name, couldn't that be accomplished just as easily by means of a
$x.HOW method, such as $x.^name?

2. How _do_ you retrieve the value type of $x?  That is, how can the
program look at $x and ask for the constraints that are placed on it?
I don't see $x.HOW being useful for this, since HOW is essentially
there to let you look at the inner workings of the container; and none
of the other introspection metamethods in S12 come close to addressing
this question.
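[For what it's worth, the Raku that eventually shipped answers these questions essentially as Larry describes in his reply: a subset-typed variable holds a value of the base type, and the constraint lives in the container. A sketch -- the exact output shown is an assumption based on current Rakudo behaviour:

```raku
subset EvenInt of Int where { $_ % 2 == 0 };

my EvenInt $x = 4;
say $x.WHAT;      # (Int) -- the value's type is the base type, not the subset
say $x.^name;     # Int   -- .^name asks the metaobject ($x.HOW) for its name

try { $x = 3 };   # the container enforces the subset constraint...
say $!.defined;   # True  -- ...so assigning an odd number fails
```
]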

-- 
Jonathan Dataweaver Lang


Re: my TypeName $x;

2008-05-06 Thread Jon Lang
On Tue, May 6, 2008 at 8:09 PM, Larry Wall [EMAIL PROTECTED] wrote:
 On Tue, May 06, 2008 at 07:01:29PM -0700, Jon Lang wrote:
  : 1. Apparently, my presumption that $x.WHAT was for retrieving the

 : value type was wrong; from the above, it's sounding like it is
  : supposed to retrieve the implementation type.

  I don't know what you mean by those terms.

Oh?  I took them straight out of S2:

Explicit types are optional. Perl variables have two associated
types: their value type and their implementation type. (More
generally, any container has an implementation type, including
subroutines and modules.) The value type is stored as its of property,
while the implementation type of the container is just the object type
of the container itself. The word returns is allowed as an alias for
of.

The value type specifies what kinds of values may be stored in the variable.

By my reading of this: given my Dog|Cat $x, the value type would be
Dog|Cat, since you've specified that $x can store values that are
Dogs or Cats.

  .WHAT gives you a value
  of undef that happens to be typed the same as the object in $x,
  presuming your metaobject believes in types.  .HOW gives you the
  metaobject that manages everything else for this object.  It might
  or might not believe in classes or types.

OK; that's what I thought you were saying.

  : Is this correct?  If
  : so, for what purpose does $x.WHAT exist that you can't do just as well
  : with $x itself?

  You can do type reasoning with the WHAT type just as you can with
  a real value of the same type.  Int is a prototypical integer from
  the standpoint of the type system, without actually having to *be*
  any particular integer.

Yes; but what can you do with $x.WHAT that you _can't_ do just as
easily with $x itself?  After all, you've already got $x; and it has
already been initialized with an undefined Int upon creation, making
it a prototypical integer until such time as you assign a value to it.
 Furthermore, whether or not it has a value is utterly immaterial for
type-checking purposes.  So why would I ever say $x.WHAT instead of
just $x?

  In contrast, the .HOW object for Int is very much *not* an Int.
  Perl doesn't know or care what the type of the .HOW object is, as
  long as it behaves like a metaclass in a duck-typed sort of way.
  It's probably a mistake to try to use the type of the .HOW object in
  any kind of type-inferential way.

That's more or less what I understood.

  : If it's for the stringification of the container's
  : name, couldn't that be accomplished just as easily by means of a
  : $x.HOW method, such as $x.^name?

  No, that's not what it's for at all.  It's only convenient that it
  generally stringifies to something that looks like a package name,
  assuming it's not an anonymous type.  But the primary purpose is to
  provide a closed system of types that is orthogonal to whether the
  object is defined or not.  In Haskell terms, they're all Maybe types,
  only with a little more ability to carry error information about
  *why* they're undefined, if they're undefined.

Right.  I didn't think that the stringification was a big deal; but it
was the only thing that I could find that differentiated $x from
$x.WHAT.

  : 2. How _do_ you retrieve the value type of $x?

  Do you mean the variable, or its contents?  The contents returns its
  object type via $x.WHAT, which for an Int is always Int, regardless
  of any extra constraints imposed by the container.

I've been assuming that the value type is associated to the variable;
otherwise, a simple assignment of a completely different container to
the variable would completely bypass any and all constraints.  Am I
wrong about this?

  : That is, how can the
  : program look at $x and ask for the constraints that are placed on it?

  Placed by the variable, or by the type of the contents?  The
  constraints on the value are solely the concern of whatever kind of
  object it is.  If you mean the constraints of the container, then
  VAR($x).WHAT will be, say, Scalar of Int where 0..*, and VAR($x).of
  probably can tell you that it wants something that conforms to Int.
  I'm not sure if the of type should include any constraints, or just
  return the base type.  Probably a .where method would return extra
  constraints, if any.

OK; I'll have to look up what I can about VAR.
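[In the Raku that shipped, the container introspection Larry sketches here is spelled with the .VAR method rather than a VAR() macro. A sketch, with output as an assumption from recent Rakudo:

```raku
my Int $x = 42;
say $x.WHAT;       # (Int)    -- type of the value
say $x.VAR.WHAT;   # (Scalar) -- type of the container
say $x.VAR.of;     # (Int)    -- the container's value-type constraint
say $x.VAR.name;   # $x       -- the container even knows its own name
```
]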

  : I don't see $x.HOW being useful for this, since HOW is essentially

 : there to let you look at the inner workings of the container; and none
  : of the other introspection metamethods in S12 come close to addressing
  : this question.

  $x.HOW is not the inner workings of the container, but of the value.
  You'd have to say VAR($x).HOW to get the inner workings of the
  container.  Scalar containers always delegate to their contents.  In
  contrast, @x.HOW would be equivalent to VAR(@x).HOW, because composite
  objects used as scalars assume you're talking about the container.

...and you appear to be using

ordinal access to autosorted hashes

2008-05-27 Thread Jon Lang
From S09: The default hash iterator is a property called .iterator
that can be user replaced. When the hash itself needs an iterator for
.pairs, .keys, .values, or .kv, it calls %hash.iterator() to start
one. In item context, .iterator returns an iterator object. In list
context, it returns a lazy list fed by the iterator.

Would it be reasonable to allow hashes to use .[] syntax as something
of a shortcut for .iterator in list context, thus allowing
autosorted hashes to partake of the same sort of dual cardinal/ordinal
lookup capabilities that lists with user-defined array indices have?
e.g.:

  my %point{Int};
  %point{3} = 'Star';
  say %point[0]; # same as 'say %point{3}'
  say %point[1]; # error: asking for the second of one key is nonsense.
  %point{-2} = 'Base';
  say %point[0]; # same as 'say %point{-2}'
  say %point[1]; # same as 'say %point{3}'

Mind you, this could get messy with multidimensional keys:

  my %grid{Int;Int};
  %grid{3;-2} = 'Star';
  %grid{-2;3} = 'Base';
  say %grid[0;1]; # Base
  say %grid[1;0]; # Star
  say %grid[0;0]; # error? %grid{-2;-2} has never had anything assigned to it.

Still, it could be useful to be able to access an autosorted hash's
keys in an ordinal fashion.

Side issue: S09 doesn't specify whether or not you need to explicitly
declare a hash as autosorted.  I'm assuming that the parser is
supposed to figure that out based on whether or not .iterator is ever
used; but it isn't immediately obvious from reading the Autosorted
hashes section.  As well, there might be times when explicitly
declaring a hash as autosorted (or not) might be useful for
optimization purposes.

-- 
Jonathan Dataweaver Lang


Re: ordinal access to autosorted hashes

2008-06-01 Thread Jon Lang
David Green wrote:
 Jon Lang wrote:
 Would it be reasonable to allow hashes to use .[] syntax as something
 of a shortcut for .iterator in list context, thus allowing
 autosorted hashes to partake of the same sort of dual cardinal/ordinal
 lookup capabilities that lists with user-defined array indices have?

 I thought it already did, but apparently it's something that we discussed
 that didn't actually make it into S09.  I agree that .[] should apply to
 hashes just as .{} can apply to arrays.  The hashes don't even need to be
 sorted -- %h[$n] would basically be a shorter way of saying
 @(%h.values)[$n], in whatever order .values would give you.

I believe that the order that .values (or is that :v?) would give you
is determined by .iterator - which, if I'm understanding things
correctly, means that any use of :v, or :k, :p, or :kv, for that
matter, would autosort the hash (specifically, its keys).

Or am I reading too much into autosorting?

Bear in mind that keys are not necessarily sortable, let alone
autosorted.  For instance, consider a hash that stores values keyed by
complex numbers: since there's no way to determine .before or .after
when comparing two complex numbers, there's no way to sort them -
which necessarily means that the order of :v is arbitrary, making
%h[0] arbitrary as well.  This is why I was suggesting that it be
limited to autosorted hashes: it's analogous to how @a{'x'} is only
accessible if you've actually defined keys (technically,
user-defined indices) for @a.

-- 
Jonathan Dataweaver Lang


Re: assignable mutators (S06/Lvalue subroutines)

2008-06-01 Thread Jon Lang
Jon Lang wrote:
 This approach could be functionally equivalent to the proxy object
 approach, but with a potentially more user-friendly interface.  That
 is,

  sub foo (*$value) { yadda }

 might be shorthand for something like:

  sub foo () is rw {
return new Proxy:
  FETCH => method { return .doit() },
  STORE => method ($val) { .doit($val) },
  doit => method ($value?) { yadda }
  }

Correction:

  sub foo (*$value) { yadda }

might be shorthand for something like:

  sub foo () is rw {
return new Proxy:
  FETCH => method { return .() },
  STORE => method ($val) { .($val) },
  postcircumfix:<( )> => method ($value?) { yadda }
  }

i.e., it can be called like a regular function as well as via
assignment semantics.
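[The Proxy/FETCH/STORE machinery discussed here did make it into Raku essentially as specced, though the shorthand never did. A minimal working sketch in today's Raku:

```raku
sub foo() is rw {
    state $storage = 0;           # shared across calls, so the proxy has
    Proxy.new(                    # something persistent to read and write
        FETCH => method ()     { $storage },
        STORE => method ($val) { $storage = $val },
    );
}

foo() = 42;    # assignment goes through STORE
say foo();     # 42 -- reads go through FETCH
```
]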

-- 
Jonathan Dataweaver Lang


Re: assignable mutators (S06/Lvalue subroutines)

2008-06-01 Thread Jon Lang
David Green wrote:
 It seems overly complex to me, but perhaps I'm missing good reasons for such
 an approach.  I see lvalue subs mainly as syntactic sugar:

foo(42);  # arg using normal syntax
foo <== 42;   # arg using feed syntax
foo = 42; # arg using assignment syntax

 Feeds are a way of passing values to a function, but work like assignment
 when used on a variable; assignment is a way of giving a value to a
 variable, so it should work like passing args when used on a function.  Then
 you can easily do whatever you want with it.

 In fact, it could work just like a feed, and pass values to the slurpy
 params, but I think assignment is special enough to be worth treating
 separately.  Maybe something like:

sub foo ($arg1, $arg2, [EMAIL PROTECTED], =$x) {...}

foo(6,9) = 42;   # the 42 gets passed to $x

 That example uses a leading = for the assigned param (parallel to the
 leading * for the slurpy param), but I'm not crazy about it for various
 reasons (and =$x refers to iteration in other contexts).

OK; my take on it:

An lvalue sub is a sub that can be assigned to - the operative word
being can.  There's a reason why Perl 6 talks about is rw and is
ro so much, but has yet to (and may never) officially approach the
idea of is wo.  You don't _have_ to assign to an lvalue sub in order
to use it.  As such, an lvalue sub needs to be written in such a way
that it can be called to assign a value or to return a value.  The
current proxy object approach makes this explicit by mandating
separate FETCH and STORE methods to handle the two uses.  This has the
benefit of a fully consistent mechanism for handling assignable
routines: return an assignable object; if a value was going to be
assigned to the subroutine, it is instead assigned to the returned
object.  Simple, and straightforward.

And sometimes messy.  Let's consider the following alternative,
inspired by David's suggestion:

(I'm thinking aloud here, exploring possibilities and looking for
problems as I go.  Please bear this in mind.)

If a routine is rw, you may optionally define a single slurpy scalar
(e.g., '*$value') in its signature.  This scalar counts as the last
positional parameter, much like slurpy arrays and hashes must be
declared after all of the positional parameters have been declared.
You do not need to pass an argument to it; but if you do, you may do
so in one of two ways: through the usual arguments syntax, or via
assignment syntax.

If an assignable routine does not have a slurpy scalar in its
signature, it operates exactly as currently described in S06: it
returns something that is assignable, which in turn is used as the
lvalue of the assignment operator.  If the slurpy scalar is present in
the signature, then an attempt to assign a value to the sub passes the
value in through the slurpy scalar, while an attempt to read the value
that the sub represents doesn't pass anything to the slurpy scalar.
The routine then replaces the normal assignment operation and returns
a value in much the same way that the assignment operator would.

This approach could be functionally equivalent to the proxy object
approach, but with a potentially more user-friendly interface.  That
is,

  sub foo (*$value) { yadda }

might be shorthand for something like:

  sub foo () is rw {
return new Proxy:
  FETCH => method { return .doit() },
  STORE => method ($val) { .doit($val) },
  doit => method ($value?) { yadda }
  }

-- 
Jonathan Dataweaver Lang


Re: The Inf type

2008-06-02 Thread Jon Lang
TSa wrote:
 John M. Dlugosz wrote:
 The sqrt(2) should be a Num of 1.414213562373 with the precision of the
 native floating-point that runs at full speed on the platform.

 That makes the Num type an Int with non-uniform spacing. E.g. there
 are Nums where $x++ == $x. And the -Inf and +Inf were better called
 are Nums where $x++ == $x. And the -Inf and +Inf were better called
 Min and Max respectively. IOW, the whole type-based approach to Inf
 is reduced to mere notational convenience.

Please give an example value for a Num where $x++ == $x.  Other than
Inf, of course.

As well, I can easily buy into the idea that +Inf _is_ conceptually
equivalent to Max, in that '$x before +Inf' will be true for all
non-Inf values of $x that are Nums.  Likewise, -Inf is conceptually
equivalent to Min in the same way.  But just because they're
conceptually equivalent, that doesn't mean that they're better off
renamed.

If you intend to come up with a more encompassing definition of Inf,
please do so in a way that preserves the above behavior when you apply
that definition to Num.

-- 
Jonathan Dataweaver Lang


Re: The Inf type

2008-06-02 Thread Jon Lang
Ryan Richter wrote:
 Jon Lang wrote:
 TSa wrote:
  John M. Dlugosz wrote:
  The sqrt(2) should be a Num of 1.414213562373 with the precision of the
  native floating-point that runs at full speed on the platform.
 
  That makes the Num type an Int with non-uniform spacing. E.g. there
  are Nums where $x++ == $x. And the -Inf and +Inf were better called
  Min and Max respectively. IOW, the whole type-based approach to Inf
  is reduced to mere notational convenience.

 Please give an example value for a Num where $x++ == $x.  Other than
 Inf, of course.

 All floats run out of integer precision somewhere, e.g. in p5

 $ perl -le '$x=1; while($x+1 != $x) { $x *= 2; } print $x'
 1.84467440737096e+19

 Arbitrary precision mantissas aren't practically useful, since they have
 a strong tendency to consume all memory in a few iterations.

Ah; I see.  We just had a role-vs-class cognitive disconnect.
Officially, Num is the autoboxed version of the native floating point
type (i.e., 'num').  Somehow, I had got it into my head that Num was a
role that is done by all types that represent values on the real
number line, be they integers, floating-point, rationals, or
irrationals.  And really, I'd like Num to mean that.  I'd rather see
what is currently called 'num' and 'Num' renamed to something like
'float' and 'Float', and leave 'Num' free to mean 'any real number,
regardless of how it is represented internally'.  Either that, or
continue to use Num as specified, but also allow it to be used as a
role so that one can create alternate representations of real numbers
(or various subsets thereof) that can be used anywhere that Num can be
used without being locked into its specific approach to storing
values.
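[As it turned out, Raku adopted more or less this design: Num stayed the boxed floating-point type, but a Real role (within a broader Numeric role) is done by integers, rationals, and floats alike, so generic code can be written against the role. A sketch, with behaviour assumed from current Rakudo:

```raku
say 1    ~~ Real;   # True  -- Int does Real
say 1/3  ~~ Real;   # True  -- Rat does Real
say 1e0  ~~ Real;   # True  -- Num does Real
say 1+2i ~~ Real;   # False -- Complex is Numeric but not Real

sub halve(Real $x) { $x / 2 }   # accepts any Real-doing type
say halve(7);       # 3.5
```
]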

-- 
Jonathan Dataweaver Lang


Re: ordinal access to autosorted hashes

2008-06-02 Thread Jon Lang
David Green wrote:
 Jon Lang wrote:
 Bear in mind that keys are not necessarily sortable, let alone autosorted.
  For instance, consider a hash that stores values keyed by complex numbers:
 since there's no way to determine .before or .after when comparing two
 complex numbers, there's no way to sort them - which necessarily means that
 the order of :v is arbitrary, making %h[0] arbitrary as well.

 Aha -- I guess you're thinking in terms of autosorting that can be turned
 on to return the values ordered by < and >.

Not the values; the keys.

 But .iterator can do whatever
 it wants, which in the default case is presumably to walk a hash-table
 (technically not an arbitrary order, although it might as well be as far as
 a normal user is concerned).

...which is my point.  When I speak of arbitrary order, I mean that
the programmer should not be expected to know in what order the keys
will be returned.  As opposed to a meaningful order, whereas the
programmer can look at a set of the current keys and figure out which
index corresponds with which key.

 This is why I was suggesting that it be limited to autosorted hashes: it's
 analogous to how @a{'x'} is only accessible if you've actually defined
 keys (technically, user-defined indices) for @a.

 Do you see that as a psychological aid?  That is, if the order is arbitrary,
 %h[0] makes sense technically, but may be misleading to a programmer who
 reads too much into it, and starts thinking that the order is meaningful.

Something like that.  Really, it's more about asking that the order
_be_ meaningful before you start using array-style indices, since
doing the latter carries the presumption of the former.

 My feeling is that it's fine for all hashes to support []-indexing.  But if
 the order isn't important, then you probably wouldn't need to use [] in the
 first place (you'd use for %h:v, etc.)... so maybe it should be limited.
 Hm.

That's my thought.  That said, I'm willing to consider the prospect
that such a restriction is excessive and/or unnecessary.

 P.S. Everywhere I said < and > I really meant .before and .after.  =P

:) OK.

-- 
Jonathan Dataweaver Lang


Re: assignable mutators (S06/Lvalue subroutines)

2008-06-02 Thread Jon Lang
David Green wrote:
 Jon Lang wrote:
 If a routine is rw, you may optionally define a single slurpy scalar
 (e.g., '*$value') in its signature.  This scalar counts as the last
 positional parameter, much like slurpy arrays and hashes must be declared
 after all of the positional parameters have been declared. You do not need
 to pass an argument to it; but if you do, you may do so in one of two ways:
 through the usual arguments syntax, or via assignment syntax.

 The only objection I have to making it a positional parameter is that then
 you can't have other optional positionals before it.  (Also, doesn't the
 '*$value' notation conflict with a possible head of the slurpy array?)

Good points.  I had overlooked the head of the slurpy array concept.
 That said, it might be reasonable to insist on something like
'*($head, @body)' in a signature to represent a slurpy array with
specific variables assigned to the leading elements.  Or not...

The bit about optional positionals is well taken.

 I'd rather make it named so it doesn't interfere with other args.  Or have
 it separate from named/positional args altogether; if something has a
 meaning of assignment, then it should always look like foo = 42 and not
 like foo(42). (The latter is just an ugly workaround for languages with
 incompetent syntax -- we don't want Perl code to look like that because Perl
 isn't like that.)

OTOH, TIMTOWTDI.  I'm not inclined to force one particular syntax - be
it 'foo = 42' or 'foo(42)' - on the programmer; let him choose which
dialect he prefers.  As such, it would probably be better to provide
some means of identifying a single named or positional parameter as
the gateway for an assignment.  Perhaps this could be handled through
rw?

  sub foo($value?) is rw($value) { ... }

Or something to that general effect.  The parameter in question must
be optional and cannot have a default value, so that a test of whether
or not the parameter is actually there can be used to determine
whether or not to operate like FETCH or like STORE.

 If an assignable routine does not have a slurpy scalar in its signature,
 it operates exactly as currently described in S06: it returns something that
 is assignable, which in turn is used as the lvalue of the assignment
 operator.

 Does this [with no slurpy scalar to pick up the rvalue]:

sub foo () { ...; $proxy }

 give us anything that you couldn't get from:

sub foo ($rval is rvalue) { ...; $proxy = $rval }

(I'm assuming that both of these subs are rw.)

Yes.  infix:<=> isn't the only operator that assigns values to rw
objects: so do the likes of infix:<+=> and prefix:<++>.  As well, any
function that accepts a rw parameter will presumably end up writing to
it somewhere in its code; otherwise, why declare the parameter as rw?
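[A tiny sketch of that last point in today's Raku: a parameter declared `is rw` gives the routine write access to the caller's variable, just as the assignment and increment operators need:

```raku
sub bump($n is rw) { $n += 1 }   # writes back into the caller's variable

my $count = 5;
bump($count);
$count++;       # prefix/postfix ++ likewise requires an rw target
say $count;     # 7
```
]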

Note, BTW, the difference between passing a proxy object as a
parameter and passing a mutating sub as a parameter: in the former
case, you call the lvalue sub once in order to generate the proxy
object, and then you pass the proxy object as a parameter; in the
latter case, you don't call the mutating sub: you essentially pass a
coderef to it, and then call it each time the function in question
attempts to read from or write to it.

(By mutating sub, I mean a routine that handles the assignment
itself, along the lines of what you and I are discussing.)

 [...] sub foo () is rw {
   return new Proxy:
 FETCH => method { return .() },
 STORE => method ($val) { .($val) },
 postcircumfix:<( )> => method ($value?) { yadda }
  }

 Incidentally, now that Perl is all OO-y, do we need tying/proxying like
 that, or can we just override the = method on a class instead?  Is there
 something different about it, or is it just an alternative (pre-OO) way of
 looking at the same thing?

FETCH and STORE are not pre-OO; they're pre-'user-defined operators'.

It's possible that in perl6, 'STORE' could be spelled 'infix:<=>'; but
then how would 'FETCH' be spelled?  And doing so would require that
your storage routine return a value, even if you never intend to use
it.  No; I prefer 'STORE' and 'FETCH' as the basic building blocks of
anything that's capable of working like a rw item.

-- 
Jonathan Dataweaver Lang


Re: ordinal access to autosorted hashes

2008-06-02 Thread Jon Lang
Mark J. Reed wrote:
 The point is that %h[0] should be legal syntax that works on any hash,
 returning the first element. It doesn't matter if the elements are
 sorted or even sortable.  You get an element.  More to the point, if
 you don't add any elements to (or remove any elements from) the hash,
 you get the *same* element every time you ask for %h[0], which is
 different from %h[1], etc - all this is by default, of course.

True enough.

 But is %h[0] a value or a pair?  Or does it depend on context?

%h[0] should return a value; if you want a pair, say '%h[0]:p'.  (The
same should - and does? - hold true for arrays: @a[0] returns a value,
while @a[0]:p returns an index/value pair.)

The single messiest part of this concept is that %h[] should be aware
of what the hash's keys are, and %h{} should be aware of what the
indices are.  This is so that you can mix subscripts with hashes in
the same way that you can with arrays: see S09, Mixing subscripts.

The biggest difference between %h and @a, then, is that @a is much
more rigid when it comes to indices: the same index always points to
the same item, and keys have to be pre-established.  With %h, keys can
be added or removed at will; but each time you do so, the hash gets
re-indexed: so %h[0] potentially shifts to another item every time the
keys are changed.  This latter point is an example of what the
Autosorted hashes section in S09 is addressing with its talk about
the downside of autosorting a hash.
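[As a historical note: neither autosorted hashes nor %h[] indexing made it into the released language. In today's Raku, ordinal access to a hash is spelled explicitly via sorting, e.g. (a sketch):

```raku
my %h = b => 2, c => 3, a => 1;

say %h.keys.sort;             # (a b c)
say %h.sort(*.key)[0];        # a => 1 -- the "first pair" under key order
say %h.sort(*.key)[0].value;  # 1
```
]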

-- 
Jonathan Dataweaver Lang


Re: The Inf type

2008-06-03 Thread Jon Lang
Brandon S. Allbery wrote:
 John M. Dlugosz wrote:
 Jon Lang wrote:
 type (i.e., 'num').  Somehow, I had got it into my head that Num was a
 role that is done by all types that represent values on the real
 number line, be they integers, floating-point, rationals, or
 irrationals.  And really, I'd like Num to mean that.  I'd rather see

 Would you care to muse over that with me:  what Roles should we decompose
 the various numeric classes into?  Get a good list for organizing the
 standard library functions and writing good generics, and =then= argue over
 huffman encoding for the names.  Call them greek things for now so we don't
 confuse anyone <g>.

 Learn from the Haskell folks, who are still trying to untangle the mess they
 made of their numeric hierarchy (see
 http://haskell.org/haskellwiki/Mathematical_prelude_discussion).

I'll look it over.  That said, note that we're not dealing with a
class hierarchy here; we're dealing with role composition, which
needn't be organized into an overarching hierarchical structure to work
properly.

-- 
Jonathan Dataweaver Lang


Re: Building Junctions from Junctions

2008-06-23 Thread Jon Lang
I'd say that this ought to be implemented using :v (as in, 'values';
cf. :k, :kv, and :p for lists and hashes): this should let you look at
the values within the Junction as if they were merely a list of
values, at which point you can construct a new Junction from them.

-- 
Jonathan Dataweaver Lang


Re: step size of nums

2008-07-10 Thread Jon Lang
Mark J. Reed wrote:
 All of this is IMESHO, of course, but I feel rather strongly on this
 issue.  ++  means  += 1 .

Agreed.  Anything else violates the principle of least surprise.

Mind you, this is only true for numerics, where the concept of 1
potentially has meaning.  For non-numerics, something closer to the
next item in the series is a reasonable fall-back definition,
assuming that we define the operator at all.  So if '++' works on
strings, it might be reasonable to have 'bob'++ == 'boc'.
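[String increment did end up in Raku with exactly this behaviour; a quick sketch:

```raku
my $s = 'bob';
$s++;
say $s;    # boc

my $t = 'Az';
$t++;
say $t;    # Ba -- the rightmost character range carries, odometer-style

my $n = 'a8';
$n++;
say $n;    # a9
```
]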

-- 
Jonathan Dataweaver Lang


Re: meta_postfix:*

2008-07-13 Thread Jon Lang
So you're suggesting that

  A op* n

should map to

  [op] A xx n

?
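[For a concrete op, the right-hand side of that mapping is ordinary code in today's Raku; the op* repetition meta-operator itself was never adopted, but [op] is the reduction meta-operator:

```raku
my $a = 3;
say [+] $a xx 4;     # 12 -- 3 + 3 + 3 + 3
say [*] 2 xx 5;      # 32 -- repeated multiplication, i.e. 2 ** 5
say [~] 'ab' xx 3;   # ababab
```
]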

-- 
Jonathan Dataweaver Lang


Re: meta_postfix:*

2008-07-15 Thread Jon Lang
Dave Whipp wrote:

 Jon Lang wrote:

 So you're suggesting that

  A op* n

 should map to

  [op] A xx n


 I don't think that that mapping works for Thomas' proposal of a repetition
 count on post-increment operator. I.e.

  $a ++* 3

 is not the same as

  [++] $a xx 3

 (which I think is a syntax error)


It is.


 Also, he's suggesting getting rid of the xx operator, and replacing it
 with ,* -- I'm sure I could get used to that


Currently, it's being assumed that the repetition meta-operator will be
appended to the operator, followed by the repetition count:

  $value op* $count

This makes it difficult to apply the replication meta-operator to a prefix
operator.  However, a second option could be provided, where the
meta-operator gets prepended:

  $count *op $value

So:

  5 *, $n === $n ,* 5 === $n, $n, $n, $n, $n
  $n ++* 5 === (((($n++)++)++)++)++
  5 *++ $n === ++(++(++(++(++$n))))

And obviously the metaoperator is nonsensical when applied to a binary
operator with different types of values on its left and right sides.

As with other meta-operators, it should be possible to explicitly define a
symbol that would otherwise be interpreted as a meta'd operator, because of
efficiency; because the operator in question has capabilities above and
beyond what the meta-operator would indicate; or because the operator in
question doesn't bear any resemblance to the replicated use of a shorter
operator.  In particular, ** would be overloaded in this manner: to make
reasonable sense, the count of a repetition meta-operator must be an
unsigned integer of some sort, whereas exponents can be any type of number.
Heck, they don't even have to be real.

-- 
Jonathan Dataweaver Lang


Re: Complex planes

2008-07-16 Thread Jon Lang
Larry Wall wrote:
 On Tue, Jul 15, 2008 at 03:30:24PM +0200, Moritz Lenz wrote:
 : Today bacek++ implement complex logarithms in rakudo, and one of the
 : tests failed because it assumed the result to be on a different complex
 : plane. (log(-1i) returned 0- 1.5708i, while 0 + 3/2*1i was expected).
 :
 : Should we standardize on one complex plane (for example -pi <= $c.angle
 : < pi like Complex.angle does)? Or simply fix the test to be agnostic to
 : complex planes?

 Standardizing on one complex plane is the normal solution, though
 this being Perl 6, there's probably a better solution using infinite
 Junctions if we can assume them to be both sufficiently lazy and
 sufficiently intelligent... :)

By the principle of least surprise, I'd recommend against this.  Most
programmers, when they see 'sqrt(1)', will expect a return value of 1,
and won't want to jump through the hurdles involved in picking '1' out
of 'any(1, -1)'.  That said, I'm not necessarily opposed to these
functions including something like an ':any' or ':all' adverb that
causes them to return a junction of all possible answers; but this
should be something that you have to explicitly ask for.
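What such an opt-in form would return can be sketched with Python's cmath (the `sqrt_all` name is illustrative, not a proposed API):

```python
import cmath

def sqrt_all(z):
    """All square roots of z - the kind of value set a hypothetical
    `sqrt($z) :any` might wrap in a junction."""
    r = cmath.sqrt(complex(z))  # principal root
    return {r, -r}

# The default, no-adverb behaviour keeps only the principal root:
assert cmath.sqrt(1) == 1
# The opt-in form exposes both distinct values:
assert sqrt_all(1) == {1, -1}
assert sqrt_all(-1) == {1j, -1j}
```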

And even then, I'm concerned that it might very quickly get out of
hand.  Consider:

  pow(1, 1/pi() ) :any - 1

(I think I got that right...)

Since pi is an irrational number, there are infinitely many distinct
results to raising 1 to the power of 1/pi.  (All but one of them are
complex numbers, and all of them have a magnitude of 1, differing only
in their angles.)  Thus, pow(1, 1/pi() ) :any would have to return a
junction of an indefinitely long lazy list.  Now subtract 1 from that
junction.  Do you have to flatten the list in order to do so,
subtracting one from each item in the list?  Or is there a reasonable
way to modify the list generator to incorporate the subtraction?

Or how about:

  sqrt(1):any + sqrt(1):any
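The "modify the generator rather than flatten" option can be mimicked with a lazy generator in Python (purely illustrative; plane k represents 1 as exp(2*pi*k*i), so its 1/pi-th power is exp(2*k*i)):

```python
import cmath
from itertools import count, islice

# An infinite stream of the plane-by-plane values of (1 + 0i) ** (1/pi).
values = (cmath.exp(2j * k) for k in count())

# Subtracting 1 maps lazily over the stream; nothing is flattened -
# the property an infinite junction would need to preserve.
shifted = (v - 1 for v in values)

first_three = list(islice(shifted, 3))
assert first_three[0] == 0                      # principal plane: exp(0) - 1
# every pre-shift value sits on the unit circle, so |v + 1| == 1:
assert all(abs(abs(v + 1) - 1) < 1e-12 for v in first_three)
```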

--

In any case, there's the matter of what to do when you only want one
answer, and not a junction of them.  IMHO, we should standardize the
angles on '-pi ^.. pi'.  My reasoning is as follows: if the imaginary
component is positive, the angle should be positive; if the imaginary
component is negative, the angle should be negative.  If the imaginary
component is zero and the real component is not negative, the angle
should be zero.  And the square root of -1 should be i, not -i; so if
the imaginary component is zero and the real component is negative,
the angle should be positive, not negative.
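For what it's worth, Python's cmath already standardizes on exactly this branch, so it can serve as a quick sanity check of the proposal:

```python
import cmath
import math

# Angles fall in (-pi, pi]; the sign of the angle follows the sign of
# the imaginary part, and the negative real axis gets +pi, so the
# principal square root of -1 is i rather than -i.
assert cmath.phase(1j) == math.pi / 2      # positive im => positive angle
assert cmath.phase(-1j) == -math.pi / 2    # negative im => negative angle
assert cmath.phase(1) == 0.0               # zero im, non-negative re
assert cmath.phase(-1) == math.pi          # positive, not -pi
assert cmath.sqrt(-1) == 1j
```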

-- 
Jonathan Dataweaver Lang


Re: Complex planes

2008-07-16 Thread Jon Lang
Moritz Lenz wrote:
 Jon Lang wrote:
 By the principle of least surprise, I'd recommend against this.  Most
 programmers, when they see 'sqrt(1)', will expect a return value of 1,

 And that's what they get unless they write it as sqrt(1 + 0i).

I suppose that you _could_ use the programmer's choice of whether or
not to use complex numbers in the argument list as the indicator of
whether to return one answer or a junction of them.  Of course, this
could lead to subtle bugs where the programmer assigns a complex value
to $x and later takes the sqrt($x), but forgets that he assigned a
complex number earlier.  This may or may not be sufficient grounds for
requiring an explicit declaration that you want junctions.

 and won't want to jump through the hurdles involved in picking '1' out
 of 'any(1, -1)'.

 1 and -1 aren't just separated by a complex plane, they are really
 distinct numbers

True enough.  I fail to see how that invalidates my point, though: if
you're going to mess with multiple complex planes, why wouldn't you
also address the issue of distinct numbers as well?  The latter issue
is intimately connected to the former, as I demonstrate below.

 And even then, I'm concerned that it might very quickly get out of
 hand.  Consider:

   pow(1, 1/pi() ) :any - 1

 (I think I got that right...)

 Not quite. Afaict the only functions that might return a junction are
 Complex.angle and Complex.log.

Why is that?  Complex numbers can exist on multiple complex planes
even if you don't explicitly look at the angle.  One example of this
phenomenon in action takes the form of a 'proof' that 1 == -1:

  1 == sqrt(1) == sqrt(-1 * -1) == sqrt(-1) * sqrt(-1) == i * i == -1
# Assumes complex numbers throughout. 

The equality between the first and second steps means that the 1
inside the sqrt can only have an angle that is a multiple of 4pi.
Because of this, the -1's that appear in the third step cannot exist
on the same complex plane with each other: e.g., if the first one has
an angle of pi, the second has to have an angle of -pi, 3pi, 7pi,
11pi, ...  As a result of this, the signs of the sqrts in the fourth
step must be opposed: if the first sqrt(-1) returns i, the second
sqrt(-1) must return -i, and vice versa.  That means that there's a
negative term missing in the fifth step, which would cancel out the
negative term that appears in the final step.  At the very least, we
need to add infix:<**> and all related functions (e.g., sqrt) to the
list of functions that might return a junction.
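The step where the 'proof' breaks can be demonstrated numerically with principal-branch roots (Python's cmath standing in for a single-plane sqrt):

```python
import cmath

# The identity sqrt(a*b) == sqrt(a)*sqrt(b) silently assumes both
# square roots come off the same branch; with a principal-branch sqrt
# it fails for two negative arguments, which is where the missing
# negative term comes from.
lhs = cmath.sqrt(-1 * -1)              # sqrt(1)  == 1
rhs = cmath.sqrt(-1) * cmath.sqrt(-1)  # i * i    == -1
assert lhs == 1
assert rhs == -1
```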

And when and if Perl 6 adds constraint programming to its repertoire,
it will have to be smart enough to properly constrain complex planes
as well as complex values.

--

Bringing this back down a bit closer to Earth: if you supply a complex
number using rectilinear coordinates, a case could be made that you've
provided insufficient information, and that the complex number ought
to be stored as a junction of all of the different complex plane
representations for that otherwise-distinct value.  If you supply a
complex number using polar coordinates, you have been able to supply
the choice of complex plane as well as a distinct value; so only one
representation should be stored.  That is:

  (1 + 0 * i).angle == any (0, 2 * pi, 4 * pi, ...);
  exp(0 * i).angle == 0;
  exp(2 * pi * i).angle == 2 * pi;

So:

  (1 + 0 * i) == any (exp(0), exp(2 * pi * i), exp(4 * pi * i), ...);

Extending this further: exp($C) effectively reinterprets a complex
number's rectilinear coordinates as polar coordinates, and log($C)
does the inverse.  So as long as $C contains a single value, exp($C)
should always return a complex number that exists on a single complex
plane, established by $C's imaginary component; conversely, log($C)
ought to return a complex value that is represented on every possible
complex plane, since neither the angle nor the magnitude of $C
provides enough information to determine which plane to use.

Of course, there may be (and probably are) technical difficulties that
make this unworkable.

 Since pi is an irrational number, there are infinitely many distinct
 results to raising 1 to the power of 1/pi.

 No. exp($x) is a single, well-defined value.

True, as long as $x is a single, well-defined value.

But I wasn't talking about exp($x); I was talking about pow($x, $y),
$x ** $y, sqrt($x), and so on.  Just as:

  sqrt(1 + 0 * i) == sqrt(any(exp(0), exp(2 * pi * i), exp(4 * pi *
i), ...)) == any(exp(0), exp(pi * i), exp(2 * pi * i), ...)

it is also the case that:

  (1 + 0 * i) ** pi == any(exp(0), exp(2 * pi * i), exp(4 * pi * i),
...) ** pi == any(exp(0), exp(2 * pi * pi * i), exp(4 * pi * pi * i),
...)

And all of those answers are distinct values.

 But you do have a point that we can't really use infinite junctions
 unless we can ensure that we can do all sorts of arithmetics with it
 without losing laziness. And I don't think we can prove that (but I
 might give it it shot if I have some spare time)

Just noting that we're

Re: Complex planes

2008-07-16 Thread Jon Lang
Moritz Lenz wrote:
 If the programmer errs on what he thinks is in a variable, it'll always
 be a bug.

Yes; but some bugs are easier to make, and harder to catch, than others.

 Principle of least surprise:

 Suppose sqrt(1) returns any(1, -1):
 if sqrt($x) < 0.5 { do something }

 I can see the big, fat WTF written in the face of programmer who tries
 to debug that code, and doesn't know about junctions. It just won't DTRT.

This is closely related to my original point.  In particular, if
you're unwilling to have sqrt return junctions of distinct values, you
don't really want to mess with junctions of a single complex value on
different planes, either.

 And even then, I'm concerned that it might very quickly get out of
 hand.  Consider:

   pow(1, 1/pi() ) :any - 1

 (I think I got that right...)

 Not quite. Afaict the only functions that might return a junction are
 Complex.angle and Complex.log.

 Why is that?

 As I pointed out above it's insane to return a junction of logically
 distinct values.

It's only insane if the programmer isn't expecting it - which goes
back to my first point of making sure that the programmer explicitly
asked for it before giving it to him.

 It might even be insane to do it for Complex.log:

Agreed: if you are uncomfortable with sqrt(1) returning a junction of
distinct values, then Complex.log should likewise not return a
junction of distinct values.

 I think that I don't have to comment on the rest of the mail to make
 clear that Larry's proposal, although being quite interesting, is a very
 bad idea to actually implement (and very hard to implement as well)
 (unless somebody comes to its rescue with a really clever idea on how to
 resolve all these weirdnesses).

Well... yes and no.  Remember, I started off by recommending against
Larry's proposal.  I haven't changed my mind, although I think that
it's worth exploring whether or not an alternate treatment of complex
numbers is doable.

-- 
Jonathan Dataweaver Lang


Re: Complex planes

2008-07-16 Thread Jon Lang
Mark Biggar wrote:
 Let's worry about getting principal values, branch cuts and handling signed 
 zeros correct before dealing with the interaction of junctions and 
 multi-valued complex functions.

Indeed.

 BTW, two good references on this that we might want to plagiarize... I mean 
 borrow from are the Ada Reference Manual and the Common LISP Reference 
 Manual.  Both go into great detail on this subject.

I've just reviewed the Ada Reference Manual's take on this topic, and
it did indeed address a few wrinkles that I overlooked.

Basic rule: Complex.modulus >= 0; Complex.arg ~~ -pi .. pi.

I'm not fully up on the lingo; so I'm going to invent some:  Number $x
is 'indefinite' if it is defined but has a value of Inf, NaN, or the
like.  It is 'definite' if it is defined and not indefinite.

If either .re or .im is indefinite, the complex number is indefinite.

If signed zeroes are used, then a definite complex number can always
be assigned to one of the four quadrants.  (In particular, if .im ==
-0, the number falls in one of the lower quadrants, and .arg == -pi or
-0, depending on whether .re is negative or positive, respectively; if
.im == +0, the number falls in one of the upper quadrants, and .arg ==
pi or +0.  It is impossible for .arg to be indefinite if both .re and
.im are definite: complex zeroes are signed by a definite .arg.)
Conjecture: signed zeroes should be accompanied by signed infinities:
as with zeroes, the sign of a complex infinity is a definite
argument.
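The signed-zero paradigm is exactly what IEEE atan2 implements, so it can be demonstrated directly (Python's math.atan2 standing in for .arg):

```python
import math

# With signed zeroes, a zero imaginary part still selects an upper or
# lower quadrant, so .arg on the negative real axis is +pi or -pi
# depending on the sign of the zero.
assert math.atan2(0.0, -1.0) == math.pi      # .im == +0, .re < 0
assert math.atan2(-0.0, -1.0) == -math.pi    # .im == -0, .re < 0
assert math.atan2(0.0, 1.0) == 0.0           # .im == +0, .re > 0
# .im == -0, .re > 0 yields .arg == -0 (a signed zero result):
assert math.copysign(1, math.atan2(-0.0, 1.0)) == -1
```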

Without signed zeroes, a definite complex number can be assigned to
one of the four quadrants, to the positive or negative real number
axes, to the positive or negative imaginary number axes, or to the
origin.  Under this scheme, some modifications and caveats need to be
stated: .arg ~~ -pi ^.. pi.  (So if the number falls on the negative
real number line, .arg == pi.) If the number is zero, .arg is
indefinite.  If the number is indefinite, .arg is indefinite.  If .re
is indefinite, then .im is indefinite, and vice versa.

The first paradigm has fewer special cases than the second one, but
has several redundancies of the same nature as signed zeroes; the
second paradigm more closely reflects what a mathematician would
expect when seeing a complex number, but has several incongruities
that might give a programmer headaches.

-- 
Jonathan Dataweaver Lang


Re: Another adverb on operator question

2008-08-07 Thread Jon Lang
Perhaps I'm missing something; but why couldn't you say '[lt]:lc $a, $b, $c'?

That is, I see the reducing meta-operator as a way of taking a
list-associative infix operator and treating it as a function call,
albeit one with a funny name.  As such, you should be able to do
things with a reduced operator in the same way that you could with any
function that has a signature of (*@args, *%adverb).  Am I wrong?

-- 
Jonathan Dataweaver Lang


Re: [svn:perl6-synopsis] r14574 - doc/trunk/design/syn

2008-08-08 Thread Jon Lang
But are 'twas and -x valid identifiers?  IMHO, they should not be.

-- 
Jonathan Dataweaver Lang


Re: Differential Subscripts

2008-08-08 Thread Jon Lang
On Fri, Aug 8, 2008 at 7:41 PM, John M. Dlugosz
[EMAIL PROTECTED] wrote:
 How is @array[*-2] supposed to be implemented?

 S09v28
 // reported again 8-Aug-2008

 Is this magic known to the parser at a low level, or is it possible to
 define your own postcircumfix operators that interact with the
 interpretation of the argument?

IMHO, this can best be answered by the implementors: it depends on how
difficult it would be to enable user-defined code.  My own bias would
be toward the latter; but I'm not an implementor.

 Does this use of * apply to any user-defined postcurcumfix:[ ] or just
 those defining the function for the Positional role, or what?  And whether
 or not it always works or is particular to the built-in Array class, we need
 to define precisely what the * calls on the object, because those may be
 overridden. For example, it is obvious that a lone * in an expression calls
 .length.  But the *{$idx} and *[$idx] work by calling some method on the
 Array object, right?  Something like @array.user_defined_index<May> to
 discover that May maps to 5, or @array.user_defined_index[5] to produce May.
  That is, the accessor .user_defined_index() returns an object that supports
 [] and {} just like an Array.  It may or might not be an Array populated
 with the inverse mappings. But it behaves that way.

 But that doesn't work for multidimensional arrays.  The meaning of *{} is
 dependent on its position in the list.  This means that it must be able to
 count the semicolons!  If the slice is composed at runtime, it needs to
 operate at runtime as well, and this implies that the called function is
 told which dimension it is operating upon!

This is also true when using a lone * in an expression: the number of
elements that it represents depends on which dimension it appears in.
As well, a similar point applies to **.

IIRC, there's supposed to be a 'shape' method that provides
introspection on the index.  Exactly what it's capable of and how it
works has not been established AFAIK.

 Is it possible to write:
@i = (1, 3, *-1);
say @data[@i];
 and get the same meaning as
say @data[1, 3, *-1]?

I believe so.  IIRC, evaluation of * is lazy; it gets postponed until
there's enough context to establish what its meaning is supposed to
be.

-- 
Jonathan Dataweaver Lang


Re: Multiple Return Values - details fleshed out

2008-08-09 Thread Jon Lang
John M. Dlugosz wrote:
 I wrote http://www.dlugosz.com/Perl6/web/return.html to clarify and
 extrapolate from what is written in the Synopses.

A few comments:

1. I was under the impression that identifiers couldn't end with - or '.
2. While list context won't work with named return values, hash
context ought to.
3. The positional parameters of a Capture object essentially act as a
list with a user-defined index, don't they?  There _is_ some crossover
between the named parameter keys and the positional parameter
user-defined index, in that you ought to be able to provide a name and
have the Capture figure out which one you're asking for.  This works
because there should be no overlap between the keys of the hash and
the user-defined indices of the list.

-- 
Jonathan Dataweaver Lang


Re: The False Cognate problem and what Roles are still missing

2008-08-20 Thread Jon Lang
On Wed, Aug 20, 2008 at 3:16 PM, Aristotle Pagaltzis [EMAIL PROTECTED] wrote:
 Hi $Larry et al,

 I brought this up as a question at YAPC::EU and found to my
 surprise that no one seems to have thought of it yet. This is
 the mail I said I'd write. (And apologies, Larry. :-) )

 Consider the classic example of roles named Dog and Tree which
 both have a `bark` method. Then there is a class that for some
 inexplicable reason, assumes both roles. Maybe it is called
 Mutant. This is standard fare so far: the class resolves the
 conflict by renaming Dog's `bark` to `yap` and all is well.

 But now consider that Dog has a method `on_sound_heard` that
 calls `bark`.

 You clearly don't want that to suddenly call Tree's `bark`.

 Unless, of course, you actually do.

 It therefore seems necessary to me to specify dispatch such that
 method calls in the Dog role invoke the original Dog role methods
 where such methods exist. There also needs to be a way for a
 class that assumes a role to explicitly declare that it wants
 to override that decision. Thus, by default, when you say that
 Mutant does both Dog and Tree, Dog's methods do not silently
 mutate their semantics. You can cause them to do so, but you
 should have to ask for that.

 I am, as I mentioned initially, surprised that no one seems to
 have considered this issue, because I always thought this is what
 avoiding the False Cognate problem of mixins, as chromatic likes
 to call it, ultimately implies at the deepest level: that roles
 provide scope for their innards that preserves their identity and
 integrity (unless, of course, you explicitly stick your hands in),
 kind of like the safety that hygienic macros provide.

My thoughts:

Much of the difficulty comes from the fact that Mutant [i]doesn't[/i]
rename Dog::bark; it overrides it.  That is, a conflict exists between
Dog::bark and Tree::bark, so a class or role that composes both
effectively gets one that automatically fails.  You then create an
explicit Mutant::bark method that overrides the conflicted one; in its
body, you call the Tree::bark method (or the Dog::bark method, or both
in sequence, or neither, or...)  As such, there's no obvious link
between Mutant::bark and Tree::bark.  Likewise, you don't rename
Dog::bark; you create a new Mutant::yap that calls Dog::bark.

One thing that might help would be a trait for methods that tells us
where it came from - that is, which - if any - of the composed methods
it calls.  For instance:

  role Mutant does Dog does Tree {
method bark() was Tree::bark;
method yap() was Dog::bark;
  }

As I envision it, was sets things up so that you can query, e.g.,
Mutant::yap and find out that it's intended as a replacement for
Dog::bark.  Or you could ask the Mutant role for the method that
replaces Dog::bark, and it would return Mutant::yap.

It also provides a default code block that does nothing more than to
call Dog::bark; unless you override this with your own code block, the
result is that Mutant::yap behaves exactly like Dog::bark.

By default, this is what other methods composed in from Dog do: they
ask Mutant what Dog::bark is called these days, and then call that
method.  All that's left is to decide how to tell them to ask about
Tree::bark instead, if that's what you want them to do.
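The bookkeeping the proposed "was" trait would do can be sketched in Python (the `_replaces` map and method names are illustrative assumptions, not spec):

```python
# The class records which composed method each new method stands in
# for; methods composed from Dog then look the replacement up instead
# of hard-coding a call to bark().
class Mutant:
    _replaces = {"Dog::bark": "yap", "Tree::bark": "bark"}

    def bark(self):          # was Tree::bark
        return "creak"

    def yap(self):           # was Dog::bark
        return "woof"

    def on_sound_heard(self):
        # composed from Dog: ask what Dog::bark is called these days
        method = self._replaces.get("Dog::bark", "bark")
        return getattr(self, method)()

assert Mutant().on_sound_heard() == "woof"  # Dog semantics preserved
assert Mutant().bark() == "creak"           # Tree's bark untouched
```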

-- 
Jonathan Dataweaver Lang


Re: adverbial form of Pairs notation question

2008-09-08 Thread Jon Lang
TSa wrote:
 Ahh, I see. Thanks for the hint. It's actually comma that builds lists.
 So we could go with () for undef and require (1,) and (,) for the single
 element and empty list respectively. But then +(1,2,3,()) == 4.

Actually, note that both infix:<,> and circumfix:<[ ]> can be used to
build lists; so [1] and [] can be used to construct single-element and
empty lists, respectively.

 On the other hand, there's just that sort of double discontinuity in
 English pluralization, where we say 2 (or more) items, then 1
 item, but then no items.  So perhaps it's justifiable in Perl6 as
 well.

 I would also opt for () meaning empty list as a *defined* value. Pairs
 that shall receive the empty list as value could be abbreviated from
 :foo(()) to :foo(,). As long as the distinction between Array and List
 doesn't matter one can also say :foo[], of course.

Personally, I'd like to see '()' capture the concept of nothing in
the same way that '*' captures the concept of whatever.  There _may_
even be justification for differentiating between this and something
that is undefined (which 'undef' covers).  Or not; I'm not sure of
the intricacies of this.  One possibility might be that '1, 2, undef'
results in a three-item list '[1, 2, undef]', whereas '1, 2, ()'
results in a two-item list '[1, 2]' - but that may be a can of worms
that we don't want to open.
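The distinction being floated can be sketched in Python, with `()` as a defined "nothing" that vanishes in list context and None standing in for undef (pure illustration of the proposed semantics, not existing Perl 6 behaviour):

```python
def make_list(*items):
    """Build a list in which () disappears but None survives."""
    return [x for x in items if x != ()]

# '1, 2, undef' keeps three items:
assert make_list(1, 2, None) == [1, 2, None]
# '1, 2, ()' collapses to two:
assert make_list(1, 2, ()) == [1, 2]
```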

-- 
Jonathan Dataweaver Lang


Re: Should $.foo attributes without is rw be writable from within the class

2008-09-19 Thread Jon Lang
Daniel Ruoso wrote:
 TSa wrote:
 May I pose three more questions?

 1. I guess that even using $!A::bar in methods of B is an
 access violation, right? I.e. A needs to trust B for that
 to be allowed.

 Yes

 2. The object has to carry $!A::bar and $!B::bar separately, right?

 Yes

 3. How are attribute storage locations handled in multiple inheritance?
 Are all base classes virtual and hence their slots appear only once
 in the object's storage?

 In SMOP, it is handled based on the package of the Class, the private
 storage inside the object is something like

   $obj.^!private_storage<A::$!bar>

 and

   $obj.^!private_storage<B::$!bar>

Note that this ought only be true of class inheritance; with role
composition, there should only be one $!bar in the class, no matter
how many roles define it.

-- 
Jonathan Dataweaver Lang


Re: Should $.foo attributes without is rw be writable from within the class

2008-09-19 Thread Jon Lang
Daniel Ruoso wrote:
 Jon Lang wrote:
 Note that this ought only be true of class inheritance; with role
 composition, there should only be one $!bar in the class, no matter
 how many roles define it.

 er... what does that mean exactly?

Unless something has drastically changed since I last checked the
docs, roles tend to be even more ethereal than classes are.  You can
think of a class as being the engine that runs objects; a role, OTOH,
should be thought of as a blueprint that is used to construct a class.
 Taking your example:

   role B {
   has $!a;
   }

   role C {
   has $!a;
   }

   class A does B, C {
   method foo() {
  say $!a;
   }
   }

 I think in this case $!B::a and $!C::a won't ever be visible, while the
 reference to $!a in class A will be a compile time error.

:snip:

 Or does that mean that

class A does B, C {...}

 actually makes the declarations in B and C as if it were declared in the
 class A?

Correct.  Declarations in roles are _always_ treated as if they were
declared in the class into which they're composed.  And since only
classes are used to instantiate objects, the only time that a role
actually gets used is when it is composed into a class.

-- 
Jonathan Dataweaver Lang


Re: XPath grammars (Was: Re: globs and trees in Perl6)

2008-10-02 Thread Jon Lang
For tree-oriented pattern matching syntax, I'd recommend for
inspiration the RELAX NG Compact Syntax, rather than XPath.
Technically, RELAX NG is an XML schema validation language; but the
basic principle that it uses is to describe a tree-oriented pattern,
and to consider the document to be valid if it matches the pattern.

XPath, by contrast, isn't so much about pattern matching as it is
about creating a tree-oriented addressing scheme.

Also note that S05 includes an option near the end about matching
elements of a list rather than characters of a string; IMHO, a robust
structured data-oriented pattern-matching technology for perl6 ought
to use that as a starting point.

-- 
Jonathan Dataweaver Lang


Re: globs and rules and trees, oh my! (was: Re: XPath grammars (Was: Re: globs and trees in Perl6))

2008-10-03 Thread Jon Lang
Timothy S. Nelson wrote:
 TimToady note to treematching folks: it is envisaged that signatures in
 a rule will match nodes in a tree

My question is, how is this expected to work?  Can someone give an
 example?

I'm assuming that this relates to Jon Lang's comment about using
 rules to match non-strings.

Pretty much - although there are some patterns that one might want to
use that can't adequately be expressed in this way - at least, not
without relaxing some of the constraints on signature definition.
Some examples:

A signature always anchors its positional parameters pattern to the
first and last positional parameters (analogous to having implicit '^'
and '$' markup at the start and end of a textual pattern), and does
not provide any sort of zero or more/one or more qualifiers, other
than a single tail-end slurpy list option.  Its zero or one
qualifier is likewise constrained in that once you use an optional
positional, you're limited to optionals and slurpies from that point
on.  This makes it difficult to set up a pattern that matches, e.g.,
any instance within the list of a string followed immediately by a
number.
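Written directly, the unanchored pattern in question is easy to state imperatively, which is what makes its absence from signature syntax conspicuous (a Python illustration, not proposed syntax):

```python
def str_then_num(items):
    """Every index where a string is immediately followed by a number -
    anywhere in the list, any number of times; a signature can anchor
    only at the ends and so cannot express this."""
    return [i for i in range(len(items) - 1)
            if isinstance(items[i], str)
            and isinstance(items[i + 1], (int, float))]

assert str_then_num([1, "a", 2, "b", "c", 3]) == [1, 4]
assert str_then_num(["x", "y"]) == []
```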

The other issue that signatures-as-patterns doesn't handle very well
is that of capturing and returning matches.  I suppose that this could
be handled, to a limited extent, by breaking the signature up into
several signatures joined together by ',', and then indicating which
sub-signatures are to be returned; but that doesn't work too well
once hierarchical arrangements are introduced.

Perhaps an approach more compatible with normal rules syntax might be
to introduce a series of xml-like tags:

<[ ... ]> lets you denote a nested list of patterns - analogous to
what [ ... ] does outside of rules.  Within its reach, '^' and '$'
refer to just before the first element and just after the last
element, respectively.  Otherwise, this works just like the list of
objects and/or strings patterns currently described in S05.

<{ ... }> does likewise with a nested hash of values, with standard
pair notation being used within in order to link key patterns to value
patterns.  Since hashes are not ordered, '^' and '$' would be
meaningless within this context.  Heck, order in general is
meaningless within this context.

<item> replaces <elem> as the object-based equivalent of '.' ('elem'
is too list-oriented of a term).  I'd recommend doing this even if you
don't take either of the suggestions above.

You might even do a <[[ ... ]]> pairing to denote a list that is
nested perhaps more than one layer down.  Or perhaps that could be
handled by using '[+' or the like.

 But how would it be if I wanted to search a tree for all nodes
 whose readonly attribute were true, and return an array of
 those nodes?

This can already be done, for the most part:

/ (.does(ro)) /

Mind you, this only searches a list; to make it search a tree, you'd
need a drill-down subrule such as I outline above:

/ [* (.does(ro)) ]* /

-- 
Jonathan Dataweaver Lang


Re: [svn:perl6-synopsis] r14586 - doc/trunk/design/syn

2008-10-05 Thread Jon Lang
[EMAIL PROTECTED] wrote:
 Log:
 Add missing series operator, mostly for readability.

Is there a way for the continuing function to access its index as well
as, or instead of, the values of one or more preceding terms?  And/or
to access elements by counting forward from the start rather than
backward from the end?

There is a mathematical technique whereby any series that takes the
form of F(n) = A*F(n-1) + B*F(n-2) + C*F(n-3) can be reformulated as
a function of n, A, B, C, F(0), F(1), and F(2).  (And it is not
limited to three terms; it can be as few as one or as many as n-1 -
although it has to be the same number for every calculated term in the
series.)  For the Fibonacci series, it's something like:

F(n) = (pow((1 + sqrt(5))/2, n) - pow((1 - sqrt(5))/2, n))/sqrt(5)

...or something to that effect.  It would be nice if the programmer
were given the tools to do this sort of thing explicitly instead of
having to rely on the optimizer to know how to do this implicitly.
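As a concrete check of the technique, here is the Fibonacci closed form (Binet's formula) verified against the recurrence it replaces, in Python:

```python
from math import sqrt

def fib_closed(n):
    """F(n) as a direct function of n.  Exact for moderate n; floating-
    point drift for large n is one reason an optimizer can't always do
    this rewriting silently."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** n - psi ** n) / sqrt(5))

def fib_recurrent(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [fib_closed(n) for n in range(12)] == [fib_recurrent(n) for n in range(12)]
```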

-- 
Jonathan Dataweaver Lang


Re: [svn:perl6-synopsis] r14586 - doc/trunk/design/syn

2008-10-06 Thread Jon Lang
Larry Wall wrote:
 On Sun, Oct 05, 2008 at 08:19:42PM -0700, Jon Lang wrote:
 : [EMAIL PROTECTED] wrote:
 :  Log:
 :  Add missing series operator, mostly for readability.
 :
 : Is there a way for the continuing function to access its index as well
 : as, or instead of, the values of one or more preceding terms?  And/or
 : to access elements by counting forward from the start rather than
 : backward from the end?

 That's what the other message was about.  @_ represents the entire list
 generated so far, so you can look at its length or index it from the
 beginning.  Not guaranteed to be as efficient though.

If I understand you correctly, an All even numbers list could be written as:

  my @even = () ... { 2 * +@_ }

And the Fibonacci series could be written as:

  my @fib = () ... { (pow((1 + sqrt(5))/2, +@_) - pow((1 -
sqrt(5))/2, +@_)) / sqrt(5) }

Mind you, these are bulkier than the versions described in the patch.
And as presented, they don't have any advantage to offset their
bulkiness, because you still have to determine every intervening
element in sequential order.  If I could somehow replace '+@_' in 
the
above code with an integer that identifies the element that's being
asked for, it would be possible to skip over the unnecessary elements,
leaving them undefined until they're directly requested.  So:

  say @fib[4];

would be able to calculate the fifth fibonacci number without first
calculating the prior four.

It's possible that the '...' series operator might not be the right
way to provide random access to elements.  Perhaps there should be two
series operators, one for sequential access (i.e., 'infix:<...>') and
one for random access (e.g., 'infix:<...[]>').  This might clean
things up a lot: the sequential access series operator would feed the
last several elements into the generator:

  0, 1 ... -> $a, $b { $a + $b }

while the random access series operator would feed the requested index
into the generator:

  () ...[] -> $n { (pow((1 + sqrt(5))/2, $n) - pow((1 - sqrt(5))/2,
$n)) / sqrt(5) }

I'd suggest that both feed the existing array into @_.
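The split between the two generator styles can be sketched in Python (the Series class and its keyword names are illustrative assumptions, not spec):

```python
from math import sqrt

class Series:
    """Sequential generators see the list built so far (like @_);
    random-access generators are handed only the requested index."""
    def __init__(self, seed=(), sequential=None, random_access=None):
        self.cache = list(seed)
        self.sequential = sequential
        self.random_access = random_access

    def __getitem__(self, n):
        if self.random_access is not None:
            return self.random_access(n)    # skip intervening elements
        while len(self.cache) <= n:         # extend sequentially
            self.cache.append(self.sequential(self.cache))
        return self.cache[n]

fib_seq = Series([0, 1], sequential=lambda a: a[-1] + a[-2])
fib_rand = Series(random_access=lambda n: round(
    (((1 + sqrt(5)) / 2) ** n - ((1 - sqrt(5)) / 2) ** n) / sqrt(5)))

assert fib_seq[6] == 8
assert fib_rand[6] == 8   # computed without filling in elements 0..5
```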
__
 : It would be nice if the programmer
 : were given the tools to do this sort of thing explicitly instead of
 : having to rely on the optimizer to know how to do this implicitly.

 Um, I don't understand what you're asking for.  Explicit solutions
 are always available...

This was a reaction to something I (mis)read in the patch, concerning
what to do when the series operator is followed by a 'whatever'.
Please ignore.
__
On an additional note: the above patch introduces some ambiguity into
the documentation.  Specifically, compare the following three lines:

X  List infix    Z minmax X X~X X*X XeqvX ...
R  List prefix   : print push say die map substr ... [+] [*] any $ @

N  Terminator    ; <==, ==>, <<==, ==>>, {...}, unless, extra ), ], }

On the first line, '...' is the name of an operator; on the second and
third lines, '...' is documentation intended to mean ...and so on
and yadda-yadda, respectively.  However, it is not immediately
apparent that this is so: a casual reader will be inclined to read the
first line as ...and so on rather than 'infix:...', and will not
realize his error until he gets down to the point where the series
operator is defined.
__
Another question: what would the following do?

  0 ... { $_ + 2 } ... infix:<+> ... *

If I'm reading it right, this would be the same as:

  infix:<...>(0; { $_ + 2 }; infix:<+>; *)

...but beyond that, I'm lost.
__
Jonathan Dataweaver Lang


Re: [svn:perl6-synopsis] r14598 - doc/trunk/design/syn

2008-10-17 Thread Jon Lang
also seems to be the junctive equivalent of andthen.  Should there
be a junctive equivalent of orelse?

-- 
Jonathan Dataweaver Lang


Re: Are eqv and === junction aware?

2008-11-13 Thread Jon Lang
Larry Wall wrote:
 eqv and === autothread just like any other comparisons.  If you really
 want to compare the contents of two junctions, you have to use the
 results of some magical .eigenmumble method to return the contents
 as a non-junction.  Possibly stringification will be sufficient, if
 it canonicalizes the output order.

Perhaps there should be a way of disabling the autothreading?
Something similar to the way that Lisp can take a block of code and
tag it as do not execute at this time.  The idea is that there may
be some cases where one might want to look at a Junction as an object
in and of itself, rather than as a superposition of other objects; and
simply extracting its contents into a list or set won't always do,
since there might be Junction-specific details at which you want to look.
 And as long as Junctions autothread by default, I don't see them
losing any power.
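For reference, the default behavior can be sketched in current Rakudo terms:

```raku
my $j = any(1, 2, 3);

say $j == 2;         # comparisons autothread: any(False, True, False)
say so $j == 2;      # collapsing the junction: True

# A type check still sees the Junction as an object in its own right:
say $j ~~ Junction;  # True
```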

-- 
Jonathan Dataweaver Lang


Re: [perl #60732] Hash indexes shouldn't work on array refs

2008-11-24 Thread Jon Lang
On Mon, Nov 24, 2008 at 7:19 AM, Rafael Garcia-Suarez
[EMAIL PROTECTED] wrote:
 Moritz Lenz wrote in perl.perl6.compiler :
 jerry gay wrote:
 On Fri, Nov 21, 2008 at 10:43, via RT Moritz Lenz
 [EMAIL PROTECTED] wrote:
 # New Ticket Created by  Moritz Lenz
 # Please include the string:  [perl #60732]
 # in the subject line of all future correspondence about this issue.
 # URL: http://rt.perl.org/rt3/Ticket/Display.html?id=60732 


 From #perl6 today:

19:33  moritz_ rakudo: my $x = [ 42 ]; say $x<0>
 19:33  p6eval rakudo 32984: OUTPUT[42␤]

 I don't think that should be allowed.

 the real test is:

(8:52:47 PM) [particle]1: rakudo: my $x = [42]; say $x<0_but_true>;
(8:52:49 PM) p6eval: rakudo 32998: OUTPUT[42␤]
(8:53:38 PM) [particle]1: rakudo: my $x = [42]; say $x<true_but_0>;
(8:53:40 PM) p6eval: rakudo 32998: OUTPUT[42␤]
(8:53:50 PM) [particle]1: rakudo: my $x = [42]; say $x<XXX>;
(8:53:52 PM) p6eval: rakudo 32998: OUTPUT[42␤]
(8:54:37 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x<XXX>;
(8:54:39 PM) p6eval: rakudo 32998: OUTPUT[a␤]
(8:58:41 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x<1.4>;
(8:58:44 PM) p6eval: rakudo 32998: OUTPUT[42␤]
(8:58:48 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x<0.4>;
(8:58:50 PM) p6eval: rakudo 32998: OUTPUT[a␤]

 so, the index is coerced to an integer. is that really wrong?
 ~jerry

 IMHO yes, because Perl explicitly distinguishes between arrays and
 hashes (and it's one of the things we never regretted, I think ;-). Any
 intermixing between the two would only lead to confusion, especially if
 somebody writes a class whose objects are both hash and array.

 Yes, that leads to confusion. (And confusion leads to anger, and so on)
 Which is why we removed pseudo-hashes from Perl 5.10.

Perl 6 explicitly _does_ allow hash-reference syntax, for the specific
purpose of customized indices.  That said, the sample code would not
work, since you must explicitly define your custom index before you
use it.  Examples from the Synopses include a custom index that's
based on the months of the year, so that you can say, e.g., @x<Jan>
instead of @x[0].

-- 
Jonathan Dataweaver Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-03 Thread Jon Lang
Aristotle Pagaltzis wrote:
 * Bruce Gray [EMAIL PROTECTED] [2008-12-03 18:20]:
 In Perl 5 or Perl 6, why not move the grep() into the while()?

 Because it's only a figurative example and you're supposed to
 consider the general problem, not nitpick the specific example…

But how is that not a general solution?  You wanted something where
you only have to set the test conditions in one place; what's wrong
with that one place being inside the while()?

-- 
Jonathan Dataweaver Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-03 Thread Jon Lang
Mark J. Reed wrote:
 Mark J. Reed wrote:
 loop
 {
   doSomething();
   last unless someCondition();
   doSomethingElse();
 }

 That is, of course, merely the while(1) version from Aristotle's
 original message rewritten with Perl 6's loop keyword.  As I said, I'm
 OK with that, personally, but it's clearly not what he's looking for.

But maybe it is.  I suspect that the difficulty with the while(1)
version was the kludgey syntax; the loop syntax that you describe does
the same thing (i.e., putting the test in the middle of the loop block
instead of at the start or end of it), but in a much more elegant
manner.  The only thing that it doesn't do that a more traditional
loop construct manages is to make the loop condition stand out
visually.

-- 
Jonathan Dataweaver Lang


Re: how to write literals of some Perl 6 types?

2008-12-03 Thread Jon Lang
Darren Duncan wrote:
 Now, with some basic types, I know how to do it, examples:

  Bool # Bool::True

Please forgive my ignorance; but are there any cases where
'Bool::True' can be spelled more concisely as 'True'?  Otherwise, this
approach seems awfully cluttered.

-- 
Jonathan Dataweaver Lang


Re: Roles and IO?

2008-12-11 Thread Jon Lang
Leon Timmermans wrote:
 What I propose is using role composition for *everything*. Most
 importantly that includes the roles Readable and Writable, but also
 things like Seekable, Mapable, Pollable, Statable, Ownable, Buffered
 (does Readable), Socket, Acceptable (does Pollable), and more.

 That may however make some interfaces is a bit wordy. I think that can
 be conveyed using a subset like this (though that may be abusing the
 feature).

 subset File of Mapable & Pollable & Statable & Ownable;

subset is the wrong approach: a subset is about taking an existing
role and restricting the range of objects that it will match.  What
you're really asking for are composite roles:

  role File does Mappable does Pollable does Statable does Ownable {}

One of the things about roles is that once you have composed a bunch
of them into another role, they're considered to be composed into
whatever that role is composed into.  So "does File" would be
equivalent to "does Mappable does Pollable does Statable does Ownable"
(barring potential conflicts between Mappable, Pollable, Statable, and
Ownable which File would presumably resolve).
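A minimal sketch of that composition (the role bodies and the class here are hypothetical placeholders):

```raku
role Mappable { method map-file { ... } }
role Pollable { method poll     { ... } }
role Statable { method stat     { ... } }
role Ownable  { method owner    { ... } }

# Composing the four roles into File...
role File does Mappable does Pollable does Statable does Ownable {}

# ...obliges a class doing File to satisfy all four, and the class
# then matches each of them as well as File itself.
class UnixFile does File {
    method map-file { }
    method poll     { }
    method stat     { }
    method owner    { }
}

say UnixFile ~~ File;      # True
say UnixFile ~~ Mappable;  # True
```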

-- 
Jonathan Dataweaver Lang


Re: List.end - last item and last index mixing

2008-12-12 Thread Jon Lang
Moritz Lenz wrote:
 From S29:

 : =item end
 :
 :  our Any method end (@array: ) is export
 :
 : Returns the final subscript of the first dimension; for a one-dimensional
 : array this is simply the index of the final element.  For fixed dimensions
 : this is the declared maximum subscript.  For non-fixed dimensions
 (undeclared
 : or explicitly declared with C<*>), the actual last element is used.


 The last sentence  seems to suggest that not the index of the last
 element is returned, but the element itself. (Which I think is pretty weird)


 And S02:

 : The C<$#foo> notation is dead.  Use C<@foo.end> or C<@foo[*-1]> instead.
 : (Or C<@foo.shape[$dimension]> for multidimensional arrays.)

 That doesn't clean it up totally either.

 So what should <a b c>.end return? 2 or 'c'?
 (Currently pugs and elf return 2, rakudo 'c').

@foo[*-1] would return 'c'.  @foo[*-1]:k would return 2.  So the
question is whether @foo.end returns @foo[*-1] or @foo[*-1]:k.

You might also allow 'end' to take an adverb the way that 'postfix:<[ ]>'
does, allowing you to explicitly choose what you want returned; but
that still doesn't answer the question of what to return by default.

-- 
Jonathan Dataweaver Lang


Re: Roles and IO?

2008-12-12 Thread Jon Lang
Leon Timmermans wrote:
 I assumed a new role makes a new interface. In other words, that a
 type that happens to do Pollable, Mappable, Statable and Ownable
 wouldn't automatically do File in that case. If I was wrong my abuse
 of subset wouldn't be necessary. Otherwise, maybe there should be a
 clean way to do that.

Hmm... true enough.

There was another contributer here who proposed an is like modifier
for type matching - I believe that he used £ for the purpose, in that
'File' would mean 'anything that does File', while '£ File' would mean
'anything that can be used in the same sort of ways as File'.  That
is, perl 6 uses nominative typing by default, while '£' would cause it
to use structural typing (i.e., duck-typing) instead.

FWIW, I'd be inclined to have anonymous roles use duck-typing by
default - that is, "$Foo.does role {does Pollable; does Mappable; does
Statable; does Ownable}" would be roughly equivalent to "$Foo.does £
role {does Pollable; does Mappable; does Statable; does Ownable}" -
the theory being that there's no way that a role that you generate on
the fly for testing purposes will ever match any of the roles composed
into a class through nominative typing; so the only way for it to be
at all useful is if it uses structural typing.

(I say "roughly equivalent" because I can see cause for defining "£
Foo" such that it only concerns itself with whether or not the class
being tested has all of the methods that Foo has, whereas the
anonymous "role {does Foo}" would match any role that does Foo.  As
such, you could say things like:

  role {does Foo; does £ Bar; has $baz}

to test for an object that .does Foo, has all of the methods of Bar,
and has an accessor method for $baz.)

-- 
Jonathan Dataweaver Lang


Re: What does a Pair numify to?

2008-12-15 Thread Jon Lang
Mark Biggar wrote:
 The only use case I can think of is sorting a list of pairs;
  should it default to sort by key or value?

But this isn't a case of numifying a Pair, or of stringifying it - or
of coercing it at all.  If you've got a list of Pairs, you use a
sorting algorithm that's designed for sorting Pairs (which probably
sorts by key first, then uses the values to break ties).  If you've
got a list that has a mixture of Pairs and non-Pairs, I think that the
sorting algorithm should complain: it's clearly a case of being asked
to compare apples and oranges.
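For the record, a sketch of such a Pair-aware sort, assuming list-valued sort keys compare lexicographically under cmp:

```raku
my @pairs = (b => 2), (a => 2), (a => 1);

# Sort by key first; a tie on the key falls through to the value.
my @sorted = @pairs.sort({ .key, .value });

say @sorted.raku;
```

which puts both a-pairs (value 1 before value 2) ahead of the b-pair.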

When are you going to be asked to stringify or numify a Pair?  Actual
use-cases, please.  Personally, I can't think of any.

-- 
Jonathan Dataweaver Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
On Mon, Dec 15, 2008 at 10:26 PM, Larry Wall la...@wall.org wrote:
 On Mon, Dec 15, 2008 at 04:43:51PM -0700, David Green wrote:
 I can't really think of a great example where you'd want to numify a
 pair, but I would expect printing one to produce something like "a =>
 23" (especially since that's what a one-element hash would print,
 right?).

 Nope, would print "a\t23\n" as currently specced.

The point, though, is that stringification of a pair incorporates both
the key and the value into the resulting string.  This is an option
that numification doesn't have.

As well, I'm pretty sure that "a\t23\n" doesn't numify.  I'm beginning
to think that Pairs shouldn't, either; but if they do, they should
definitely do so by numifying the value of the pair.

-- 
Jonathan Dataweaver Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
TSa wrote:
 I see no problem as long as say gets a pair as argument. Then it can
 print the key and value separated with a tab. More problematic are
 string concatenations of the form

   say "the pair is: " ~ (foo => $bar);

 which need to be written so that say sees the pair

   say "the pair is: ", (foo => $bar);

 and not a string that only contains the value of the pair. I'm not
 sure if the parens are necessary to pass the pair to say as argument
 to be printed instead of a named argument that influences how the
 printing is done.

That's a good point.  Is there an easy way to distinguish between
passing a pair into a positional parameter vs. passing a value into a
named parameter?  My gut instinct would be to draw a distinction
between the different forms that Pair syntax can take, with foo =
$bar being treated as an instance of the former and :foo($bar)
being treated as an instance of the latter.  That is:

  say 'the pair is: ', foo => $bar;

would be equivalent to:

  say 'the pair is: ', "foo\t$bar\n";

while:

  say "the pair is: ", :foo($bar);

would pass $bar in as the value of the named parameter 'foo' (which, I
believe, would cause 'say' to squawk, as its signature doesn't allow
for a named parameter 'foo').
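As a point of reference, here is a sketch of one consistent rule set (the one current Rakudo uses; the sub name is invented for illustration):

```raku
sub demo(*@positional, *%named) {
    say 'positional: ', @positional.raku, '  named: ', %named.raku;
}

demo(foo => 1);    # bare identifier key: a named argument
demo(:foo(1));     # colonpair form: also named
demo((foo => 1));  # extra parens: a positional Pair argument
demo('foo' => 1);  # quoted key: a positional Pair argument
```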

-- 
Jonathan Dataweaver Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
Moritz Lenz wrote:
 Off the top of my head, see S06 for the gory details:

 my $pair = a => 'b';

 named(a => 'b');
 named(:a<b>);
 named(|$pair);

 positional((a => 'b'));
 positional((:a<b>));
 positional($pair);

As you say: the gory details, emphasis on gory.  But if that's the way
of things, so be it.

-- 
Jonathan Dataweaver Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-16 Thread Jon Lang
How do you compute '*'?  That is, how do you know how many more
iterations you have to go before you're done?

Should you really be handling this sort of thing through an iteration
count mechanism?  How do you keep track of which iteration you're on?
 Is it another detail that needs to be handled behind the scenes, or
is the index of the current iteration available to the programmer?
(Remember, we're dealing with 'while' and 'loop' as well as 'for'.)

-- 
Jonathan Dataweaver Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-18 Thread Jon Lang
Aristotle Pagaltzis wrote:
 And it says exactly what it's supposed to say in the absolutely
 most straightforward manner possible. The order of execution is
 crystal clear, the intent behind the loop completely explicit.

If it works for you, great!  Personally, it doesn't strike me as being
as straightforward as putting a last unless clause into the middle
of an otherwise-infinite loop; but then, that's why Perl (both 5 and
6) works on the principle of TIMTOWTDI: you do it your way, and I'll
do it mine.

-- 
Jonathan Dataweaver Lang


Re: returning one or several values from a routine

2009-01-05 Thread Jon Lang
Daniel Ruoso wrote:

 Hi,

 As smop and mildew now support ControlExceptionReturn (see
 v6/mildew/t/return_function.t), an important question raised:

  sub plural { return 1,2 }
  sub singular { return 1 }
  my @a = plural();
  my $b = plural();
  my @c = singular();
  my $d = singular();

 What should @a, $b, @c and $d contain?

 Note that the spec says explicitly that a Capture should be returned,
 delaying the context at which the value will be used, this allows

  sub named { return :x<1> }
  my $x := |(named);

 So, this also means that assigning

  my @a = plural();
  my @c = singular();

 forces list context in the capture, which should return all positional
 parameters, as expected. But

  my $b = plural();
  my $d = singular();

 would force item context in the capture, and here is the problem, as a
 capture in item context was supposed to return the invocant.

If item context is supposed to return the invocant, then it would seem
to me that returning a single value from a sub would put that value
into the capture object's invocant.  This would mean that the problem
crops up under 'my @c = singular()' instead of 'my $b = plural()'.

The idea in the spec is that the capture object can hold an item, a
distinct list, and a distinct hash all at once.  The problem that
we're encountering here is that there are times when the difference
between an item and a one-item list is fuzzy.  We _could_ kludge it by
saying that when a sub returns an item $x, it gets returned as a
capture object ($invocant := $x: $param1 := $x) or some such; but this
_is_ a kludge, which has the potential for unexpected and unsightly
developments later on.

Another option would be to change the way that applying item context
to a capture object works in general, to allow for the possibility
that a single-item list was actually intended to be a single item: if
there's no invocant, but there is exactly one positional parameter,
return the positional parameter instead:

  $a = |("title": 1)
  $b = |("title":)
  $c = |(1)

  $x = item $a; # $x == "title"
  $x = item $b; # $x == "title"
  $x = item $c; # $x == 1

  $x = list $a; # $x == [1]
  $x = list $b; # $x == []
  $x = list $c; # $x == [1]

With this approach, return values would return values as positional
parameters unless a conscious effort was made to do otherwise.

But let's say that you really wanted to get the invocant of a capture
object.  You can still do so:

  |($x:) = $a; # $x == "title"
  |($x:) = $b; # $x == "title"
  |($x:) = $c; # $x == undef

Likewise, you could specify that you want the first positional
parameter of the capture object by saying:

  |($x) = $a; # $x == 1
  |($x) = $b; # $x == undef
  |($x) = $c; # $x == 1

This isn't as clean as a straight mapping of invocant to item,
positional to list, and named to hash; but I think that it's got
better dwimmery.

--
Jonathan Dataweaver Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Daniel Ruoso wrote:
 I've just realized we were missing some spots, so remaking the list of
 possibilities

  my $a = sub s1 { return a => 1 }
  my $b = sub s2 { return a => 1, b => 2 }
  my $c = sub s3 { return 1, 2, 3, a => 1, b => 2 }
  my $d = sub s4 { return 1 }
  my $e = sub s5 { return 1, 2, 3 }
  my $f = sub s6 { return 1: # it doesn't matter  }
  my $g = sub s7 { return }

 But while writing this email, I've realized that a Capture object is
 supposed to implement both .[] and .{}, so maybe we can just simplify...

Bear in mind that some list objects can use .{} (for customized
indices) as well as .[] (for standard indices).  As such, $e ought to
get a List rather than a Capture.  And if you're going to go that far,
you might as well go the one extra step and say that $b gets a Hash
rather than a Capture.

But I agree about $a, $c, $d, $f, and $g.

  $g is an undefined Object
  $f is 1
  $d is 1
  $a is a Pair
  everything else is the Capture itself

Of course, that's only a third of the problem.  What should people
expect with each of these:

  my @a = sub s1 { return a => 1 }
  my @b = sub s2 { return a => 1, b => 2 }
  my @c = sub s3 { return 1, 2, 3, a => 1, b => 2 }
  my @d = sub s4 { return 1 }
  my @e = sub s5 { return 1, 2, 3 }
  my @f = sub s6 { return 1: }
  my @g = sub s7 { return }

  my %a = sub s1 { return a => 1 }
  my %b = sub s2 { return a => 1, b => 2 }
  my %c = sub s3 { return 'a', 1, 'b', 2, a => 1, b => 2 }
  my %d = sub s4 { return 1 }
  my %e = sub s5 { return 'a', 1, 'b', 2 }
  my %f = sub s6 { return 1: }
  my %g = sub s7 { return }

Should @a == (), or should @a == ( a => 1 )?  Or maybe even @a == ( 'a', 1 )?
Likewise with @b and @f.

Should %e == {} or { a => 1, b => 2 }?

-- 
Jonathan Dataweaver Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Daniel Ruoso wrote:
 Hmm... I think that takes the discussion to another level, and the
 question is:

  what does a capture return when coerced to a context it doesn't
 provide a value for?

 The easy answer would be undef, empty array and empty hash, but that
 doesn't DWIM at all.

 The hard answer is DWIM, and that can be:

  1) in item context, without an invocant
   a) if only one positional argument, return it
   b) if only one named argument, return it as a pair
   c) if several positional args, but no named args,
  return an array
   d) if several named args, but no positional args,
  return a hash
   e) if no args at all, return undefined Object
   f) return itself otherwise
  2) in list context, without positional arguments
   a) if one or more named arguments,
  return a list of pairs
   b) return an empty list otherwise
  3) in hash context, without named arguments
   a) if there are positional arguments,
  return a hash taking key,value.
  if an odd number of positional arguments,
  last key has an undef Object as the
  value and issue a warning.
   b) return an empty hash otherwise

Elaborate further to account for the possibility of an invocant.  For
example, if you have a capture object with just an invocant and you're
in list context, should it return the invocant as a one-item list, or
should it return an empty list?  Should hash context of the same
capture object give you a single Pair with an undef value and a
warning, or should it give you an empty hash?

I don't mind the dwimmery here, even though that means that you can't
be certain that (e.g.) 'list $x' will return the positional parameters
in capture object $x.  If you want to be certain that you're getting
the positional parameters and nothing else, you can say '$x.[]'; if
you want to be certain that you're getting the named parameters, you
can say '$x.{}'.  The only thing lost is the ability to ensure that
you're getting the invocant; and it wouldn't be too hard to define,
say, a postfix:_ operator for the Capture object that does so:

  item($x) # Dwimmey use of item context.
  list($x) # Dwimmey use of list context.
  hash($x) # Dwimmey use of hash context.
  $x._ # the Capture object's invocant, as an item.
  $x.[] # the Capture object's positional parameters, as a list.
  $x.{} # the Capture object's named parameters, as a hash.

-- 
Jonathan Dataweaver Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
TSa wrote:
 Jon Lang wrote:
   item($x) # Dwimmey use of item context.

 IIRC this is the same as $$x, right? Or does that
 extract the invocant slot without dwimmery?

Umm... good question.  This is a rather nasty paradox: on the one
hand, we want to be able to stack $, @, and % with capture objects in
analogy to Perl 5's references, which would indicate that they should
tie directly to the invocant, positional parameters, and named
parameters, respectively.  OTOH, the intuitive meaning for these
symbols would seem to be item context, list context, and hash
context, respectively, which would argue for the dwimmery.

The question is which of these two sets of semantics should be
emphasized; once that's answered, we need to be sure to provide an
alternative syntax that gives us the other set.  IOW, which one of
these is make common things easy, and which one is make uncommon
things possible?

   $x._ # the Capture object's invocant, as an item.

 How about $x.() here? That looks symmetric to the other
 postfix operators and should be regarded as a method
 dispatched on the invocant or some such.

Symmetric, but ugly.  I suggested the underscore because '.foo' is the
same as '$_.foo'; so you can think of the invocant of a capture object
as being roughly analogous to a topic.

-- 
Jonathan Dataweaver Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Dave Whipp wrote:
 Daniel Ruoso wrote:
 Hmm... I think that takes the discussion to another level, and the
 question is:

  what does a capture returns when coerced to a context it doesn't
 provide a value for?

 I'd like to take one step further, and ask what it is that introduced
 capture semantics in the first place. And I suggest that the answer should
 be the use of a signature

 I'd also suggest that we get rid of the use of backslash as a
 capture-creation operator (the signature of Capture::new can do that) and
 instead re-task it as a signature creation operator.

I believe that we already have a signature creation operator, namely
:( @paramlist ).

Note also that retasking '\' destroys the analogy that currently
exists between perl 6 captures and perl 5 references.

 If we do that, then I think we can reduce the discussion of the semantics of
 multi-returns to the semantics of assignments:

 If the sub/method defines a return-signature then that is used (with
 standard binding semantics), otherwise the result is semantically a flat
 list.

 If the LHS is an assignment is a signature, then the rhs is matched to it:

 my  (@a, %b) = 1,2,3, b => 4; ## everything in @a; %b empty
 my \(@a, %b) = 1,2,3, b => 4; ## @a = 1,2,3; %b = (b => 4)

Change that second line to:

  my :(*@a, *%b) = 1, 2, 3, b => 4;

@a and %b have to be slurpy so that you don't get a signature
mismatch.  There's also the matter of how a signature with an invocant
would handle the assignment:

  my :($a: *@b, *%c) = 1, 2, 3, b => 4;

Either $a == 1 and @b == (2, 3) or $a == undef and @b == (1, 2, 3).
Which one is it?  Probably the latter.

Regardless, the magic that makes this work would be the ability to
assign a flat list of values to a signature.  Is this wise?

-- 
Jonathan Dataweaver Lang


Writing to an iterator

2009-01-07 Thread Jon Lang
I was just reading through S07, and it occurred to me that if one
wanted to, one could handle stacks and queues as iterators, rather
than by push/pop/shift/unshift of a list.  All you'd have to do would
be to create a stack or queue class with a private list attribute and
methods for reading from and writing to it.  The first two parts are
easy: "has @!list;" handles the first, and "method prefix:<=> { .pop
}" handles the second (well, mostly).

How would I define the method for writing to an iterator?

-- 
Jonathan Dataweaver Lang


Re: r24819 - docs/Perl6/Spec

2009-01-08 Thread Jon Lang
Darren Duncan wrote:
 pugs-comm...@feather.perl6.nl wrote:

 Log:
 [S02] clarify that Pairs and Mappings are mutable in value, but not in key

 snip

 KeyHash Perl hash that autodeletes values matching default
 KeySet  KeyHash of Bool (does Set in list/array context)
 KeyBag  KeyHash of UInt (does Bag in list/array context)
 +PairA single key-to-value association
 +Mapping Set of Pairs with no duplicate keys

 snip

 +As with C<Hash> types, C<Pair> and C<Mapping> are mutable in their
 +values but not in their keys.  (A key can be a reference to a mutable
 +object, but cannot change its C<.WHICH> identity.  In contrast,
 +the value may be rebound to a different object, just as a hash
 +element may.)

 Following this change, it looks to me like Mapping is exactly the same as
 Hash.  So under what circumstances should one now choose whether they want
 to use a Hash or a Mapping?  How do they still differ? -- Darren Duncan

I don't think they do.  IMHO, Mapping should definitely be immutable
in both key and value; it is to Hash as Seq is to Array.  (Side note:
why is List considered to be immutable?  Doesn't it change whenever
its iterator is read?)

The question in my mind has to do with Pair: if Pair is being
redefined as mutable in value, should it have an immutable
counterpart?  If so, what should said counterpart be called?

-- 
Jonathan Dataweaver Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
 Is it possible to modify the core Perl6Array class like that (without extra 
 keywords)?  If so, is it possible for each programmer to make such a change 
 so that it's lexically scoped?

AFAIK, it is not possible to modify a core class; however, I believe
that it _is_ possible to derive a new class whose name differs from
an existing class only in terms of version information, such that it
is substituted for the original class within the lexical scope where
it was defined, barring explicit inclusion of version information when
the class is referenced.

-- 
Jonathan Dataweaver Lang


Re: Not a bug?

2009-01-12 Thread Jon Lang
On Mon, Jan 12, 2009 at 2:15 AM, Carl Mäsak cma...@gmail.com wrote:
 Ovid ():
   $ perl6 -e 'my $foo = "foo";say "{" ~ $foo ~ "}"'
   ~ foo ~

 Easy solution: only use double quotes when you want to interpolate. :)

 This is not really an option when running 'perl6 -e' under bash, though.

$ perl6 -e 'my $foo = "foo";say q:qq({ ~ $foo ~ })'

...or something to that effect.

-- 
Jonathan Dataweaver Lang


Re: Not a bug?

2009-01-12 Thread Jon Lang
On Mon, Jan 12, 2009 at 1:08 PM, Larry Wall la...@wall.org wrote:
 On Mon, Jan 12, 2009 at 03:43:47AM -0800, Jon Lang wrote:
 : On Mon, Jan 12, 2009 at 2:15 AM, Carl Mäsak cma...@gmail.com wrote:
 :  Ovid ():
 :   $ perl6 -e 'my $foo = "foo";say "{" ~ $foo ~ "}"'
 :~ foo ~
 : 
 :  Easy solution: only use double quotes when you want to interpolate. :)
 : 
 :  This is not really an option when running 'perl6 -e' under bash, though.
 :
 : $ perl6 -e 'my $foo = "foo";say q:qq({ ~ $foo ~ })'
 :
 : ...or something to that effect.

 Assuming that's what was wanted.  I figgered they want something more
 like:

$ perl6 -e 'my $foo = "foo"; say q[{] ~ $foo ~ q[}];'

True enough.  Either one of these would be more clear than the
original example in terms of user intent.

As well, isn't there a way to escape a character that would otherwise
be interpolated?  If the intent were as you suppose, the original
could be rewritten as:

  $ perl6 -e 'my $foo = "foo";say "\{ ~ $foo ~ }"'

(Or would you need to escape the closing curly brace as well as the
opening one?)
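A sketch of the escaping behavior, as current Rakudo handles it:

```raku
my $foo = "foo";
say "{" ~ $foo ~ "}";   # {...} is parsed as a closure interpolation
say "\{" ~ $foo ~ "}";  # escaping the opening brace gives literal {foo}
say "\{ ~ $foo ~ }";    # only the opening brace needs the escape
```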

-- 
Jonathan Dataweaver Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
 Actually, I'd prefer to go much further than this:

  use Core 'MyCore';

 And have that override core classes lexically.

 That solves the but I want it MY way issue that many Perl and Ruby 
 programmers have, but they don't shoot anyone else in the foot.

Since 'use' imports its elements into the current lexical scope, the
version-based approach can do this.

The only catch that I can think of has to do with derived classes:
does the existence of a customized version of a class result in
same-way-customized versions of the classes that are derived from the
original class?  That is, if I added an updated version of Foo, and
Bar has previously been defined as being derived from Foo, would I get
a default updated version of Bar as well?  Or would I have to
explicitly update each derived class to conform to the updated base
class?

-- 
Jonathan Dataweaver Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
 - Original Message 

 From: Jon Lang datawea...@gmail.com

  Actually, I'd prefer to go much further than this:
 
   use Core 'MyCore';
 
  And have that override core classes lexically.
 
  That solves the but I want it MY way issue that many Perl and Ruby
 programmers have, but they don't shoot anyone else in the foot.

 Since 'use' imports its elements into the current lexical scope, the
 version-based approach can do this.

 The only catch that I can think of has to do with derived classes:
 does the existence of a customized version of a class result in
 same-way-customized versions of the classes that are derived from the
 original class?  That is, if I added an updated version of Foo, and
 Bar has previously been defined as being derived from Foo, would I get
 a default updated version of Bar as well?  Or would I have to
 explicitly update each derived class to conform to the updated base
 class?


 I'm not sure I understand you.  If 'Bar' inherits from 'Foo' and 'Foo' has 
 extended the core Array class to lexically implement a .shuffle method, then 
 I would expect 'Bar' to have that also.

No, you don't understand me.  The Foo/Bar example I was giving was
independent of your example.  Rephrasing in your terms, consider the
possibility of a class that's derived from Array, for whatever reason;
call it Ring.  Now you decide that you want to redefine Array to
include a shuffle method, and so you implement an "Array version 2.0".
Would you be given a "Ring version 2.0" that derives from "Array
version 2.0", or would you have to explicitly ask for it?

As long as you limit your use of class inheritance, the above remains
manageable.  But consider something like the Tk widgets implemented as
a class hierarchy; then consider what happens if you reversion one of
the root widgets.  If you manually have to reversion each and every
widget derived from it, and each and every widget derived from those,
and so on and so forth, in order for your changes to the root to
propagate throughout the class hierarchy...

Instead, I'd rather see an approach where you need only reversion the
base class and those specific derived classes where problems would
otherwise arise due to your changes.

-- 
Jonathan Dataweaver Lang


Re: design of the Prelude (was Re: Rakudo leaving the Parrot nest)

2009-01-15 Thread Jon Lang
Forgive my ignorance, but what is a Prelude?

-- 
Jonathan Dataweaver Lang


Re: design of the Prelude (was Re: Rakudo leaving the Parrot nest)

2009-01-15 Thread Jon Lang
On Thu, Jan 15, 2009 at 6:45 PM, Jonathan Scott Duff
perlpi...@gmail.com wrote:
 On Thu, Jan 15, 2009 at 8:31 PM, Jon Lang datawea...@gmail.com wrote:

 Forgive my ignorance, but what is a Prelude?

 --
 Jonathan Dataweaver Lang

 The stuff you load (and execute) to bootstrap the language into utility on
 each invocation.  Usually it's written in terms of the language you're
 trying to bootstrap as much as possible with just a few primitives to get
 things started.

OK, then.  If I'm understanding this correctly, the problem being
raised has to do with deciding which language features to treat as
primitives and which ones to bootstrap from those primitives.  The
difficulty is that different compilers provide different sets of
primitives; and you're looking for a way to avoid having to write a
whole new Prelude for each compiler.  Correct?

Note my use of the term language features in the above.  Presumably,
you're going to have to decide on some primitive functions as well as
some primitive datatypes, etc.  For instance: before you can use
meta-operators in the Prelude, you're going to have to define them in
terms of some choice of primitive functions - and that choice is
likely to be compiler-specific.  So how is that any easier to address
than the matter of defining datatypes?  Or is it?

-- 
Jonathan Dataweaver Lang


Re: Operator sleuthing...

2009-01-15 Thread Jon Lang
Mark Lentczner wrote:
 STD has sym<;> as both an infix operator (--> Sequencer), and as a
 terminator.
 ?? Which is it? Since I think most people think of it as a statement
 terminator, I plan on leaving it off the chart.

It is both.  Examples where it is used as an infix operator include:

  loop (my $i = 1; $i < 10; $i *= 2) { ... }

  my @@slice = (1, 2; 3, 4; 5, 6)

Presumably, Perl is capable of distinguishing the meanings based on
what the parser is expecting when it finds the semicolon.

-- 
Jonathan Dataweaver Lang


Re: r25060 - docs/Perl6/Spec src/perl6

2009-01-27 Thread Jon Lang
On Tue, Jan 27, 2009 at 9:43 AM,  pugs-comm...@feather.perl6.nl wrote:
 +=head2 Reversed comparison operators
 +
 +Any infix comparison operator returning type C<Order> may be transformed 
 into its reversed sense
 +by prefixing with C<->.
 +
 +-cmp
 +-leg
 +-<=>
 +
 +To avoid confusion with the C<-=> operator, you may not modify
 +any operator already beginning with C<=>.
 +
 +The precedence of any reversed operator is the same as the base operator.

If there are only a handful of operators to which the new
meta-operator can be applied, why do it as a meta-operator at all?

This could be generalized to allow any infix operator returning a
signed type (which would include COrder) to reverse the sign.  In
effect, $x -op $y would be equivalent to -($x op $y).  (Which
suggests the possibility of a more generalized rule about creating
composite operators by applying prefix or postfix operators to infix
operators in an analogous manner; but that way probably lies madness.)
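The distinction between the two candidate semantics here - swapping the operands versus negating the result - can be sketched in a few lines of Python (a hypothetical cross-language illustration, not anything from the proposal itself):

```python
def cmp3(a, b):
    # three-way comparison, analogous to Perl's cmp / <=>
    return (a > b) - (a < b)

def reversed_op(op):
    # the "prefix - as metaoperator" reading: swap the operands
    return lambda a, b: op(b, a)

def negated_op(op):
    # the alternative reading: negate the result
    return lambda a, b: -op(a, b)

# For an antisymmetric comparator like cmp the two readings coincide...
assert reversed_op(cmp3)(1, 2) == negated_op(cmp3)(1, 2) == 1

# ...but for an arbitrary infix op they do not, which is why any
# generalization beyond Order-returning operators must pick one:
f = lambda a, b: a - 2 * b
assert reversed_op(f)(2, 3) == -1   # f(3, 2) = 3 - 4
assert negated_op(f)(2, 3) == 4     # -(2 - 6)
```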

Also, wouldn't the longest-token rule cause C<-=> to take precedence
over C<=> prefixed with C<->?  Or, in the original definition, the
fact that C<=> isn't a comparison operator?

-- 
Jonathan Dataweaver Lang


Re: r25060 - docs/Perl6/Spec src/perl6

2009-01-27 Thread Jon Lang
Larry Wall wrote:
 Jon Lang wrote:
 : If there are only a handful of operators to which the new
 : meta-operator can be applied, why do it as a meta-operator at all?

 As a metaoperator it automatically extends to user-defined comparison
 operators, but I admit that's not a strong argument.  Mostly I want
 to encourage the meme that you can use - to reverse a comparison
 operator, even in spots where the operator is named by strings, such
 as (potentially) in an OrderingPair, which currently can be written

extract_key => &infix:<-leg>

 but that might be abbreviate-able to

:extract_key<-leg>

 or some such.  That's not a terribly strong argument either, but
 perhaps they're additive, if not addictive.  :)

So $a -<=> $b is equivalent to $b <=> $a, not -($a <=> $b).  OK.
 I'd suggest choosing a better character for the meta-operator (one
that conveys the meaning of reversal of order rather than opposite
value); but I don't think that there is one.

 : Also, wouldn't the longest-token rule cause C<-=> to take precedence
 : over C<=> prefixed with C<->?  Or, in the original definition, the
 : fact that C<=> isn't a comparison operator?

 It would be a tie, since both operators are the same length.

I guess I don't understand the longest-token rules, then.  I'd expect
the parser to be deciding between operator -= (a single
two-character token) vs. meta-operator - (a one-character token)
followed by operator = (a separate one-character token).  Since
meta-op - is shorter than op -=, I'd expect op -= to win out.

-- 
Jonathan Dataweaver Lang


Re: r25102 - docs/Perl6/Spec

2009-01-30 Thread Jon Lang
Larry Wall wrote:
 So I'm open to suggestions for what we ought to call that envelope
 if we don't call it the prelude or the perlude.  Locale is bad,
 environs is bad, context is bad...the wrapper?  But we have dynamic
 wrappers already, so that's bad.  Maybe the setting, like a jewel?
 That has a nice static feeling about it at least, as well as a feeling
 of surrounding.

 Or we could go with a more linguistic contextual metaphor.  Argot,
 lingo, whatever...

 So anyway, just because other languages call it a prelude doesn't
 mean that we have to.  Perl is the tail that's always trying to
 wag the dog...

 What is the sound of one tail wagging?

whoosh, whoosh.

I tend to like setting, because it makes me think of the setting of
a play, in which the actors (i.e., objects) perform their assigned
roles in following the script.

-- 
Jonathan Dataweaver Lang


Re: r25200 - docs/Perl6/Spec t/spec

2009-02-04 Thread Jon Lang
pugs-comm...@feather.perl6.nl wrote:
 -=item --autoloop-split, -F *expression*
 +=item --autoloop-delim, -F *expression*

  Pattern to split on (used with -a).  Substitutes an expression for the 
 default
  split function, which is C{split ' '}.  Accepts unicode strings (as long as

Should the default pattern be ' ', or should it be something more like /\s+/?

-- 
Jonathan Dataweaver Lang


Re: r25200 - docs/Perl6/Spec t/spec

2009-02-05 Thread Jon Lang
On Thu, Feb 5, 2009 at 9:21 AM, Larry Wall la...@wall.org wrote:
 On Thu, Feb 05, 2009 at 07:47:01AM -0800, Dave Whipp wrote:
 Jon Lang wrote:
  Pattern to split on (used with -a).  Substitutes an expression for the 
 default
  split function, which is C{split ' '}.  Accepts unicode strings (as 
 long as

 Should the default pattern be ' ', or should it be something more like 
 /\s+/?

 /<ws>/ ?

 You guys are all doing P5Think.  The default should be autocomb, not 
 autosplit,
 and the default comb is already correct...

In that case, the -a option needs to be renamed to autocomb, rather
than autosplit as it is now.
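The default comb behavior being alluded to - delimiting on runs of whitespace and discarding edge whitespace - has a close analogue in other languages; a Python comparison (my illustration, not the spec's):

```python
line = "  alpha\tbeta   gamma  "

# Splitting on a literal single space yields empty fields at each run
# of whitespace, which is rarely what field splitting wants:
assert line.split(" ") != ["alpha", "beta", "gamma"]

# No-argument split matches the intended default: whitespace runs
# delimit, and leading/trailing whitespace is discarded.
assert line.split() == ["alpha", "beta", "gamma"]
```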

-- 
Jonathan Dataweaver Lang


Re: 2 questions: Implementations and Roles

2009-02-06 Thread Jon Lang
Timothy S. Nelson wrote:
Also, is there a simple way to know when I should be using a class
 vs. a role?

If you plan on creating objects with it, use a class.  If you plan on
creating classes with it, use a role.

-- 
Jonathan Dataweaver Lang


Re: References to parts of declared packages

2009-02-11 Thread Jon Lang
On Wed, Feb 11, 2009 at 12:15 PM, Jonathan Worthington
jonat...@jnthn.net wrote:
 Hi,

 If we declared, for example:

 role A::B {};

 Then what should a reference to A be here? At the moment, Rakudo treats it
 as a post-declared listop, however I suspect we should be doing something a
 bit smarter? If so, what should the answer to ~A.WHAT be?

 Thanks,

I'd go with one of two possibilities:

* Don't allow the declaration of A::B unless A has already been declared.

or

* A should be treated as a post-declared package.

-- 
Jonathan Dataweaver Lang


Re: References to parts of declared packages

2009-02-11 Thread Jon Lang
Carl Mäsak wrote:
 * A should be treated as a post-declared package.

 Whatever this means, it sounds preferable. :)

It means that you can define package A without ever declaring it, by
declaring all of its contents using such statements as 'role A::B',
'sub A::Foo', and so on.

-- 
Jonathan Dataweaver Lang


S03: how many metaoperators?

2009-02-11 Thread Jon Lang
With the addition of the reversing metaoperator, the claim that there
are six metaoperators (made in the second paragraph of the meta
operators section) is no longer true.  Likewise, the reduction
operator is no longer the fourth metaoperator (as stated in the first
sentence of the reduction operators section).  For now, the cross
operator _is_ still the final metaoperator, as it states in its first
paragraph; but it's possible that that might change eventually.

-- 
Jonathan Dataweaver Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
TSa wrote:
 Does that imply that packages behave like C++ namespaces? That is
 a package can be inserted into several times:

   package A
   {
   class Foo {...}
   }
   # later elsewhere
   package A
   {
   class Bar {...}
   }

 I would think that this is just different syntax to the proposed
 form

   class A::Foo {...}
   class A::Bar {...}

Well, we _do_ have a mechanism in place for adding to an existing
class (e.g., class Foo is also { ... }), and classes are a special
case of modules; so I don't see why you shouldn't be able to do
likewise with modules and even packages.  That said, I'd be inclined
to suggest that if this is to be allowed at all, it should be done
using the same mechanism that's used for classes - that is, an is
also trait.

-- 
Jonathan Dataweaver Lang


Synopsis for Signatures?

2009-02-13 Thread Jon Lang
At present, signatures appear to serve at least three rather diverse
purposes in Perl 6:

* parameter lists for routines (can also be used to specify what a
given routine returns; explored in detail in S06).
* variable declaration (see declarators in S03).
* parametric roles (currently only addressed to any extent in A12;
presumably, this will be remedied when S14 is written).

Given that signatures have grown well beyond their origins as
subroutine parameter lists, and given that signatures have their own
syntax, perhaps they should be moved out of S06?  I could see S08
being retasked to address signatures (and perhaps captures, given the
intimate connection between these two), since its original purpose
(i.e., references) has been deprecated.

-- 
Jonathan Dataweaver Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
Larry Wall wrote:
 Jon Lang wrote:
 : Well, we _do_ have a mechanism in place for adding to an existing
 : class (e.g., class Foo is also { ... }), and classes are a special
 : case of modules; so I don't see why you shouldn't be able to do
 : likewise with modules and even packages.  That said, I'd be inclined
 : to suggest that if this is to be allowed at all, it should be done
 : using the same mechanism that's used for classes - that is, an is
 : also trait.

 These are actually package traits, not class traits, so your reasoning
 is backwards, with the result that your wish is already granted. :)

Darn it... :)

 However, it's possible we'll throw out is also and is instead
 in favor of multi and only, so it'd be:

multi package A {
class Foo {...}
}

multi package A {
class Bar {...}
}

 to explicitly allow extended declarations.  Modifying a package:

package A { # presumed only
class Foo {...}
}
...
multi package A {
class Bar {...}
}

 would result in an error.  In that case, the pragma

use MONKEY_PATCHING;

 serves to suppress the redefinition error. It would also allow:

package A { # presumed only
class Foo {...}
}
...
only package A {
class Bar {...}
}

 to do what is instead now does.

 Apart from the parsimony of reusing an existing concept, the advantage
 of doing it with multi/only is that the parser knows the multiness
 at the point it sees the name, which is when it wants to stick A into
 the symbol table.  Whereas is also has to operate retroactively on
 the name.

 This also lets us mark a package as explicitly monkeyable by design,
 in which case there's no need for a MONKEY_PATCHING declaration.

And with package versioning, you may not need an is instead
equivalent: if you want to redefine a package, just create a newer
version of it in a tighter lexical scope than the original package was
in.  You can still access the original package if you need to, by
referring to its version information.

And that, I believe, would put the final nail in the coffin of the
MONKEY_PATCHING declaration; without that, you'd still need the
declaration for the purpose of allowing redefinitions.

-- 
Jonathan Dataweaver Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
Larry Wall wrote:
 Jon Lang wrote:
 : And with package versioning, you may not need an is instead
 : equivalent: if you want to redefine a package, just create a newer
 : version of it in a tighter lexical scope than the original package was
 : in.  You can still access the original package if you need to, by
 : referring to its version information.
 :
 : And that, I believe, would put the final nail in the coffin of the
 : MONKEY_PATCHING declaration; without that, you'd still need the
 : declaration for the purpose of allowing redefinitions.

 Except that the idea of monkey patching is that you're overriding
 something for everyone, not just your own lexical scope.

...right.

 While taking a shower I refined the design somewhat in my head,
 thinking about the ambiguities in package names when you're redefining.
 By my previous message, it's not clear whether the intent of

multi package Foo::Bar {...}

 is to overload an existing package Foo::Bar in some namespace that
 we search for packages, or to be the prototype of a new Foo::Bar
 in the current package.  In the former case, we should complain if
 an existing name is not found, but in the latter case we shouldn't.
 So those cases must be differentiated.

 Which I think means that multi package always modifies an existing
 package, period.  To establish a new multi package you must use
 proto or some equivalent.  So our previous example becomes either:

proto package A {
class Foo {...}
}

multi package A {
class Bar {...}
}

 or

proto package A {...}

multi package A {
class Foo {...}
}

multi package A {
class Bar {...}
}

 Then we'd say that if you want to retro-proto-ize an existing class you
 must do it with something like:

proto class Int is MONKEY_PATCHING(CORE::Int) {...}
multi class Int {
        method Str () { self.fmt('%g') }
}

 or some such monkey business.

By re-proto-ize, I'm assuming that you mean open an existing class
up to modification.  With that in mind, I'd recommend keeping
something to the effect of is instead around, with the caveat that
it can only be used in conjunction with the multi keyword.  So if I
wanted to redefine package Foo from scratch, I'd say something like:

  proto package Foo { ... }

  multi package Foo is instead { ... }

But:

  package Foo is instead { ... }

would result in an error.

-- 
Jonathan Dataweaver Lang


infectious traits and pure functions

2009-02-13 Thread Jon Lang
In reading over the Debugging draft (i.e., the future S20), I ran
across the concept of the infectious trait - that is, a trait that
doesn't just get applied to the thing to which it is explicitly
applied; rather, it tends to spread to whatever else that thing comes
in contact with.  Taint is the primary example of this, although not
the only one.

In reading about functional programming, I ran across the concept of
the pure function - i.e., a function that doesn't produce side
effects.  The article went on to say that While most compilers for
imperative programming languages detect pure functions, and perform
common-subexpression elimination for pure function calls, they cannot
always do this for pre-compiled libraries, which generally do not
expose this information, thus preventing optimisations that involve
those external functions. Some compilers, such as gcc, add extra
keywords for a programmer to explicitly mark external functions as
pure, to enable such optimisations.

It occurred to me that this business of marking functions as pure
could be done in perl by means of traits - that is, any function might
be defined as is pure, promising the compiler that said function is
subject to pure function optimization.  It further occurred to me that
a variation of the contagious trait concept (operating on code blocks
and their component statements instead of objects; operating at
compile time rather than run time; and spreading via all
participants rather than any participant) could be used to auto-tag
new pure functions, provided that all so-called primitive pure
functions are properly tagged to begin with.  The compiler could then
rely on this tagging to perform appropriate optimization.
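The propagation rule described above - a function gets auto-tagged pure only when everything it calls is pure, erring toward false negatives - can be sketched over a toy call graph; all names and the data structure below are invented for illustration:

```python
# Hypothetical call graph: each function maps to the functions it calls.
calls = {
    "square":   [],            # primitive, tagged pure up front
    "now":      [],            # reads the clock, tagged impure up front
    "hypot":    ["square"],    # calls only pure functions
    "log_time": ["now"],       # touches an impure function
    "both":     ["hypot", "log_time"],
}
known = {"square": True, "now": False}  # the pre-tagged primitives

def is_pure(name, seen=None):
    # Auto-tag: pure only if *all* participants are pure; anything
    # unresolved (e.g. recursion) conservatively stays impure, leaving
    # the final say to the module writer's explicit trait.
    seen = seen or set()
    if name in known:
        return known[name]
    if name in seen:
        return False  # err on the side of a false negative
    seen.add(name)
    known[name] = all(is_pure(c, seen) for c in calls[name])
    return known[name]

assert is_pure("hypot") is True
assert is_pure("log_time") is False
assert is_pure("both") is False   # impurity is infectious upward
```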

Such auto-tagging strikes me as being in keeping with Perl's virtue of
laziness: writers of new code don't have to care about tagging pure
functions (or even know that they exist) in order for the tagging to
take place, leading to a potentially more robust library of functions
overall (in the sense of everything in the library is properly
annotated).  As well, I could see it having some additional benefits,
such as in concurrent programming: is critical could be similarly
infectious, in that any function that might call a critical code block
should itself be tagged as critical.  Indeed, is critical and is
pure seem to be mutually exclusive concepts: a given code block might
be critical, pure, or neither; but it should never be both.

Am I onto something, or is this meaningless drivel?

-- 
Jonathan Dataweaver Lang


Re: infectious traits and pure functions

2009-02-16 Thread Jon Lang
Darren Duncan wrote:
 There are ways to get what you want if you're willing to trade for more
 restrictiveness in the relevant contexts.

 If we have a way of marking types/values and routines somehow as being pure,
 in the types case marking it as consisting of just immutable values, and in
 the routines case marking it as having no side effects, then we are telling
 the compiler that it needs to examine these type/routine definitions and
 check that all other types and routines invoked by these are also marked as
 being pure, recursively down to the system-defined ones that are also marked
 as being pure.

 Entities defined as being pure would be restricted in that they may not
 invoke any other entities unless those are also defined as being pure.  This
 may mean that certain Perl features are off-limits to this code.  For
 example, you can invoke system-defined numeric or string etc operators like
 add, multiply, catenate etc, but you can't invoke anything that uses
 external resources such as check the current time or get a truly random
 number.  The latter would have to be fetched first by an impure routine and
 be given as a then pure value to your pure routine to work with.  Also, pure
 routines can't update or better yet can't even read non-lexical variables.

 If it is possible with all of Perl's flexibility to make it restrictive of
 certain features' use within certain contexts, then we can make this work.

True.  In effect, this would be a sort of design-by-contract: by
applying the is pure trait to an object or function, you're
guaranteeing that it will conform to the behaviors and properties
associated with pure functions and immutable values.  A programmer who
chooses to restrict himself to functions and values thus tagged would,
in effect, be using a functional programming dialect of Perl 6, and
would gain all of the optimization benefits that are thus entailed.

It would also help the compiler to produce more efficient executable
code more easily, since (barring a module writer who mislabels
something impure as pure) it would be able to simply check for the
purity trait to decide whether or not functional programming
optimization can be tried.  If the purity tag is absent, a whole
segment of optimization attempts could be bypassed.

As well, my original suggestion concerning the auto-tagging of pure
functions was never intended as a means of tagging _everything_ that's
pure; rather, the goal was to set things up in such a way that if it
can easily be determined that thus-and-such a function is pure, it
should be auto-tagged as such (to relieve module writers of the burden
of having to do so manually); but if there's any doubt about the
matter (e.g., conclusively proving or disproving purity would be
NP-complete or a halting problem), then the auto-tagging process
leaves the function in question untagged, and the purity tag would
have to be added in by the module writer.  This is still a valid idea
so long as the population of autotaggable pure functions (and, with
Darren's suggestions, truly invariant objects, etc.) is usefully
large.

In short, I don't want the perfect to be the enemy of the good.  Set
up a good first approximation of purity (one that errs on the side of
false negatives), and then let the coder tweak it from there.

 Now Perl in general would be different, with the majority of a typical Perl
 program probably impure without problems, but I think it is possible and
 ideal for users to be able to construct a reasonably sizeable pure sandbox
 of sorts within their program, where it can be easier to get correct results
 without errors and better performing auto-threading etc code by default.  By
 allowing certain markings and restrictions as I mention, this can work in
 Perl.

Exactly what I was thinking.  The key is that we're not trying to
force programmers to use the pure sandbox if they don't want to;
rather, we're trying to delineate the pure sandbox so that if they
want to work entirely within it, they'll have a better idea of what
they have to work with.

-- 
Jonathan Dataweaver Lang


Re: Comparing inexact values (was Re: Temporal changes)

2009-02-24 Thread Jon Lang
Daniel Ruoso wrote:
 What about...

  if $x ~~ [..] $x ± $epsilon {...}

 That would mean that $x ± $epsilon in list context returned each value,
 where in scalar context returned a junction, so the reduction operator
 could do its job...

(I'm assuming that you meant something like if $y ~~ [..] $x ±
$epsilon {...}, since matching a value to a range that's centered on
that value is tautological.)

Junctions should not return individual values in list context, since
it's possible for one or more of said values to _be_ lists.  That
said, I believe that it _is_ possible to ask a Junction to return a
set of its various values (note: set; not list).  Still, we're already
at a point where:

  if $y ~~ $x within $epsilon {...}

uses the same number of characters and is more legible.  _And_ doesn't
have any further complications to resolve.
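For what it's worth, the `within $epsilon` semantics amount to an absolute-tolerance comparison, which other languages expose directly; a Python sketch of the same idea (my analogy, not a spec definition):

```python
import math

epsilon = 0.001
x = 0.3
y = 0.1 + 0.2          # floating point: 0.30000000000000004

assert y != x                                   # exact match fails
assert math.isclose(y, x, abs_tol=epsilon)      # "within epsilon" succeeds
# Outside the tolerance band, the match fails as expected:
assert not math.isclose(1.0, 1.01, rel_tol=0.0, abs_tol=epsilon)
```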

-- 
Jonathan Dataweaver Lang


Re: Comparing inexact values (was Re: Temporal changes)

2009-02-24 Thread Jon Lang
TSa wrote:
 Larry Wall wrote:
 So it might be better as a (very tight?) operator, regardless of
 the spelling:

     $x ~~ $y within $epsilon

 This is a pretty add-on to smartmatch but I still think
 we are wasting a valueable slot in the smartmatch table
 by making numeric $x ~~ $y simply mean $x == $y. What
 is the benefit?

Larry's suggestion wasn't about ~~ vs. ==; it was about within as an
infix operator vs. within as a method or an adverb.

-- 
Jonathan Dataweaver Lang

