Re: How much do we close over?

2005-06-13 Thread Chip Salzenberg
On Sun, Jun 12, 2005 at 11:26:49PM +0100, Piers Cawley wrote:
 sub foo { my $x = 1; return sub { eval $^codestring } }
 say foo()($x);

I'm pretty sure you meant single-quoted, and you perhaps might maybe
need a dot there:

 sub foo { my $x = 1; return sub { eval $^codestring } }
 say foo().('$x');

 I claim that that should print 1. Chip claims it should throw a warning
 because of timely destruction.

More like an error from the eval: '$x: no such variable in scope'.
-- 
Chip Salzenberg [EMAIL PROTECTED]


Re: How much do we close over?

2005-06-13 Thread Autrijus Tang
On Mon, Jun 13, 2005 at 12:57:32AM +0200, Chip Salzenberg wrote:
 On Sun, Jun 12, 2005 at 11:26:49PM +0100, Piers Cawley wrote:
  sub foo { my $x = 1; return sub { eval $^codestring } }
  say foo()($x);
 
 I'm pretty sure you meant single-quoted, and you perhaps might maybe
 need a dot there:
 
  sub foo { my $x = 1; return sub { eval $^codestring } }
  say foo().('$x');

Just an aside: it's always okay to omit the dot between brackets, as
long as there's no whitespace in between.  So `foo()()` is just fine.

Thanks,
/Autrijus/




Re: How much do we close over?

2005-06-13 Thread Piers Cawley
Rob Kinyon [EMAIL PROTECTED] writes:

 Piers Cawley said:
 in other words, some way of declaring that a subroutine wants to hang onto
 every lexical it can see in its lexical stack, no matter what static analysis
 may say.

 I'm not arguing with the idea, in general. I just want to point out
 that this implies that you're going to hold onto every single
 file-scoped lexical, leading to quite a bit of action-at-a-distance.

Well, duh. If "eval string" isn't a hearty pointer to "This subroutine
deliberately takes advantage of action at a distance" then I don't know what
is.


 Maybe, instead, you should say "sub is lexical_stack(N)" where N is
 the number of scoping levels it will hold onto in addition to any
 lexical it actually refers to. I would have 0 be the innermost scope,
 1 be the enclosing scope, etc.

Which is all very well, but you don't necessarily know how deep in the stack
you are. I want to be able to write something in such a way that evalling the
string works in exactly the same way as it would if I had just written a do
block in the first place.

sub foo { my $x; ...; return sub { do {...} } }

It's an introspection thing. Most of the time you don't want it, but sometimes
you do and we really shouldn't be making that impossible.


Re: How much do we close over?

2005-06-13 Thread Piers Cawley
Rod Adams [EMAIL PROTECTED] writes:

 Piers Cawley wrote:

Chip and I have been having a discussion. I want to write:

sub foo { my $x = 1; return sub { eval $^codestring } }
say foo()($x);

I claim that that should print 1. Chip claims it should throw a warning
because of timely destruction. My claim is that a closure should close over the
entire lexical stack, and not simply those things it uses statically. Chip
claims the opposite, arguing that timely destruction implies that this is
absolutely the right thing to do. It's also quicker.
  

 I'm going to have to side with Piers on this one. My feeling is that having a
 reference to a closure accessible in memory should keep all the possible
 lexicals it can access in memory as well. That said, I can see the compiler
 optimizing memory consumption by destroying all the outer lexicals that it can
 prove will never be used by the inner closure. However, the presence of an
 'eval' in the closure makes such a proof tenuous at best.

 On the other hand, one could easily view the eval as constructing yet another
 closure, and it's unclear if we wish for that closure to be able to skip the
 first level outer closure to directly access the 2nd level outer lexicals. In
 that respect, I could see things Chip's way.

 As for the warning, it should only be a warning if strictures are on, for using
 the now undeclared '$x' inside the eval.

But dammit, I'm doing runtime evaluation of code strings, I don't care about
quicker.

If it's not the default can it please be mandated that there be some way of
doing:

sub foo { my $x = 1; return sub is lexically_greedy {eval $^codestring} }

in other words, some way of declaring that a subroutine wants to hang onto
every lexical it can see in its lexical stack, no matter what static analysis
may say.

  

 Well, you could always do something like:

 sub foo { my $x = 1; return sub {my $x := $OUTER::x; eval $^codestring} }

 But I'm spending too much time in other languages lately to remember exactly
 how $OUTER::x is spelled for certain. One could probably even write a macro
 that auto-binds all the lexicals in the outer scope to the current scope.

Only if I actually know what variables which were within scope when that inner
sub was compiled are going to be used by my passed in code string. I really
don't want to have to write a macro to walk the OUTER:: chain in order to build
something like:

   sub { my $foo = $OUTER::foo;
 my $bar = $OUTER::OUTER::bar;
 ...;
 eval $^codestring }

Just to pull everything into scope.

And even if I could do that, what I actually want to be able to do is something
like this:

$continuation.bindings.eval_in_this_scope($^codestring);

Which can be done today in Ruby and which is one of the enabling technologies
for tools like Rails.
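The Ruby feature Piers points at here is real: `Kernel#binding` reifies the entire lexical environment at the point of the call, and `Binding#eval` later evaluates a code string inside it. A minimal sketch (the names `foo` and `x` are purely illustrative):

```ruby
# Kernel#binding captures the whole lexical environment, not just the
# variables the enclosing code happens to mention.
def foo
  x = 1
  binding  # return the captured scope itself
end

b = foo
puts b.eval("x")       # resolves the string "x" in foo's scope: 1
b.eval("x = x + 41")   # the captured lexical can even be rebound
puts b.eval("x")       # 42
```

This is the `$continuation.bindings.eval_in_this_scope` shape being described, and it only works because the binding keeps every lexical alive regardless of what static analysis of the block's source would conclude.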


Re: How much do we close over?

2005-06-13 Thread Luke Palmer
On 6/12/05, Piers Cawley [EMAIL PROTECTED] wrote:
 Chip and I have been having a discussion. I want to write:
 
 sub foo { my $x = 1; return sub { eval $^codestring } }
 say foo()($x);
 
 I claim that that should print 1. Chip claims it should throw a warning
 because of timely destruction. My claim is that a closure should close over
 the entire lexical stack, and not simply those things it uses statically. Chip
 claims the opposite, arguing that timely destruction implies that this is
 absolutely the right thing to do. It's also quicker.

I just have to say that it's really annoying running into
optimizations when I don't want them.  Back when I wrote a
back-chaining system in perl, I used tied variables in order to
determine when I needed to solve for something.  A standard idiom was:

rule \$var, sub {
$a and $b and $c;
$var = 1;
};

And damnit, $c would never be solved for, because it was optimized
away by Perl.  I ran into that problem rather quickly, and I knew a
"no optimizations" would do the trick, if it existed.  But no such
luck, so I always had to add an "and 1" on the end.  Funny thing is,
that should have been optimized away too, but it wasn't, because Perl
wasn't that good at optimizations.  So now I have to know about the
capabilities of the language I'm using in order to program.

I ran into a related problem when I was writing Class::Closure, that
dealt with freeing lexical variables when they weren't used anymore. 
But I was closing over lexicals as my member variables, so things were
getting destroyed much too early.

To sum up, optimizations are nice, and it's nice to have optimizations
on by default for PR reasons, but you have to be able to turn them
off.

And yes, you could mark "eval" as "lexically dirty" so that you
wouldn't optimize when there's one in sight.  But that solution is
quite pessimistic, as you would have to mark every late-bound sub call
as dirty too, given caller introspection.

Luke


Optimisations (was Re: How much do we close over?)

2005-06-13 Thread Paul Johnson
On Mon, Jun 13, 2005 at 11:24:07AM +, Luke Palmer wrote:

 I just have to say that it's really annoying running into
 optimizations when I don't want them.

Isn't the whole point of optimisations that you shouldn't have to worry
about whether you hit one or not, otherwise the optimisation would seem
to be broken.

 Back when I wrote a
 back-chaining system in perl, I used tied variables in order to
 determine when I needed to solve for something.  A standard idiom was:
 
 rule \$var, sub {
 $a and $b and $c;
 $var = 1;
 };
 
 And damnit, $c would never be solved for, because it was optimized
 away by Perl.

I'm not sure that short circuiting operators can be called an
optimisation.  Aren't they more part of the language definition?  I
assume Perl 6 isn't doing away with short circuiting operators.

 I ran into that problem rather quickly, and I knew a
 "no optimizations" would do the trick, if it existed.  But no such
 luck, so I always had to add an "and 1" on the end.  Funny thing is,
 that should have been optimized away too, but it wasn't, because Perl
 wasn't that good at optimizations.  So now I have to know about the
 capabilities of the language I'm using in order to program.

I'm not sure about the premise, but I agree with the conclusion.

 To sum up, optimizations are nice, and it's nice to have optimizations
 on by default for PR reasons, but you have to be able to turn them
 off.

One of the things that has been on the Perl 5 wishlist for a while is a
way to turn off the optimisations, but really that would only be for the
benefit of people and modules that mess with the op tree.  Again, I
submit that an optimisation that changes normal behaviour is broken and
that, in general, programmers shouldn't need to worry about what
optimisations are going on under the covers.

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net


Re: Optimisations (was Re: How much do we close over?)

2005-06-13 Thread Luke Palmer
On 6/13/05, Paul Johnson [EMAIL PROTECTED] wrote:
 On Mon, Jun 13, 2005 at 11:24:07AM +, Luke Palmer wrote:
 Back when I wrote an
  back-chaining system in perl, I used tied variables in order to
  determine when I needed to solve for something.  A standard idiom was:
 
  rule \$var, sub {
  $a and $b and $c;
  $var = 1;
  };
 
  And damnit, $c would never be solved for, because it was optimized
  away by Perl.
 
 I'm not sure that short circuiting operators can be called an
 optimisation.  Aren't they more part of the language definition?  I
 assume Perl 6 isn't doing away with short circuiting operators.

Oh, sorry, I was unclear.  Perl 6 will not do away with
short-circuiting operators, of course.  I meant that if $a and $b both
ended up being true, $c still would not be evaluated, because Perl
determined that I wasn't doing anything with the value once it was
returned.

  To sum up, optimizations are nice, and it's nice to have optimizations
  on by default for PR reasons, but you have to be able to turn them
  off.
 
 One of the things that has been on the Perl 5 wishlist for a while is a
 way to turn off the optimisations, but really that would only be for the
 benefit of people and modules that mess with the op tree.  Again, I
 submit that an optimisation that changes normal behaviour is broken and
 that, in general, programmers shouldn't need to worry about what
 optimisations are going on under the covers.

Yeah, but in a language with 'eval', and otherwise as dynamic as (or
moreso than) Perl 5, optimizations that don't change *somebody's*
semantics are hard to come by.  Most optimizations give you speed
benefit at the loss of some flexibility.  For instance, early binding
can have quite large benefits, but disables redefinition at
runtime.
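The early-binding trade-off can be made concrete. In Ruby (used here only as an illustration), method calls are late-bound, which is precisely what keeps runtime redefinition working; an early-binding optimizer would have frozen the first definition into the closure:

```ruby
def greet
  "hello"
end

call_it = proc { greet }  # 'greet' is looked up at call time, not here

first = call_it.call      # "hello"

def greet                 # runtime redefinition, legal because of late binding
  "goodbye"
end

second = call_it.call     # "goodbye": the redefinition is visible
puts first, second
```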

Luke


How much do we close over?

2005-06-12 Thread Piers Cawley
Chip and I have been having a discussion. I want to write:

sub foo { my $x = 1; return sub { eval $^codestring } }
say foo()($x);

I claim that that should print 1. Chip claims it should throw a warning
because of timely destruction. My claim is that a closure should close over the
entire lexical stack, and not simply those things it uses statically. Chip
claims the opposite, arguing that timely destruction implies that this is
absolutely the right thing to do. It's also quicker.

But dammit, I'm doing runtime evaluation of code strings, I don't care about
quicker.

If it's not the default can it please be mandated that there be some way of
doing:

sub foo { my $x = 1; return sub is lexically_greedy {eval $^codestring} }

in other words, some way of declaring that a subroutine wants to hang onto
every lexical it can see in its lexical stack, no matter what static analysis
may say.
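For a working contrast (sketched in Ruby, since Perl 6's behaviour is exactly what is in dispute): Ruby blocks are already "lexically greedy" in the sense this post asks for. The block below never mentions `x`, yet `x` survives and is visible to the eval at call time:

```ruby
def foo
  x = 1
  # The proc's source never names x, so static analysis would free it;
  # Ruby keeps the whole enclosing scope alive anyway.
  proc { |codestring| eval(codestring) }
end

puts foo.call("x")   # prints 1: the eval finds x in the captured scope
```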


Re: How much do we close over?

2005-06-12 Thread Rob Kinyon
 Piers Cawley said:
 in other words, some way of declaring that a subroutine wants to hang onto
 every lexical it can see in its lexical stack, no matter what static analysis
 may say.

I'm not arguing with the idea, in general. I just want to point out
that this implies that you're going to hold onto every single
file-scoped lexical, leading to quite a bit of action-at-a-distance.

Maybe, instead, you should say "sub is lexical_stack(N)" where N is
the number of scoping levels it will hold onto in addition to any
lexical it actually refers to. I would have 0 be the innermost scope,
1 be the enclosing scope, etc.

Rob


Re: How much do we close over?

2005-06-12 Thread Rod Adams

Piers Cawley wrote:


Chip and I have been having a discussion. I want to write:

   sub foo { my $x = 1; return sub { eval $^codestring } }
   say foo()($x);

I claim that that should print 1. Chip claims it should throw a warning
because of timely destruction. My claim is that a closure should close over the
entire lexical stack, and not simply those things it uses statically. Chip
claims the opposite, arguing that timely destruction implies that this is
absolutely the right thing to do. It's also quicker.
 

I'm going to have to side with Piers on this one. My feeling is that 
having a reference to a closure accessible in memory should keep all the 
possible lexicals it can access in memory as well. That said, I can see 
the compiler optimizing memory consumption by destroying all the outer 
lexicals that it can prove will never be used by the inner closure. 
However, the presence of an 'eval' in the closure makes such a proof 
tenuous at best.


On the other hand, one could easily view the eval as constructing yet 
another closure, and it's unclear if we wish for that closure to be able 
to skip the first level outer closure to directly access the 2nd level 
outer lexicals. In that respect, I could see things Chip's way.


As for the warning, it should only be a warning if strictures are on, 
for using the now undeclared '$x' inside the eval.



But dammit, I'm doing runtime evaluation of code strings, I don't care about
quicker.

If it's not the default can it please be mandated that there be some way of
doing:

   sub foo { my $x = 1; return sub is lexically_greedy {eval $^codestring} }

in other words, some way of declaring that a subroutine wants to hang onto
every lexical it can see in its lexical stack, no matter what static analysis
may say.

 


Well, you could always do something like:

   sub foo { my $x = 1; return sub {my $x := $OUTER::x; eval $^codestring} }


But I'm spending too much time in other languages lately to remember 
exactly how $OUTER::x is spelled for certain. One could probably even 
write a macro that auto-binds all the lexicals in the outer scope to the 
current scope.


-- Rod Adams


Re: How much do we close over?

2005-06-12 Thread Dave Mitchell
On Sun, Jun 12, 2005 at 11:26:49PM +0100, Piers Cawley wrote:
 Chip and I have been having a discussion. I want to write:
 
 sub foo { my $x = 1; return sub { eval $^codestring } }
 say foo()($x);
 
 I claim that that should print 1. Chip claims it should throw a warning
 because of timely destruction. My claim is that a closure should
 close over the entire lexical stack, and not simply those things it uses
 statically. Chip claims the opposite, arguing that timely destruction
 implies that this is absolutely the right thing to do. It's also
 quicker.

I'm with Chip on this one. In fact, years ago I specifically fixed
bleedperl so that it gives this runtime warning:

$ perl592 -we 'sub f { my $x; sub { eval q($x) } } f()->()'
Variable $x is not available at (eval 1) line 2.


 But dammit, I'm doing runtime evaluation of code strings, I don't care about
 quicker.

You may be using slow evals, but other fast code may not be. Should the
closure in

 sub foo { my $x = 1; return sub { 1 } }

also capture the current instance of $x? You are basically condemning any
code that creates any closure, however simple, to basically hang on to
just about any data that has ever existed, in the vague hope that maybe,
just maybe, some day some code may use an eval and make use of that data.

 If it's not the default can it please be mandated that there be some way of
 doing:
 
 sub foo { my $x = 1; return sub is lexically_greedy {eval $^codestring} }
 
 in other words, some way of declaring that a subroutine wants to hang onto
 every lexical it can see in its lexical stack, no matter what static analysis
 may say.

I have no opinion on that.

-- 
This is a great day for France!
-- Nixon at Charles De Gaulle's funeral


Re: How much do we close over?

2005-06-12 Thread Dave Mitchell
On Sun, Jun 12, 2005 at 06:22:22PM -0500, Rod Adams wrote:
 Well, you could always do something like:
 
sub foo { my $x = 1; return sub {my $x := $OUTER::x; eval $^codestring} }

In perl5, that would just be

sub foo { my $x = 1; return sub { $x ; eval $_[0]} }

-- 
You live and learn (although usually you just live).


Re: How much do we close over?

2005-06-12 Thread Brent 'Dax' Royal-Gordon
On 6/12/05, Dave Mitchell [EMAIL PROTECTED] wrote:
 You may be using slow evals, but other fast code may not be. Should the
 closure in

  sub foo { my $x = 1; return sub { 1 } }

  also capture the current instance of $x? You are basically condemning any
 code that creates any closure, however simple, to basically hang on to
 just about any data that has ever existed, in the vague hope that maybe,
 just maybe, some day some code may use an eval and make use of that data.

A simple analysis of the parse tree should show that sub { 1 } isn't
going to access $x.  I personally don't see what's wrong with marking
certain constructs (eval, symbolic dereference, stash access, etc.) as
dirty and forcing a closure to close over everything if one is
present.  This is optimizer stuff, really, in the same class of
problems as optimizing Parrot's continuation-based sub calls into
bsr/ret where possible.
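That "dirty construct" test can be sketched as a toy scan (Ruby, purely illustrative: the marker regex and the helper name are made up, and a real compiler would walk the parse tree rather than pattern-match source text):

```ruby
# Hypothetical markers for constructs that defeat static analysis of
# which lexicals a closure can touch: string eval, caller introspection,
# symbolic access. A stand-in for a real parse-tree check.
DIRTY = /\beval\b|\bcaller\b|\$\{/

# If any dirty construct appears, the closure must close over everything.
def must_capture_everything?(source)
  source.match?(DIRTY)
end

puts must_capture_everything?("sub { 1 }")                  # false
puts must_capture_everything?("sub { eval $^codestring }")  # true
```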

Hmm...maybe the answer is that most destruction isn't guaranteed to be
timely, and any object which *is* guaranteed to have timely
destruction is illegal to close over unless the programmer marks it as
okay.  Or maybe that's only with an appropriate stricture...

--
Brent 'Dax' Royal-Gordon [EMAIL PROTECTED]
Perl and Parrot hacker