On 6/12/05, Piers Cawley <[EMAIL PROTECTED]> wrote:
> Chip and I have been having a discussion. I want to write:
> 
>     sub foo { my $x = 1; return sub { eval $^codestring } }
>     say foo()("$x");
> 
> I claim that that should print 1. Chip claims it should throw a warning
> because of timely destruction. My claim is that a closure should close over
> the
> entire lexical stack, and not simply those things it uses statically. Chip
> claims the opposite, arguing that timely destruction implies that this is
> absolutely the right thing to do. It's also quicker.

I just have to say that it's really annoying running into
optimizations when I don't want them.  Back when I wrote a
back-chaining system in Perl, I used tied variables in order to
determine when I needed to solve for something.  A standard idiom was:

    rule \$var, sub {
        $a and $b and $c;
        $var = 1;
    };

And damnit, $c would never be solved for, because it was optimized
away by Perl.  I ran into that problem rather quickly, and I knew a
"no optimizations" pragma would do the trick, if one existed.  But no such
luck, so I always had to add an "and 1" on the end.  Funny thing is,
that should have been optimized away too, but it wasn't, because Perl
wasn't that good at optimizations.  So now I have to know about the
capabilities of the language I'm using in order to program.

I ran into a related problem when I was writing Class::Closure, with
the optimization that frees lexical variables once they're no longer
statically referenced.  I was closing over lexicals as my member
variables, so things were getting destroyed much too early.
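The same hazard is easy to show in Python, which likewise closes over only the names the inner function mentions. A sketch with a hypothetical member object and a weak reference as the probe:

```python
import weakref

class Member:
    """Stands in for a member variable held only by the enclosing scope."""

def make_method():
    member = Member()
    probe = weakref.ref(member)   # watch when member is destroyed
    def method():
        return "this method never mentions member"
    return method, probe

method, probe = make_method()
# member was not captured by method, so it died as soon as
# make_method returned -- "much too early" if method was supposed
# to reach it dynamically:
print(probe() is None)            # True
```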

To sum up, optimizations are nice, and it's nice to have optimizations
on by default for PR reasons, but you have to be able to turn them
off.

And yes, you could mark "eval" as "lexically dirty" so that you
wouldn't optimize when there's one in sight.  But that solution is
quite pessimistic, as you would have to mark every late-bound sub call
as dirty too, given caller introspection.
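Python shows why that pessimism spreads: any callee can reach into the caller's frame, so under a "keep only what's statically used" regime every call site would have to be treated as dirty. A sketch using the inspect module (the helper name is made up for illustration):

```python
import inspect

def peek(name):
    # Read a lexical out of the caller's frame -- caller introspection
    # in the same spirit as Perl's caller().
    return inspect.stack()[1].frame.f_locals[name]

def caller():
    secret = 42              # only ever read via introspection
    return peek("secret")

print(caller())              # 42
```

Nothing in `caller` mentions `secret` after the assignment, yet freeing it early would break `peek`, which is exactly the tension between timely destruction and dynamic lookup.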

Luke
