On Wed, 2002-03-27 at 19:46, Michel J Lambert wrote:

> > Macros could add something to Perl, but I don't see why having a macro
> > return a string instead of looking and acting like a subroutine would be
> > a bad thing. In fact, as I pointed out before, you can do almost all of
> > the scoping stuff that you would want out of a macro in Perl with the
> > existing subroutine/code ref syntax and a special property.
>
> I disagree here. How would I define foreach as a macro (assuming the
> following, simplified syntax)?
>
>     foreach $each, @array, {
>         print $each;
>     };
>
> Since $each is being passed to the foreach macro, the macro needs to
> create a lexical scoping for $each, and place the third argument as a
> loop body, within the context of the scoping it just defined.
First off, I want to say that this is the kind of discussion I joined p6l
for. I respect your point of view here, and don't want my argument to be
taken as a lack of validation of yours. I don't have time to code, so in
the long run my "assistance" here may be moot.

I think the difference between what we're saying is that you're insisting
on a deferred evaluation phase (actually deferred parsing, but let's
abstract that a little). OK, I see the value of the deferral (which,
really, is what I was asking about at first). I think where you will get
caught is where the Perl tokenizer/parser has to look at an expression,
determine which parts of it are to be deferred and which parts of it to
perform Perl's usual unholy rites on, and then step forward.

I also don't like "manual" construction of the macro string, though I
understand that it gives you some flexibility that you can't get
otherwise. Here's what I suggest as a compromise:

    macro forall ($iterator, $list, $block) {
        my @ltmp = ($list);
        foreach $iterator -> @ltmp $block
    }

    forall{$var}{@list}{{print;}};

Where the parser sees "macro NAME PARAMS BLOCK", it interpolates ONLY the
variables in PARAMS into BLOCK for every occurrence of the macro. The
macro itself is then treated as a quoting operator (I'm uncomfortable with
"forall%x%%y%%z%;", but it seems to be an expected consequence of this way
of defining macros). So, the above would become:

    my @ltmp = (@list);
    foreach $var -> @ltmp {print;}

Would that get you everything you want? For things like binary operators,
I don't know. I don't think you want to introduce binary quoting
operators.

This avoids a couple of things that I found hairy in what you proposed,
but does accomplish all of the deferred evaluation that you suggested.

> - Transformation: they can look inside the structure of their arguments.

OK, here's where I think you don't want to go. I understand the power, but
now you get into what you had suggested earlier.
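As an aside, for anyone who wants something concrete to poke at: the
expansion my compromise produces corresponds roughly to the following
runnable Perl 5, with the deferral emulated by a coderef. The `forall`
helper here is hypothetical (plain Perl 5, not the proposed macro syntax).

```perl
use strict;
use warnings;

# Hypothetical helper, not real macro syntax: a sketch of what the
# expanded forall amounts to in today's Perl 5, with the loop body
# deferred by passing it as a coderef.
sub forall {
    my ($list, $body) = @_;
    my @ltmp = @$list;        # corresponds to: my @ltmp = (@list);
    for my $var (@ltmp) {     # corresponds to: foreach $var -> @ltmp
        $body->($var);        # the deferred {print;}-style block
    }
}

my @out;
forall([1, 2, 3], sub { push @out, $_[0] });
# @out is now (1, 2, 3)
```

The point of the sketch is only that the body is not evaluated until the
helper chooses to call it, which is the deferral we're both after.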
Basically, a macro would have to be a subroutine that constructs a string
that is your code. This sets off alarms deep in my racial memory. I see
debuggability going near-zero and code maintenance being a
not-even-remotely-humorous joke.

I do think that you need some sort of quotemeta syntax for a macro so that
you can construct a string out of one of your arguments. This gives you:

    macro assert($expr) {
        die "Assertion failed: \"$\expr\"" unless $expr;
    }

In this case, I'm suggesting <$><\>name as the syntax for meta-quoting the
macro parameter, but if there's something that would seem more natural,
let me know.

> - Binding: I've already explained this one to death. "Any operator which
> is to alter the lexical bindings of its arguments must be written as a
> macro."

Not clear on how this impacts Perl. I do think that a macro expansion
should implicitly comprise its own block. This gives you the ability to
declare lexicals that will evaporate at the end of the macro. However, if
you're talking about

    forall{my $x}{@list}{code....}

then I think my suggestion covers your case.

> - Conditional evaluation: how would you write 'if' or 'and' using
> functions?

if is hard because it has variable forms. I guess you could have:

    macro condition(*@pairs) {
        for ($expr,$block) -> (@pairs) {
            if (eval $expr) {
                return eval $block;
            }
        }
    }

But you're evaluating at run time, not compile time. I understand why you
would *want* more, but I don't think it's going to be very clean
otherwise.

> - Multiple evaluation: this can be emulated by receiving coderefs, and
> calling them multiple times. But it requires the caller format them *as* a
> coderef.

You can do this the way I propose.

> - Using the calling environment: I believe Damian likes this one for
> Exporter, and flexibility, and Dan doesn't for optimization reasons.
> Macros give the BOBW.
I think that

    macro x($a){$a}

    x{y}

should evaluate to {y}. This gives you lexical scope for any declared
variables (thus avoiding needing a gensym-type facility) but allows access
to the "caller's" namespace.

> - Saving function calls: A minor one, which can be emulated with inlined
> functions

Both of our proposals allow for this.

> > If labels are lexically scoped, there's no problem here. Of course, you
> > need to "promote" all labels to the beginning of the enclosing block,
> > but that's compiler magic, and doesn't violate lexical scoping.
>
> If they're lexically scoped, yes. But this doesn't solve all the problems.
> For example, what if, in this macro, I pass the 'jumpto' label to another
> macro I'm calling? There's no guarantee that the inner macro, when it uses
> the label I passed it, will work, since it could accidentally re-scope the
> label name itself for its own purposes. Thus, the problems with variable
> capture, and the reasons for gensym.

If the macro you call re-defines a lexical, overriding an outer
declaration, I think that's just cause for a warning, but it should behave
otherwise just as if you did the same with any lexically scoped variable.
Passing a label between macros really should be a maiming offense in the
first place.

Your comments about volume are well-taken, but given my change of heart, I
thought I should fire off one more chapter....
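P.S. For the record, the coderef emulation of conditional evaluation
mentioned above can be written today in plain Perl 5. The `condition`
helper below is hypothetical (there is no such builtin); it's a sketch of
the run-time version of my `condition` macro, where the losing branches
are never evaluated because they're wrapped in coderefs.

```perl
use strict;
use warnings;

# Hypothetical helper: takes (test, body) coderef pairs and runs the
# body of the first test that returns true. Branches not taken are
# never evaluated -- that's the "multiple/conditional evaluation via
# coderefs" point from the thread.
sub condition {
    my @pairs = @_;
    while (my ($test, $body) = splice @pairs, 0, 2) {
        return $body->() if $test->();
    }
    return;
}

my $x = 5;
my $result = condition(
    sub { $x < 0 }  => sub { "negative" },
    sub { $x == 0 } => sub { "zero" },
    sub { 1 }       => sub { "positive" },
);
# $result is "positive"; the other branch bodies never ran
```

Of course this only buys run-time laziness, not compile-time expansion,
which is exactly the limitation I conceded above.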