On Fri, Apr 15, 2005 at 12:45:14PM +1200, Sam Vilain wrote:
: Larry Wall wrote:
: > Well, only if you stick to a standard dialect.  As soon as you start
: > defining your own macros, it gets a little trickier.
:
: Interesting, I hadn't considered that.
:
: Having a quick browse through some of the discussions about macros, many
: of the macros I saw[*] looked something like they could be conceptualised
: as referring to the part of the AST where they were defined.
:
: ie, making the AST more of an Abstract Syntax Graph.  And macros like
: 'free' (ie, stack frame and scope-less) subs, with only the complication
: of variable binding.  The ability to have recursive macros would then
: relate to this graph-ness.
That is one variety of macro.

: What are the shortcomings of this view of macros, as 'smart' (symbol
: binding) AST shortcuts?

The biggest problem with smart things is they're harder for not-so-smart
people to understand.

: The ability to know exactly what source corresponds to a given point on
: the AST, as well as knowing straight after parse time (save for string
: eval, of course) what each token in the source stream relates to is one
: thing that I'm aiming to have work with Perldoc.  I'm hoping this will
: assist I18N efforts and other uses like smart editors.

Yes, that's an important quality for many kinds of tools, whether for
documentation, debugging, or refactoring.

: By smart editors, I'm talking about something that uses Perl/PPI as its
: grammar parsing engine, and it highlights the code based on where each
: token in the source stream ended up on the AST.  This would work
: completely with source that munges grammars (assuming the grammars are
: working ;).  Then, use cases like performing L10N for display to non-
: English speakers would be 'easy'.  I can think of other side-benefits
: to such "regularity" of the language, such as allowing Programatica-
: style systems for visually identifying 'proof-carrying code' and
: 'testing certificates' (see http://xrl.us/programatica).

Glad you think it's 'easy'.  Maybe you should 'just do it' for us.  :-)

: macros that run at compile time, and insert strings back into the
: document source seem hackish and scary to these sorts of prospects.

We also allow (but discourage) textual substitution macros.  They're
essentially just lexically scoped source filters, and they suffer the
same problems as source filters, except that you can more easily limit
the damage to a small patch of code.  The problem is that the original
patch of text has to be stored in the AST along with the new chunk of
AST generated by the reparse, and it's not at all clear how a tool
should handle that conflict.
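[Editorial sketch of the bookkeeping conflict described above, in Python rather than Perl; all class and macro names here are illustrative, not any real compiler's API. A node produced by a textual-substitution macro has to carry both the text the author wrote and the AST reparsed from the substituted text, and a tool must pick which view to show:]

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    kind: str                         # e.g. "call", "mul", "literal"
    source: str                       # original source text for this span
    children: list = field(default_factory=list)

@dataclass
class TextualMacroNode(Node):
    # AST produced by reparsing the substituted text -- a second,
    # competing description of the same region of the document.
    expansion: Optional[Node] = None

# The author wrote SQUARE(x + 1); a textual macro substituted it and
# the parser reparsed the result.  Both representations must be kept.
n = TextualMacroNode(
    kind="macro-call",
    source="SQUARE(x + 1)",
    expansion=Node(kind="mul", source="((x + 1) * (x + 1))"),
)

# A highlighter or L10N tool asking "what text does this node cover?"
# now gets two conflicting answers:
print(n.source)            # what the programmer wrote
print(n.expansion.source)  # what the parser actually saw
```

[An AST-level macro avoids the conflict entirely: there is only ever one parse of the author's text, so `source` stays attached to the node it came from.]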
It's better to only parse once whenever possible, and just make sure
the original text remains attached to the appropriate place in the AST.
More basically, it's usually better to cooperate with the parser than
to lie to it.

: But then, one man's hackish and scary is another man's elegant
: simplicity, I guess.
:
: * - in particular, messages like this:
:   - http://xrl.us/fr78
:
: but this one gives me a hint that there is more to the story... I
: don't grok the intent of 'is parsed'
:   - http://xrl.us/fr8a

This is mostly talked about in the relevant Apocalypses, and maybe the
Synopses.  See dev.perl.org for more.

Larry