> -- test binary additional --
> foo + bar
> -- expect --
> <binary:
>   <variable:
>     <node:
>       <ident:foo>
>
>
>   <op:+>
>   <variable:
>     <node:
>       <ident:bar>
Alloy looks like this:
perl -e 'use Template::Alloy; print Template::Alloy->dump_parse_tree("[% foo + bar %]")'
[
  ['GET', 2, 12, [[undef, '+', ['foo', 0], ['bar', 0]], 0]],
]
> -- test args and ranges no commas --
> a(10..20 30 40..50)
> -- expect --
> <variable:
>   <node:
>     <ident:a>
>     <args:
>       <range:
>         <number:10>
>         <number:20>
>
>       <number:30>
>       <range:
>         <number:40>
>         <number:50>
perl -e 'use Template::Alloy; print Template::Alloy->dump_parse_tree("[% a(10..20 30 40..50) %]")'
[
  ['GET', 2, 22, ['a', [[[undef, '..', 10, 20], 0], 30, [[undef, '..', 40, 50], 0]]]],
]
Here's another sample:
perl -e 'use Template::Alloy; print Template::Alloy->dump_parse_tree("[% IF 1 ; FOR 2 ; bar ; END ; END %]")'
[
  ['IF', 2, 7, 1, [
    ['FOR', 10, 15, [undef, 2], [
      ['GET', 18, 21, ['bar', 0]],
      ['END', 24, 28, undef],
    ]],
    ['END', 30, 34, undef],
  ]],
]
The interesting thing to note is that the entire structure is only scalars or
arrayrefs. No hashes or blessed objects or coderefs. I'm fairly sure yours
would look about the same. The dumped data in this case is straight Perl
with a little bit of indenting thrown in.
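To illustrate, a throwaway sketch (not anything from Alloy or TT3) that walks
one of the dumped trees -- nothing more than ref() checks is needed:

  use strict;
  use warnings;

  # Every element is either a plain scalar (string/number/undef) or
  # another arrayref, so a recursive walker only has to test ref().
  sub walk {
      my ($node, $depth) = @_;
      $depth ||= 0;
      if (ref $node eq 'ARRAY') {
          walk($_, $depth + 1) for @$node;
      } elsif (ref $node) {
          die "unexpected ".ref($node)." in parse tree";  # never fires here
      } else {
          printf "%s%s\n", '  ' x $depth, defined $node ? $node : 'undef';
      }
  }

  # The tree for "[% foo + bar %]", pasted straight back in from the dump above.
  my $tree = [
      ['GET', 2, 12, [[undef, '+', ['foo', 0], ['bar', 0]], 0]],
  ];
  walk($tree);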
> > There is no reason that the raw data couldn't also be turned into Parrot
> > AST, or Perl 6.
>
> Or C. It's a bit of a pipe dream, but it would be rather nice to be able
> to compile a bunch of templates down to a C library and dynamically link
> them straight into the Apache process.
Text::Tmpl and HTML::Template::JIT try this. Text::Tmpl is the fastest.
HTML::Template::JIT is pretty quick. The problem is that you still need to
call into perl to access your variable stash or execute your stashed
functions or object methods. That boundary crossing is costly. In the
previously pasted benchmark output, Alloy was faster than all of the C based
ones in a mod_perl style environment. Text::Tmpl was faster in a CGI style
environment (twice as fast -- 6000 per second vs 3000) -- but then you're
stuck with Text::Tmpl's limited feature set.
That's not to say that somebody couldn't do something neat -- but so much of
TT/Alloy's time is spent in method calls that aren't worth re-implementing in
C that a C conversion may not offer much.
> > A final reason for using the AST is that you can then avoid having to
> > store perl written by the Apache server.
>
> Do you mean storing to disk? What's the alternative, a custom bytecode
> format or a straight serialised AST?
I mean Storable. In terms of data storage, I haven't found anything that
beats Storable for reading in serialized data. It is at least twice as fast
as having perl parse the same structure stored as perl code. It isn't human
readable - but it is perl readable and easy to turn into a human readable
form.
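A bare-bones sketch of that round trip (the file path is just for
illustration):

  use strict;
  use warnings;
  use Storable qw(nstore retrieve);

  # The parse tree for "[% foo + bar %]" from earlier in this mail.
  my $tree = [
      ['GET', 2, 12, [[undef, '+', ['foo', 0], ['bar', 0]], 0]],
  ];

  # Write it once, e.g. when the template is first compiled.
  nstore($tree, '/tmp/foo_bar.ast.sto');

  # Read it back on a later request -- this retrieve() is the part that
  # beats having perl parse the equivalent perl source.
  my $cached = retrieve('/tmp/foo_bar.ast.sto');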
> > Alloy also offers two stage compilation - on the
> > first hit you get AST, but on the second hit to the same document in a
> > cached object you get compiled perl. I'd love to see this in TT3.
>
> Ah right. That sounds like you've already implemented what I was getting
> at a paragraph or two ago. You're one step ahead of me again. Sounds
> good. :-)
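To make the two-stage idea concrete, here is a very rough sketch -- the names
and the toy AST handling are mine, not Alloy's actual internals. The first
hit interprets the AST and compiles a Perl sub on the side; every later hit
to the cached object calls that sub directly:

  use strict;
  use warnings;

  my %CACHE;   # per-process document cache, purely illustrative

  sub process_doc {
      my ($name, $ast, $stash) = @_;
      my $entry = $CACHE{$name} ||= { ast => $ast };

      # Stage two: a compiled Perl sub already exists for this document.
      return $entry->{code}->($stash) if $entry->{code};

      # Stage one: walk the AST directly this time...
      my $out = run_ast($entry->{ast}, $stash);

      # ...then compile it so the next hit skips the AST walk entirely.
      $entry->{code} = compile_ast($entry->{ast});
      return $out;
  }

  # Toy interpreter and compiler that only understand the simple
  # ['GET', ..., ['name', 0]] shape shown in the dumps above.
  sub run_ast {
      my ($ast, $stash) = @_;
      return join '', map { $stash->{ $_->[3][0] } // '' } @$ast;
  }

  sub compile_ast {
      my ($ast) = @_;
      my $perl = 'sub { my ($s) = @_; join "", '
               . join(', ', map { "(\$s->{'$_->[3][0]'} // '')" } @$ast)
               . ' }';
      return eval $perl;
  }

  my $ast = [ ['GET', 2, 5, ['foo', 0]] ];
  print process_doc('demo', $ast, { foo => "first hit (from the AST)\n" });
  print process_doc('demo', $ast, { foo => "second hit (compiled perl)\n" });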
Where I work we have some systems that use Text::Tmpl and we need the speed.
Template::Alloy is fast enough to replace Text::Tmpl. I am really hoping
that TT3 will have speed similar to Alloy's (faster is ok too :)) - I'd be
willing to help however I can on the coding/research side.
Paul
_______________________________________________
templates mailing list
[email protected]
http://lists.template-toolkit.org/mailman/listinfo/templates