Pre-fetching all the data and then passing a reference to an array of arrays (or array of hashes) would indeed work. My understanding is that the memory price for very large datasets could be huge.
Yes. To me, this is usually worth it since you can do things like handle DBI errors gracefully, which becomes harder if you wait until you've started generating output to fetch the data.
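A minimal sketch of that pre-fetch-then-render pattern, assuming a connected DBI handle in $dbh and a Template object in $tt (the query, column, and template names are only illustrative):

# assumes $dbh = DBI->connect(...) and $tt = Template->new() already exist

# fetch everything up front, inside an eval, so a database error
# can still be turned into a proper error page before any output
my $rows = eval {
    $dbh->selectall_arrayref(
        'SELECT fld1, fld2 FROM some_table',
        { Slice => {} },              # array of hashes, one per row
    );
};
die "query failed: $@" if $@;

# only now start producing output
$tt->process('report.tt', { rows => $rows })
    or die $tt->error();

The template then just loops over plain data, e.g. [% FOREACH row = rows %][% row.fld1 %][% END %].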
However, if you have an unusually large data set you can create an object that implements the TT iterator interface. You can look at the DBI plugin for pointers. Another approach would be to simply pre-populate some DBI plugin objects (or Class::DBI objects) and pass them in as template data. That keeps the actual SQL out of your templates.
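For the very-large-data-set case, a rough sketch of such an iterator object might look like this. It assumes, from the Template::Iterator documentation, that FOREACH calls get_first() and then get_next() until STATUS_DONE is returned; the package name is made up, and it only supports a single forward pass, so loop.size, loop.count and the like won't work:

package My::DBI::Iterator;

use base 'Template::Iterator';
use Template::Constants qw( STATUS_DONE );

# wraps an executed statement handle and fetches one row per
# iteration instead of slurping the whole result set into memory
sub new {
    my ($class, $sth) = @_;
    return bless { sth => $sth }, $class;
}

sub get_first {
    my $self = shift;
    return $self->get_next();
}

sub get_next {
    my $self = shift;
    my $row = $self->{sth}->fetchrow_hashref();
    return (undef, STATUS_DONE) unless $row;
    return $row;
}

1;

# in the calling code:
$sth->execute();
$vars->{rows} = My::DBI::Iterator->new($sth);
# template: [% FOREACH row = rows %] [% row.fld1 %] [% END %]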
To try to avoid that, I've attempted:
my (@datarefs);
...
@datarefs = (\$fld1, \$fld2);
$dbi_statement_handle->bind_columns(@datarefs);
$$vars{"datarefs"} = \@datarefs;
Then, in the template:
[% datarefs.0 %]
should dereference the first element of the @datarefs array and print the value, but it does not.
TT does not allow scalar refs as stash values. If you want to do it this way, I'd suggest either making a minimal object that returns $fld1 and $fld2 in response to methods of those names (a sketch of that follows the example below), or just passing code refs:
$vars->{'datarefs'} = {
    'fld1' => sub { return $fld1 },
    'fld2' => sub { return $fld2 },
};

- Perrin
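A minimal sketch of the object alternative mentioned above, assuming the variables are still bound with bind_columns() (the package name My::BoundRow is made up): it holds the scalar refs and dereferences them when the template asks, so each fetch() is reflected.

package My::BoundRow;

sub new {
    my ($class, %refs) = @_;     # e.g. fld1 => \$fld1, fld2 => \$fld2
    return bless { %refs }, $class;
}

# dereference at call time so the current bound value is returned
sub fld1 { ${ $_[0]->{fld1} } }
sub fld2 { ${ $_[0]->{fld2} } }

1;

# in the calling code:
$vars->{datarefs} = My::BoundRow->new( fld1 => \$fld1, fld2 => \$fld2 );

With either version the template side stays the same: [% datarefs.fld1 %] works because TT calls code refs and object methods it finds in the stash, and it picks up the current bound value after each $sth->fetch().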
