Brat Wizard wrote:
> On Wednesday 18 September 2002 01:10 pm, Josh Chamas spewed into the ether:
> 
>>Assuming that the data is available when the my:sub is starting
>>to get processed, then you can use more XMLSubs to do your dirty work like:
>>
>><% my $data_ref = \%data_hash; %>
>><my:sub data=$data_ref>
>>    <my:value name='COST' fmt='%.2f' />
>>    <my:value name='STATE' fmt='%uc2' />
>></my:sub>
> 
> 
> I've been playing around with this some more and have this observation to make 
> (among some others)-- what you have started here with XMLSubs is very nifty 
> and offers the designer a large degree of abstraction ability (a la 
> templating) in crufting up pages-- however, the one big weakness that I see 
> is that all of the ASP-assisted bits-- nested subs, <% %> substitution, 
> etc-- all happen in the PRE processing stage and not DURING the processing 
> stage. A lot of what one wants to do doesn't come into scope until _during_ 
> the processing.

Right, it's almost as if one would want a pre-processing stage not of the
output, but of the content.  The current model was not built to be easily
extended in this direction, as there is only one sub per XMLSubs that gets
called, but what you would really want is something like:

sub my::sub_pre()
sub my::sub_post()

Unfortunately, any parsing of this script is quite likely to throw line
numbers off from the original source.

Another mechanism that might be good to use for this is ASPish
global.asa events that would be called on every XMLSubs execution, both
before & after, like:

sub XMLSubs_OnStart {
    my($xml_tag_name, $args, $html_input) = @_;
}

sub XMLSubs_OnEnd {
    my($xml_tag_name, $args, $html_output) = @_;
}

The problem that I see with this is that it makes your XMLSubs
necessarily wired to the application, when ideally it should
be easily repurposable to other web applications.  JSP taglibs
seem to solve this problem by having an XML description file
for each tag lib.

I think I would like to solve this by having the ability to
map a module namespace to an XML namespace, and then have
a certain OO specification such that, if the module implements
it, the XMLSubs can have its preprocessing stage et al.
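To make that mapping concrete, here is a tiny standalone sketch of the dispatch, probing with can() for optional hooks.  The class and hook names (My::Sub, pre(), post()) are made up for illustration -- none of this is actual Apache::ASP API:

```perl
#!/usr/bin/perl
# Sketch: map an XML namespace prefix to a Perl module namespace,
# then let the engine probe (via can()) for optional pre/post hooks.
use strict;
use warnings;

my %ns_map = ( 'my' => 'My' );   # XML prefix => Perl namespace

{
    package My::Sub;
    sub new  { my ($class, %args) = @_; bless { %args }, $class }
    sub pre  { my $self = shift; "[pre:$self->{tag}]" }
    sub post { my $self = shift; "[post:$self->{tag}]" }
}

# Resolve <my:sub> to My::Sub and call whichever hooks it implements.
sub run_tag {
    my ($qname) = @_;
    my ($prefix, $local) = split /:/, $qname, 2;
    my $class = $ns_map{$prefix} . '::' . ucfirst($local);
    my $obj   = $class->new(tag => $qname);
    my $out   = '';
    $out .= $obj->pre  if $obj->can('pre');   # optional preprocessing stage
    $out .= $obj->post if $obj->can('post');  # optional postprocessing stage
    return $out;
}

print run_tag('my:sub'), "\n";
```

A module that implements neither hook would simply fall through both can() checks, so old-style XMLSubs could coexist with the new interface.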

> 
> For example, one use of this ability (and enabling a big win) would be in 
> interpreting/rendering row data selected from a database. None of the nifty 
> ASP-assist functions are at all useful in this context because it isn't until 
> inside the iteration loop which is inside the my:sub function that this 
> information first becomes available. 
> 
> Here is an (actual) example of the problem:
> 
> <table width="100%">
> <my:pgloop sth=$sth>
>   <fmt>PRICE:%.2f</fmt>
>   <tr>
>     <td><my:pimage pg="__PG__" sku="__SKU__" width="32"/></td>
>     <td><a href="catalog/__PG__/__SKU__">__SKU__</a></td>
>     <td>__USER1__</td>
>     <td>__NAME__</td>
>     <td align=right>__QTY_ONHAND__</td>
>     <td align=right>__AMT_PRICE__</td>
>   </tr>
> </my:pgloop>
> </table>
> 
> In this case, the 'my:pimage' sub is nested inside the 'my:pgloop' sub-- the 
> statement handle '$sth' contains the selected DBI table rows, and the pgloop 
> sub iterates the rows and subs the template for each row. The 'my:pimage' sub 
> retrieves an image based on the product group and sku id. But in this case, 
> the sku is not known until inside the pg:loop func. Intuitively (to me at 

In an OO model, I think we could achieve something like this:

my $pgloop = My::Pgloop->new(
        ASP => $asp,
        Attrib => { sth => $sth },
        Content => \$data_within,
);
my $pgimage = My::PgImage->new(
        ASP => $asp,
        Attrib => { pg => "__PG__", sku => "__SKU__" },
        Parent => $pgloop,
);
$pgimage->execute(\'');
$pgloop->execute(\$final_data_output_within);

This pre/post processing model would allow pgimage to
work with $self->{Parent}{sth}, or pgloop could populate
its own data in this way.  Output from pgimage could be
trapped and inlined, passed into the execute() function
for pgloop.
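As a standalone illustration of the Parent idea, here is a minimal sketch where a nested tag object walks up its Parent chain for data; the Tag class, its keys, and the lookup() method are invented for the example, not Apache::ASP API:

```perl
#!/usr/bin/perl
# Each tag is an object holding Attrib and an optional Parent, so a
# nested tag can reach up for data (here, a fake statement-handle row).
use strict;
use warnings;

{
    package Tag;
    sub new { my ($class, %args) = @_; bless { %args }, $class }
    # Walk up the Parent chain looking for an attribute.
    sub lookup {
        my ($self, $key) = @_;
        return $self->{Attrib}{$key} if exists $self->{Attrib}{$key};
        return $self->{Parent} ? $self->{Parent}->lookup($key) : undef;
    }
}

my $pgloop  = Tag->new(Attrib => { sth => { SKU => 'ABC123' } });
my $pgimage = Tag->new(Attrib => { width => 32 }, Parent => $pgloop);

# pgimage can see its parent's data without having it passed in:
my $sku = $pgimage->lookup('sth')->{SKU};
print "$sku\n";
```

This is exactly the $self->{Parent}{sth} access described above, just with the chain walk made explicit so deeply nested tags also work.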

Now, I'm not saying I could deliver on all of this necessarily,
but this seems like where the model should go, or some such
direction.

> Another useful thing might be a <def> field (define) field that could 
> construct artificial tokens based on expression-manipulation of existing data 
> available in the dataset at time of rendering. That way you could have a cost 
> field and apply a markup value to it and come up with an artificial, but 
> usable price token.
> 
> <my:sub>
>   <define>PRICE:COST*1.2</define>
>  The cost of this item is __COST__ and it is currently selling for __PRICE__.
> </my:sub>
> 

If we have a preprocessing stage like above, then you could probably
take the inner content, read through it with XML::Simple or HTML::Parser,
and pick out your internal definitions that way.  It may be that I am unable
to achieve a fully post-processed model where I can trap the output of
the inner tokens too, in which case we would need to put the rest of the
data into another tag that you could read easily at input time, like:

  <my:sub>
   <define>PRICE:COST*1.2</define>
   <output>The cost of this item is __COST__ and it is currently selling for 
__PRICE__.</output>
  </my:sub>

This kind of tag could probably be entirely handled by My::Sub->new() without
the need for a post process stage.
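For illustration, here is a self-contained sketch of lifting the <define> out of the inner content with a plain regex (a real version might use HTML::Parser or XML::Simple as mentioned above) and computing the derived PRICE token from COST:

```perl
#!/usr/bin/perl
# Pull <define>NAME:EXPR</define> directives out of a tag's inner
# content, evaluate each EXPR against known tokens, then substitute
# the __TOKEN__ placeholders.  The PRICE:COST*1.2 syntax is the one
# from the example above.
use strict;
use warnings;

my $content = <<'HTML';
<define>PRICE:COST*1.2</define>
The cost of this item is __COST__ and it is currently selling for __PRICE__.
HTML

my %data = ( COST => 10.00 );

# Lift out each <define> and evaluate its expression against %data.
while ($content =~ s{<define>(\w+):(.*?)</define>\s*}{}s) {
    my ($name, $expr) = ($1, $2);
    $expr =~ s/\b([A-Z_]+)\b/$data{$1}/g;   # substitute known tokens
    $data{$name} = eval $expr;
}

# Then fill in the __TOKEN__ placeholders as usual.
$content =~ s/__(\w+)__/sprintf '%.2f', $data{$1}/ge;
print $content;
```

With COST at 10.00 this renders PRICE as 12.00.  (A production version would want something safer than string eval for the expression, of course.)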

> Finally there is a need for some sort of simple if/then logic. This is where 
> everything gets tricky. If you go too far with this concept then you cross the 
> threshold of simple formatting and get into procedural aspects which should 
> probably be avoided. There probably should never be any sort of iterative 
> aspect. That seems definitely over the line, and can be handled anyway by 
> simply creating a my:sub that performs the iteration. But consider this 
> situation:

In a pre-process model, the if/else children could be evaluated by the
parent at parent->new() time, or could look at the parent data at their
if->new() time.  If ASP <% if() { } %> will still be supported within,
then that could be evaluated at parent->new() time by simply doing:

$output = $Response->TrapInclude(\$content);

where that will execute the content without needing a post-processing model
at all.  Hmmm, I kind of like where this is going. :)  Of course it's totally
different than what we have today.
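Conceptually, trapping output the way TrapInclude does can be sketched in plain Perl by redirecting the default print filehandle into a scalar.  This is only an illustration of the idea, not the Apache::ASP implementation:

```perl
#!/usr/bin/perl
# Run a piece of code that prints, but capture the output into a
# scalar instead of sending it to the client.
use strict;
use warnings;

sub trap_output {
    my ($code) = @_;
    my $buf = '';
    open my $fh, '>', \$buf or die $!;
    my $old = select $fh;      # redirect default print()
    $code->();
    select $old;               # restore previous filehandle
    close $fh;
    return $buf;
}

my $cond = 1;
my $output = trap_output(sub {
    if ($cond) { print "shown when true" }
    else       { print "shown when false" }
});
print "trapped: $output\n";
```

The parent tag gets the evaluated if/else content back as a string, which it can then run its token substitution over before emitting anything.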

> sub render {
>         my ($tmpl, $hash, $fmt) = @_;
>         while ($tmpl =~ /__(.*?)__/) {
>                 my $TOKEN = $1; my $token = lc($TOKEN);
>                 if ((my $fmt = $$fmt{$TOKEN}) ne undef) {
>                         # to-uppercase

Seeing where you are going with this, I feel like an object-per-XMLSubs
approach has the greatest potential.  With a well-defined reserved
interface, mixed in with Apache::ASP::Share system templates, I think
this can go a long way towards building reusable components.

> ="$$args{go_button}" border=0>}:qq{ &nbsp;<input type=submit value="Go">};
>         print << "EOF";
> <table border=0 cellpadding=1 cellspacing=0>
> <form method=GET action="/$storeid/$script">
> <tr>
>         $caption

I am hoping that this form processing could be done from a
reusable XMLSubs by something like $ASP->Response->Include('Share::My/Subs/form.inc');
I believe the way I constructed the Share:: mechanism is that
someone could overload that template at their $GLOBAL/My/Subs/form.inc
to further customize what you have done, but there is a whole
lot of extension that can be done there too to make it really powerful.
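The overloading amounts to a search-path lookup; here is a hypothetical sketch using temp directories, where an app-local copy under the global directory wins over the shared default.  The paths, search order, and resolve_include() helper are assumptions for the example, not the actual Share:: implementation:

```perl
#!/usr/bin/perl
# Resolve an include by checking the application's global directory
# before falling back to the shared default.
use strict;
use warnings;
use File::Path qw(make_path);
use File::Temp qw(tempdir);

sub resolve_include {
    my ($rel, @dirs) = @_;
    for my $dir (@dirs) {
        return "$dir/$rel" if -e "$dir/$rel";
    }
    return undef;
}

my $share  = tempdir(CLEANUP => 1);   # stands in for the Share:: defaults
my $global = tempdir(CLEANUP => 1);   # stands in for the app's $GLOBAL dir

# Only the shared copy exists at first.
make_path("$share/My/Subs");
open my $fh, '>', "$share/My/Subs/form.inc" or die $!;
print $fh "shared form"; close $fh;

my $found = resolve_include('My/Subs/form.inc', $global, $share);
print $found eq "$share/My/Subs/form.inc" ? "shared\n" : "overloaded\n";

# Drop a copy into the global dir and it overloads the shared one.
make_path("$global/My/Subs");
open $fh, '>', "$global/My/Subs/form.inc" or die $!;
print $fh "custom form"; close $fh;

$found = resolve_include('My/Subs/form.inc', $global, $share);
print $found eq "$global/My/Subs/form.inc" ? "overloaded\n" : "shared\n";
```

The application never has to touch the shared template to customize it; it just shadows the same relative path.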

Regards,

Josh
________________________________________________________________
Josh Chamas, Founder                   phone:925-552-0128
Chamas Enterprises Inc.                http://www.chamas.com
NodeWorks Link Checking                http://www.nodeworks.com

