Bhaskar,

I don't think there should be a significant difference in
performance. The main difference between the two approaches is that
the "breadth" approach is easier to understand, since you call all
your processors or pipelines from a single pipeline. In general I
would favor that approach.
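
For example, a breadth-style pipeline could look like the sketch
below. The file names, ids, and the 'html' root element are just
placeholders, and I use the identity processor for brevity; in
practice each step would typically be an XSLT transformation:

  <p:processor uri="oxf/processor/identity">
    <!-- Placeholder input; usually an XSLT processor producing the header -->
    <p:input name="data" href="header.xml"/>
    <p:output name="data" id="header"/>
  </p:processor>

  <p:processor uri="oxf/processor/identity">
    <p:input name="data" href="body.xml"/>
    <p:output name="data" id="body"/>
  </p:processor>

  <p:processor uri="oxf/processor/identity">
    <p:input name="data" href="footer.xml"/>
    <p:output name="data" id="footer"/>
  </p:processor>

  <!-- Aggregate the three intermediate outputs and expose the result
       on the pipeline's "data" output -->
  <p:processor uri="oxf/processor/identity">
    <p:input name="data" href="aggregate('html', #header, #body, #footer)"/>
    <p:output name="data" ref="data"/>
  </p:processor>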

As a side note, you can perform aggregation in a href attribute as
follows:

  <p:input name="data"
           href="aggregate('root-element', #ref1, #ref2, #ref3)"/>

If you need to aggregate outputs right before sending them to an
output of the current pipeline, you can use the identity processor:

  <p:processor uri="oxf/processor/identity">
    <p:input name="data"
             href="aggregate('root-element', #ref1, #ref2, #ref3)"/>
    <p:output name="data" ref="data"/>
  </p:processor>

-Erik

> Hi,
>
> we have observed that there are two ways to generate the final
> transformed document from a set of different docs. (Example,
> combining footer, header, body to form the final document):
>
> 1. Breadth approach:
> The first pipeline calls a set of processors one after another,
> each of which returns an HTML document that is ultimately aggregated
> by the last processor of the main pipeline.
>
> 2. Depth approach:
> The first processor in the main pipeline calls a second pipeline
> (through its config parameter), which in turn calls a third one, and
> so on.
> The last pipeline in this chain (say, the third) returns its
> output (HTML/XML) to its caller (the second), which aggregates it
> with its own data input, transforms it, and passes its output back
> to the first processor. The first processor's output is the final
> page containing all the components, which can later be formatted.
>
> Which approach is more efficient?
>
> Thanks,
> Bhaskar




