Andreas:
I have several _wonderful_ users who demand to generate several-thousand-page
reports and kill many, many trees, all in the name of something. I
should just render the first and last 20 pages and fill the middle 1960
pages with blanks - I bet they'd never notice. :)
With that said, this is how I break the page sequences:
<!-- Start a new page-sequence at rows 1, $rowCount+1, 2*$rowCount+1, ... -->
<xsl:for-each select="reportrow[position() mod $rowCount = 1]">
  <fo:page-sequence master-reference="generic" language="en" country="us">
    <fo:static-content flow-name="xsl-region-before">
      <snip/>
    </fo:static-content>
    <fo:static-content flow-name="xsl-region-after">
      <snip/>
    </fo:static-content>
    <fo:flow flow-name="xsl-region-body">
      <fo:table table-layout="fixed" width="100%">
        <xsl:for-each select="columns/column">
          <fo:table-column>
            <snip/>
          </fo:table-column>
        </xsl:for-each>
        <fo:table-header>
          <snip/>
        </fo:table-header>
        <fo:table-body>
          <!-- this row plus the next $rowCount - 1 rows -->
          <xsl:for-each
              select=".|following-sibling::reportrow[position() &lt; $rowCount]">
            <fo:table-row>
              <snip/>
            </fo:table-row>
          </xsl:for-each>
        </fo:table-body>
      </fo:table>
    </fo:flow>
  </fo:page-sequence>
</xsl:for-each>
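For context, the "generic" page master and the $rowCount parameter would sit
higher up in the stylesheet, roughly along these lines (just a sketch - the
"report" parent element, the page dimensions and the value of $rowCount are
made up for illustration):

<xsl:param name="rowCount" select="100"/>

<xsl:template match="/report">
  <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
    <fo:layout-master-set>
      <!-- master-name must match the master-reference used above -->
      <fo:simple-page-master master-name="generic"
          page-width="8.5in" page-height="11in" margin="0.5in">
        <!-- the default region names are the xsl-region-* flow-names used above -->
        <fo:region-body margin-top="0.75in" margin-bottom="0.75in"/>
        <fo:region-before extent="0.5in"/>
        <fo:region-after extent="0.5in"/>
      </fo:simple-page-master>
    </fo:layout-master-set>
    <!-- the xsl:for-each over reportrow shown above goes here -->
  </fo:root>
</xsl:template>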
I did not know that blocks could also consume a lot of memory - it gives me
something else to check - thanks.
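Re: the forced breaks in your FYI below - if a huge block ever does become the
bottleneck for us, I assume splitting the content into smaller fo:blocks and
forcing a page break every so often would be enough. Untested sketch
($blockLimit and the para elements are made up):

<xsl:for-each select="para">
  <fo:block>
    <!-- force a hard page break before every $blockLimit-th block,
         i.e. "give it a few forced breaks" as suggested below -->
    <xsl:if test="position() &gt; 1 and position() mod $blockLimit = 1">
      <xsl:attribute name="break-before">page</xsl:attribute>
    </xsl:if>
    <xsl:apply-templates/>
  </fo:block>
</xsl:for-each>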
-Lou
Andreas L Delmelle <[EMAIL PROTECTED]> wrote on 07/11/2007 02:51:45 PM:
> On Jul 11, 2007, at 20:12, [EMAIL PROTECTED] wrote:
>
> Hi Lou
>
> > We also have a memory issue due to very large page sequences. One
> > thing we do is offer the customer two report outputs: PDF and
> > something called PDF Simple. PDF is the regular report that has a
> > single page sequence and can consume a lot of memory. The simple
> > output is the same report, but we break the page sequences every n
> > lines. It does result in some "half populated" pages,
> > but the memory and speed improvement make it worth it to our
> > customers.
>
> Just curious: how exactly do you break the page-sequences? I've seen
> it done in some Bugzilla entry, IIRC, but I cannot remember the details.
> All I remember is that it was a very dirty hack...
>
> In the end, that page-sequence boundary is currently not a major
> issue if you take into account:
> a) what you get in return (absolutely gratis)
> b) how easy it is sometimes to sell the dirty solution as a necessary
> evil (especially when it means it will cost the customer less)
> c) that no one ever really sees much more of these 1000-paged reports
> than the first 20 pages anyway :)
>
> FYI:
> Note that a similar limit also applies to blocks, although people
> tend to run into this far less often. In case of very large page-
> sequences, those generally never make it to the layout stage. The
> heap will be filled with the page-sequence and all its descendants.
> A huge block of text OTOH still occupies relatively little space in
> terms of memory, but it's only when layout begins that the memory
> usage increases drastically and FOP finally dies because of the
> ridiculous number of break-possibilities the layout algorithm is
> forced to consider. Give it a few forced breaks and everything runs
> smoothly.
>
> That said, we'd definitely appreciate any suggestions in terms of
> architectural changes to shift this logic of forcing breaks into
> FOP's black box. Something like an internal threshold, in terms of
> memory consumption, number of Java object instances, or the number of
> break-possibilities?
>
>
> Cheers
>
> Andreas
>