There's a patch floating around that was never applied to FOP. This patch frees memory even within page-sequences; maybe it can help. The list archive should get you there. I remember I gave the same advice before, so searching for "Datterl" and "Ben" (the author of the patch) should lead you to it.

Regards,

Georg Datterl

------ Contact ------

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

HRB Nürnberg: 17193
Managing Director: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Other members of the Willmy MediaGroup:

IRS Integrated Realization Services GmbH:  www.irs-nbg.de
Willmy PrintMedia GmbH:                    www.willmy.de
Willmy Consult & Content GmbH:             www.willmycc.de


-----Original Message-----
From: Andreas Delmelle [mailto:[email protected]]
Sent: Saturday, 22 January 2011 19:50
To: [email protected]
Subject: Re: AW: AW: [FOP 1.0] Worse performance than with 0.20.5 !?

On 21 Jan 2011, at 08:31, Matthias Müller wrote:

Hi Matthias

> I temporarily disabled all images and special fonts in my fo file and still
> have the issue with the heap space. So I assume that my only chance to
> improve the rendering is splitting the document into multiple page-sequences.
> The thing now is that the size of the individual tables may vary extremely.
> There's also a case where I produce a table of over 400 pages!
> What about splitting the tables after each, let's say, 20 pages? Where's the
> best performance, less than 20?

That could be a good start, but it depends. It's really very difficult to say 
how many pages is ideal.
It is possible to make FOP run out of heap space with a document containing only 
a single fo:block holding a dump of a chapter generating about 40 pages (or even 
only 1 page, at font-size 1pt). That is purely the layout engine's 
line-breaking algorithm: no fancy fonts or tons of images, not even an fo:table. 
On the one hand, that admittedly points to a lack of scalability.
On the other, the fact remains: divide and conquer. Breaking up that same 
fo:block into multiple blocks can make a world of difference.
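
To illustrate (a minimal sketch; the chapter text itself is a placeholder), the change is simply turning one monolithic fo:block into a sequence of smaller ones, e.g. one per paragraph:

    <!-- Before: one huge fo:block holding an entire chapter.
         The layout engine must keep the line-breaking state for
         the whole block in memory at once. -->
    <fo:block>
      ... entire chapter as one uninterrupted run of text ...
    </fo:block>

    <!-- After: one fo:block per paragraph. Each block's
         line-breaking can be resolved independently, keeping the
         working set per block much smaller. -->
    <fo:block>First paragraph ...</fo:block>
    <fo:block>Second paragraph ...</fo:block>
    <fo:block>Third paragraph ...</fo:block>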

For the end-result, it is obviously best to break the content at a boundary 
that makes sense logically. You can try to approximate how many rows you can 
fit into one page, and try to insert breaks from there, but that will likely 
lead to results that make little sense to the ultimate consumer/reader of the 
document...
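
That said, if you do go with a fixed row count, the grouping is straightforward in XSLT 1.0. A sketch (the `row` element, the chunk size of 200, and the master-reference name are assumptions about your source and stylesheet; column definitions are omitted):

    <!-- Sketch: emit a new fo:page-sequence for every 200 source rows. -->
    <xsl:template match="table">
      <!-- Every 200th row (1, 201, 401, ...) starts a new chunk -->
      <xsl:for-each select="row[position() mod 200 = 1]">
        <fo:page-sequence master-reference="table-pages">
          <fo:flow flow-name="xsl-region-body">
            <fo:table>
              <fo:table-body>
                <!-- Current row plus the (up to) 199 rows that follow it -->
                <xsl:apply-templates
                    select=". | following-sibling::row[position() &lt; 200]"/>
              </fo:table-body>
            </fo:table>
          </fo:flow>
        </fo:page-sequence>
      </xsl:for-each>
    </xsl:template>

Note this fixes the break points by row count, not by rendered height, which is exactly the trade-off discussed below regarding where the page-sequence ends.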

>
> I almost forgot the most important point: If I split the table after the 20th
> page (or rather: after every 200 table rows, assuming ~10 rows per page), how
> do I ensure that the page-sequence ends at the page bottom? The size of the
> rows also may vary.

Short answer: you can't. Splitting into multiple fo:page-sequences is always a 
trade-off, since it introduces a forced break that is basically arbitrary, 
unless you can /make/ it so that the break makes sense there.

The longer answer is that one might be able to pull it off, but that would 
require a two-pass approach. In your case, that seems non-applicable, since the 
first pass would already trigger the out-of-memory condition.



Regards

Andreas
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

