AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Georg Datterl
Hi Dennis,

Page-sequences start with a new page. If you start a new page-sequence instead 
of inserting a fixed page break, the layout does not change, as far as I can 
tell.

Regards,

Georg Datterl

-- Contact --

Georg Datterl

Geneon media solutions gmbh
Gutenstetter Straße 8a
90449 Nürnberg

Commercial register (HRB) Nürnberg: 17193
Managing Director: Yong-Harry Steiert

Tel.: 0911/36 78 88 - 26
Fax: 0911/36 78 88 - 20

www.geneon.de

Other members of the Willmy MediaGroup:

IRS Integrated Realization Services GmbH: www.irs-nbg.de
Willmy PrintMedia GmbH: www.willmy.de
Willmy Consult & Content GmbH: www.willmycc.de


-----Original Message-----
From: Dennis van Zoerlandt [mailto:dvzoerla...@vanboxtel.nl]
Sent: Friday, 1 April 2011 13:13
To: fop-users@xmlgraphics.apache.org
Subject: Re: AW: AW: AW: AW: OutOfMemoryException while transforming large XML
to PDF


Hi Andreas,

Alright, it seems a logical explanation that you need a large heap to produce
this kind of large document.

Font auto-detection seems to be off: no auto-detect flag is present in the FOP
configuration file, and I also didn't include a manifest file with x-fonts.

I will look further into modifying the XSL file in such a way that multiple
page-sequences are used. I think it's the best solution so far. Am I correct
to say that multiple page-sequences won't affect the final page layout of the
PDF file? How can I split up the content into multiple page-sequences? I
suppose a modification to the XML input file is also necessary?

Another question: is there a reliable way to 'predict' or calculate the page
count of the PDF file before any transformation is started? I can check the
file size of the XML input file, but that isn't really reliable, because the
complexity of the XSL stylesheet is also a factor. I'm thinking of aborting
the task when the resulting PDF file would have 100+ pages (for instance). Is
this possible?

Best regards,
Dennis van Zoerlandt


Andreas Delmelle-2 wrote:

 On 31 Mar 2011, at 15:08, Dennis van Zoerlandt wrote:

 Hi Dennis

 In the meanwhile I have tested a few things. In the attachment you'll find
 a FO file ( http://old.nabble.com/file/p31286241/fop1.0-5000-fo.zip ),
 which has scrambled data because of confidentiality.

 I created the FO file with XMLspy and tried to create a PDF file with
 Apache FOP 1.0 (fop.bat) on my Windows XP workstation. It produced what
 seems to be the error below. No PDF file was created.

 It seems like the classic "cram all content into one page-sequence" issue.
 With a file of that size, there is little or nothing you can do. The
 current architecture of FOP does not allow rendering such documents
 without a sufficiently large heap.

 That said: I wrote the above while I was running your sample file (with
 FOP Trunk, using Saxon as the XSLT/JAXP implementation), and it just
 completed on my end, with a heap of 1GB. It did take about 7 minutes, but
 still... I got a nice output file of 455 pages.
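 For reference, a heap of that size can be set without editing anything; a
 sketch, assuming the stock FOP launch scripts, which pass the FOP_OPTS
 environment variable to the JVM (check your local copy), and with the file
 names as placeholders:

```shell
# Raise the JVM heap to 1GB before invoking FOP.
# (Unix shell shown; on Windows use: set FOP_OPTS=-Xmx1024m  before fop.bat)
export FOP_OPTS="-Xmx1024m"
fop -fo input.fo -pdf out.pdf
```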
 I doubt that it is related to images, as there is only one
 fo:external-graphic.
 Do you have font auto-detection enabled, by any chance? That might consume
 an unnecessary amount of heap space, for example if you only actually use
 a handful of custom fonts but have a large number of fonts installed on
 your system.
 Another possibility is that some fixes for memory leaks, applied to Trunk
 after the 1.0 release, are actually helping here.

 Splitting the XML input file into several chunks is not a preferable
 option for me; nevertheless it is a valid one.

 Note: strictly speaking, it is not necessary to split up the input so that
 you have several FOs. It would suffice to modify the stylesheet so that the
 content is divided over multiple page-sequences. If you can keep the size
 of the page-sequences down to, say, 30 to 40 pages, that might already
 reduce the overall memory usage significantly.
 There are known cases of people rendering documents of 10,000+ pages: no
 problem, provided that not all of those pages are generated by the same
 fo:page-sequence.


 Regards

 Andreas
 ----
 To unsubscribe, e-mail: fop-users-unsubscr...@xmlgraphics.apache.org
 For additional commands, e-mail: fop-users-h...@xmlgraphics.apache.org




--
View this message in context: 
http://old.nabble.com/OutOfMemoryException-while-transforming-large-XML-to-PDF-tp31236044p31293232.html
Sent from the FOP - Users mailing list archive at Nabble.com.



Re: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Dennis van Zoerlandt

Hi Georg,

At this moment we don't use fixed page breaks, just 1 page-sequence. The
stylesheet files are generated with Digiforms Designer.

Best regards,
Dennis van Zoerlandt


Georg Datterl wrote:
 
 Hi Dennis,
 
 Page-sequences start with a new page. If you start a new page-sequence
 instead of inserting a fixed page break, the layout does not change, as
 far as I can tell.
 
 Regards,
 
 Georg Datterl
 
 -- Kontakt --
 
 Georg Datterl
 
 Geneon media solutions gmbh
 Gutenstetter Straße 8a
 90449 Nürnberg
 
 HRB Nürnberg: 17193
 Geschäftsführer: Yong-Harry Steiert
 
 Tel.: 0911/36 78 88 - 26
 Fax: 0911/36 78 88 - 20
 
 www.geneon.de
 
 Weitere Mitglieder der Willmy MediaGroup:
 
 IRS Integrated Realization Services GmbH:www.irs-nbg.de
 Willmy PrintMedia GmbH:  www.willmy.de
 Willmy Consult  Content GmbH:   www.willmycc.de
 
 
 -Ursprüngliche Nachricht-
 Von: Dennis van Zoerlandt [mailto:dvzoerla...@vanboxtel.nl]
 Gesendet: Freitag, 1. April 2011 13:13
 An: fop-users@xmlgraphics.apache.org
 Betreff: Re: AW: AW: AW: AW: OutOfMemoryException while transforming large
 XML to PDF
 
 
 Hi Andreas,
 
 Alright, it seems a logical explanation you need a large heap to produce
 this kind of large documents.
 
 Font auto detection seems to be off. In the FOP configuration file no
 auto-detect flag is present and I also didn't include a manifest file with
 x-fonts.
 
 I will look further into modifying the XSL file in a such way multiple
 page-sequences are used. I think it's the best solution this far. Am I
 correct to say multiple page-sequences won't affect the definitive page
 lay-out of the PDF file? How can I split up the content in multiple
 page-sequences? I think there's also a modification necessary in the XML
 input file?
 
 Another question: is there a reliable way to 'predict' or calculate the
 page
 count the PDF file will have, before any transformation is started? I can
 check the file size of the XML input file, but that isn't really reliable
 because the complexity of the XSL stylesheet is also a factor. I'm
 thinking
 of aborting the task when the resulting PDF file will have 100+ pages (for
 instance). Is this possible?
 
 Best regards,
 Dennis van Zoerlandt
 
 
 Andreas Delmelle-2 wrote:

 On 31 Mar 2011, at 15:08, Dennis van Zoerlandt wrote:

 Hi Dennis

 In the meanwhile I have tested a few things. In the attachment you'll
 find a
 FO file ( http://old.nabble.com/file/p31286241/fop1.0-5000-fo.zip
 fop1.0-5000-fo.zip ) which has scrambled data because of
 confidentiality.

 I created the FO file with XMLspy and tried to create a PDF file with
 Apache
 FOP 1.0 (fop.bat) on my Windows XP workstation. It produced (what it
 seems)
 this error (see below). No PDF file was created.

 It seems like the classic cram all content into one page-sequence
 issue.
 With a file of that size, there is little or nothing you can do. The
 current architecture of FOP does not allow to render such documents
 without a sufficiently large heap.

 That said: I wrote the above while I was running your sample file (with
 FOP Trunk, using Saxon as XSLT/JAXP implementation), and it just
 completed
 on my end, with a heap of 1GB. It did take about 7 minutes, but still...
 I
 got a nice output file of 455 pages.
 I doubt that it is related to images, as there is only one
 fo:external-graphic.
 Do you have font auto-detection enabled, by any chance? That might
 consume
 an unnecessary amount of heap space, for example, if you only actually
 use
 a handful of custom fonts, but have a large number of those installed on
 your system.
 Another option is that some fixes for memory-leaks, applied to Trunk
 after
 the 1.0 release, are actually helping here.

 Splitting the XML input file into several chunks is not a preferable
 option
 for me, nevertheless it is a valid one.

 Note: it is, strictly speaking, not necessary to split up the input so
 that you have several FOs. What would suffice is to modify the
 stylesheet,
 so that the content is divided over multiple page-sequences. If you can
 keep the size of the page-sequences down to, say, 30 to 40 pages, that
 might already reduce the overall memory usage significantly.
 There are known cases of people rendering documents of +10.000 pages. No
 problem, iff not all of those pages are generated by the same
 fo:page-sequence.


 Regards

 Andreas
 ---
 -
 To unsubscribe, e-mail: fop-users-unsubscr...@xmlgraphics.apache.org
 For additional commands, e-mail: fop-users-h...@xmlgraphics.apache.org



 
 --
 View this message in context:
 http://old.nabble.com/OutOfMemoryException-while-transforming-large-XML-to-PDF-tp31236044p31293232.html
 Sent from the FOP - Users mailing list archive at Nabble.com.
 
 
 -
 To unsubscribe, e-mail: fop-users-unsubscr...@xmlgraphics.apache.org
 For additional

RE: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Eric Douglas
I currently only have one fo:page-sequence tag in my xsl.
How would auto page numbering with fo:page-number work otherwise?

Is it possible the memory requirements could be reduced for extremely large 
documents by adding an option to swap some values out to temp files?  Maybe 
save information in a file for each 100 pages?



AW: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Georg Datterl
Hi Eric,

I don't think page numbering depends on the page-sequence.

Regards,

Georg Datterl




Re: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Andreas L. Delmelle
On 01 Apr 2011, at 16:47, Eric Douglas wrote:

 I currently only have one fo:page-sequence tag in my xsl.
 How would auto page numbering with fo:page-number work otherwise?

If you do not use the 'initial-page-number' property, the numbering for the 
next page-sequence just continues from where the previous one left off. In 
other words, by default, page-number does work across page-sequences.

See also: http://www.w3.org/TR/xsl/#initial-page-number
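A minimal sketch of that default behaviour (the master name is a placeholder):

```xml
<!-- Two page-sequences; with no initial-page-number on the second,
     its fo:page-number continues counting from where the first ended. -->
<fo:page-sequence master-reference="STANDARD_PAGE">
  <fo:flow flow-name="xsl-region-body">
    <fo:block>Part 1, page <fo:page-number/></fo:block>
  </fo:flow>
</fo:page-sequence>
<fo:page-sequence master-reference="STANDARD_PAGE">
  <fo:flow flow-name="xsl-region-body">
    <fo:block>Part 2, page <fo:page-number/></fo:block>
  </fo:flow>
</fo:page-sequence>
```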

 
 Is it possible the memory requirements could be reduced for extremely large 
 documents by adding an option to swap some values out to temp files?  Maybe 
 save information in a file for each 100 pages?

We already have a '-conserve' option on the command line, which results in
pages being serialized to disk to avoid keeping them in memory, but that would
likely not help in this particular situation. It is meant to be used in
conjunction with multiple page-sequences when there are a lot of
cross-references: a scenario where even multiple page-sequences might still
consume too much memory for the remainder of the process to run smoothly.



Regards

Andreas



RE: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Eric Douglas
I only reference "page-sequence" once.  Is this the single page-sequence
problem you're talking about, or is my page loop referencing multiple page
sequences?
My input is already formatted, so I know what goes on each page.  I just
need the pages connected for the page variable references: fo:page-number
keeps track of the page count, and an fo:page-number-citation needs to know
the number of the last page, which I do with the empty block.

 <fo:page-sequence>
  <xsl:attribute name="master-reference">STANDARD_PAGE</xsl:attribute>
  <fo:flow>
   <xsl:attribute name="flow-name">xsl-region-body</xsl:attribute>
   <xsl:for-each select="PAGE_DATA">
    <fo:block>
     <xsl:attribute name="break-before">page</xsl:attribute>
     ...
    </fo:block>
   </xsl:for-each>
   <fo:block>
    <xsl:attribute name="id">last-page</xsl:attribute>
    <xsl:attribute name="position">absolute</xsl:attribute>
    <xsl:attribute name="left">0</xsl:attribute>
    <xsl:attribute name="top">0</xsl:attribute>
    <xsl:attribute name="width">0</xsl:attribute>
    <xsl:attribute name="height">0</xsl:attribute>
   </fo:block>
  </fo:flow>
 </fo:page-sequence>

If the conserve option can help with memory use for very large reports, I'll
look for it, but I don't call FOP from the command line. My code is all
embedded.




Re: AW: AW: AW: AW: AW: OutOfMemoryException while transforming large XML to PDF

2011-04-01 Thread Andreas L. Delmelle
On 01 Apr 2011, at 21:38, Eric Douglas wrote:

 I only reference the words page-sequence once.  Is this the single page
 sequence problem you're talking about, or is my page loop referencing
 multiple page sequences?

Your sample does have the potential to create large page-sequences, yes.
It all depends on how many PAGE_DATA nodes you have.
However, since those are basically already separate pages, if I understand
correctly, you might fare better by generating a page-sequence around every 10
pages or so (i.e. plain grouping of the PAGE_DATA nodes by position).
Nothing would change in the output, and you should be safe whatever the actual
number of pages is. Your page-numbers and page-number-citations will just keep
working as they do now.
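That grouping could look roughly like this in XSLT 1.0, reusing the names from
the snippet earlier in the thread; a sketch only, not tested against the
actual stylesheet:

```xml
<!-- Wrap every 10 PAGE_DATA nodes in their own fo:page-sequence.
     Each sequence starts on a fresh page, so break-before is only
     needed between pages inside the same sequence. -->
<xsl:for-each select="PAGE_DATA[position() mod 10 = 1]">
  <fo:page-sequence master-reference="STANDARD_PAGE">
    <fo:flow flow-name="xsl-region-body">
      <xsl:for-each
          select=". | following-sibling::PAGE_DATA[position() &lt; 10]">
        <fo:block>
          <xsl:if test="position() &gt; 1">
            <xsl:attribute name="break-before">page</xsl:attribute>
          </xsl:if>
          ...
        </fo:block>
      </xsl:for-each>
    </fo:flow>
  </fo:page-sequence>
</xsl:for-each>
```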

 My input is already formatted so I know what goes on each page.  I just
 need the pages connected for the page variable references with
 fo:page-number keeping track of the page count, and an
 fo:page-number-citation needing to know the number of the last page
 which I do with the empty block.

Note: in FOP Trunk, the empty-block trick is no longer necessary for this
case, as you can simply add an id to the fo:root and then use
fo:page-number-citation-last wherever you need it.
Come to think of it, the FAQ will need to be updated to reflect this...
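A sketch of that Trunk-only approach ("doc-end" is an arbitrary id chosen
here):

```xml
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format" id="doc-end">
  <!-- layout-master-set and page-sequence/flow wrappers omitted -->
  <fo:block>
    Page <fo:page-number/> of
    <fo:page-number-citation-last ref-id="doc-end"/>
  </fo:block>
</fo:root>
```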


 If the conserve option can help with memory use for very large reports
 I'll look for it, but I don't call it from the command line.  My code is
 all embedded.

In embedded code, you can activate it via:

foUserAgent.setConserveMemoryPolicy(true);
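In context, a minimal embedded setup might look like this: a sketch based on
the standard FOP embedding pattern, assuming FOP Trunk jars on the classpath
(where setConserveMemoryPolicy is available) and placeholder file names:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.fop.apps.FOUserAgent;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class ConservePdf {
    public static void main(String[] args) throws Exception {
        FopFactory fopFactory = FopFactory.newInstance();
        FOUserAgent foUserAgent = fopFactory.newFOUserAgent();
        // Serialize finished pages to disk instead of holding them in memory
        foUserAgent.setConserveMemoryPolicy(true);

        OutputStream out =
            new BufferedOutputStream(new FileOutputStream("out.pdf"));
        try {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF,
                                        foUserAgent, out);
            TransformerFactory tf = TransformerFactory.newInstance();
            Transformer transformer =
                tf.newTransformer(new StreamSource(new File("stylesheet.xsl")));
            Source src = new StreamSource(new File("input.xml"));
            // Pipe the XSLT result straight into FOP's SAX handler
            Result res = new SAXResult(fop.getDefaultHandler());
            transformer.transform(src, res);
        } finally {
            out.close();
        }
    }
}
```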


Regards

Andreas