Steven Liu commented on FOP-1936:
Our product also hit this: the FOP library consumed 11 GB of memory and still ran
out of memory.
We took a heap dump; about 80% of the memory usage traced back to one cause.
After investigation, we found that the problem comes from the XML data file: two
nodes contain a very large amount of text (e.g. an <item> node holding an
enormous text run). One node contains about 9 MB of characters, the other about
4 MB. This consumed all 11 GB of memory and then blew up the product.
The workaround we applied is to tidy up the data file before handing it to FOP:
if the length of a node's text content exceeds a certain threshold, we replace
it with something like 'too large to display'.
Hopefully this helps other users who run into the same problem.
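The workaround above can be sketched as a small pre-processing pass over the XML data file before it is handed to FOP. This is a minimal sketch, not code from the product: the class name, the threshold, and the replacement string are assumptions; only the general approach (replace oversized text nodes up front) comes from the comment.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class TextNodeTruncator {

    /** Recursively replace any text node longer than maxChars. */
    static void truncate(Node node, int maxChars) {
        if (node.getNodeType() == Node.TEXT_NODE
                && node.getNodeValue().length() > maxChars) {
            node.setNodeValue("too large to display");
        }
        for (Node c = node.getFirstChild(); c != null; c = c.getNextSibling()) {
            truncate(c, maxChars);
        }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<data><item>" + "x".repeat(50) + "</item><item>ok</item></data>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        truncate(doc, 20); // tiny threshold, just to demonstrate
        System.out.println(doc.getElementsByTagName("item").item(0).getTextContent()); // prints "too large to display"
        System.out.println(doc.getElementsByTagName("item").item(1).getTextContent()); // prints "ok"
    }
}
```

The truncated Document can then be serialized (or passed as a DOMSource) into the normal XSLT/FOP pipeline, so FOP never sees the multi-megabyte text runs.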
> FOP is unable to create PDF if there is an unusually large paragraph.
> Key: FOP-1936
> URL: https://issues.apache.org/jira/browse/FOP-1936
> Project: FOP
> Issue Type: Bug
> Components: unqualified
> Affects Versions: 1.0
> Environment: Operating System: All
> Platform: PC
> Reporter: Abhijeet
> Assignee: fop-dev
> Attachments: warrencounty.fo
> We have the following problems:
> 1. FOP performs unusually SLOWLY if there is a large paragraph.
> We have noticed that when there is an unusually large paragraph, FOP's
> performance is incredibly slow. FOP takes more than 15 minutes in the method
> findBreakingPoints, defined in BreakingAlgorithm.java. The paragraph
> is around 50 thousand characters long. This method seems to find the best
> possible break point. Can we not make this method return a default break
> point that works for the English language?
> 2. FOP uses an unusually large amount of memory when running the
> findBreakingPoints method defined in BreakingAlgorithm.java. This method
> starts to consume around 500 MB of memory, creating thousands of objects of
> the KnuthNode type. Such memory consumption is unacceptable just for finding
> a line break :-(.
> 3. FOP gives a SAX exception for a long paragraph on systems that don't
> have 1.5 GB of RAM, even for a simple paragraph of 90K characters. Below is
> the stack trace:
> javax.xml.transform.TransformerException: org.xml.sax.SAXParseException: The
> element type "xsl:template" must be terminated by the matching end-tag
> "</xsl:template>".
> Caused by: javax.xml.transform.TransformerException:
> org.xml.sax.SAXParseException: The element type "xsl:template" must be
> terminated by the matching end-tag "</xsl:template>".
> ... 6 more
> Caused by: org.xml.sax.SAXParseException: The element type "xsl:template"
> must be terminated by the matching end-tag "</xsl:template>".
> at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
> at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
> Is there a way I can prevent this extensive memory usage and slow
> performance by using a default break? I am ready to build the JAR myself. Is
> this a bug which has already been fixed?
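One pre-processing answer to the reporter's question, assuming the source data can be modified before the XSLT/FOP stage: split any very long paragraph into several smaller blocks at word boundaries, so each run through findBreakingPoints operates on far fewer Knuth nodes. This is a sketch of that idea, not FOP code; the class name and chunk size are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class ParagraphSplitter {

    /**
     * Split a long text run into chunks of roughly targetLen characters,
     * breaking at the last space before the limit when possible.
     */
    static List<String> split(String text, int targetLen) {
        List<String> chunks = new ArrayList<>();
        int start = 0;
        while (start < text.length()) {
            int end = Math.min(start + targetLen, text.length());
            if (end < text.length()) {
                int space = text.lastIndexOf(' ', end);
                if (space > start) {
                    end = space; // prefer a word boundary
                }
            }
            String piece = text.substring(start, end).trim();
            if (!piece.isEmpty()) {
                chunks.add(piece);
            }
            start = end;
        }
        return chunks;
    }
}
```

Each chunk would then be emitted as its own fo:block, trading Knuth-style line breaking across the whole paragraph for bounded work per chunk; the visible cost is that line breaks are forced at the chunk boundaries.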
This message was sent by Atlassian JIRA