[
https://issues.apache.org/jira/browse/FOP-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Simon Steiner updated FOP-2860:
-------------------------------
Attachment: fop.xconf
> [PATCH] BreakingAlgorithm causes high memory consumption
> --------------------------------------------------------
>
> Key: FOP-2860
> URL: https://issues.apache.org/jira/browse/FOP-2860
> Project: FOP
> Issue Type: Bug
> Affects Versions: 2.3
> Reporter: Raman Katsora
> Assignee: Simon Steiner
> Priority: Critical
> Attachments: fop.xconf, image-2019-04-16-10-07-53-502.png,
> memory6.patch, test-1500000.fo, test-250000.fo, test-300000.fo
>
>
> When a single element (e.g. {{<fo:block>}}) contains a sufficiently large
> amount of text, the FO-to-PDF transformation causes very high memory
> consumption.
> For instance, transforming a document with an {{<fo:block>}} containing 1.5
> million characters (~1.5 MB, [^test-1500000.fo]) requires about 3 GB of RAM.
> The heap dump shows 27.5 million
> {{org.apache.fop.layoutmgr.BreakingAlgorithm.KnuthNode}} instances (~2.6 GB).
> We start observing this issue at about 300 thousand characters in a
> single element ([^test-300000.fo]); the high memory consumption is not
> observed when processing 250 thousand characters ([^test-250000.fo]).
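A file in the spirit of the attached test cases can be generated with a short script. This is a sketch: the exact markup and page geometry of the attached {{.fo}} files are assumptions; only the "one very large {{<fo:block>}}" structure is taken from the report.

```python
# Sketch: generate a minimal XSL-FO file whose single fo:block holds
# roughly `num_chars` characters of breakable text, similar in spirit
# to the attached test-*.fo files (their exact markup is an assumption).
def make_fo(num_chars: int) -> str:
    text = "word " * (num_chars // 5)  # ~num_chars characters, with break opportunities
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page"
        page-width="210mm" page-height="297mm" margin="20mm">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:block>{text}</fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>"""

if __name__ == "__main__":
    # ~1.5 million characters in a single block, matching the reported case
    with open("test-1500000.fo", "w", encoding="utf-8") as f:
        f.write(make_fo(1_500_000))
```

Running the generated file through {{fop test-1500000.fo out.pdf}} should then reproduce the reported memory growth.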
> Add {{<simple-line-breaking>true</simple-line-breaking>}} to fop.xconf to
> enable a lower-memory line-breaking method.
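A minimal fop.xconf applying the workaround might look as follows. Only the {{<simple-line-breaking>}} element is the setting named above; the surrounding skeleton, and its placement directly under the {{<fop>}} root, are assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<fop version="1.0">
  <!-- Switch to the simpler, lower-memory line-breaking algorithm
       instead of the full Knuth-style total-fit algorithm.
       Placement directly under <fop> is an assumption. -->
  <simple-line-breaking>true</simple-line-breaking>
</fop>
```

Pass the file to FOP with the {{-c}} option (e.g. {{fop -c fop.xconf input.fo out.pdf}}).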
--
This message was sent by Atlassian Jira
(v8.20.10#820010)