[
https://issues.apache.org/jira/browse/TIKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841008#comment-16841008
]
Andreas Hubold commented on TIKA-2802:
--------------------------------------
I wonder whether adding Xerces is still recommended for Java 9+ projects.
Since Java 9, the JDK has included the bug fixes from Xerces 2.11.0; see
https://bugs.openjdk.java.net/browse/JDK-8044086
For Java 13, an update to Xerces 2.12.0 is in progress according to
https://bugs.openjdk.java.net/browse/JDK-8214064
Do you know which Xerces issue was causing the problem?
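As a quick sketch (not from the original discussion, and only an illustration): the following prints which SAXParser implementation JAXP resolves at runtime, which shows whether the standalone Xerces jar or the JDK-internal copy is actually being used.

    import javax.xml.parsers.SAXParserFactory;

    public class SaxImplCheck {
        public static void main(String[] args) throws Exception {
            // Print the SAXParserFactory implementation JAXP picks up at runtime.
            // On Java 9+ without xercesImpl on the classpath this is normally the
            // JDK-internal com.sun.org.apache.xerces.* copy (with the 2.11.0 fixes).
            SAXParserFactory factory = SAXParserFactory.newInstance();
            System.out.println(factory.getClass().getName());
            System.out.println(factory.newSAXParser().getClass().getName());
        }
    }

Running this with and without the xercesImpl dependency on the classpath would show which implementation Tika's SAX parsing actually ends up using.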
> Out of memory issues when extracting large files (pst)
> ------------------------------------------------------
>
> Key: TIKA-2802
> URL: https://issues.apache.org/jira/browse/TIKA-2802
> Project: Tika
> Issue Type: Bug
> Components: parser
> Affects Versions: 1.20, 1.19.1
> Environment: Reproduced on Windows 2012 R2 and Ubuntu 18.04.
> Java: jdk1.8.0_151
>
> Reporter: Caleb Ott
> Priority: Critical
> Attachments: Selection_111.png, Selection_117.png
>
>
> I have an application that extracts text from multiple files on a file share.
> I've been running into issues with the application running out of memory
> (~26 GB dedicated to the heap).
> I found in the heap dumps an "fDTDDecl" buffer which creates very large char
> arrays and never releases that memory. In the picture you can see the heap
> dump with 4 SAXParsers holding onto a large chunk of memory. The fourth one
> is expanded to show that it is all being held by the "fDTDDecl" field.
> This dump is from a scaled-down execution (not a 26 GB heap).
> It looks like that DTD field should never be that large; I'm wondering whether
> this is a bug in Xerces instead. I can easily reproduce the issue by
> attempting to extract text from large .pst files.
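A minimal sketch of how an extraction like the one described might look with the Tika API, assuming an AutoDetectParser and a BodyContentHandler with the write limit disabled; the file path is hypothetical and the exact setup of the reporter's application is not known.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.sax.BodyContentHandler;

    public class PstExtract {
        public static void main(String[] args) throws Exception {
            // Hypothetical path; any sufficiently large .pst file should do.
            try (InputStream stream = Files.newInputStream(Paths.get("/data/large-mailbox.pst"))) {
                AutoDetectParser parser = new AutoDetectParser();
                // -1 disables Tika's default 100,000-character write limit,
                // so the full body text is extracted.
                BodyContentHandler handler = new BodyContentHandler(-1);
                parser.parse(stream, handler, new Metadata(), new ParseContext());
                System.out.println("Extracted " + handler.toString().length() + " characters");
            }
        }
    }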
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)