[
https://issues.apache.org/jira/browse/TIKA-2849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16813588#comment-16813588
]
Tim Allison edited comment on TIKA-2849 at 4/9/19 4:26 PM:
-----------------------------------------------------------
I could see improving the {{ZipContainerDetector}} and maybe the
{{POIFSContainerDetector}} by allowing users to set a {{markLimit}}; the
{{ZCD/PCD}} would then do their best within that limit.
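A {{markLimit}}-bounded check might look like this sketch (plain JDK, hypothetical names; this is not Tika's actual detector API, just the idea of capping how much a detector may read before it calls {{reset()}} and hands the stream to the next detector):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: a detector that inspects at most markLimit bytes and
// then resets the stream, so nothing is consumed from the caller's view.
// Class and method names are illustrative, not Tika's.
class BoundedZipDetector {
    private final int markLimit;

    BoundedZipDetector(int markLimit) {
        this.markLimit = markLimit;
    }

    /** Returns true if the stream starts with the ZIP local-file magic "PK\3\4". */
    boolean looksLikeZip(InputStream in) throws IOException {
        if (!in.markSupported()) {
            in = new BufferedInputStream(in);
        }
        in.mark(markLimit);
        try {
            byte[] head = new byte[4];
            int n = 0;
            while (n < head.length) {
                int r = in.read(head, n, head.length - n);
                if (r == -1) {
                    break;
                }
                n += r;
            }
            return n == 4 && head[0] == 'P' && head[1] == 'K'
                    && head[2] == 3 && head[3] == 4;
        } finally {
            in.reset();  // leave the stream where it started for the next detector
        }
    }
}
```

The point of the bound is that detection never needs random access (no temp-file spooling); the trade-off is that anything the magic bytes can't distinguish within {{markLimit}} bytes loses granularity.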
As a first step, try turning off the detectors you don't want (any container
detectors) with a config file like this one:
https://github.com/apache/tika/blob/master/tika-parsers/src/test/resources/org/apache/tika/config/TIKA-1702-detector-blacklist.xml
Let us know if that improves performance, and whether you're happy enough
without the granularity that the container detectors provide.
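For Tika 1.20, the exclude list in such a config might look roughly like the following sketch; the element and class names here should be checked against the linked TIKA-1702 test file rather than taken as a verified config:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<properties>
  <detectors>
    <detector class="org.apache.tika.detect.DefaultDetector">
      <detector-exclude class="org.apache.tika.parser.pkg.ZipContainerDetector"/>
      <detector-exclude class="org.apache.tika.parser.microsoft.POIFSContainerDetector"/>
    </detector>
  </detectors>
</properties>
```

You'd load it with something like {{new TikaConfig(configFile)}} and hand that config to your {{Tika}} facade or {{AutoDetectParser}}.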
> TikaInputStream copies the input stream locally
> -----------------------------------------------
>
> Key: TIKA-2849
> URL: https://issues.apache.org/jira/browse/TIKA-2849
> Project: Tika
> Issue Type: Bug
> Affects Versions: 1.20
> Reporter: Boris Petrov
> Priority: Major
>
> When doing "tika.detect(stream, name)" and the stream is a "TikaInputStream",
> execution gets to "TikaInputStream#getPath" which does a "Files.copy(in,
> path, REPLACE_EXISTING);" which is very, very bad. This input stream could
> be, as in our case, an input stream from a network file which is tens or
> hundreds of gigabytes large. Copying it locally is a huge waste of resources
> to say the least. Why does it do that and can I make it not do it? Or is this
> something that has to be fixed in Tika?
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)