[ https://issues.apache.org/jira/browse/TIKA-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735961#comment-15735961 ]
Ashish Basran commented on TIKA-2180:
-------------------------------------

I tried with the new experimental parser, and with this document memory usage reaches its limit. I also tried the same document converted to PDF and see the same issue with memory utilization. With other document types (.csv, .doc, .htm, .pdf, .msg, .ppt, .xlsx, etc.) memory usage stayed under 1 GB.

My observation with .docx, .xlsx, and .pdf is that the process is not releasing memory; it keeps holding whatever peak it reaches.

> Multiple requests on Tika to extract text slows down
> ----------------------------------------------------
>
>              Key: TIKA-2180
>              URL: https://issues.apache.org/jira/browse/TIKA-2180
>          Project: Tika
>       Issue Type: Bug
>       Components: server
> Affects Versions: 1.13, 1.14
>      Environment: Windows OS, Open JDK, 4 core 32 GB RAM
>         Reporter: Ashish Basran
>      Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png,
>                   with new experimental SAX docx parser.png
>
> I observed that if I send multiple requests to Tika (e.g. http://localhost:8080/tika) with files of around 5 MB, Tika is very slow to complete the action. With ~20 random files it took 170 seconds to process all of them in sequence; if I send all the files in parallel, it takes around 780 seconds to process the same set of files.
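For context, a minimal sketch of how a single extraction request like the ones described above can be issued against a locally running tika-server: the server accepts a PUT to /tika with the document as the request body and returns the extracted plain text. The class name and file path below are illustrative, not taken from the report.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TikaRequest {
    public static void main(String[] args) throws Exception {
        // Document to extract, e.g. one of the ~5 MB .docx files mentioned in the issue.
        String file = args.length > 0 ? args[0] : "sample.docx";

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8080/tika").openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Accept", "text/plain");

        // Stream the document to the server as the PUT body.
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(Paths.get(file), out);
        }

        // Read the extracted text back from the response.
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
        }
        System.out.flush();
    }
}
{code}

Running roughly 20 of these requests concurrently (for example from a small thread pool) would approximate the parallel scenario described in the issue, while looping over the files one at a time matches the sequential case.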