Hi Raghu,

Previously you'd said
"sending very large files to Tika will cause out of memory exception" and "sending that large file to Tika will causing timeout issues" I assume these are two different issues, as the second one seems related to how you're connecting to the Tika server via HTTP, correct? For out of memory issues, I'd suggested creating an input stream that can read from a chunked file *stored on disk*, thus alleviating at least part of the memory usage constraint. If the problem is that the resulting extracted text is also too big for memory, and you need to send it as a single document to Elasticsearch, then that's a separate (non-Tika) issue. For the timeout when sending the file to the Tika server, Sergey has already mentioned that you should be able to send it as multipart/form-data. And that will construct a temp file on disk from the chunks, and (I assume) stream it to Tika, so that also would take care of the same memory issue on the input side. Given the above, it seems like you've got enough ideas to try to solve this issue, yes? Regards, -- Ken > From: raghu vittal > Sent: February 24, 2016 10:50:29pm PST > To: [email protected] > Subject: Re: Unable to extract content from chunked portion of large file > > Hi Ken, > > Thanks for the reply. > i understood your point. > > what i have tried. > > > byte[] srcBytes = File.ReadAllBytes(filePath); > > > get the chunk of 1 MB out of srcBytes > > > when i pass this 1 MB chunk to Tika it is giving me the error. > > > As the WIKI Tika needs the entire file to extract content. > > this is where i struck. i don't wan't to pass entire file to Tika. > > correct me if i am wrong. > > --Raghu. > > From: Ken Krugler <[email protected]> > Sent: Wednesday, February 24, 2016 9:07 PM > To: [email protected] > Subject: RE: Unable to extract content from chunked portion of large file > > Hi Raghu, > > I don't think you understood what I was proposing. > > I suggested creating a service that could receive chunks of the file > (persisted to local disk). Then this service could implement an input stream > class that would read sequentially from these pieces. This input stream would > be passed to Tika, thus giving Tika a single continuous stream of data to the > entire file content. > > -- Ken > >> From: raghu vittal >> Sent: February 24, 2016 4:32:01am PST >> To: [email protected] >> Subject: Re: Unable to extract content from chunked portion of large file >> >> Thanks for your reply. >> >> In our application user can upload large files. Our intention is to extract >> the content out of large file and dump that in Elastic for contented based >> search. >> we have > 300 MB size .xlsx and .doc files. sending that large file to Tika >> will causing timeout issues. >> >> i tried getting chunk of file and pass to Tika. Tika given me invalid data >> exception. >> >> I Think for Tika we need to pass entire file at once to extract content. >> >> Raghu. >> >> From: Ken Krugler <[email protected]> >> Sent: Friday, February 19, 2016 8:22 PM >> To: [email protected] >> Subject: RE: Unable to extract content from chunked portion of large file >> >> One option is to create your own RESTful API that lets you send chunks of >> the file, and then you can provide an input stream that provides the >> seamless data view of the chunks to Tika (which is what it needs). 
>> >> -- Ken >> >>> From: raghu vittal >>> Sent: February 19, 2016 1:37:49am PST >>> To: [email protected] >>> Subject: Unable to extract content from chunked portion of large file >>> >>> Hi All >>> >>> we have very large PDF,.docx,.xlsx. We are using Tika to extract content >>> and dump data in Elastic Search for full-text search. >>> sending very large files to Tika will cause out of memory exception. >>> >>> we want to chunk the file and send it to TIKA for content extraction. when >>> we passed chunked portion of file to Tika it is giving empty text. >>> I assume Tika is relied on file structure that why it is not giving any >>> content. >>> >>> we are using Tika Server(REST api) in our .net application. >>> >>> please suggest us better approach for this scenario. >>> >>> Regards, >>> Raghu. -------------------------- Ken Krugler +1 530-210-6378 http://www.scaleunlimited.com custom big data solutions & training Hadoop, Cascading, Cassandra & Solr
