Thanks. But does tika-server create only one child process, or can there be many?

On Tue, May 14, 2019, 16:44 Tim Allison <[email protected]> wrote:
> I'm not sure how the underlying CXF server's thread model works. I'd
> hope they'd have a pool and reuse threads rather than spawning a new
> thread for each request, but I don't know.
>
> Perhaps Sergey Beryozkin might know?
>
> On Tue, May 14, 2019 at 9:28 AM Slava G <[email protected]> wrote:
> >
> > I saw this configuration, but I thought it applied to the child process.
> > How is concurrent parsing handled, then? Is there only one child process,
> > which runs a thread for each parsing request?
> >
> > Thanks
> >
> > On Tue, May 14, 2019, 16:12 Tim Allison <[email protected]> wrote:
> >>
> >> > 1 GB/thread is a significant amount of RAM, I would say.
> >> It is, and you may not need it depending on your docs...your mileage
> >> will vary.
> >>
> >> See https://wiki.apache.org/tika/TikaJAXRS and search for "-JXmx4g" on
> >> that page to see how to specify the -Xmx for the child process.
> >>
> >> On Mon, May 13, 2019 at 4:20 PM Slava G <[email protected]> wrote:
> >> >
> >> > Thanks,
> >> > 1 GB/thread is a significant amount of RAM, I would say.
> >> > How can I configure this?
> >> >
> >> > On Mon, May 13, 2019, 22:52 Tim Allison <[email protected]> wrote:
> >> >>
> >> >> I like to have 1 GB per thread. I'd encourage you to use the
> >> >> -spawnChild option to avoid problems with OOM, infinite loops, etc. If
> >> >> you do this, you'll need to make sure that your clients can handle
> >> >> tika-server being down for a few seconds on restart.
> >> >>
> >> >> Other than that, you should be good to go. Let us know if you find
> >> >> any other optimizations/settings that help or if you have any
> >> >> surprises.
> >> >>
> >> >> Cheers,
> >> >>
> >> >> Tim
> >> >>
> >> >> On Wed, May 8, 2019 at 5:48 AM Slava G <[email protected]> wrote:
> >> >> >
> >> >> > Hi,
> >> >> > What is the recommended configuration of Tika server if I'll run at
> >> >> > most 15 concurrent parsing requests?
> >> >> >
> >> >> > Thanks
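[A minimal sketch of the setup discussed above. The -spawnChild and -JXmx flags and the /tika endpoint come from the thread and the TikaJAXRS wiki page; the jar filename/version, document name, and retry count are illustrative assumptions, and 9998 is tika-server's default port.]

```shell
# Start tika-server with a forked child JVM (-spawnChild) so the parent can
# restart the child on OOM, infinite loops, etc. -JXmx4g forwards -Xmx4g to
# the child process's JVM. The jar version here is illustrative.
java -jar tika-server-1.20.jar -spawnChild -JXmx4g

# Because the child may be restarted and be unavailable for a few seconds,
# clients should retry briefly rather than fail on the first refused
# connection. A simple curl-based retry loop (document.pdf is a placeholder):
for attempt in 1 2 3 4 5; do
  curl -s -f -T document.pdf http://localhost:9998/tika && break
  sleep 2   # give the child process time to come back up
done
```

With 15 concurrent parsing requests and roughly 1 GB per thread, this suggests sizing the child heap well above the 4 GB shown here, or capping client-side concurrency to match the heap you can afford.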
