bq: That's evidence enough for me to beat on our systems guys.... Beat them with a stick. A large stick. If they insist that they can't up these limits, I always push back hard with "why?". Usually reluctance to up them is a misplaced efficiency argument....
And I should have said: you should see exceptions in your Solr logs if this is really something that happens.

FWIW,
Erick

On Fri, Jan 19, 2018 at 11:07 AM, Pouliot, Scott <scott.poul...@peoplefluent.com> wrote:
> Working on that now to see if it helps us out. The Solr process is NOT dying
> at all. Searches are still working as expected, but since we load balance
> requests... if the master/slave are out of sync, the search results vary.
>
> The advice is MUCH appreciated!
>
> -----Original Message-----
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Friday, January 19, 2018 1:49 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr Replication being flaky (6.2.0)
>
> On 1/19/2018 11:27 AM, Shawn Heisey wrote:
>> On 1/19/2018 8:54 AM, Pouliot, Scott wrote:
>>> I do have a ticket in with our systems team to up the file handles,
>>> since I am seeing the "Too many open files" error on occasion on our
>>> prod servers. Is this the setting you're referring to? Found we
>>> were set to 1024 using the "ulimit" command.
>>
>> No, but that often needs increasing too. I think you need to increase
>> the process limit even if that's not the cause of this particular problem.
>
> Had another thought. Either of these limits can cause completely
> unpredictable problems with Solr. The open file limit could be the reason
> for these issues, even if you're not actually hitting the process limit. As
> I mentioned before, I would expect hitting the process limit to cause Solr
> to kill itself, and your other messages don't mention problems like that.
>
> The scale of your Solr installation indicates that you should greatly
> increase both limits on all of your Solr servers.
>
> Thanks,
> Shawn
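The advice in the thread boils down to two per-process limits: open file descriptors (`nofile`) and user processes/threads (`nproc`). A minimal sketch of how to check them and where to raise them on a typical Linux box follows; the `solr` user name and the log path are assumptions, not taken from the thread, so substitute whatever account and install layout you actually use:

```shell
#!/bin/sh
# Sketch: check the two limits discussed above for the current user.
nofile=$(ulimit -n)   # max open file descriptors (1024 is a common default)
nproc=$(ulimit -u)    # max user processes/threads
echo "open files: $nofile, processes: $nproc"

# Erick's suggestion: confirm the symptom in the Solr logs first.
# The log path below is an assumption (default for recent installs).
log=/var/solr/logs/solr.log
if [ -f "$log" ]; then
    grep -c "Too many open files" "$log"
fi

# To raise the limits persistently on most Linux distros, add entries like
# these to /etc/security/limits.conf (requires root; the user must log in
# again for them to take effect):
#   solr  soft  nofile  65536
#   solr  hard  nofile  65536
#   solr  soft  nproc   65536
#   solr  hard  nproc   65536
```

Note that `ulimit` reports the limits of the shell you run it in, not necessarily those of the running Solr process; on systems with `/proc`, `cat /proc/<solr-pid>/limits` shows the limits the process actually started with.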