I am getting the same problem ("ElasticSearch nodes with too low open file limit"), even after implementing the changes suggested. There is something else going on here. I can clearly see that Elasticsearch is allowed more than 64000 open files; it's just ignoring it...
Anyone else come up with a solution?

On Wednesday, June 4, 2014 3:06:31 PM UTC+1, Arie wrote:
> Ankit,
>
> As I remember, you have to enable a memory setting in the elasticsearch
> conf file:
>
> bootstrap.mlockall: true
>
> And I did something with /etc/security/limits.conf on CentOS 6.5:
>
> root soft nofile 65536
> root hard nofile 65536
> * soft nofile 65536
> * hard nofile 65536
>
> # tbv elasticsearch
> * soft nproc 65536
> * hard nproc 65536
>
> elasticsearch soft memlock unlimited
> elasticsearch hard memlock unlimited
>
> On Wednesday, June 4, 2014 3:25:02 PM UTC+2, Ankit Mittal wrote:
>>
>> Dear All,
>>
>> I have already set the open file limit to 65536, but I am still getting
>> an error in the graylog2 web interface that the open file limit is too low.
>>
>> I have restarted the server and the web interface, but the error still
>> exists.
>>
>> Please help me resolve this issue.
>>
>> Thanks,
>> Ankit Mittal
>>

-- You received this message because you are subscribed to the Google Groups "graylog2" group.
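One thing worth checking before touching limits.conf again: entries in /etc/security/limits.conf only apply to new PAM login sessions, so a daemon started by an init script (or one that was never restarted from a fresh session) can still be running with the old limit even though `ulimit -n` in your shell shows 65536. A minimal sketch of how to verify the limit actually in effect for the running process on Linux; the `pgrep` pattern is an assumption and may need adjusting for your setup:

```shell
# Soft and hard open-file limits of the CURRENT shell session
# (these reflect limits.conf only for sessions started after the change):
ulimit -Sn
ulimit -Hn

# Limits actually in effect for a running process, read from /proc.
# The pgrep pattern below is a guess at how the JVM appears in the
# process list; adjust it to match your Elasticsearch installation.
ES_PID=$(pgrep -f elasticsearch | head -n 1)
if [ -n "$ES_PID" ]; then
    # "Max open files" is the row that matters for this error
    grep 'Max open files' "/proc/$ES_PID/limits"
else
    echo "no elasticsearch process found"
fi
```

If the /proc value is still low, restarting the daemon from a fresh session (or setting the limit in its init script before startup) is usually what actually fixes it, not another edit to limits.conf.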
