Hi all,

I'm quite new to Elasticsearch, using it as part of the ELK stack. I'm running a two-node cluster where each node has 24 cores and 64 GB RAM (32 GB allocated to the JVM heap). We index at roughly 3,000 docs/s, about 200 million documents per 24 h, and we retain 10 days of data. When memory usage gets close to 80% I start having search problems in Kibana and get all kinds of exceptions; everything recovers once memory usage drops again.

One option is of course to scale out, but my question is: what memory tuning can be done, and will it make a big difference?

1. Drop unused data fields already in Logstash — will that make any difference?
2. Adjust index.refresh_interval — will that reduce memory usage?
3. Set replicas to zero so all primary shards are spread over both nodes — will that impact memory usage?

What else can be done to lower memory usage?
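For questions 2 and 3, both settings can be changed on live indices through the index settings API. A minimal sketch — the `logstash-*` index pattern and the 30s value are just assumptions for illustration, not tuned recommendations:

```shell
# Sketch only: assumes indices named logstash-*; adjust pattern and values to taste.
# refresh_interval defaults to 1s; refreshing less often reduces segment churn.
# number_of_replicas: 0 removes the replica copies (at the cost of redundancy).
curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '
{
  "index": {
    "refresh_interval": "30s",
    "number_of_replicas": 0
  }
}'
```

For question 1, unused fields can be dropped before they ever reach Elasticsearch using the `remove_field` option of Logstash's mutate filter.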
Is anyone out there running the same type of load or higher (document count and index rate)? What does your setup look like?

Br,
Mathias
