Thanks for your reply, see my comments inline...

On Thursday, 2 October 2014 at 23:52:13 UTC+2, Mark Walkom wrote:
>
> You should drop your heap to 31GB; above that Java can't use compressed 
> pointers and GC will suffer.
>
*Ok, I thought the limit was 32GB? I'll drop to 31GB then.* 
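
Here's a quick sketch I'll use to verify the max heap each node actually got after the restart, assuming the default HTTP port and the Python requests library:

    import requests

    # Nodes info API, JVM section only (ES 1.x); assumes a node on localhost:9200
    info = requests.get("http://localhost:9200/_nodes/jvm").json()
    for node_id, node in info["nodes"].items():
        heap_gb = node["jvm"]["mem"]["heap_max_in_bytes"] / 1024.0 ** 3
        print(node["name"], round(heap_gb, 1), "GB max heap")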

>
>    1. That will help as it reduces document size.
>
*I'll investigate what fields can be removed and then drop them.*
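
A minimal sketch of how I'd drop them in the Logstash filter section with the mutate filter (the field names below are only placeholders for whatever turns out to be unused):

    filter {
      mutate {
        # placeholder field names; replace with the actual unused fields
        remove_field => [ "some_unused_field", "another_unused_field" ]
      }
    }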

>
>    2. Definitely, if you don't need near-realtime access then open it up, 
>    we run at 60sec but could probably go to 2 or more minutes.
>
*I'll try 10s and see if it makes a difference.*
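
A minimal sketch of changing it on an existing index through the settings API, assuming a logstash-style index name and the Python requests library:

    import requests

    # refresh_interval is a dynamic index setting; the index name is just an example
    requests.put(
        "http://localhost:9200/logstash-2014.10.03/_settings",
        data='{"index": {"refresh_interval": "10s"}}',
    )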

>
>    3. This could be risky, doubly so given you are only running two nodes.
>
> To elaborate on point 3, and on a general note, you should really run 3 or 
> more nodes, incrementing in odd numbers (3, 5, 7, 9, ...). The reason for 
> this is that it helps to prevent split brain across your cluster.
>
*I actually have three nodes for the reason stated above, but only two of 
them are data nodes; the third node is slower hardware.*
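
On the split-brain side, my understanding is that the companion setting is discovery.zen.minimum_master_nodes, which with three master-eligible nodes should be 2 (a majority). A minimal sketch of setting it dynamically, assuming all three nodes are master-eligible and the Python requests library:

    import requests

    # Dynamic cluster setting in ES 1.x; quorum of 3 master-eligible nodes = 2
    requests.put(
        "http://localhost:9200/_cluster/settings",
        data='{"persistent": {"discovery.zen.minimum_master_nodes": 2}}',
    )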

> You should disable bloom filtering on your indexes as it will give you 
> a bit of a boost; Elasticsearch Curator can handle that for you.
>
*Interesting, haven't seen this option before, thanks!*
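
If I read the docs right, the underlying index setting is index.codec.bloom.load, which Curator can flip for older indices; a minimal sketch of doing it by hand for one index (the index name is only an example), assuming the Python requests library:

    import requests

    # index.codec.bloom.load is a dynamic index setting in ES 1.3
    requests.put(
        "http://localhost:9200/logstash-2014.09.25/_settings",
        data='{"index.codec.bloom.load": false}',
    )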

>
> However you are probably reaching the limits of your cluster, 2 billion 
> docs is a fair bit of data. What versions of ES and java are you on? What 
> amount of data (GB) is it?
>
*I'm running ES 1.3.2 and Java 1.7.0_65 and collect ~70GB per 24h.*
*I'll test the suggestions above but will add two more nodes anyway, as we 
have more logs to throw in.*

*Thanks for your input, I'll post the result when done!*

Br
Mathias Adler 

>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: [email protected]
> web: www.campaignmonitor.com
>
> On 3 October 2014 00:04, Mathias Adler <[email protected]> wrote:
>
>> Hi All,
>> I'm quite new to ES, using it as part of the ELK stack. I'm running a 
>> two-node cluster where each node has 24 cores and 64GB RAM (32GB allocated 
>> to Java). We have an index rate of ~3000/s and a total document count of 
>> ~200 million per 24h (and we store 10 days).
>> When mem usage gets close to 80% I start having search problems in Kibana 
>> and get all kinds of exceptions, and everything gets better when mem usage 
>> drops again.
>> So, one way is of course to scale out, but my question is, what mem 
>> tuning can be done and will it make a big difference?
>> 1, Drop unused data fields already in Logstash, will that make any 
>> difference?
>> 2, Reduce index.refresh_interval, will that reduce mem usage?
>> 3, Set replicas to zero and get all primary shards spread over both 
>> nodes, will that impact mem usage?
>> What else can be done to lower mem usage?
>>
>> Anyone out there with the same type of load or higher (document count 
>> and index rate), what does your setup look like? 
>>
>> Br
>> Mathias
>>
>
>

