It could be a number of things.  Check your various ES caches.  Are they 
full?  Is that correlated with increased GC activity and the eventual OOM? 
 Then check your queries - are they big?  Expensive aggregations?  (The 
other day I saw one of our clients running aggregation queries 10K lines 
in size.)  I could keep asking questions... please share everything 
you've got so people can help you here.
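
For a quick first look, something along these lines will show it 
(a sketch; assumes ES answers on localhost:9200):

  # heap, GC, and cache stats for every node
  curl 'localhost:9200/_nodes/stats/jvm,indices?pretty'
  # fielddata usage broken down by field
  curl 'localhost:9200/_cat/fielddata?v'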

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Thursday, November 20, 2014 3:53:24 AM UTC-5, tetlika wrote:
>
> anyone?
>
> On Wednesday, 19 November 2014 at 13:32:37 UTC+1, Serg Fillipenko wrote:
>>
>> We have contact profiles (20+ fields, containing nested documents) 
>> indexed, and their social profiles (10+ fields) indexed as child 
>> documents of the contact profiles.
>> We run complex bool match queries, delete-by-query, 
>> delete-children-by-query, and faceting queries on the contact profiles.
>> index rate: 14.31 op/s
>> remove-by-query rate: 13.41 op/s (this value is so high because we 
>> delete all child docs before indexing the parent and then index the 
>> children again - see the sketch below)
>> search rate: 2.53 op/s
>> remove-by-ids rate: 0.15 op/s
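>>
>> The child cleanup before reindexing looks roughly like this (a sketch; 
>> the index, type, and field names here are illustrative, not our real 
>> ones):
>>
>>   curl -XDELETE 'localhost:9200/contacts/social_profile/_query' -d '{
>>     "query": { "term": { "contact_id": "123" } }
>>   }'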
>>
>> We first faced this trouble under ES 1.2, right after we started 
>> indexing and deleting child documents (no search requests yet). On ES 
>> 1.4 we have the same issue.
>>
>>
>> What sort of data is it, what sort of queries are you running and how 
>>> often are they run?
>>>
>>> On 19 November 2014 17:52, tetlika <[email protected]> wrote:
>>>
>>>> hi,
>>>>
>>>> we have 6 servers and 14 shards in the cluster; the index size is 
>>>> 26GB, and with 1 replica the total size is 52GB. We run ES v1.4.0 on 
>>>> java version "1.7.0_65".
>>>>
>>>> we use servers with 14GB of RAM (m3.xlarge), and the heap is set to 
>>>> 7GB.
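>>>>
>>>> the heap is set via ES_HEAP_SIZE (a sketch, assuming the stock 
>>>> service scripts, which map it to -Xms/-Xmx):
>>>>
>>>>   export ES_HEAP_SIZE=7g   # half of the 14GB RAM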
>>>>
>>>> around a week ago we started facing the following issue:
>>>>
>>>> roughly once every day or two, a random server in the cluster hits 
>>>> the heap size limit (java.lang.OutOfMemoryError: Java heap space in 
>>>> the log) and the cluster fails - it becomes red or yellow
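>>>>
>>>> when it happens we check per-node heap with something like this (a 
>>>> sketch; assumes ES answers on localhost:9200):
>>>>
>>>>   curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent'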
>>>>
>>>> we tried adding more servers to the cluster - even 8 - but then it's 
>>>> only a matter of time before we hit the problem again, so it seems 
>>>> that no matter how many servers are in the cluster, it will still hit 
>>>> the limit eventually
>>>>
>>>> before we started facing the problem, we were running smoothly with 3 
>>>> servers
>>>> we also set indices.fielddata.cache.size: 40%, but it didn't help
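>>>>
>>>> in elasticsearch.yml that looks like this (a sketch; the breaker line 
>>>> only shows what we believe is the 1.4 default of 60%, for context):
>>>>
>>>>   indices.fielddata.cache.size: 40%
>>>>   indices.breaker.fielddata.limit: 60%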
>>>>
>>>> there are also workarounds that temporarily reduce heap usage:
>>>>
>>>> 1) restart a server - then heap usage drops below 70% and the 
>>>> cluster is OK for a while
>>>>
>>>> or
>>>>
>>>> 2) decrease the number of replicas to 0, and then set it back to 1
>>>>
>>>> but I don't want to rely on these workarounds
>>>>
>>>> how can we run out of heap when the whole index fits into RAM?
>>>>
>>>> thanks a lot for any help
>>>>

