Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats";

curl -XGET "http://localhost:9200/_nodes";

Those are my first-stop APIs for determining where memory is being 
allocated.
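If it helps to know what to look for in that output: a minimal sketch of pulling the heap and fielddata numbers out of the `_nodes/stats` JSON (the node id, name, and byte values below are made-up sample data, not from your cluster):

```python
import json

# Abridged sample of a _nodes/stats response; a real response has many more
# fields, and the node id/name/values here are invented for illustration.
sample = json.loads("""
{
  "nodes": {
    "abc123": {
      "name": "node-1",
      "jvm": {
        "mem": {"heap_used_in_bytes": 4200000000, "heap_max_in_bytes": 8589934592}
      },
      "indices": {
        "fielddata": {"memory_size_in_bytes": 3100000000}
      }
    }
  }
}
""")

report = []
for node in sample["nodes"].values():
    heap = node["jvm"]["mem"]
    fielddata = node["indices"]["fielddata"]
    report.append(
        f"{node['name']}: heap "
        f"{heap['heap_used_in_bytes'] / heap['heap_max_in_bytes']:.0%} used, "
        f"fielddata {fielddata['memory_size_in_bytes'] / 1e9:.1f} GB"
    )

print("\n".join(report))
```

If fielddata is a large fraction of the heap (as in the sample above), that usually points straight at the cache settings discussed below.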


By the way, these settings don't do anything anymore (they were deprecated 
and removed):

index.cache.field.type: soft 
index.term_index_interval: 256 
index.term_index_divisor: 5 

index.cache.field.max_size: 10000

 

`max_size` was replaced by `indices.fielddata.cache.size`, which accepts a 
value like "10gb" or "30%".

And this setting is just bad in general (it causes a lot of GC thrashing):

index.cache.field.expire: 10m 
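Instead of soft references and expiry, bound the cache directly. A sketch of what that looks like in elasticsearch.yml (tune the percentages to your own heap; the breaker limit shown is an assumption, check the 1.0 docs for the default):

```yaml
# elasticsearch.yml -- a sketch; tune the sizes to your own heap
# Bound the fielddata cache instead of using soft refs / expiry:
indices.fielddata.cache.size: 30%
# Optionally, the fielddata circuit breaker (new in 1.0) rejects requests
# that would load more fielddata than the limit, before they blow the heap:
indices.fielddata.breaker.limit: 60%
```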


 

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:
>
> Now the process went back down to 25% usage, from now on it will go back 
> up, and won't stop going up.
>
> Sorry for spamming
>
> - - - - - - - - - -
> Sincerely:
> Hicham Mallah 
> Software Developer
> [email protected]
> 00961 700 49 600
>           
>
>
> On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <[email protected]> 
> wrote:
>
>> Here's the top after ~1 hour running:
>>
>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 780 root      20   0  317g  14g 7.1g S 492.9 46.4 157:50.89 java
>>
>>  
>> - - - - - - - - - -
>> Sincerely:
>> Hicham Mallah 
>> Software Developer
>> [email protected]
>> 00961 700 49 600
>>           
>>
>>
>> On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <[email protected]> 
>> wrote:
>>
>>> Hello Jörg
>>>
>>> Thanks for the reply, our swap size is 2g. I don't know at what % the 
>>> process is being killed, as the first time it happened I wasn't around, and 
>>> then I never let it happen again because the website is online. After 2 hours 
>>> of running, memory usage is up to around 60%; I restart each time it reaches 
>>> 70% (every 2h/2h30) when I am around and testing config changes. When I am 
>>> not around, I set a cron job to restart the server every 2 hours. The server 
>>> has apache and mysql running on it too.
>>>
>>>
>>>
>>> - - - - - - - - - -
>>> Sincerely:
>>> Hicham Mallah 
>>> Software Developer
>>> [email protected]
>>> 00961 700 49 600
>>>           
>>>
>>>
>>> On Thu, Mar 13, 2014 at 2:22 PM, [email protected] 
>>> <[email protected]> wrote:
>>>
>>>> You wrote that the OOM killer killed the ES process. With 32g (and the swap 
>>>> size), the process must be very big, much more than you configured. Can you 
>>>> give more info about the live size of the process after ~2 hours? Are 
>>>> there more application processes on the box?
>>>>
>>>> Jörg
>>>>
>>>>
>>>> On Thu, Mar 13, 2014 at 12:46 PM, Hicham Mallah <[email protected]> 
>>>> wrote:
>>>>
>>>>> Hello, 
>>>>>
>>>>> I have been using elasticsearch on an Ubuntu server for a year now, and 
>>>>> everything was going great. I had an index of 150,000,000 entries of 
>>>>> domain names, running small queries on it, just filtering by 1 term, no 
>>>>> sorting, no wildcard, nothing. Now we moved servers: I have a CentOS 6 
>>>>> server with 32GB RAM running elasticsearch, but now we have 2 indices of 
>>>>> about 150 million entries each, 32 shards, still running the same queries 
>>>>> on them; nothing changed in the queries. But since we went online with the 
>>>>> new server, I have to restart elasticsearch every 2 hours before the OOM 
>>>>> killer kills it. 
>>>>>
>>>>> What's happening is that elasticsearch starts using memory till 50%, 
>>>>> then it goes back down to about 30% gradually, then starts to go up 
>>>>> again gradually and never goes back down. 
>>>>>
>>>>> I have tried all the solutions I found on the net; I am a developer, 
>>>>> not a server admin. 
>>>>>
>>>>> *I have these settings in my service wrapper configuration*
>>>>>
>>>>> set.default.ES_HOME=/home/elasticsearch 
>>>>> set.default.ES_HEAP_SIZE=8192 
>>>>> set.default.MAX_OPEN_FILES=65535 
>>>>> set.default.MAX_LOCKED_MEMORY=10240 
>>>>> set.default.CONF_DIR=/home/elasticsearch/conf 
>>>>> set.default.WORK_DIR=/home/elasticsearch/tmp 
>>>>> set.default.DIRECT_SIZE=4g 
>>>>>
>>>>> # Java Additional Parameters 
>>>>> wrapper.java.additional.1=-Delasticsearch-service 
>>>>> wrapper.java.additional.2=-Des.path.home=%ES_HOME% 
>>>>> wrapper.java.additional.3=-Xss256k 
>>>>> wrapper.java.additional.4=-XX:+UseParNewGC 
>>>>> wrapper.java.additional.5=-XX:+UseConcMarkSweepGC 
>>>>> wrapper.java.additional.6=-XX:CMSInitiatingOccupancyFraction=75 
>>>>> wrapper.java.additional.7=-XX:+UseCMSInitiatingOccupancyOnly 
>>>>> wrapper.java.additional.8=-XX:+HeapDumpOnOutOfMemoryError 
>>>>> wrapper.java.additional.9=-Djava.awt.headless=true 
>>>>> wrapper.java.additional.10=-XX:MinHeapFreeRatio=40 
>>>>> wrapper.java.additional.11=-XX:MaxHeapFreeRatio=70 
>>>>> wrapper.java.additional.12=-XX:CMSInitiatingOccupancyFraction=75 
>>>>> wrapper.java.additional.13=-XX:+UseCMSInitiatingOccupancyOnly 
>>>>> wrapper.java.additional.15=-XX:MaxDirectMemorySize=4g 
>>>>> # Initial Java Heap Size (in MB) 
>>>>> wrapper.java.initmemory=%ES_HEAP_SIZE% 
>>>>>
>>>>> *And these in elasticsearch.yml*
>>>>> ES_MIN_MEM: 5g 
>>>>> ES_MAX_MEM: 5g 
>>>>> #index.store.type=mmapfs 
>>>>> index.cache.field.type: soft 
>>>>> index.cache.field.max_size: 10000 
>>>>> index.cache.field.expire: 10m 
>>>>> index.term_index_interval: 256 
>>>>> index.term_index_divisor: 5 
>>>>>
>>>>> *java version: *
>>>>> java version "1.7.0_51" 
>>>>> Java(TM) SE Runtime Environment (build 1.7.0_51-b13) 
>>>>> Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode) 
>>>>>
>>>>> *Elasticsearch version*
>>>>>  "version" : { 
>>>>>     "number" : "1.0.0", 
>>>>>     "build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32", 
>>>>>     "build_timestamp" : "2014-02-12T16:18:34Z", 
>>>>>     "build_snapshot" : false, 
>>>>>     "lucene_version" : "4.6" 
>>>>>   } 
>>>>>
>>>>> Using elastica PHP 
>>>>>
>>>>>
>>>>> I have tried playing with the values up and down to try to make it work, 
>>>>> but nothing changes.   
>>>>>
>>>>> Please any help would be highly appreciated. 
>>>>>
>>>>> -- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "elasticsearch" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to [email protected].
>>>>> To view this discussion on the web visit 
>>>>> https://groups.google.com/d/msgid/elasticsearch/4059bf32-ae30-45fa-947c-98ef4540920a%40googlegroups.com.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>>>  -- 
>>>> You received this message because you are subscribed to a topic in the 
>>>> Google Groups "elasticsearch" group.
>>>> To unsubscribe from this topic, visit 
>>>> https://groups.google.com/d/topic/elasticsearch/D4WNQZSvqSU/unsubscribe
>>>> .
>>>> To unsubscribe from this group and all its topics, send an email to 
>>>> [email protected] <javascript:>.
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFcdFx98JugN7oDD0%3DBXMrY5v8-1LtBMdHeAXWJeho67Q%40mail.gmail.com<https://groups.google.com/d/msgid/elasticsearch/CAKdsXoFcdFx98JugN7oDD0%3DBXMrY5v8-1LtBMdHeAXWJeho67Q%40mail.gmail.com?utm_medium=email&utm_source=footer>
>>>> .
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>>
>>
>
