Thanks Mark. I separated the processes onto different servers, so ES now 
has a server to itself.
I made the change yesterday and it has been stable since.

On Wednesday, April 1, 2015 at 2:07:41 AM UTC+5:30, Mark Walkom wrote:
>
> Also, we don't recommend running ES alongside other apps; as you can see, 
> contention is an issue and you pay the price for it.
>
> On 1 April 2015 at 03:27, Aaron Mefford <aa...@mefford.org> 
> wrote:
>
>> You need to read up a bit on how memory is allocated in Linux. 
>>
>> On an Elasticsearch or database server you actually want that free 
>> column to be near zero: all otherwise idle memory should be used to 
>> cache files.  In your snapshot you have 35GB of file cache listed under 
>> the cached heading.  Memory listed under cached is essentially free 
>> memory that is temporarily being used to cache files until something 
>> else requests it.  This is how Linux makes efficient use of your memory: 
>> it leverages free memory for file cache while still having it available 
>> the moment you need it.  So when judging whether the box is out of 
>> memory, you need to sum the free and cached columns.
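That sum is easy to compute directly from /proc/meminfo (a minimal sketch, assuming the standard Linux layout where values are reported in kB):

```shell
# Effectively-available memory = free + file cache.
# /proc/meminfo reports both figures in kB.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Effectively available: $(( (free_kb + cached_kb) / 1024 )) MB"
```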
>>
>> This is precisely why it is recommended that Elasticsearch be given 
>> only 50% of the memory on the box as heap.  In your case, where you 
>> also have databases running, it should be 50% of the memory you have 
>> set aside for Elasticsearch.  The same basic rule (50%) applies to your 
>> database, unless it has its own file caching mechanism.  For instance: 
>> you have 50GB of RAM, you are running MySQL and Elasticsearch, and you 
>> want to divide the RAM equally, 25GB to each.  Elasticsearch would then 
>> be allowed 25GB, of which about 12GB should go to heap, with the 
>> balance left to the OS for file caching on Elasticsearch's behalf.  For 
>> MySQL with MyISAM the same split applies: 12GB to MySQL, 12GB to the OS 
>> for file-system caching of the MyISAM tables.  If you are using InnoDB 
>> things are different, but that is well outside the scope of this 
>> discussion.  
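The sizing arithmetic above can be sketched in a few lines of shell; the 50GB total and the even MySQL/Elasticsearch split come from the example, while the variable names are mine:

```shell
total_gb=50                         # physical RAM in the example
es_share=$(( total_gb / 2 ))        # 25 GB budgeted to the Elasticsearch side
es_heap=$(( es_share / 2 ))         # ~12 GB of that for the JVM heap
os_cache=$(( es_share - es_heap ))  # the rest stays with the OS page cache
echo "ES_HEAP_SIZE=${es_heap}g (plus ${os_cache}g left to the file cache)"
# prints: ES_HEAP_SIZE=12g (plus 13g left to the file cache)
```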
>>
>> So the fact that you have 35GB of files being cached is a very good 
>> thing: it means a large portion of your data is cached, and that you 
>> have ample free memory, well beyond the 12GB a 50/50 split would 
>> demand.  The 12GB shown as free now probably came from the processes 
>> you killed; I assume that was Elasticsearch, though you were not 
>> specific.
>>
>> The one concern I see in your top output is that you have a large swap 
>> and some of it has been used.  That is a sign that at some point you 
>> had memory pressure; it is the only such sign in your snapshot.  The 
>> pressure was not significant, but any swapping will destroy the 
>> performance of a database or of Elasticsearch.  Many people go to the 
>> extreme of disabling swap entirely, since performance while swapping is 
>> so poor that the node is effectively unusable, and by the time you made 
>> a dent in a swap file that size you would want to reboot the box 
>> anyway.  My approach is to keep a small swap available, so I can see 
>> whether the system ever got to the point of needing it, and to 
>> potentially buy a moment of time.
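To keep an eye on swap without rebooting, you can watch the kernel's own counters. A sketch, assuming a standard Linux setup; vm.swappiness is the usual knob, and 1 is a common "swap only as a last resort" setting:

```shell
# How much swap exists and how much has been dipped into (kB):
grep -E '^Swap(Total|Free):' /proc/meminfo
# Bias the kernel strongly against swapping (needs root, hence commented out):
# sysctl -w vm.swappiness=1
```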
>>
>> If you are experiencing database slowdowns, this screenshot does not 
>> show that memory is the cause.  Based on this information I would 
>> suspect disk IO instead.
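As a sketch of how you might confirm that: iostat (from the sysstat package) is the usual tool, and /proc/diskstats is a dependency-free fallback. The field positions below assume the standard diskstats layout, where the third field is the device name and the thirteenth is milliseconds spent doing I/O:

```shell
# With sysstat installed, high await/%util in extended iostat output
# points at disk pressure (commented out since sysstat may be absent):
# iostat -x 1 5
# Dependency-free fallback: per-device time spent doing I/O, in ms.
awk '{print $3, $13 " ms doing I/O"}' /proc/diskstats | head -n 5
```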
>>
>> On Tuesday, March 31, 2015 at 4:25:40 AM UTC-6, Yogesh wrote:
>>>
>>> Thanks Uwe. As I mentioned earlier, I did guess that VIRT doesn't 
>>> indicate RAM consumption.
>>>
>>> What concerns me is the third row, which shows memory and indicates 
>>> that, of the total 50g, 43g is in use. Once this number crosses 45g, 
>>> my other databases start behaving badly.
>>>
>>> The problem is that even after I kill all the processes, this number 
>>> doesn't go down (snapshot of top after killing all processes 
>>> attached). Right now I reboot the system every three days, which is 
>>> the time it takes for something to gradually fill the memory (I have 
>>> no clue what that something is).
>>>
>>> I assume the max file descriptors setting wouldn't be the culprit 
>>> here? I haven't changed it yet.
>>>
>>> On Mon, Mar 30, 2015 at 3:19 AM, Uwe Schindler <uwe.h.s...@gmail.com> 
>>> wrote:
>>>
>>>> You should read: 
>>>> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>>>>
>>>> Maybe this will help you figure out what's going on! VIRT says 
>>>> nothing about actual memory consumption; look at RES instead.
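As a quick sketch of the difference, ps can print both figures for a single process (on Linux, both VSZ and RSS are reported in kB):

```shell
# VSZ (virtual size) vs RSS (resident set) for the current shell process;
# RSS is the number that actually counts against physical RAM.
ps -o pid=,vsz=,rss=,comm= -p $$
```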
>>>>
>>>> Thanks,
>>>> Uwe
>>>>
>>>>
>>>> On Sunday, 29 March 2015 at 22:23:00 UTC+2, Yogesh wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I have a single-node ES setup (50GB memory, 500GB disk, 4 cores) on 
>>>>> which I run the Twitter river. I've set ES_HEAP_SIZE to 5g. However, 
>>>>> when I run "top", the ES process shows around 34g of VIRT memory, 
>>>>> which I assume is the max mapped memory. The %MEM, though, always 
>>>>> hovers around 10%.
>>>>>
>>>>> However, within a few days of a reboot, the memory used (shown in 
>>>>> the third line) keeps climbing from 10g to almost 50g, at which 
>>>>> point my other DBs start misbehaving, even though VIRT and %MEM 
>>>>> still hover around the same 34g and 10% respectively. Below is a 
>>>>> snapshot of "top".
>>>>>
>>>>> Please help me understand where my memory is going over time! My 
>>>>> one guess is that Lucene is eating it up. How do I remedy that?
>>>>>
>>>>> Thanks-in-advance!
>>>>>
>>>>>
>>>>>
>>>>> <https://lh3.googleusercontent.com/-zD9y4f2Eqqk/VRhdtX2XtTI/AAAAAAAAAN8/aq8-wxm2bBg/s1600/top.png>
>>>>>
>>>>>
>>>>>  -- 
>>>> You received this message because you are subscribed to a topic in the 
>>>> Google Groups "elasticsearch" group.
>>>> To unsubscribe from this topic, visit 
>>>> https://groups.google.com/d/topic/elasticsearch/kTDNDJwxOzA/unsubscribe.
>>>> To unsubscribe from this group and all its topics, send an email to 
>>>> elasticsearc...@googlegroups.com.
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/elasticsearch/c6e834ab-77c4-4a99-9307-b6b3baf0d232%40googlegroups.com.
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>>
>
>
