On 13 Oct. 2016, at 20:21, Praveen Kumar G T (Cloud Platform) 
<praveen...@flipkart.com> wrote:
> Hi David,
> I am Praveen, we also had a similar problem with hammer 0.94.2. We had the 
> problem when we created a new cluster with erasure coding pool (10+5 config). 
> Root cause:
> The high memory usage in our case was caused by pg logs. The number of pg 
> log entries is higher for erasure-coded pools than for replicated pools, so 
> in our case we started running out of memory when we created the new cluster 
> with erasure-coded pools.
> Solution:
> Ceph provides configuration to control the number of pg log entries. You can 
> try setting this value in your cluster and check your OSD memory usage. This 
> will also improve the osd boot up time. Below are the config parameters and 
> the values we use
>   osd max pg log entries = 600
>   osd min pg log entries = 200
>   osd pg log trim min = 200
> Other Information:
> We dug into this problem for some time before figuring out the root cause, 
> so we are fairly sure there are no memory leaks in ceph hammer 0.94.2.
> Regards,
> Praveen

Hello Praveen,

Thank you for your suggestions.

We’ve previously attempted tuning these parameters, with no effect.

Regardless, today we’ve tested your parameters (which are more aggressive than 
what we tried) on one of the OSDs … but there was no change.
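For the record, the persisted form of the settings we tested is below. This is a sketch of a standard [osd] section in ceph.conf carrying the values from your mail; an OSD restart (or a runtime injectargs) is needed for the change to take effect.

```ini
# pg log limits as suggested (values from Praveen's mail)
[osd]
osd max pg log entries = 600
osd min pg log entries = 200
osd pg log trim min = 200
```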

The next step is to examine the debug output, but this may take some time to interpret...
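As a sketch of that debug step, the commands below are what we plan to run; osd.0 is a stand-in for the affected OSD, and the heap command assumes the OSDs were built against tcmalloc.

```shell
# Dump tcmalloc heap statistics for one OSD to see where the memory sits.
ceph tell osd.0 heap stats

# Temporarily raise OSD debug logging while capturing behaviour;
# revert afterwards by injecting the defaults again.
ceph tell osd.0 injectargs '--debug-osd 10 --debug-ms 1'
```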

NB At the moment we’re only using replicated pools. We’d like to evaluate EC 
pools, but this is on the back burner until we can fix the high OSD memory usage.


FetchTV Pty Ltd, Level 5, 61 Lavender Street, Milsons Point, NSW 2061


This email is sent by FetchTV Pty Ltd (ABN 36 130 669 500). The contents of 
this communication may be confidential, legally privileged and/or copyright 
material. If you are not the intended recipient, any use, disclosure or 
copying of this communication is expressly prohibited. If you have received 
this email in error, please notify the sender and delete it immediately.
