On 12.07.2013 at 21:23, Mark Nelson <[email protected]> wrote:

> On 07/12/2013 02:19 PM, Stefan Priebe - Profihost AG wrote:
> > Right now I have 4096. 36*100/3 => 1200. As recovery takes ages, I thought 
> > this might be the reason.
> 
> Are you seeing any craziness on the mons?

What would that look like? I haven't noticed anything.

Stefan 



> 
>> 
>> Stefan
>> 
>> This mail was sent with my iPhone.
>> 
> > On 12.07.2013 at 17:03, Mark Nelson <[email protected]> wrote:
>> 
>>> On 07/12/2013 09:53 AM, Gandalf Corvotempesta wrote:
>>>> 2013/7/12 Mark Nelson <[email protected]>:
>>>>> At large numbers of PGs it may not matter very much, but I don't think it
>>>>> would hurt either!
>>>>> 
>>>>> Basically this has to do with how ceph_stable_mod works.  At
>>>>> non-power-of-two values, the bucket counts aren't even, but that's only a
>>>>> small part of the story and may ultimately only have a small effect on the
>>>>> distribution unless the PG count is small.
>>>> 
> >>> With 12 OSDs per node and a cluster of 18 storage
> >>> nodes, are you suggesting:
>>>> 
> >>> (12*18*100) / 3 = 7200 PGs, which rounded up to the next power of two means 8192?
>>> 
> >> Well, our official recommendation on the website is PGs = OSDs * 100 / 
> >> replicas.  I think the idea is that with a sufficient number of OSDs, the 
> >> behaviour of ceph_stable_mod shouldn't matter (much).  At some point I'd 
> >> like to do a more involved analysis of how the PG distribution 
> >> changes, but for now I wouldn't really expect a dramatic difference between 
> >> 7200 and 8192 PGs.
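[Editor's note: the rule of thumb above, plus the optional power-of-two rounding, can be sketched as below. The helper name is made up for illustration; only the formula PGs = OSDs * 100 / replicas comes from the thread.]

```python
def recommended_pg_count(osds, replicas, round_pow2=True):
    # Rule of thumb from the Ceph docs: PGs = OSDs * 100 / replicas.
    pgs = osds * 100 // replicas
    if round_pow2:
        # Optionally round up to the next power of two so that
        # ceph_stable_mod distributes PGs evenly across buckets.
        p = 1
        while p < pgs:
            p *= 2
        return p
    return pgs

# The cluster from the thread: 18 nodes * 12 OSDs = 216 OSDs, 3 replicas.
print(recommended_pg_count(216, 3, round_pow2=False))  # 7200
print(recommended_pg_count(216, 3))                    # 8192
```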
>>> 
>>> Mark
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 