Thanks Dan,

I'll use these values from Infernalis:


[global]
osd map message max = 100

[osd]
osd map cache size = 200
osd map max advance = 150
osd map share max epochs = 100
osd pg epoch persisted max stale = 150
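
To double-check that the new values are picked up after restarting the OSDs
(osd.0 below is just an example id, and the default admin socket path is
assumed):

  ceph daemon osd.0 config get osd_map_cache_size
  ceph daemon osd.0 config get osd_map_max_advance

They can also be tried at runtime with injectargs, which should warn if a
particular option only takes effect after a restart:

  ceph tell osd.* injectargs '--osd-map-cache-size 200 --osd-map-max-advance 150'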


George

On Mon, Nov 30, 2015 at 4:20 PM, Dan van der Ster <d...@vanderster.com>
wrote:

> I wouldn't run with those settings in production. That was a test to
> squeeze too many OSDs into too little RAM.
>
> Check the values from infernalis/master. Those should be safe.
>
> --
> Dan
> On 30 Nov 2015 21:45, "George Mihaiescu" <lmihaie...@gmail.com> wrote:
>
>> Hi,
>>
>> I've read the recommendation from CERN about the number of OSD maps (
>> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf,
>> page 3) and I would like to know if there is any negative impact from these
>> changes:
>>
>> [global]
>> osd map message max = 10
>>
>> [osd]
>> osd map cache size = 20
>> osd map max advance = 10
>> osd map share max epochs = 10
>> osd pg epoch persisted max stale = 10
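>>
>> (As a sanity check before overriding anything, and assuming the default
>> admin socket path, the values a running Hammer OSD currently uses can be
>> listed with something like the following, where osd.0 is just an example id:
>>
>>   ceph daemon osd.0 config show | grep -E 'osd_map|osd_pg_epoch'
>> )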
>>
>>
>> We are running Hammer with nowhere close to 7000 OSDs, but I don't want
>> to waste memory on OSD maps that are not needed.
>>
>> Are there any large production deployments running with these or similar
>> settings?
>>
>> Thank you,
>> George
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
