The memory gets used for additional PGs on the OSD.
If you were to "swap" PGs between two OSDs, you'd end up with memory wasted on
both of them because tcmalloc doesn't release it.*
It usually stabilises after a few days even during backfills, so it does get
reused if needed.
If for some reason your OSDs get to 8 GB RSS then I recommend you just get more
memory, or try disabling tcmalloc, which can either help or make it even worse
:-)

* E.g., if you do something silly like "ceph osd crush reweight osd.1 10000" you
will see the RSS of osd.28 skyrocket. Reweighting it back down will not release
the memory until you do "heap release".
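
Roughly the sequence that bites you, if you want to reproduce it (<original> is
just a placeholder for whatever the weight was before):

  ceph osd crush reweight osd.1 10000       # PGs start moving, RSS grows
  ceph osd crush reweight osd.1 <original>  # weight back down, RSS stays high
  ceph tell osd.28 heap release             # only now does tcmalloc give it back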

Jan


> On 09 Sep 2015, at 12:05, Mariusz Gronczewski 
> <mariusz.gronczew...@efigence.com> wrote:
> 
> On Tue, 08 Sep 2015 16:14:15 -0500, Chad William Seys
> <cws...@physics.wisc.edu> wrote:
> 
>> Does 'ceph tell osd.* heap release' help with OSD RAM usage?
>> 
>> From
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003932.html
>> 
>> Chad.
> 
> it did help now, but the cluster is in a clean state at the moment. I
> didn't know about that one before, thanks.
> 
> High memory usage stopped once the cluster finished rebuilding, but I'd
> planned the cluster for 2 GB per OSD, so I needed to add RAM just to get to
> the point where Ceph could start rebuilding, as some OSDs ate up to 8 GB
> during recovery.
> 
> -- 
> Mariusz Gronczewski, Administrator
> 
> Efigence S. A.
> ul. Wołoska 9a, 02-583 Warszawa
> T: [+48] 22 380 13 13
> F: [+48] 22 380 13 14
> E: mariusz.gronczew...@efigence.com

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
