Bloated to ~4 GB per OSD and you are on HDDs?
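
If you want to see where the memory is actually going, the OSD admin
socket can dump the memory pools (osd.0 is just an example ID; run this
on the host where that OSD lives):

  ceph daemon osd.0 dump_mempools

If the auto-tuner is simply filling its cache up to the target, most of
the footprint should show up in the bluestore cache pools.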

13.2.3 backported the cache auto-tuning, which targets 4 GB of memory
usage per OSD by default.

See https://ceph.com/releases/13-2-4-mimic-released/
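
You can check the target a running OSD actually uses with, e.g. (value
is in bytes; again, osd.0 is just an example ID):

  ceph daemon osd.0 config get osd_memory_target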

The bluestore_cache_* options are no longer needed. They are replaced
by osd_memory_target, defaulting to 4GB. BlueStore will expand
and contract its cache to attempt to stay within this
limit. Users upgrading should note this is a higher default
than the previous bluestore_cache_size of 1GB, so OSDs using
BlueStore will use more memory by default.
For more details, see the BlueStore docs.
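
If 4 GB per OSD is more than your nodes can spare, you can lower the
target; a rough sketch, e.g. down to 2 GB (value in bytes):

  ceph config set osd osd_memory_target 2147483648

or the classic way in ceph.conf on the OSD hosts:

  [osd]
  osd_memory_target = 2147483648

followed by an OSD restart if the daemons don't pick the new value up
at runtime.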

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Mar 4, 2019 at 3:55 PM Steffen Winther Sørensen
<[email protected]> wrote:
>
> List Members,
>
> patched a CentOS 7-based cluster from 13.2.2 to 13.2.4 last Monday;
> everything appeared to be working fine.
>
> Only this morning I found all OSDs in the cluster bloated in memory
> footprint, possibly after the weekend backup through the MDS.
>
> Anyone else seeing a possible memory leak in 13.2.4 OSDs, perhaps
> primarily when using the MDS?
>
> TIA
>
> /Steffen
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
