[ceph-users] OSD memory usage after cephadm adoption

2023-07-11 Thread Luis Domingues
Hi everyone, We recently migrated a cluster from ceph-ansible to cephadm. Everything went as expected, but now we have some alerts on high memory usage. The cluster is running Ceph 16.2.13. Of course, after adoption the OSDs ended up in the zone:
NAME  PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd            88       7m ag
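A minimal sketch of what one might check here, assuming a stock cephadm/Pacific setup and using osd.0 as a placeholder daemon id (not taken from the original message):

  ceph config get osd osd_memory_target_autotune   # is cephadm autotuning the target from host RAM?
  ceph config get osd.0 osd_memory_target          # effective memory target for one OSD
  # pin an explicit 6 GiB target instead of autotuning, if that is the intent
  ceph config set osd osd_memory_target_autotune false
  ceph config set osd osd_memory_target 6442450944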

[ceph-users] OSD Memory usage

2020-11-22 Thread Seena Fallah
Hi all, After upgrading from 14.2.9 to 14.2.14 my OSDs are using much less memory than before! I give each OSD a 6GB memory target; before the upgrade about 20GB of the 128GB was free, and now, 24h after the upgrade, 104GB is free! Also, my OSD latency has increased! This happens in both
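A minimal sketch of how one could compare the configured target with what an OSD actually holds, assuming admin-socket access on the OSD host and osd.0 as a placeholder id (not from the original post):

  ceph config get osd osd_memory_target    # configured per-OSD target
  ceph daemon osd.0 dump_mempools          # per-mempool allocations inside the OSD
  ceph tell osd.0 heap stats               # tcmalloc view of the resident heap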