Check what your osd_memory_target is set to.  The default of 4 GB is generally a decent starting point, but if you have a large active data set you may benefit from giving the OSDs more memory.  When the onode cache is hot, they'll generally prioritize giving the extra memory to it first.
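For example, you can check the configured target and raise it with the ceph CLI (a sketch only; "osd.0" and the 8 GiB value are just placeholders, verify against your own cluster before changing anything):

```shell
# What is configured in the monitor config store for this OSD (bytes):
ceph config get osd.0 osd_memory_target

# What the running daemon actually applied (may differ, e.g. with
# container-derived overrides):
ceph config show osd.0 osd_memory_target

# Example: raise the target for all OSDs to 8 GiB (8589934592 bytes).
# Make sure the hosts actually have this much memory to spare per OSD.
ceph config set osd osd_memory_target 8589934592
```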

*Note:  In some container-based deployments the osd_memory_target may be set automatically based on the container memory limit (and possibly on the memory available on the node).
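To track the trend yourself rather than relying on the dashboard, the hit ratio can be computed from the OSD perf counters (e.g. `ceph tell osd.N perf dump`, onode hits/misses under the bluestore section; exact counter names vary by release).  A minimal sketch, with hypothetical sample values:

```python
# Hypothetical counter values; in practice read them from the
# bluestore section of `ceph tell osd.N perf dump`.
onode_hits = 700_000
onode_misses = 300_000

# Hit ratio = hits / (hits + misses), i.e. the fraction of onode
# lookups served from cache.
hit_ratio = onode_hits / (onode_hits + onode_misses)
print(f"onode hit ratio: {hit_ratio:.0%}")
```

Note these counters are cumulative since daemon start, so to see the current trend you'd diff two samples taken some interval apart.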


Mark


On 8/2/23 11:25 PM, Ben wrote:
Hi,
We have a cluster that has been running for a while. From the grafana ceph
dashboard, I saw an OSD onode hits ratio of 92% when the cluster was first
up and running. After a couple of months, it now says 70%. This is not a
good trend, I think. Just wondering what should be done to stop it.

Many thanks,
Ben
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]

--
Best Regards,
Mark Nelson
Head of R&D (USA)

Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: [email protected]

We are hiring: https://www.clyso.com/jobs/
