Hi Steven,

It really depends on how large the active dataset being accessed across all of the clients is. The biggest consumer of memory in the OSD is typically the onode cache. Reading onodes from disk is expensive, so keeping the onode cache hit rate high (especially on NVMe) is a pretty big performance win.

The suggestions you mentioned are probably fairly reasonable as an initial rule of thumb, but you may want to scale up or down depending on various factors. E.g., 8 GB memory targets might be fine for NVMe-backed OSDs if the drives are smaller and most of the data being stored is cold. On the other hand, you might want more than 6 GB per OSD if you have big HDDs with many active users all storing tiny objects.
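As a rough sketch, here's how those rule-of-thumb targets would add up on a host like yours (3 NVMe, 12 SSD, 12 HDD), plus the per-device-class `ceph config set` mask syntax you'd use if you pin targets manually instead of autotuning. The 16/8/6 GB figures are just the suggestion from the thread, not a recommendation; check the exact option behavior against your Ceph release:

```shell
# Rough budget check: suggested per-OSD targets (GiB) times drive counts.
nvme_gib=16; ssd_gib=8; hdd_gib=6
total_gib=$(( 3 * nvme_gib + 12 * ssd_gib + 12 * hdd_gib ))
echo "Total OSD memory budget: ${total_gib} GiB"

# If you disable autotuning, you can pin targets per device class
# (osd_memory_target takes bytes):
# ceph config set osd osd_memory_target_autotune false
# ceph config set osd/class:nvme osd_memory_target 17179869184   # 16 GiB
# ceph config set osd/class:ssd  osd_memory_target 8589934592    #  8 GiB
# ceph config set osd/class:hdd  osd_memory_target 6442450944    #  6 GiB
```

That works out to roughly 216 GiB of OSD memory targets, well under your 1 TB of RAM, so you have plenty of headroom to raise targets if the cache hit rates justify it.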

Mark



On 7/31/25 7:17 AM, Steven Vacaroaia wrote:
Hi

What is the best practice / your expert advice about using
osd_memory_target_autotune
on hosts with lots of RAM  ?

My hosts have 1 TB RAM, only 3 NVMes, 12 HDDs and 12 SSDs.
Should I disable autotune and allocate more RAM?

I saw some suggestions of 16 GB for NVMe, 8 GB for SSD and 6 GB for HDD.

Many thanks
Steven
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
