Hi,

I have a Ceph 16.2.12 cluster with hybrid OSDs (HDD block storage, DB/WAL
on NVMe). All OSD settings are default except for the following
cache-related settings:

    osd.14   dev       bluestore_cache_autotune           true
    osd.14   dev       bluestore_cache_size_hdd           4294967296
    osd.14   dev       bluestore_cache_size_ssd           4294967296
    osd.14   advanced  bluestore_default_buffered_write   false
    osd.14   dev       osd_memory_cache_min               2147483648
    osd.14   basic     osd_memory_target                  17179869184

Other settings such as bluestore_cache_kv_ratio,
bluestore_cache_meta_ratio, etc. are left at their defaults. In other
words, the OSD memory target is 16 GiB, the BlueStore cache size is 4 GiB
for both HDDs and SSDs, and the minimum cache size is 2 GiB.
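
For reference, these values were applied with the standard ceph config CLI,
roughly along the lines of the commands below (osd.14 is just one example,
the same was done for every OSD, and the exact invocations are from
memory):

    # set the memory target and cache sizes for one OSD
    ceph config set osd.14 osd_memory_target 17179869184
    ceph config set osd.14 osd_memory_cache_min 2147483648
    ceph config set osd.14 bluestore_cache_size_hdd 4294967296
    ceph config set osd.14 bluestore_cache_size_ssd 4294967296
    ceph config set osd.14 bluestore_default_buffered_write false

    # confirm what the daemon is actually running with
    # (via its admin socket, on the host where the OSD lives)
    ceph daemon osd.14 config show | grep -E 'osd_memory|bluestore_cache'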

When I dump the memory pools of an OSD (https://pastebin.com/EpfFp85C), the
BlueStore cache doesn't seem to be actively used: despite plenty of
available memory and a 16 GiB memory target, the memory pools add up to
only about 2 GB and the total RSS of the OSD process is ~4.8 GB.
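
For reference, the memory pool dump above was taken roughly like this, run
on the OSD's host against its admin socket (osd.14 again as an example):

    # per-pool memory accounting for one OSD
    ceph daemon osd.14 dump_mempools

    # resident set size of the OSD processes on the host, for comparison
    ps -o pid,rss,cmd -C ceph-osd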

There are 66 OSDs in the cluster, and the situation is very similar for all
of them. The OSDs are used quite actively for both reads and writes, and I
suspect they could benefit from using more memory for caching, especially
considering that we have lots of RAM available on each host.

Is there a way to increase and/or tune OSD cache memory usage? I would
appreciate any advice or pointers.
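
For context, the kind of change I have in mind would be simply raising the
target further, e.g. something like this (24 GiB is just a placeholder
value, not something I have tested yet):

    # example only: raise the memory target for all OSDs to 24 GiB
    ceph config set osd osd_memory_target 25769803776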

Best regards,
Zakhar