Be careful with the design: if you plan to store billions of objects, you need to size the RocksDB+WAL partitions at more than the usual 2-4% of the OSD capacity to avoid spillover onto the slow device.
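As a rough illustration of what that rule of thumb means in practice (the 4% figure is just the common guideline, and the numbers below are made up; actual DB usage depends on object count and workload), a quick back-of-the-envelope calculation might look like:

    # Back-of-the-envelope sizing for a block.db partition.
    # Hypothetical numbers, not a recommendation -- check real usage
    # on a running cluster with `ceph osd df` and the OSD perf counters.

    def db_size_gib(osd_capacity_tib: float, db_fraction: float = 0.04) -> float:
        """Return the block.db size in GiB for a given OSD capacity and DB fraction."""
        return osd_capacity_tib * 1024 * db_fraction

    if __name__ == "__main__":
        for osd_tib in (8, 12, 16):          # example HDD sizes
            print(f"{osd_tib} TiB OSD -> ~{db_size_gib(osd_tib):.0f} GiB block.db at 4%")

With billions of small objects the per-OSD DB footprint can exceed what 4% gives you, which is exactly the spillover scenario above.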

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---------------------------------------------------

On 2021. Oct 22., at 14:47, Peter Sabaini <pe...@sabaini.at> wrote:


On 22.10.21 11:29, Mevludin Blazevic wrote:
Dear Ceph users,

I have a small Ceph cluster where each host has a small number of SSDs 
and a larger number of HDDs. Is there a way to use the SSDs as a performance 
optimization, such as putting the OSD journals on the SSDs and/or using the 
SSDs for caching?


Hi,

yes, SSDs can be put to good use as journal devices[0] for OSDs (block.db/WAL 
with BlueStore), or you could use them as caching devices for bcache[1]. This 
is actually a pretty widespread setup; there are docs for various scenarios[2][3].
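For the DB-on-SSD case on a cephadm-managed cluster, this is usually expressed as an OSD service spec (drive groups). A minimal sketch, assuming the HDDs show up as rotational and the SSDs as non-rotational; the service_id is just an illustrative name and the exact keys should be checked against the current cephadm/drivegroups docs:

    # Sketch of a cephadm OSD service spec: data on HDDs, block.db on SSDs.
    # Assumes PyYAML is installed; "hdd_with_ssd_db" is a made-up name.
    import yaml

    osd_spec = {
        "service_type": "osd",
        "service_id": "hdd_with_ssd_db",
        "placement": {"host_pattern": "*"},     # apply on all hosts
        "spec": {
            "data_devices": {"rotational": 1},  # HDDs hold the object data
            "db_devices": {"rotational": 0},    # SSDs hold RocksDB/WAL (block.db)
        },
    }

    with open("osd_spec.yaml", "w") as f:
        yaml.safe_dump(osd_spec, f, sort_keys=False)

    # Then apply it with: ceph orch apply -i osd_spec.yaml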

But be aware that OSD journals are pretty write-intensive, so be sure to use 
fast, reliable SSDs (or NVMes). I have actually seen OSD performance (esp. 
latency and jitter) worsen with (prosumer-grade) SSDs that could not sustain 
the write load.

If in any doubt, run tests before putting production workloads on it (a rough 
example of such a test is sketched below the links).


[0] https://docs.ceph.com/en/latest/start/hardware-recommendations/
[1] https://bcache.evilpiepirate.org/
[2] https://charmhub.io/ceph-osd
[3] 
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/using-nvme-with-lvm-optimally
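
On the testing point: a quick way to see whether an SSD holds up under journal-like load is a sustained small-block sync-write test. A minimal sketch driving fio from Python; /dev/sdX and the runtime are placeholders, and it writes directly to the device, so only point it at a disk you are willing to wipe:

    # Sketch: sustained 4k sync-write test against a candidate journal/DB SSD.
    # WARNING: writes directly to the device and destroys its contents.
    # /dev/sdX and the 10-minute runtime are placeholders -- adjust to taste.
    import subprocess

    DEVICE = "/dev/sdX"   # candidate SSD, NOT one that is in use

    cmd = [
        "fio",
        "--name=journal-sim",
        f"--filename={DEVICE}",
        "--ioengine=libaio",
        "--direct=1",      # bypass the page cache
        "--sync=1",        # O_SYNC writes, similar to journal/WAL behaviour
        "--rw=write",
        "--bs=4k",
        "--iodepth=1",
        "--numjobs=1",
        "--time_based",
        "--runtime=600",   # long enough to exhaust any SLC/DRAM cache
        "--group_reporting",
    ]

    subprocess.run(cmd, check=True)
    # Watch for latency spikes and throughput drop-off over time -- that is
    # the latency/jitter problem mentioned above with prosumer-grade SSDs.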

Best regards,
Mevludin

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
