[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread Szabo, Istvan (Agoda)
Be careful when you are designing: if you plan to have billions of objects, you need more than 2-4% (of the data device size) for RocksDB+WAL to avoid spillover. Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e:
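For illustration only, a sketch of how spillover can be spotted from the CLI (osd.0 is a placeholder; the health code and bluefs counters are standard BlueStore ones):

  ceph health detail | grep -i spillover   # BLUESTORE_SLOW_DEVICE_SPILLOVER warnings per OSD
  ceph daemon osd.0 perf dump bluefs       # slow_used_bytes > 0 means DB data has spilled onto the HDD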

[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread MERZOUKI, HAMID
Hello Mevludin Blazevic, Yes it is quite possible, I did it on my cluster. I put the Ceph DB (WAL + RocksDB) of the hard drives on the SSD drives. That increases the performance of the HDDs. Regards -Original Message- From: Mevludin Blazevic Sent: Friday, 22 October 2021 11:30 To: Ceph Users
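In case it helps anyone reading the archive, a minimal sketch of creating such an OSD with ceph-volume; the device names are examples only and must be adapted to your hardware:

  # HDD as the data device, an SSD partition/LV for the DB
  # (the WAL is colocated with the DB when only --block.db is given)
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1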

[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread Peter Sabaini
On 22.10.21 11:29, Mevludin Blazevic wrote: > Dear Ceph users, > > I have a small Ceph cluster where each host consists of a small number of SSDs > and a larger number of HDDs. Is there a way to use the SSDs for performance > optimization, such as putting OSD journals on the SSDs and/or using SSDs

[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread Mevludin Blazevic
Hi, thank you very much for your quick response! OK, so putting the DB/WAL on the NVMe drives could be beneficial. What happens if your SSD with the DB/WAL on it breaks? I assume you configured some type of (double/triple, etc.) replication? Regards, Mevludin On 22.10.2021 at 11:47
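As a hedged aside for the archive: if a DB/WAL SSD dies, every OSD whose DB lives on it is lost with it, so the pool replication and CRUSH failure domain have to absorb that. A quick way to check (pool name "rbd" is just an example):

  ceph osd pool get rbd size        # replica count for the pool
  ceph osd pool get rbd crush_rule  # rule defining the failure domain (e.g. host)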

[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread Zakhar Kirpichenko
Hi, It is my experience with BlueStore that even if you put the DB/WAL on a very fast SSD (NVMe in my case), the OSD will still refuse to write faster than the storage device (HDD) can write on average. This is a bummer because the behavior is very different from Filestore, which could buffer write
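For context only: BlueStore defers (WAL-buffers) only small writes, governed by a size threshold option; the sketch below just shows how to inspect it and is not a tuning recommendation:

  ceph config get osd bluestore_prefer_deferred_size_hdd   # writes at or below this size are deferred via the WAL on HDD-backed OSDs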