Be careful when you are designing: if you plan to have billions of
objects, you need more than 2-4% of the OSD size for RocksDB+WAL to avoid spillover.
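As a rough sketch of what that sizing rule works out to (the 4% figure and the 8 TB disk size below are illustrative assumptions, not measurements from any particular cluster):

```shell
# Hedged example: size a RocksDB+WAL partition as a percentage of the OSD.
# The 4% rule of thumb and the 8 TB HDD are assumptions for illustration.
osd_size_gb=8000   # one 8 TB HDD OSD, in GB
db_pct=4           # percent of OSD capacity reserved for DB+WAL
db_gb=$(( osd_size_gb * db_pct / 100 ))
echo "DB/WAL partition: ${db_gb} GB"
```

For many small objects the metadata share grows, so treat the percentage as a floor, not a target.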
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
Hello Mevludin Blazevic,
Yes, it is quite possible; I did it on my cluster.
I put the Ceph DB (WAL + RocksDB) for the HDD OSDs on the SSD drives.
That increases the performance of the HDDs.
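For anyone wanting to set this up, the usual way is with ceph-volume; the device paths below are placeholders for your own HDD and SSD/NVMe partition, and the command needs a live Ceph cluster and root privileges, so this is a sketch rather than something runnable here:

```shell
# Sketch: create a BlueStore OSD with data on an HDD and the DB (+WAL)
# on a fast SSD/NVMe partition. /dev/sdb and /dev/nvme0n1p1 are placeholders.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
# If --block.wal is not given, the WAL is colocated with the DB on the
# fast device, which is the common setup.
```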
Regards
-----Original Message-----
From: Mevludin Blazevic
Sent: Friday, 22 October 2021 11:30
To: Ceph Users
On 22.10.21 11:29, Mevludin Blazevic wrote:
> Dear Ceph users,
>
> I have a small Ceph cluster where each host consists of a small number of SSDs
> and a larger number of HDDs. Is there a way to use the SSDs as performance
> optimization such as putting OSD Journals to the SSDs and/or using SSDs
Hi,
thank you very much for your quick response! Ok, so putting the DB/WAL
on the NVMe drives could be beneficial. What happens if your SSD with
the DB/WAL on it breaks? I assume you configured some type of
(double/triple, etc.) replication?
Regards,
Mevludin
On 22.10.2021 at 11:47
Hi,
It is my experience with BlueStore that even if you put the DB/WAL on a very
fast SSD (NVMe in my case), the OSD will still refuse to write faster than
the storage device (HDD) can sustain on average. This is a bummer because the
behavior is very different from Filestore, which could buffer write