Dear Zakhar and Anthony,

Thank you for your valuable feedback.

Maybe my information was wrong. After creating the VG and LV, the command we
used to create an OSD on a SATA HDD was:

ceph-deploy osd create --bluestore --data ceph-block-2/block-2 --block-db ceph-db-0/db-2 cephosd1

So we are using a separate block DB.

Now we are planning to install new OSDs on SATA SSDs, so a separate block DB
will not be required.
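
If I understand correctly, the create command for the SSD OSDs would then
simply omit --block-db, roughly like this (the VG/LV name
ceph-block-3/block-3 and the host name are placeholders, assuming the same
naming scheme as above):

ceph-deploy osd create --bluestore --data ceph-block-3/block-3 cephosd1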

We will use a device class and a separate replicated rule:

$ ceph osd crush set-device-class ssd osd.10
$ ceph osd crush rule create-replicated replicated_rule_SSD default host ssd
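
One note: SSD-backed OSDs normally get the device class ssd (lowercase)
assigned automatically, so an already-set class may have to be removed before
re-assigning it, and the new rule still has to be applied to the relevant
pools. Roughly, assuming a placeholder pool name "volumes":

$ ceph osd crush rm-device-class osd.10
$ ceph osd crush set-device-class ssd osd.10
$ ceph osd pool set volumes crush_rule replicated_rule_SSD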

Are we good to go?

Regards,
Munna





On Sun, Dec 5, 2021 at 3:09 PM Zakhar Kirpichenko <[email protected]> wrote:

> Hi!
>
> If you use SSDs for OSDs, there's no real benefit from putting DB/WAL on a
> separate drive.
>
> Best regards,
> Z
>
> On Sun, Dec 5, 2021 at 10:15 AM Md. Hejbul Tawhid MUNNA <
> [email protected]> wrote:
>
>> Hi,
>>
>> We are running an OpenStack cloud with Ceph as the backend storage. Currently
>> we have only HDD storage in our Ceph cluster. Now we are planning to add a new
>> server and OSDs with SSD disks. Currently we are using a separate SSD disk for
>> the journal.
>>
>> Now, if we install new OSDs with SSD disks, do we need a separate SSD disk for
>> the journal? What would be the best approach?
>>
>> Regards,
>> Munna
>>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
