You need to put them on separate partitions. You can either partition the
device yourself (sdc{num}) or manage the SSD with LVM.
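
For example, a rough sketch of the partitioning route, assuming /dev/sdc is
the SSD, /dev/sdd..sdg are the HDDs, and the sizes (30G db, 2G wal) are only
placeholders you would tune for your drives:

# sgdisk --new=1:0:+30G --change-name=1:"db-sdd"  /dev/sdc
# sgdisk --new=2:0:+2G  --change-name=2:"wal-sdd" /dev/sdc
# ceph-disk prepare --bluestore --block.db /dev/sdc1 --block.wal /dev/sdc2 /dev/sdd
(repeat with a fresh db/wal partition pair for /dev/sde, /dev/sdf and /dev/sdg)

And the LVM route, assuming your release ships ceph-volume (names and sizes
again just placeholders):

# pvcreate /dev/sdc
# vgcreate ceph-ssd /dev/sdc
# lvcreate -L 30G -n db-sdd  ceph-ssd
# lvcreate -L 2G  -n wal-sdd ceph-ssd
# ceph-volume lvm prepare --bluestore --data /dev/sdd --block.db ceph-ssd/db-sdd --block.wal ceph-ssd/wal-sdd
(one db/wal LV pair per OSD, as above)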

On Sun, Aug 26, 2018, 8:39 AM Zhenshi Zhou <[email protected]> wrote:

> Hi,
> I have 4 osd nodes with 4 hdd and 1 ssd on each.
> I'm going to add these osds to an existing cluster.
> What I'm confused about is how to deal with the ssd.
> Can I deploy 4 osds with wal and db on one ssd partition, such as:
> # ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
> /dev/sdd
> # ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
> /dev/sde
> # ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
> /dev/sdf
> # ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
> /dev/sdg
> or should I place wal and db on separate ssd partitions:
> # ceph-disk prepare --bluestore --block.db /dev/sdc1 --block.wal /dev/sdc1
> /dev/sdd
> # ceph-disk prepare --bluestore --block.db /dev/sdc2 --block.wal /dev/sdc2
> /dev/sde
> # ceph-disk prepare --bluestore --block.db /dev/sdc3 --block.wal /dev/sdc3
> /dev/sdf
> # ceph-disk prepare --bluestore --block.db /dev/sdc4 --block.wal /dev/sdc4
> /dev/sdg
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
