Hi Alfredo,

The reason I want such LVs on the NVMe is to get the best performance: I
have read that running 4 OSDs per NVMe gives the best results. Besides,
since there is only one NVMe, I think sacrificing a small portion of it to
accelerate the block.db of the HDD OSDs is worthwhile too. As for the VG
and LV naming, I think it is only cosmetic. It seems I will have to
pre-create the VGs and LVs manually if I want to do this with ceph-ansible.
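If I go the manual route, I imagine it would look roughly like this (the
VG/LV names and the 10G DB size below are just placeholders I made up, not
something ceph-volume would generate):

  vgcreate ceph-nvme /dev/nvme0n1
  lvcreate -L 10G -n db0 ceph-nvme
  lvcreate -L 10G -n db1 ceph-nvme
  # split what is left into four roughly equal data LVs (each call takes a
  # fraction of the space that is still free at that moment)
  lvcreate -l 25%FREE  -n data2 ceph-nvme
  lvcreate -l 33%FREE  -n data3 ceph-nvme
  lvcreate -l 50%FREE  -n data4 ceph-nvme
  lvcreate -l 100%FREE -n data5 ceph-nvme

  vgcreate ceph-hdd0 /dev/sda
  lvcreate -l 100%FREE -n data0 ceph-hdd0
  vgcreate ceph-hdd1 /dev/sdb
  lvcreate -l 100%FREE -n data1 ceph-hdd1

  # then one "ceph-volume lvm create" per OSD, for example:
  ceph-volume lvm create --bluestore --data ceph-hdd0/data0 --block.db ceph-nvme/db0
  ceph-volume lvm create --bluestore --data ceph-hdd1/data1 --block.db ceph-nvme/db1
  ceph-volume lvm create --bluestore --data ceph-nvme/data2

(and likewise for data3, data4 and data5). Please correct me if that is not
what you meant.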

Best regards,


On Sat, May 11, 2019, 02:42 Alfredo Deza <[email protected]> wrote:

> On Fri, May 10, 2019 at 3:21 PM Lazuardi Nasution
> <[email protected]> wrote:
> >
> > Hi Alfredo,
> >
> > Thank you for your answer, it is very helpful. Do you mean that
> > --osds-per-device=3 is a typo? It should be --osds-per-device=4 to create
> > the 4 OSDs as expected, right? I am trying to avoid specifying manually
> > created LVs, so that I keep the consistent Ceph way of VG and LV naming.
>
> Typo, yes... good catch!
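> For the record, the corrected second command would then be:
>
>    ceph-volume lvm batch --osds-per-device=4 /dev/nvme0n1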
>
> The VG/LV naming isn't really an advantage here: it was done that way to
> avoid collisions when creating the LVs programmatically :) I don't know
> why you want to place OSDs like this, which we aren't recommending
> anywhere; you might as well go with what batch proposes.
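>
> If you want to see what batch would propose without touching the disks,
> you can do a dry run first, e.g.:
>
>    ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/nvme0n1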
>
> >
> > By the way, is it possible to do these two ceph-volume batch commands in
> > a single ceph-ansible run, or should I run it twice with different
> > configurations? If it is possible, what should I put in the configuration
> > file?
>
> This might be a good example of why I am recommending against it: tools
> will probably not support it. I don't think you can make ceph-ansible do
> this, unless you are pre-creating the LVs, which shouldn't be too hard if
> you are using Ansible anyway.
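> Rough (untested) sketch of what I mean, with made-up names, using the
> stock lvg/lvol modules and then pointing ceph-ansible at the pre-made LVs:
>
>    - lvg:
>        vg: ceph-nvme
>        pvs: /dev/nvme0n1
>    - lvol:
>        vg: ceph-nvme
>        lv: db0
>        size: 10g
>
>    lvm_volumes:
>      - data: data0
>        data_vg: ceph-hdd0
>        db: db0
>        db_vg: ceph-nvme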
> >
> > Best regards,
> >
> > On Sat, May 11, 2019, 02:09 Alfredo Deza <[email protected]> wrote:
> >>
> >> On Fri, May 10, 2019 at 2:43 PM Lazuardi Nasution
> >> <[email protected]> wrote:
> >> >
> >> > Hi,
> >> >
> >> > Let's say I have following devices on a host.
> >> >
> >> > /dev/sda
> >> > /dev/sdb
> >> > /dev/nvme0n1
> >> >
> >> > How can I run ceph-volume batch so that it creates bluestore OSDs on
> >> > the HDDs and on the NVMe (divided into 4 OSDs), and also puts the
> >> > block.db of the HDD OSDs on the NVMe? Below is what I am expecting for
> >> > the created LVs.
> >>
> >> You can, but it isn't easy (batch is meant to be opinionated), and what
> >> you are proposing is a bit of an odd scenario that doesn't fit well with
> >> what the batch command wants to do, which is: create OSDs from a list of
> >> devices and do the most optimal layout possible.
> >>
> >> I would strongly suggest just using `ceph-volume lvm create` with
> >> pre-made LVs that you pass in, so you can arrange things the way you
> >> need. However, you might still be able to force batch here by defining
> >> the block.db size in ceph.conf; otherwise ceph-volume falls back to
> >> "as large as possible". Having defined a size (say, 10GB), you can do
> >> this:
> >>
> >> ceph-volume lvm batch /dev/sda /dev/sdb /dev/nvme0n1
> >> ceph-volume lvm batch --osds-per-device=3 /dev/nvme0n1
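> >>
> >> where the ceph.conf part would be something along these lines (option
> >> name from memory, value is in bytes, 10GB here):
> >>
> >>   [osd]
> >>   bluestore_block_db_size = 10737418240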
> >>
> >> Again, I highly recommend against this setup and against trying to make
> >> batch do this - I'm not 100% sure it will work...
> >> >
> >> > /dev/sda: DATA0
> >> > /dev/sdb: DATA1
> >> > /dev/nvme0n1: DB0 | DB1 | DATA2 | DATA3 | DATA4 | DATA5
> >> >
> >> > Best regards,
> >> > _______________________________________________
> >> > ceph-users mailing list
> >> > [email protected]
> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
