eff106.120c536d-cb30-4cea-b607-dd347022a497 -> ../../dm-22
~ # ls -al /dev/disk/by-uuid | grep dm-22
~ # ls -al /dev/disk/by-partuuid/ | grep dm-22
~ # ls -al /dev/disk/by-path | grep dm-22
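For reference, those three `ls ... | grep dm-22` checks are all testing the same thing: whether any alias symlink under `/dev/disk/` resolves to that dm device node. A minimal Python sketch of the equivalent check, using a temporary directory in place of the real `/dev/disk` tree (the `dm-22` name is taken from the listing above; the alias name is made up):

```python
import os
import tempfile

# Sketch of what `ls -al /dev/disk/by-uuid | grep dm-22` is testing:
# does any alias symlink resolve to the device node in question?
# A temporary directory stands in for /dev/disk/by-uuid here.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "dm-22")
    open(target, "w").close()            # stand-in for the real device node
    alias = os.path.join(tmp, "some-uuid")
    os.symlink(target, alias)            # alias -> dm-22, like a by-uuid entry

    resolved = os.path.basename(os.path.realpath(alias))
    print(resolved)  # dm-22
```

If none of the by-* directories produce a grep hit, udev simply never created an alias for that device-mapper node, which is what the empty output above suggests.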
Best Regards,
Nicholas Gim.
On Wed, Mar 15, 2017 at 6:46 PM Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
ph:ceph?
Thank you very much for reading.
Best Regards,
Nicholas.
On Wed, Mar 15, 2017 at 1:06 AM Gunwoo Gim <wind8...@gmail.com> wrote:
> Thank you very much, Peter.
>
> I'm sorry for not clarifying the version number; it's kraken and
> 11.2.0-1xenial.
>
> I guess
~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
E: DEVTYPE=disk
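One note on that output: kernel device-mapper nodes report `DEVTYPE=disk` regardless of whether they map a whole LV or a partition on one, which is presumably why the property was checked here. A small sketch of pulling `DEVTYPE` out of `udevadm info` output (the sample text is hard-coded to mirror the listing above, and `devtype` is a hypothetical helper, not part of any Ceph tool; in practice you would capture the output of `udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1`):

```python
# Hypothetical helper: extract the DEVTYPE property from `udevadm info`
# output lines of the form "E: KEY=value".
def devtype(udevadm_output: str) -> str:
    for line in udevadm_output.splitlines():
        line = line.strip()
        if line.startswith("E: DEVTYPE="):
            return line.split("=", 1)[1]
    return "unknown"

sample = """\
P: /devices/virtual/block/dm-22
E: DEVNAME=/dev/mapper/vg--ssd1-lv--ssd1p1
E: DEVTYPE=disk
"""
print(devtype(sample))  # disk
```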
Best Regards,
Nicholas.
On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
> Is this Jewel? Do you have some udev rules or anything that changes the
> owner on the journal?
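For context on Peter's question: Ceph's packages ship udev rules (e.g. 95-ceph-osd.rules) that chown journal partitions to ceph:ceph based on the GPT partition-type GUID. A from-memory sketch of the kind of rule involved; the exact rule file installed by your packages may differ, so check it rather than trusting this version (the GUID shown is the well-known Ceph journal partition type):

```
# Sketch of the sort of rule Ceph ships in 95-ceph-osd.rules; verify against
# the file your packages actually installed.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", OWNER="ceph", GROUP="ceph", MODE="660"
```

Rules like this key off `ID_PART_ENTRY_TYPE`, which udev only populates for real GPT partitions, so LVM-backed journals can miss the chown entirely.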
I'd really appreciate any help.
Best Wishes,
Nicholas.
On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim <wind8...@gmail.com> wrote:
> Hello, I'm trying to deploy a ceph filestore cluster with LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks [...]
Hello, I'm trying to deploy a ceph filestore cluster with LVM using the
ceph-ansible playbook. I've been fixing a couple of code blocks in
ceph-ansible and ceph-disk/main.py and made some progress, but now I'm stuck
again: 'ceph-disk activate osd' fails.
Let me show you the error message: