It's running now using the /dev/mapper/by-id name, so I'll stick with that
and use it in the future.  Thanks.
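
For the archives, the stable names can be listed with standard tools (these
are generic commands, not output from this cluster):

    ls -l /dev/disk/by-id/    # stable by-id symlinks for each disk
    multipath -ll             # multipath maps, which appear under /dev/mapper/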

On Thu, Apr 30, 2020 at 3:43 PM Strahil Nikolov <hunter86...@yahoo.com>
wrote:

> On April 29, 2020 8:21:58 PM GMT+03:00, Shareef Jalloq <
> shar...@jalloq.co.uk> wrote:
> >Actually, now that I've fixed that, the deployment fails with an LVM
> >filter error.  I'm not familiar with filters, but there aren't any
> >uncommented instances of 'filter' in /etc/lvm/lvm.conf.
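> >
> >For reference, this is how I checked (standard LVM commands, as far as I
> >can tell; output will differ per host):
> >
> >    lvmconfig devices/filter
> >    lvmconfig devices/global_filter
> >    grep -nE '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf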
> >
> >
> >
> >On Wed, Apr 29, 2020 at 5:42 PM Shareef Jalloq <shar...@jalloq.co.uk>
> >wrote:
> >
> >> Ah of course.  I was assuming something had gone wrong with the
> >deployment
> >> and it couldn't clean up its own mess.  I'll raise a bug on the
> >> documentation.
> >>
> >> Strahil, what are the other options to using /dev/sdxxx?
> >>
> >> On Wed, Apr 29, 2020 at 10:17 AM Strahil Nikolov
> ><hunter86...@yahoo.com>
> >> wrote:
> >>
> >>> On April 29, 2020 2:39:05 AM GMT+03:00, Jayme <jay...@gmail.com>
> >wrote:
> >>> >Has the drive been used before?  It might have an existing
> >>> >partition/filesystem on it.  If you are sure it's fine to overwrite,
> >>> >try running wipefs -a /dev/sdb on all hosts.  Also make sure there
> >>> >aren't any filters set up in lvm.conf (there shouldn't be on a fresh
> >>> >install, but it's worth checking).
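> >>> >
> >>> >For example (wipefs -a is destructive, and /dev/sdb here is just the
> >>> >device from your mail, so double check it on each host):
> >>> >
> >>> >  wipefs /dev/sdb      # list existing signatures first (read-only)
> >>> >  wipefs -a /dev/sdb   # then wipe them all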
> >>> >
> >>> >On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq
> ><shar...@jalloq.co.uk>
> >>> >wrote:
> >>> >
> >>> >> Hi,
> >>> >>
> >>> >> I'm running the gluster deployment flow and am trying to use a
> >second
> >>> >> drive as the gluster volume.  It's /dev/sdb on each node and I'm
> >>> >using the
> >>> >> JBOD mode.
> >>> >>
> >>> >> I'm seeing the following gluster ansible task fail and a google
> >>> >search
> >>> >> doesn't bring up much.
> >>> >>
> >>> >> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> >>> >> ****************
> >>> >>
> >>> >> failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) =>
> >>> >> {"ansible_loop_var": "item", "changed": false,
> >>> >>  "err": "  Couldn't find device with uuid Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n
> >>> >>          Couldn't find device with uuid tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n
> >>> >>          Couldn't find device with uuid RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n
> >>> >>          Couldn't find device with uuid lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n
> >>> >>          Device /dev/sdb excluded by a filter.\n",
> >>> >>  "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> >>> >>  "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> >>>
> >>> Actually, best practice is not to use /dev/sdX names, as they can
> >>> change between reboots.
> >>>
> >>> In your case the LUN is most probably not fresh, so wipe it with dd or
> >>> blkdiscard so that any remnants of old FS signatures are gone.
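> >>>
> >>> For example (destructive; /dev/sdb is just an example device, verify
> >>> it before running):
> >>>
> >>> dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct  # zero the first 100 MiB
> >>> blkdiscard /dev/sdb   # alternative: discard the whole LUN (SSDs/thin LUNs)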
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>
>
> Can you get debug output from ansible?
> I haven't deployed oVirt recently.
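>
> If you are running the playbook by hand, something like this gives
> verbose output (the playbook path is a placeholder, use whatever the
> deployment wizard actually runs):
>
> ansible-playbook -vvv /path/to/deployment-playbook.yml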
>
> Best Regards,
> Strahil Nikolov
>