On April 29, 2020 7:42:55 PM GMT+03:00, Shareef Jalloq <[email protected]> 
wrote:
>Ah of course.  I was assuming something had gone wrong with the
>deployment
>and it couldn't clean up its own mess.  I'll raise a bug on the
>documentation.
>
>Strahil, what are the other options to using /dev/sdxxx?
>
>On Wed, Apr 29, 2020 at 10:17 AM Strahil Nikolov
><[email protected]>
>wrote:
>
>> On April 29, 2020 2:39:05 AM GMT+03:00, Jayme <[email protected]>
>wrote:
>> >Has the drive been used before? It might have an existing
>> >partition/filesystem on it. If you are sure it's fine to overwrite, try
>> >running wipefs -a /dev/sdb on all hosts. Also make sure there aren't
>> >any filters set up in lvm.conf (there shouldn't be on a fresh install,
>> >but worth checking).
>> >
>> >On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq
><[email protected]>
>> >wrote:
>> >
>> >> Hi,
>> >>
>> >> I'm running the gluster deployment flow and am trying to use a
>second
>> >> drive as the gluster volume.  It's /dev/sdb on each node and I'm
>> >using the
>> >> JBOD mode.
>> >>
>> >> I'm seeing the following gluster ansible task fail and a google
>> >search
>> >> doesn't bring up much.
>> >>
>> >> TASK [gluster.infra/roles/backend_setup : Create volume groups]
>> >> ****************
>> >>
>> >> failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname':
>> >> u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) =>
>{"ansible_loop_var":
>> >"item",
>> >> "changed": false, "err": "  Couldn't find device with uuid
>> >> Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n  Couldn't find device
>with
>> >uuid
>> >> tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n  Couldn't find device
>with
>> >uuid
>> >> RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n  Couldn't find device
>with
>> >uuid
>> >> lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n  Device /dev/sdb
>excluded
>> >by a
>> >> filter.\n", "item": {"pvname": "/dev/sdb", "vgname":
>> >"gluster_vg_sdb"},
>> >> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
>> >> _______________________________________________
>> >> Users mailing list -- [email protected]
>> >> To unsubscribe send an email to [email protected]
>> >> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >> oVirt Code of Conduct:
>> >> https://www.ovirt.org/community/about/community-guidelines/
>> >> List Archives:
>> >> https://lists.ovirt.org/archives/list/[email protected]/message/5U3K3IPYCFOLUFJ56FGJI3TYWT6NOLAZ/
>> >>
>>
>> Actually, best practice is not to use /dev/sdXXX names, as they can
>> change across reboots.
>>
>> In your case most probably the LUN is not fresh, so wipe it with
>> dd/blkdiscard so any remnants of an old FS signature are gone.
>>
>> Best Regards,
>> Strahil Nikolov
>>

Hi Shareef,

In general we should use persistent names like '/dev/disk/by-id/scsi-XYZ' or 
'/dev/disk/by-id/wwn-XYZ' if we want to be idempotent (able to rerun the 
ansible play/role multiple times, even after a reboot).
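
For instance, you can list the stable names on each host with
'ls -l /dev/disk/by-id/' and plug one into the deployment variables. A
minimal sketch, assuming the gluster.infra backend_setup role reads the
vgname/pvname pairs shown in the failing task above; the wwn value is a
placeholder, substitute what by-id actually shows for your /dev/sdb:

```yaml
# Hypothetical backend_setup variables using a persistent device name.
# The wwn-0x... value below is a placeholder, not a real identifier;
# read the real one from 'ls -l /dev/disk/by-id/' on each host.
gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb
    pvname: /dev/disk/by-id/wwn-0x5000000000000001
```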

For example, I build an array of all available disks, excluding the system 
disk (by filtering out disks that already have partitions), and then use 
those as PVs in a VG; everything else is easy after that.
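
That gathering step could be sketched like this (a sketch only: the
'empty_disks' variable name is mine, and it relies on the per-disk
'partitions' dictionary that ansible_facts.devices reports):

```yaml
# Sketch: build a list of sdX disks that carry no partitions,
# i.e. everything except the (partitioned) system disk.
- name: Collect partition-free disks as PV candidates
  set_fact:
    empty_disks: "{{ empty_disks | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts.devices | dict2items }}"
  when:
    - item.key is match('^sd')
    - item.value.partitions | length == 0
```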

If you need to separate the disks by size (multiple VGs), you can sort the 
array and then select which disk becomes a PV for a specific VG.
Another approach is to filter the disks by vendor or type and then create 
your VGs with ansible.
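
By way of illustration, vendor filtering could look like this (again a
sketch: 'ATA' and the 'vendor_disks' name are assumptions for the example;
check the vendor strings your own hosts report in their facts):

```yaml
# Sketch: keep only disks whose reported vendor matches, then turn
# the device names into /dev paths for use as PVs in a dedicated VG.
- name: Select disks from one vendor for a dedicated VG
  set_fact:
    vendor_disks: "{{ ansible_facts.devices | dict2items
                      | selectattr('value.vendor', 'defined')
                      | selectattr('value.vendor', 'search', 'ATA')
                      | map(attribute='key')
                      | map('regex_replace', '^', '/dev/')
                      | list }}"
```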

Anyway, for an initial deployment, /dev/sdXYZ is good enough.


Best Regards,
Strahil Nikolov
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/TJVX2JN74W2UBAE2TRPQ7T2CP2AUXSBV/
