On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov <hunter86...@yahoo.com>
wrote:

> Hey Sahina,
>
> it seems that almost all of my devices are locked - just like Fred's.
> What exactly does that mean? I don't have any issues with my bricks/storage
> domains.
>


If a device shows up as locked, it means the disk cannot be used to
create a brick. This happens when the disk either already has a filesystem or
is otherwise in use.
But if the device is a clean device and it still shows up as locked, this
could be a bug in how python-blivet/vdsm reads the device.

The check is implemented as:

def _canCreateBrick(device):
    # A brick can only be created on a clean device: no child devices,
    # no existing format/filesystem, nothing mounted on it, and not a
    # cdrom or an LVM VG/LV/thin volume.
    if not device or device.kids > 0 or device.format.type or \
       hasattr(device.format, 'mountpoint') or \
       device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
        return False
    return True
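In particular, a disk that multipath has claimed gets a format type of
"multipath_member" (as in Adrian's report below), so device.format.type is
truthy, the function returns False, and the UI shows the device as locked.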


> Best Regards,
> Strahil Nikolov
>
> On Monday, May 20, 2019 at 14:56:11 GMT+3, Sahina Bose <
> sab...@redhat.com> wrote:
>
>
> To scale existing volumes you need to add bricks and run a rebalance on
> the gluster volume so that data is correctly redistributed, as Alex
> mentioned.
> We do support expanding existing volumes, since the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.
>
> As to the procedure to expand volumes (a rough CLI sketch follows below):
> 1. Create bricks from the UI - select the host -> Storage Devices -> select
> the storage device and click "Create Brick".
> If the device is shown as locked, make sure there's no signature on the
> device. If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath.
> (If you still see the device as locked after you do this, please report back.)
> 2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3
> bricks created in the previous step.
> 3. Run a rebalance on the volume: Volume -> Rebalance.
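> For reference, a minimal command-line sketch of the same flow. The volume
> name "data", the hostnames host4/host5/host6 and the device /dev/sdb are
> only examples - adjust them to your setup:
>
> # 1. On each new host, make sure the device carries no stale signature
> #    (this wipes the device - only run it on a disk you intend to reuse!)
> wipefs -a /dev/sdb
>
> # 2. Add one brick per new host to the existing replica-3 volume
> gluster volume add-brick data replica 3 \
>     host4:/gluster_bricks/data/data \
>     host5:/gluster_bricks/data/data \
>     host6:/gluster_bricks/data/data
>
> # 3. Rebalance so existing data is redistributed onto the new bricks
> gluster volume rebalance data start
> gluster volume rebalance data status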
>
>
> On Thu, May 16, 2019 at 2:48 PM Fred Rolland <froll...@redhat.com> wrote:
>
> Sahina,
> Can someone from your team review the steps done by Adrian?
> Thanks,
> Freddy
>
> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero <adrianquint...@gmail.com>
> wrote:
>
> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
> re-attach them to clear any possible issues and try out the suggestions
> provided.
>
> thank you!
>
> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov <hunter86...@yahoo.com>
> wrote:
>
> I have the same locks, even though I have blacklisted all local disks:
>
> # VDSM PRIVATE
> blacklist {
>         devnode "*"
>         wwid Crucial_CT256MX100SSD1_14390D52DCF5
>         wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>         wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>         wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
> }
>
> If you have reconfigured multipath, do not forget to rebuild the initramfs
> (dracut -f). It's a Linux issue, not an oVirt one.
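> Roughly, after editing /etc/multipath.conf the sequence would be something
> like this (exact commands may vary slightly between distributions):
>
> # apply the new blacklist and flush the now-unused maps for the local disks
> systemctl restart multipathd
> multipath -F
>
> # rebuild the initramfs so the blacklist is also applied at early boot
> dracut -f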
>
> In your case you had a stack something like this (top to bottom):
>                /dev/VG/LV
>           /dev/disk/by-id/pvuuid
>      /dev/mapper/multipath-uuid
> /dev/sdb
>
> Linux will not allow you to work with /dev/sdb directly while multipath is
> holding the block device.
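> You can see that stacking for yourself with something like:
>
> # show what is layered on top of the raw disk, and the multipath maps
> lsblk /dev/sdb
> multipath -ll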
>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, April 25, 2019 at 8:30:16 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Under Compute -> Hosts, select the host that has the locks on /dev/sdb,
> /dev/sdc, etc., then select Storage Devices; that is where you see a
> small column with a lock icon shown for each row.
>
>
> However, as a workaround, on the newly added hosts (3 total) I had to
> manually modify /etc/multipath.conf and add the following at the end, as
> this is what I noticed on the original 3-node setup.
>
> -------------------------------------------------------------
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # BEGIN Added by gluster_hci role
>
> blacklist {
>         devnode "*"
> }
> # END Added by gluster_hci role
> ----------------------------------------------------------
> After this I restarted multipath, the lock went away, and I was able to
> configure the new bricks through the UI. However, my concern is what will
> happen if I reboot the server - will the disks be read the same way by the OS?
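> (One way to check whether the running initramfs already carries the edited
> multipath.conf - and hence whether the blacklist will survive a reboot - is
> something along these lines; lsinitrd ships with dracut:)
>
> lsinitrd -f /etc/multipath.conf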
>
> I am also now able to expand gluster with a new replica 3 volume if
> needed, using http://host4.mydomain.com:9090.
>
>
> thanks again
>
> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov <hunter86...@yahoo.com>
> wrote:
>
> In which menu do you see it this way?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, April 24, 2019 at 8:55:22 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Strahil,
> this is the issue I am seeing now
>
> [image: image.png]
>
> This is through the UI, when I try to create a new brick.
>
> So my concern is: if I modify the filters on the OS, what impact will that
> have after the server reboots?
>
> thanks,
>
>
>
> On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86...@yahoo.com> wrote:
>
> I have edited my multipath.conf to exclude local disks, but you need to
> add '# VDSM PRIVATE' as per the comments in the header of the file, so that
> VDSM does not overwrite your changes.
> Alternatively, use the /dev/mapper/multipath-device notation, as you would
> on any Linux system.
>
> Best Regards,
> Strahil Nikolov
>
> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now. While trying to follow the
> instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and indicating "multipath_member", hence not letting me create
> new bricks. And in the logs I see:
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, and what would the
> impact be?
> >
> > thanks again.
>
>
>
> --
> Adrian Quintero
>
>
>
> --
> Adrian Quintero
>
>
>
> --
> Adrian Quintero
>
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/253X4O6IYNNNAKWSYXCYETP5MC4S5IDU/
