[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sachidananda URS
On Tue, May 21, 2019 at 9:00 PM Adrian Quintero 
wrote:

> Sac,
>
> 6.-started the hyperconverged setup wizard and added*
> "gluster_features_force_varlogsizecheck: false"* to the "vars:" section
> on the  Generated Ansible inventory :
> */etc/ansible/hc_wizard_inventory.yml* file as it was complaining about
> /var/log messages LV.
>

In the upcoming release I plan to remove this check, since we will go ahead
with logrotate.


>
> *EUREKA: *After doing the above I was able to get past the filter issues,
> however I am still concerned if during a reboot the disks might come up
> differently. For example /dev/sdb might come up as /dev/sdx...
>
>
Even this shouldn't be a problem going forward, since we will use UUID to
mount the devices.
And the device name change shouldn't matter.
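For illustration, mounting by UUID instead of by device name looks roughly like this -
resolve the brick LV's UUID and reference it in fstab (the LV name, mount point and
options below are only placeholders):

# blkid -s UUID -o value /dev/gluster_vg_sdb/gluster_lv_engine    # LV name is an example
# echo "UUID=<uuid-from-blkid>  /gluster_bricks/engine  xfs  inode64,noatime  0 0" >> /etc/fstab
# mount -a

Such an entry keeps working even if /dev/sdb enumerates as /dev/sdx after a reboot.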

Thanks for your feedback, I will see how we can improve the install
experience.

-sac
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EKAHNGN74NFAUNMZ7RITPNEXVDATW2Y3/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Adrian Quintero
Awesome, thanks! And yes, I agree, this is a great project!

I will now continue to scale the cluster from 3 to 6 nodes including the
storage... I will let y'all know how it goes and post the steps, as I have
only seen examples of 3 hosts but no steps to go from 3 to 6.

regards,

AQ


On Tue, May 21, 2019 at 1:06 PM Strahil  wrote:

> > EUREKA: After doing the above I was able to get past the filter issues,
> however I am still concerned if during a reboot the disks might come up
> differently. For example /dev/sdb might come up as /dev/sdx...
>
> Even if they change, you don't have to worry, as each PV contains
> LVM metadata (including the VG configuration) which is read by LVM on boot
> (actually everything that is not excluded by the LVM filter is scanned like
> that).
> Once all PVs are available  the VG is activated and then the LVs are also
> activated.
>
> > I am trying to make sure this setup is always the same as we want to
> move this to production, however it seems I still don't have the full hang of
> it and the RHV 4.1 course is way too old :)
> >
> > Thanks again for helping out with this.
>
> It's a plain KVM with some management layer.
>
> Just a hint:
> Get your HostedEngine's configuration xml from the vdsm log (for
> emergencies) and another copy with reverse boot order  where DVD is booting
> first.
> Also get the xml for the ovirtmgmt network.
>
> It helped me a lot of times  when I wanted to recover my HostedEngine.
> I'm too lazy to rebuild it.
>
> Hint2:
> Vdsm logs contain each VM's configuration xml when the VMs are powered on.
>
> Hint3:
> Get regular backups of the HostedEngine and patch it from time to time.
> I would go in prod as follows:
> Let's say you are on 4.2.8
> Next step would be to go to 4.3.latest and then to 4.4.latest .
>
> A test cluster (even in VMs) is also beneficial.
>
> Despite the hiccups I have stumbled upon, I think that the project is
> great.
>
> Best Regards,
> Strahil Nikolov
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NTN2XEMCD2MNU2J5HWZK4HL36LMVUQ6Q/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Strahil
> EUREKA: After doing the above I was able to get past the filter issues, 
> however I am still concerned if during a reboot the disks might come up 
> differently. For example /dev/sdb might come up as /dev/sdx...

Even if they change, you don't have to worry, as each PV contains LVM
metadata (including the VG configuration) which is read by LVM on boot (actually
everything that is not excluded by the LVM filter is scanned like that).
Once all PVs are available  the VG is activated and then the LVs are also 
activated.
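You can check this from the shell - the PV is identified by the UUID stored in its
on-disk metadata, not by whatever /dev/sdX name it gets (the VG name is the one from
this thread; the commands are only a way to verify):

# pvs -o pv_name,pv_uuid,vg_name    # PVs and their UUIDs, as read from the on-disk metadata
# vgs gluster_vg_sdb                # the VG is found no matter which /dev/sdX the PV shows up as
# vgchange -ay gluster_vg_sdb       # what boot-time autoactivation effectively does
# lvs -o lv_name,lv_path,vg_name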


> I am trying to make sure this setup is always the same as we want to move
> this to production, however it seems I still don't have the full hang of it and
> the RHV 4.1 course is way too old :)
>
> Thanks again for helping out with this.


It's a plain KVM with some management layer.

Just a hint:
Get your HostedEngine's configuration xml from the vdsm log (for emergencies)
and another copy with reversed boot order, where the DVD boots first.
Also get the xml for the ovirtmgmt network.

It helped me a lot of times  when I wanted to recover my HostedEngine.
I'm too lazy to rebuild it.
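One way to grab those copies while everything is healthy (just a sketch - the libvirt
network name and the target paths may differ on your setup):

# virsh -r dumpxml HostedEngine > /root/HostedEngine.xml          # read-only libvirt connection
# virsh -r net-dumpxml vdsm-ovirtmgmt > /root/ovirtmgmt-net.xml   # network name may differ
# grep -l '<name>HostedEngine</name>' /var/log/vdsm/vdsm.log*     # the same XML is also in the vdsm logs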

Hint2:
Vdsm logs contain each VM's configuration xml when the VMs are powered on.

Hint3:
Get regular backups of the HostedEngine and patch it from time to time.
I would go in prod as follows:
Let's say you are on 4.2.8
Next step would be to go to 4.3.latest and then to 4.4.latest .
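For the backups, the engine ships its own tool, so a periodic job on the HostedEngine VM
is usually enough (file paths are just examples; restore uses the same tool with
--mode=restore):

# engine-backup --mode=backup --scope=all --file=/backup/engine-$(date +%F).tar.gz --log=/backup/engine-backup-$(date +%F).log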

A test cluster (even in VMs) is also beneficial.

Despite the hiccups I have stumbled upon, I think that the project is great.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MR4A7I4OOARCEEFUX4LKKW6CT3UGXNUM/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Adrian Quintero
Sac,
*To answer some of your questions:*
*fdisk -l:*
[root@host1 ~]# fdisk -l /dev/sdb
Disk /dev/sde: 480.1 GB, 480070426624 bytes, 937637552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

[root@host1 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

[root@host1 ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes



*1) I did wipefs on all /dev/sdb,c,d,e*
*2) I did not zero out the disks as I had done it thru the controller.*

*3) cat /proc/partitions:*
[root@host1 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0  586029016 sda
   8        1    1048576 sda1
   8        2  584978432 sda2
   8       16 2930233816 sdb
   8       32 2930233816 sdc
   8       48 2930233816 sdd
   8       64  468818776 sde



*4) grep filter /etc/lvm/lvm.conf (I did not modify the  lvm.conf file)*
[root@host1 ~]# grep "filter =" /etc/lvm/lvm.conf
# filter = [ "a|.*/|" ]
# filter = [ "r|/dev/cdrom|" ]
# filter = [ "a|loop|", "r|.*|" ]
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
# filter = [ "a|.*/|" ]
# global_filter = [ "a|.*/|" ]
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
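Since everything above is commented out, the "excluded by a filter" message most likely
comes from LVM treating the disk as a multipath component, not from an explicit filter
line. Two quick diagnostics (device name is an example):

# lvmconfig devices/filter devices/global_filter   # the filter values LVM is actually using
# pvcreate -vvv /dev/sdb 2>&1 | grep -i filter     # the verbose output names the check that rejected the device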




*What I did to get it working:*

I re-installed my first 3 hosts using
"ovirt-node-ng-installer-4.3.3-2019041712.el7.iso"and made sure I zeroed
the disks from within the controller, then I performed the following steps:

1.- modified the blacklist section on /etc/multipath.conf to this (a command sketch for steps 1-5 follows this list):
blacklist {
 #   protocol "(scsi:adt|scsi:sbp)"
devnode "*"
}
2.-Made sure the second line of /etc/multipath.conf has:
# VDSM PRIVATE
3.-Increased /var/log to 15GB
4.-Rebuilt initramfs, rebooted
5.-wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde
6.-started the hyperconverged setup wizard and added
*"gluster_features_force_varlogsizecheck: false"* to the "vars:" section of
the generated Ansible inventory (*/etc/ansible/hc_wizard_inventory.yml*)
file, as it was complaining about the /var/log messages LV.
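Condensed, steps 1-5 come down to roughly the following on each host (the /var/log LV
path is only an example - it depends on the node's storage layout):

# sed -n '1,3p' /etc/multipath.conf      # confirm "# VDSM PRIVATE" is the second line (step 2)
# lvextend -r -L 15G /dev/onn/var_log    # step 3; LV path is an example, -r grows the filesystem too
# dracut -f                              # step 4; rebuild the initramfs so the blacklist applies at boot
# systemctl restart multipathd
# wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde    # step 5
# reboot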

*EUREKA: *After doing the above I was able to get past the filter issues,
however I am still concerned if during a reboot the disks might come up
differently. For example /dev/sdb might come up as /dev/sdx...


I am trying to make sure this setup is always the same as we want to move
this to production, however it seems I still don't have the full hang of it
and the RHV 4.1 course is way too old :)

Thanks again for helping out with this.



-AQ




On Tue, May 21, 2019 at 3:29 AM Sachidananda URS  wrote:

>
>
> On Tue, May 21, 2019 at 12:16 PM Sahina Bose  wrote:
>
>>
>>
>> On Mon, May 20, 2019 at 9:55 PM Adrian Quintero 
>> wrote:
>>
>>> Sahina,
>>> Yesterday I started with a fresh install, I completely wiped clean all
>>> the disks, recreated the arrays from within my controller of our DL380 Gen
>>> 9's.
>>>
>>> OS: RAID 1 (2x600GB HDDs): /dev/sda// Using ovirt node 4.3.3.1 iso.
>>> engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
>>> DATA1: JBOD (1x3TB HDD): /dev/sdc
>>> DATA2: JBOD (1x3TB HDD): /dev/sdd
>>> Caching disk: JOBD (1x440GB SDD): /dev/sde
>>>
>>> *After the OS install on the first 3 servers and setting up ssh keys,  I
>>> started the Hyperconverged deploy process:*
>>> 1.-Logged int to the first server http://host1.example.com:9090
>>> 2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
>>> 3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
>>> Review)
>>> *Hosts/FQDNs:*
>>> host1.example.com
>>> host2.example.com
>>> host3.example.com
>>> *Packages:*
>>> *Volumes:*
>>> engine:replicate:/gluster_bricks/engine/engine
>>> vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
>>> data1:replicate:/gluster_bricks/data1/data1
>>> data2:replicate:/gluster_bricks/data2/data2
>>> *Bricks:*
>>> engine:/dev/sdb:100GB:/gluster_bricks/engine
>>> vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
>>> data1:/dev/sdc:2700GB:/gluster_bricks/data1
>>> data2:/dev/sdd:2700GB:/gluster_bricks/data2
>>> LV Cache:
>>> /dev/sde:400GB:writethrough
>>> 4.-After I hit deploy on the last step of the "Wizard" that is when I
>>> get the disk filter error.
>>> TASK [gluster.infra/roles/backend_setup : Create volume groups]
>>> 
>>> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb',
>>> u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb
>>> excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname":
>>> "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed",
>>> "rc": 5}
>>> failed: [vmm12.virt.iad3p] 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Strahil
Thanks for the clarification.
It seems that my nvme (used by vdo) is not locked.
I will check again before opening a bug.

Best Regards,
Strahil Nikolov

On May 21, 2019 09:52, Sahina Bose  wrote:
>
>
>
> On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov  wrote:
>>
>> Hey Sahina,
>>
>> it seems that almost all of my devices are locked - just like Fred's.
>> What exactly does it mean - I don't have any issues with my bricks/storage 
>> domains.
>
>
>
> If the devices show up as locked - it means the disk cannot be used to create 
> a brick. This is when the disk either already has a filesystem or is in use.
> But if the device is a clean device and it still shows up as locked - this 
> could be a bug with how python-blivet/ vdsm reads this
>
> The code to check is implemented as
> _canCreateBrick(device):
>     if not device or device.kids > 0 or device.format.type or \
>        hasattr(device.format, 'mountpoint') or \
>        device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
>         return False
>     return True
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Monday, May 20, 2019, 14:56:11 GMT+3, Sahina Bose
>>  wrote:
>>
>>
>> To scale existing volumes - you need to add bricks and run rebalance on the 
>> gluster volume so that data is correctly redistributed as Alex mentioned.
>> We do support expanding existing volumes as the bug 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
>>
>> As to procedure to expand volumes:
>> 1. Create bricks from UI - select Host -> Storage Device -> Storage device. 
>> Click on "Create Brick"
>> If the device is shown as locked, make sure there's no signature on device.  
>> If multipath entries have been created for local devices, you can blacklist 
>> those devices in multipath.conf and restart multipath.
>> (If you see device as locked even after you do this -please report back).
>> 2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3 
>> bricks created in previous step
>> 3. Run Rebalance on the volume. Volume -> Rebalance.
>>
>>
>> On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:
>>>
>>> Sahina,
>>> Can someone from your team review the steps done by Adrian?
>>> Thanks,
>>> Freddy
>>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHPXUDT3ZSUYNIH54QOTNMUEGYZGSCTM/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sachidananda URS
On Tue, May 21, 2019 at 12:16 PM Sahina Bose  wrote:

>
>
> On Mon, May 20, 2019 at 9:55 PM Adrian Quintero 
> wrote:
>
>> Sahina,
>> Yesterday I started with a fresh install, I completely wiped clean all
>> the disks, recreated the arrays from within my controller of our DL380 Gen
>> 9's.
>>
>> OS: RAID 1 (2x600GB HDDs): /dev/sda// Using ovirt node 4.3.3.1 iso.
>> engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
>> DATA1: JBOD (1x3TB HDD): /dev/sdc
>> DATA2: JBOD (1x3TB HDD): /dev/sdd
>> Caching disk: JOBD (1x440GB SDD): /dev/sde
>>
>> *After the OS install on the first 3 servers and setting up ssh keys,  I
>> started the Hyperconverged deploy process:*
>> 1.-Logged int to the first server http://host1.example.com:9090
>> 2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
>> 3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
>> Review)
>> *Hosts/FQDNs:*
>> host1.example.com
>> host2.example.com
>> host3.example.com
>> *Packages:*
>> *Volumes:*
>> engine:replicate:/gluster_bricks/engine/engine
>> vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
>> data1:replicate:/gluster_bricks/data1/data1
>> data2:replicate:/gluster_bricks/data2/data2
>> *Bricks:*
>> engine:/dev/sdb:100GB:/gluster_bricks/engine
>> vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
>> data1:/dev/sdc:2700GB:/gluster_bricks/data1
>> data2:/dev/sdd:2700GB:/gluster_bricks/data2
>> LV Cache:
>> /dev/sde:400GB:writethrough
>> 4.-After I hit deploy on the last step of the "Wizard" that is when I get
>> the disk filter error.
>> TASK [gluster.infra/roles/backend_setup : Create volume groups]
>> 
>> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb',
>> u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname":
>> "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed",
>> "rc": 5}
>> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb',
>> u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname":
>> "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed",
>> "rc": 5}
>> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb',
>> u'pvname': u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname":
>> "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed",
>> "rc": 5}
>> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc',
>> u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname":
>> "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed",
>> "rc": 5}
>> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc',
>> u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname":
>> "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed",
>> "rc": 5}
>> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc',
>> u'pvname': u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname":
>> "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed",
>> "rc": 5}
>> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd',
>> u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname":
>> "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed",
>> "rc": 5}
>> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd',
>> u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname":
>> "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed",
>> "rc": 5}
>> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd',
>> u'pvname': u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd
>> excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname":
>> "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed",
>> "rc": 5}
>>
>> Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml)
>> and the "Deployment Failed" file
>>
>>
>>
>>
>> Also wondering if I hit this bug?
>> https://bugzilla.redhat.com/show_bug.cgi?id=1635614
>>
>>
> +Sachidananda URS  +Gobinda Das  to
> review the inventory file and failures
>

Hello Adrian,

Can you please provide the output of:
# fdisk -l /dev/sdd
# fdisk -l /dev/sdb

I think there could be a stale signature on the disk causing this error.
Some of the possible solutions to try:
1)
# wipefs -a /dev/sdb
# wipefs -a /dev/sdd

2)
You can zero out first few sectors of disk by:

# dd if=/dev/zero of=/dev/sdb bs=1M count=10

3)
Check 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sahina Bose
On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov 
wrote:

> Hey Sahina,
>
> it seems that almost all of my devices are locked - just like Fred's.
> What exactly does it mean - I don't have any issues with my bricks/storage
> domains.
>


If the devices show up as locked - it means the disk cannot be used to
create a brick. This is when the disk either already has a filesystem or is
in use.
But if the device is a clean device and it still shows up as locked - this
could be a bug with how python-blivet/vdsm reads this.

The code to check is implemented as
def _canCreateBrick(device):
    if not device or device.kids > 0 or device.format.type or \
            hasattr(device.format, 'mountpoint') or \
            device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
        return False
    return True
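In practice that means a disk is only offered for bricks when it has no partitions or
holders and no filesystem or other signature. A quick way to check a device from the
shell before retrying (device name is an example):

# lsblk /dev/sdb      # nothing should be listed under the disk itself
# wipefs /dev/sdb     # with no options this only lists existing signatures, it does not erase
# blkid -p /dev/sdb   # low-level probe; no output means no recognizable signature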


> Best Regards,
> Strahil Nikolov
>
> On Monday, May 20, 2019, 14:56:11 GMT+3, Sahina Bose <
> sab...@redhat.com> wrote:
>
>
> To scale existing volumes - you need to add bricks and run rebalance on
> the gluster volume so that data is correctly redistributed as Alex
> mentioned.
> We do support expanding existing volumes as the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
>
> As to procedure to expand volumes:
> 1. Create bricks from UI - select Host -> Storage Device -> Storage
> device. Click on "Create Brick"
> If the device is shown as locked, make sure there's no signature on
> device.  If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath.
> (If you see device as locked even after you do this -please report back).
> 2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3
> bricks created in previous step
> 3. Run Rebalance on the volume. Volume -> Rebalance.
>
>
> On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:
>
> Sahina,
> Can someone from your team review the steps done by Adrian?
> Thanks,
> Freddy
>
> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero 
> wrote:
>
> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
> re-attach them to clear any possible issues and try out the suggestions
> provided.
>
> thank you!
>
> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
> wrote:
>
> I have the same locks , despite I have blacklisted all local disks:
>
> # VDSM PRIVATE
> blacklist {
> devnode "*"
> wwid Crucial_CT256MX100SSD1_14390D52DCF5
> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
> wwid
> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
> }
>
> If you have multipath reconfigured, do not forget to rebuild the initramfs
> (dracut -f). It's a linux issue , and not oVirt one.
>
> In your case you had something like this:
>/dev/VG/LV
>   /dev/disk/by-id/pvuuid
>  /dev/mapper/multipath-uuid
> /dev/sdb
>
> Linux will not allow you to work with /dev/sdb , when multipath is locking
> the block device.
>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> under Compute, hosts, select the host that has the locks on /dev/sdb,
> /dev/sdc, etc.., select storage devices and in here is where you see a
> small column with a bunch of lock images showing for each row.
>
>
> However as a work around, on the newly added hosts (3 total), I had to
> manually modify /etc/multipath.conf and add the following at the end as
> this is what I noticed from the original 3 node setup.
>
> -
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # BEGIN Added by gluster_hci role
>
> blacklist {
> devnode "*"
> }
> # END Added by gluster_hci role
> --
> After this I restarted multipath and the lock went away and was able to
> configure the new bricks thru the UI, however my concern is what will
> happen if I reboot the server will the disks be read the same way by the OS?
>
> Also now able to expand the gluster with a new replicate 3 volume if
> needed using http://host4.mydomain.com:9090.
>
>
> thanks again
>
> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
> wrote:
>
> In which menu do you see it this way ?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Strahil,
> this is the issue I am seeing now
>
> [image: image.png]
>
> This is thru the UI when I try to create a new brick.
>
> So my concern is if I modify the filters on the OS what impact will that
> have after server reboots?
>
> thanks,
>
>
>
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>
> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sahina Bose
On Mon, May 20, 2019 at 9:55 PM Adrian Quintero 
wrote:

> Sahina,
> Yesterday I started with a fresh install, I completely wiped clean all the
> disks, recreated the arrays from within my controller of our DL380 Gen 9's.
>
> OS: RAID 1 (2x600GB HDDs): /dev/sda// Using ovirt node 4.3.3.1 iso.
> engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
> DATA1: JBOD (1x3TB HDD): /dev/sdc
> DATA2: JBOD (1x3TB HDD): /dev/sdd
> Caching disk: JOBD (1x440GB SDD): /dev/sde
>
> *After the OS install on the first 3 servers and setting up ssh keys,  I
> started the Hyperconverged deploy process:*
> 1.-Logged int to the first server http://host1.example.com:9090
> 2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
> 3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
> Review)
> *Hosts/FQDNs:*
> host1.example.com
> host2.example.com
> host3.example.com
> *Packages:*
> *Volumes:*
> engine:replicate:/gluster_bricks/engine/engine
> vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
> data1:replicate:/gluster_bricks/data1/data1
> data2:replicate:/gluster_bricks/data2/data2
> *Bricks:*
> engine:/dev/sdb:100GB:/gluster_bricks/engine
> vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
> data1:/dev/sdc:2700GB:/gluster_bricks/data1
> data2:/dev/sdd:2700GB:/gluster_bricks/data2
> LV Cache:
> /dev/sde:400GB:writethrough
> 4.-After I hit deploy on the last step of the "Wizard" that is when I get
> the disk filter error.
> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> 
> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
> u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
> filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
> u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
> filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
> u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
> filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
> "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
> u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
> filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
> "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
> u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
> filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
> "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
> u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
> filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
> "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
> u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
> filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
> "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
> failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
> u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
> filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
> "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
> failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
> u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
> filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
> "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
>
> Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml)
> and the "Deployment Failed" file
>
>
>
>
> Also wondering if I hit this bug?
> https://bugzilla.redhat.com/show_bug.cgi?id=1635614
>
>
+Sachidananda URS  +Gobinda Das  to
review the inventory file and failures


>
> Thanks for looking into this.
>
> *Adrian Quintero*
> *adrianquint...@gmail.com  |
> adrian.quint...@rackspace.com *
>
>
> On Mon, May 20, 2019 at 7:56 AM Sahina Bose  wrote:
>
>> To scale existing volumes - you need to add bricks and run rebalance on
>> the gluster volume so that data is correctly redistributed as Alex
>> mentioned.
>> We do support expanding existing volumes as the bug
>> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
>>
>> As to procedure to expand volumes:
>> 1. Create bricks from UI - 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
Hi Adrian,
are you using local storage?
If yes, set a blacklist in multipath.conf (don't forget the "#VDSM PRIVATE"
flag), then rebuild the initramfs and reboot. When multipath locks a path, no
direct access is possible - thus your pvcreate is expected to fail. Also,
multipath is not needed for local storage ;)
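If you don't want to wait for a reboot, the maps multipath already created for the local
disks can also be cleared by hand once the blacklist is in place (a sketch; device names
are examples):

# multipath -ll                  # see which maps currently claim the local disks
# systemctl restart multipathd   # re-read multipath.conf with the new blacklist
# multipath -F                   # flush all unused multipath maps
# pvcreate /dev/sdb              # should now succeed, nothing is holding the device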

Best Regards,
Strahil Nikolov

On Monday, May 20, 2019, 19:31:04 GMT+3, Adrian Quintero
 wrote:
 
Sahina,
Yesterday I started with a fresh install, I completely wiped clean all
the disks, recreated the arrays from within my controller of our DL380 Gen 9's.
OS: RAID 1 (2x600GB HDDs): /dev/sda    // Using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

After the OS install on the first 3 servers and setting up ssh keys, I started
the Hyperconverged deploy process:
1.-Logged in to the first server http://host1.example.com:9090
2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks, Review)
Hosts/FQDNs:
host1.example.com
host2.example.com
host3.example.com
Packages:
Volumes:
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
Bricks:
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit deploy on the last step of the "Wizard" that is when I get the disk filter error.
TASK [gluster.infra/roles/backend_setup : Create volume groups] 
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml) and 
the "Deployment Failed" file


 
Also wondering if I hit this bug? 
https://bugzilla.redhat.com/show_bug.cgi?id=1635614


Thanks for looking into this.
Adrian Quintero
adrianquint...@gmail.com | adrian.quint...@rackspace.com


On Mon, May 20, 2019 at 7:56 AM Sahina Bose  wrote:

To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned. We do
support expanding existing volumes as the bug

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What
exactly does it mean - I don't have any issues with my bricks/storage domains.

Best Regards,
Strahil Nikolov

On Monday, May 20, 2019, 14:56:11 GMT+3, Sahina Bose
 wrote:
 
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned. We do
support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed

As to procedure to expand volumes:
1. Create bricks from UI - select Host -> Storage Device -> Storage device. Click on "Create Brick"
If the device is shown as locked, make sure there's no signature on device.  If
multipath entries have been created for local devices, you can blacklist those
devices in multipath.conf and restart multipath.
(If you see device as locked even after you do this -please report back).
2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks
created in previous step
3. Run Rebalance on the volume. Volume -> Rebalance.

On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:

Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy

On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero  
wrote:

Ok, I will remove the extra 3 hosts, rebuild them from scratch and re-attach 
them to clear any possible issues and try out the suggestions provided.
thank you!

On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov  wrote:

 I have the same locks , despite I have blacklisted all local disks:
# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid 
nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
}

If you have multipath reconfigured, do not forget to rebuild the initramfs 
(dracut -f). It's a linux issue , and not oVirt one.
In your case you had something like this:
   /dev/VG/LV
  /dev/disk/by-id/pvuuid
 /dev/mapper/multipath-uuid
/dev/sdb

Linux will not allow you to work with /dev/sdb , when multipath is locking the 
block device.
Best Regards,
Strahil Nikolov

On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero
 wrote:
 
 under Compute, hosts, select the host that has the locks on /dev/sdb, 
/dev/sdc, etc.., select storage devices and in here is where you see a small 
column with a bunch of lock images showing for each row.

However as a work around, on the newly added hosts (3 total), I had to manually 
modify /etc/multipath.conf and add the following at the end as this is what I 
noticed from the original 3 node setup.

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
    devnode "*"
}
# END Added by gluster_hci role
--
After this I restarted multipath and the lock went away and was able to configure the new
bricks thru the UI, however my concern is what will happen if I reboot the
server will the disks be read the same way by the OS?
Also now able to expand the gluster with a new replicate 3 volume if needed 
using http://host4.mydomain.com:9090.

thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov  wrote:

 In which menu do you see it this way ?
Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero
 wrote:
 
Strahil,
this is the issue I am seeing now


This is thru the UI when I try to create a new brick.
So my concern is if I modify the filters on the OS what impact will that have 
after server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks , but you need to set 
'#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with 
any linux.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and indicating "multipath_member" hence not letting me create new
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Adrian Quintero
Sahina,
Yesterday I started with a fresh install, I completely wiped clean all the
disks, recreated the arrays from within my controller of our DL380 Gen 9's.

OS: RAID 1 (2x600GB HDDs): /dev/sda    // Using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

*After the OS install on the first 3 servers and setting up ssh keys,  I
started the Hyperconverged deploy process:*
1.-Logged in to the first server http://host1.example.com:9090
2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
Review)
*Hosts/FQDNs:*
host1.example.com
host2.example.com
host3.example.com
*Packages:*
*Volumes:*
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
*Bricks:*
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit deploy on the last step of the "Wizard" that is when I get
the disk filter error.
TASK [gluster.infra/roles/backend_setup : Create volume groups]

failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}

Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml)
and the "Deployment Failed" file




Also wondering if I hit this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1635614



Thanks for looking into this.

*Adrian Quintero*
*adrianquint...@gmail.com  |
adrian.quint...@rackspace.com *


On Mon, May 20, 2019 at 7:56 AM Sahina Bose  wrote:

> To scale existing volumes - you need to add bricks and run rebalance on
> the gluster volume so that data is correctly redistributed as Alex
> mentioned.
> We do support expanding existing volumes as the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
>
> As to procedure to expand volumes:
> 1. Create bricks from UI - select Host -> Storage Device -> Storage
> device. Click on "Create Brick"
> If the device is shown as locked, make sure there's no signature on
> device.  If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath.
> (If you see device as 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Sahina Bose
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed

As to procedure to expand volumes:
1. Create bricks from UI - select Host -> Storage Device -> Storage device.
Click on "Create Brick"
If the device is shown as locked, make sure there's no signature on
device.  If multipath entries have been created for local devices, you can
blacklist those devices in multipath.conf and restart multipath.
(If you see device as locked even after you do this -please report back).
2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3
bricks created in previous step
3. Run Rebalance on the volume. Volume -> Rebalance.
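The same expansion from the gluster CLI, for reference (volume name and hosts are only
examples - keep the number of added bricks a multiple of the replica count):

# gluster volume add-brick data1 replica 3 host4:/gluster_bricks/data1/data1 host5:/gluster_bricks/data1/data1 host6:/gluster_bricks/data1/data1    # host4-6 = the new nodes (placeholders)
# gluster volume rebalance data1 start
# gluster volume rebalance data1 status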


On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:

> Sahina,
> Can someone from your team review the steps done by Adrian?
> Thanks,
> Freddy
>
> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero 
> wrote:
>
>> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
>> re-attach them to clear any possible issues and try out the suggestions
>> provided.
>>
>> thank you!
>>
>> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
>> wrote:
>>
>>> I have the same locks , despite I have blacklisted all local disks:
>>>
>>> # VDSM PRIVATE
>>> blacklist {
>>> devnode "*"
>>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>>> wwid
>>> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
>>> }
>>>
>>> If you have multipath reconfigured, do not forget to rebuild the
>>> initramfs (dracut -f). It's a linux issue , and not oVirt one.
>>>
>>> In your case you had something like this:
>>>/dev/VG/LV
>>>   /dev/disk/by-id/pvuuid
>>>  /dev/mapper/multipath-uuid
>>> /dev/sdb
>>>
>>> Linux will not allow you to work with /dev/sdb , when multipath is
>>> locking the block device.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
>>>
>>> under Compute, hosts, select the host that has the locks on /dev/sdb,
>>> /dev/sdc, etc.., select storage devices and in here is where you see a
>>> small column with a bunch of lock images showing for each row.
>>>
>>>
>>> However as a work around, on the newly added hosts (3 total), I had to
>>> manually modify /etc/multipath.conf and add the following at the end as
>>> this is what I noticed from the original 3 node setup.
>>>
>>> -
>>> # VDSM REVISION 1.3
>>> # VDSM PRIVATE
>>> # BEGIN Added by gluster_hci role
>>>
>>> blacklist {
>>> devnode "*"
>>> }
>>> # END Added by gluster_hci role
>>> --
>>> After this I restarted multipath and the lock went away and was able to
>>> configure the new bricks thru the UI, however my concern is what will
>>> happen if I reboot the server will the disks be read the same way by the OS?
>>>
>>> Also now able to expand the gluster with a new replicate 3 volume if
>>> needed using http://host4.mydomain.com:9090.
>>>
>>>
>>> thanks again
>>>
>>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
>>> wrote:
>>>
>>> In which menu do you see it this way ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero <
>>> adrianquint...@gmail.com> wrote:
>>>
>>>
>>> Strahil,
>>> this is the issue I am seeing now
>>>
>>> [image: image.png]
>>>
>>> This is thru the UI when I try to create a new brick.
>>>
>>> So my concern is if I modify the filters on the OS what impact will that
>>> have after server reboots?
>>>
>>> thanks,
>>>
>>>
>>>
>>> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>>>
>>> I have edited my multipath.conf to exclude local disks , but you need to
>>> set '#VDSM private' as per the comments in the header of the file.
>>> Otherwise, use the /dev/mapper/multipath-device notation - as you would
>>> do with any linux.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>>> >
>>> > Thanks Alex, that makes more sense now  while trying to follow the
>>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>>> are locked and indicating "multipath_member" hence not letting me create
>>> new bricks. And on the logs I see
>>> >
>>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
>>> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume
>>> '/dev/sdb' failed", "rc": 5}
>>> > Same thing for sdc, sdd
>>> >
>>> > Should I manually edit the filters inside the OS, what will be the
>>> impact?
>>> >
>>> > thanks again.
>>> > 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-16 Thread Fred Rolland
Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy

On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero 
wrote:

> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
> re-attach them to clear any possible issues and try out the suggestions
> provided.
>
> thank you!
>
> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
> wrote:
>
>> I have the same locks , despite I have blacklisted all local disks:
>>
>> # VDSM PRIVATE
>> blacklist {
>> devnode "*"
>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>> wwid
>> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
>> }
>>
>> If you have multipath reconfigured, do not forget to rebuild the
>> initramfs (dracut -f). It's a linux issue , and not oVirt one.
>>
>> In your case you had something like this:
>>/dev/VG/LV
>>   /dev/disk/by-id/pvuuid
>>  /dev/mapper/multipath-uuid
>> /dev/sdb
>>
>> Linux will not allow you to work with /dev/sdb , when multipath is
>> locking the block device.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>
>> under Compute, hosts, select the host that has the locks on /dev/sdb,
>> /dev/sdc, etc.., select storage devices and in here is where you see a
>> small column with a bunch of lock images showing for each row.
>>
>>
>> However as a work around, on the newly added hosts (3 total), I had to
>> manually modify /etc/multipath.conf and add the following at the end as
>> this is what I noticed from the original 3 node setup.
>>
>> -
>> # VDSM REVISION 1.3
>> # VDSM PRIVATE
>> # BEGIN Added by gluster_hci role
>>
>> blacklist {
>> devnode "*"
>> }
>> # END Added by gluster_hci role
>> --
>> After this I restarted multipath and the lock went away and was able to
>> configure the new bricks thru the UI, however my concern is what will
>> happen if I reboot the server will the disks be read the same way by the OS?
>>
>> Also now able to expand the gluster with a new replicate 3 volume if
>> needed using http://host4.mydomain.com:9090.
>>
>>
>> thanks again
>>
>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
>> wrote:
>>
>> In which menu do you see it this way ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero <
>> adrianquint...@gmail.com> wrote:
>>
>>
>> Strahil,
>> this is the issue I am seeing now
>>
>> [image: image.png]
>>
>> This is thru the UI when I try to create a new brick.
>>
>> So my concern is if I modify the filters on the OS what impact will that
>> have after server reboots?
>>
>> thanks,
>>
>>
>>
>> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>>
>> I have edited my multipath.conf to exclude local disks , but you need to
>> set '#VDSM private' as per the comments in the header of the file.
>> Otherwise, use the /dev/mapper/multipath-device notation - as you would
>> do with any linux.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>> >
>> > Thanks Alex, that makes more sense now  while trying to follow the
>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>> are locked and indicating "multipath_member" hence not letting me create
>> new bricks. And on the logs I see
>> >
>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
>> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume
>> '/dev/sdb' failed", "rc": 5}
>> > Same thing for sdc, sdd
>> >
>> > Should I manually edit the filters inside the OS, what will be the
>> impact?
>> >
>> > thanks again.
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>>
>>
>>
>> --
>> Adrian Quintero
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>>
>>
>>
>> --
>> Adrian Quintero
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> 

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
Ok, I will remove the extra 3 hosts, rebuild them from scratch and
re-attach them to clear any possible issues and try out the suggestions
provided.

thank you!

On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov 
wrote:

> I have the same locks , despite I have blacklisted all local disks:
>
> # VDSM PRIVATE
> blacklist {
> devnode "*"
> wwid Crucial_CT256MX100SSD1_14390D52DCF5
> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
> wwid
> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
> }
>
> If you have multipath reconfigured, do not forget to rebuild the initramfs
> (dracut -f). It's a linux issue , and not oVirt one.
>
> In your case you had something like this:
>/dev/VG/LV
>   /dev/disk/by-id/pvuuid
>  /dev/mapper/multipath-uuid
> /dev/sdb
>
> Linux will not allow you to work with /dev/sdb , when multipath is locking
> the block device.
>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, April 25, 2019, 8:30:16 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> under Compute, hosts, select the host that has the locks on /dev/sdb,
> /dev/sdc, etc.., select storage devices and in here is where you see a
> small column with a bunch of lock images showing for each row.
>
>
> However as a work around, on the newly added hosts (3 total), I had to
> manually modify /etc/multipath.conf and add the following at the end as
> this is what I noticed from the original 3 node setup.
>
> -
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # BEGIN Added by gluster_hci role
>
> blacklist {
> devnode "*"
> }
> # END Added by gluster_hci role
> --
> After this I restarted multipath and the lock went away and was able to
> configure the new bricks thru the UI, however my concern is what will
> happen if I reboot the server will the disks be read the same way by the OS?
>
> Also now able to expand the gluster with a new replicate 3 volume if
> needed using http://host4.mydomain.com:9090.
>
>
> thanks again
>
> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
> wrote:
>
> In which menu do you see it this way ?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, April 24, 2019, 8:55:22 GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Strahil,
> this is the issue I am seeing now
>
> [image: image.png]
>
> This is thru the UI when I try to create a new brick.
>
> So my concern is if I modify the filters on the OS what impact will that
> have after server reboots?
>
> thanks,
>
>
>
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>
> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do
> with any linux.
>
> Best Regards,
> Strahil Nikolov
>
> On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now  while trying to follow the
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and indicating "multipath_member" hence not letting me create
> new bricks. And on the logs I see
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, what will be the
> impact?
> >
> > thanks again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>
>
>
> --
> Adrian Quintero
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>
>
>
> --
> Adrian Quintero
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRKR5LFARNRHFRHVQUA5IUFAHLVG2ENK/
>


-- 
Adrian Quintero

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Strahil Nikolov
I have the same locks, even though I have blacklisted all local disks:
# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
}

If you have multipath reconfigured, do not forget to rebuild the initramfs
(dracut -f). It's a Linux issue, not an oVirt one.

In your case you had a stack like this:

    /dev/VG/LV
      /dev/disk/by-id/pvuuid
        /dev/mapper/multipath-uuid
          /dev/sdb

Linux will not allow you to work with /dev/sdb while multipath is locking the
block device.
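
For example, something along these lines (a rough sketch; the device name is
just an example, adjust it to your own local disks):

# find the WWID of a local disk you want to blacklist
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb

# after adding the wwid lines to the blacklist {} section of /etc/multipath.conf
systemctl restart multipathd
multipath -ll        # the local disks should no longer show up as multipath maps

# rebuild the initramfs so the blacklist is also applied at boot
dracut -f
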
Best Regards,
Strahil Nikolov

On Thursday, April 25, 2019, 8:30:16 AM GMT-4, Adrian Quintero wrote:
 
 under Compute, hosts, select the host that has the locks on /dev/sdb, 
/dev/sdc, etc.., select storage devices and in here is where you see a small 
column with a bunch of lock images showing for each row.

However as a work around, on the newly added hosts (3 total), I had to manually 
modify /etc/multipath.conf and add the following at the end as this is what I 
noticed from the original 3 node setup.

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
    devnode "*"
}
# END Added by gluster_hci role
--
After this I restarted multipath and the lock went away and was able to configure
the new bricks thru the UI, however my concern is what will happen if I reboot the
server: will the disks be read the same way by the OS?
Also now able to expand the gluster with a new replicate 3 volume if needed 
using http://host4.mydomain.com:9090.

thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov  wrote:

In which menu do you see it this way?

Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero wrote:
 
Strahil,
this is the issue I am seeing now

This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that have
after server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks , but you need to set 
'#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with 
any linux.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and inidicating " multpath_member" hence not letting me create new 
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/



-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
  


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRKR5LFARNRHFRHVQUA5IUFAHLVG2ENK/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZJQ7QELDG3NVG232HARWBVBYUMJLQEE/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Strahil Nikolov
All my hosts have the same locks, so it seems to be OK.

Best Regards,
Strahil Nikolov


On Thursday, April 25, 2019, 8:28:31 AM GMT-4, Adrian Quintero wrote:
 
 under Compute, hosts, select the host that has the locks on /dev/sdb, 
/dev/sdc, etc.., select storage devices and in here is where you see a small 
column with a bunch of lock images showing for each row.

However as a work around, on the newly added hosts (3 total), I had to manually 
modify /etc/multipath.conf and add the following at the end as this is what I 
noticed from the original 3 node setup.

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
    devnode "*"
}
# END Added by gluster_hci role
--
After this I restarted multipath and the lock went away and was able to configure
the new bricks thru the UI, however my concern is what will happen if I reboot the
server: will the disks be read the same way by the OS?
Also now able to expand the gluster with a new replicate 3 volume if needed 
using http://host4.mydomain.com:9090.

thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov  wrote:

In which menu do you see it this way?

Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero wrote:
 
Strahil,
this is the issue I am seeing now

This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that have
after server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks , but you need to set 
'#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with 
any linux.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and inidicating " multpath_member" hence not letting me create new 
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/



-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
  


-- 
Adrian Quintero
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NWXJBICHRWCBERF3GT6XR7QUB43FBBI/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
You don't create the brick on the /dev/sd* device.

You can see where I create the brick on the highlighted multipath device
(see attachment). If for some reason you can't do that, you might need
to run wipefs -a on it, as it probably has some leftover headers from
another FS.
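
Roughly like this (a sketch only; the multipath map name below is made up,
use the one shown in your storage devices list):

# list any leftover filesystem/RAID signatures on the multipath device
wipefs /dev/mapper/36000c29f0e2a5f3d8c1b2a4d5e6f7a8b

# if the device really is unused, clear the old signatures
wipefs -a /dev/mapper/36000c29f0e2a5f3d8c1b2a4d5e6f7a8b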

On 2019-04-25 08:53, Adrian Quintero wrote:

> I understand, however the "create brick" option is greyed out (not enabled), 
> the only way I could get that option to be enabled is if I manually edit the 
> multipathd.conf file and add 
> - 
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # BEGIN Added by gluster_hci role
> 
> blacklist {
> devnode "*"
> }
> # END Added by gluster_hci role
> -- 
> 
> Then I go back to the UI and I can use sd* (multpath device). 
> 
> thanks, 
> 
> On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter  wrote: 
> 
> You create the brick on top of the multipath device. Look for one that is the 
> same size as the /dev/sd* device that you want to use. 
> 
> On 2019-04-25 08:00, Strahil Nikolov wrote: 
> 
> In which menu do you see it this way ? 
> 
> Best Regards, 
> Strahil Nikolov 
> 
> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero
> wrote:
> 
> Strahil, 
> this is the issue I am seeing now 
> 
> The is thru the UI when I try to create a new brick. 
> 
> So my concern is if I modify the filters on the OS what impact will that have 
> after server reboots? 
> 
> thanks, 
> 
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
> I have edited my multipath.conf to exclude local disks , but you need to set
> '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do 
> with any linux.
> 
> Best Regards,
> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>> 
>> Thanks Alex, that makes more sense now  while trying to follow the 
>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
>> are locked and inidicating " multpath_member" hence not letting me create 
>> new bricks. And on the logs I see 
>> 
>> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
>> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
>> failed", "rc": 5} 
>> Same thing for sdc, sdd 
>> 
>> Should I manually edit the filters inside the OS, what will be the impact? 
>> 
>> thanks again.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>>  
> 
> -- 
> Adrian Quintero 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/

-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DUPTKNF4TBYVUUDUOYBVZ5AFJKXFUXMB/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
I understand; however, the "create brick" option is greyed out (not
enabled). The only way I could get that option enabled is if I
manually edit the /etc/multipath.conf file and add
-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
devnode "*"
}
# END Added by gluster_hci role
--

Then I go back to the UI and I can use sd* (multipath device).
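
After editing the file, something like this should make the change take
effect (a sketch, assuming the blacklist shown above is in place):

systemctl restart multipathd
multipath -F          # flush the now-blacklisted maps (only safe while they are unused)
multipath -ll         # should no longer list /dev/sdb, /dev/sdc, /dev/sdd
lsblk /dev/sdb /dev/sdc /dev/sdd   # confirm the disks are free for brick creation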

thanks,

On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter  wrote:

> You create the brick on top of the multipath device. Look for one that is
> the same size as the /dev/sd* device that you want to use.
>
> On 2019-04-25 08:00, Strahil Nikolov wrote:
>
>
> In which menu do you see it this way ?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Strahil,
> this is the issue I am seeing now
>
> [image: image.png]
>
> The is thru the UI when I try to create a new brick.
>
> So my concern is if I modify the filters on the OS what impact will that
> have after server reboots?
>
> thanks,
>
>
>
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>
> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do
> with any linux.
>
> Best Regards,
> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now  while trying to follow the
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and inidicating " multpath_member" hence not letting me create
> new bricks. And on the logs I see
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, what will be the
> impact?
> >
> > thanks again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>
>
>
> --
> Adrian Quintero
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/
>
>
>
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBEJILA5TRAKY2JNNI64YGKBOGHEZNL5/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
You create the brick on top of the multipath device. Look for one that
is the same size as the /dev/sd* device that you want to use. 
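
For example (just a sketch to illustrate the size check):

# compare sizes to match each /dev/sd* disk with its multipath map
lsblk -o NAME,SIZE,TYPE

# or list the multipath maps together with the /dev/sd* paths they claim
multipath -ll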

On 2019-04-25 08:00, Strahil Nikolov wrote:

> In which menu do you see it this way ? 
> 
> Best Regards, 
> Strahil Nikolov 
> 
> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero
> wrote:
> 
> Strahil, 
> this is the issue I am seeing now 
> 
> The is thru the UI when I try to create a new brick. 
> 
> So my concern is if I modify the filters on the OS what impact will that have 
> after server reboots? 
> 
> thanks, 
> 
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote: 
> 
>> I have edited my multipath.conf to exclude local disks , but you need to set 
>> '#VDSM private' as per the comments in the header of the file.
>> Otherwise, use the /dev/mapper/multipath-device notation - as you would do 
>> with any linux.
>> 
>> Best Regards,
>> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>>> 
>>> Thanks Alex, that makes more sense now  while trying to follow the 
>>> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
>>> are locked and inidicating " multpath_member" hence not letting me create 
>>> new bricks. And on the logs I see 
>>> 
>>> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
>>> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
>>> failed", "rc": 5} 
>>> Same thing for sdc, sdd 
>>> 
>>> Should I manually edit the filters inside the OS, what will be the impact? 
>>> 
>>> thanks again.
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
> 
> -- 
> Adrian Quintero 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2APQ4ZM4F5CPO6KJKYJWH6J54MCLDJX/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Adrian Quintero
Under Compute > Hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc., then select Storage Devices; there you see a small
column with a bunch of lock icons, one per row.


However, as a workaround, on the newly added hosts (3 total) I had to
manually modify /etc/multipath.conf and add the following at the end,
as this is what I noticed on the original 3-node setup.

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
devnode "*"
}
# END Added by gluster_hci role
--
After this I restarted multipath and the lock went away, and I was able to
configure the new bricks thru the UI; however, my concern is what will
happen if I reboot the server: will the disks be read the same way by the OS?
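
One way to double-check that a device-name change after a reboot would not
matter (a rough sketch; nothing here is specific to this setup):

# LVM finds its PVs by the UUID stored in the on-disk metadata, not by the /dev/sd* name
pvs -o pv_name,pv_uuid,vg_name

# check that the brick mounts in fstab are by UUID or LV path rather than /dev/sd* names
grep -E 'gluster|UUID=' /etc/fstab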

I am also now able to expand Gluster with a new replica 3 volume if needed
using http://host4.mydomain.com:9090.


thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov 
wrote:

> In which menu do you see it this way ?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <
> adrianquint...@gmail.com> wrote:
>
>
> Strahil,
> this is the issue I am seeing now
>
> [image: image.png]
>
> The is thru the UI when I try to create a new brick.
>
> So my concern is if I modify the filters on the OS what impact will that
> have after server reboots?
>
> thanks,
>
>
>
> On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:
>
> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do
> with any linux.
>
> Best Regards,
> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now  while trying to follow the
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and inidicating " multpath_member" hence not letting me create
> new bricks. And on the logs I see
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, what will be the
> impact?
> >
> > thanks again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>
>
>
> --
> Adrian Quintero
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRKR5LFARNRHFRHVQUA5IUFAHLVG2ENK/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Strahil Nikolov
In which menu do you see it this way?

Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero wrote:
 
Strahil,
this is the issue I am seeing now

This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that have
after server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks , but you need to set 
'#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with 
any linux.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and inidicating " multpath_member" hence not letting me create new 
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/



-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDQFZKYIQGFSAWW5G37ZKRPFLQUXLALJ/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread Adrian Quintero
Strahil,
this is the issue I am seeing now

[image: image.png]

This is thru the UI when I try to create a new brick.

So my concern is: if I modify the filters on the OS, what impact will that
have after server reboots?

thanks,



On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

> I have edited my multipath.conf to exclude local disks , but you need to
> set '#VDSM private' as per the comments in the header of the file.
> Otherwise, use the /dev/mapper/multipath-device notation - as you would do
> with any linux.
>
> Best Regards,
> Strahil NikolovOn Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
> >
> > Thanks Alex, that makes more sense now  while trying to follow the
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
> are locked and inidicating " multpath_member" hence not letting me create
> new bricks. And on the logs I see
> >
> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
> failed", "rc": 5}
> > Same thing for sdc, sdd
> >
> > Should I manually edit the filters inside the OS, what will be the
> impact?
> >
> > thanks again.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
>


-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EW7NKT76JR3TLPP63M7DTDF2TLSMX556/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-24 Thread wodel youchi
Hi,

I am not sure if I understood your question, but here is a statement from
the install guide of RHHI (Deploying RHHI) :

"You cannot create a volume that spans more than 3 nodes, or expand an
existing volume so that it spans
across more than 3 nodes at a time."

Page 11 , 2.7 Scaling.

Regards.



On Tue, Apr 23, 2019 at 06:56,  wrote:

> Use the created multipath devices
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UQWBS3W23I3LTJQCZI7OI2467AW4JRO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQBYLEHE2GQTHJDA3WF3LEN6D6Z57HWH/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Strahil
I have edited my multipath.conf to exclude local disks, but you need to set
'# VDSM PRIVATE' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation, as you would with
any Linux system.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now  while trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and inidicating " multpath_member" hence not letting me create new 
> bricks. And on the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RZGM2ECG5D53HUTDSKEAUD2XZUHE7G33/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread adrianquintero
Thanks Alex, that makes more sense now. While trying to follow the instructions
provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and
indicating "multipath_member", hence not letting me create new bricks. And in
the logs I see

Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
"vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
failed", "rc": 5}
Same thing for sdc, sdd

Should I manually edit the filters inside the OS, and what will be the impact?
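
For reference, this is roughly how the active filter can be inspected before
changing anything (a sketch only):

# show the effective LVM filter settings
lvmconfig --type full devices/filter devices/global_filter

# dry run: reports "excluded by a filter" (or the multipath lock) without touching the disk
pvcreate --test /dev/sdb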

thanks again.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FW3IR3NMQTYZLXBT2VLOCLBKOYJS3MYF/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter

On 2019-04-22 17:33, adrianquint...@gmail.com wrote:

Found the following and answered part of my own questions, however I
think this sets a new set of Replica 3 Bricks, so if I have 2 hosts
fail from the first 3 hosts then I loose my hyperconverged?

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/scaling#task-cockpit-gluster_mgmt-expand_cluster

thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YVTQHOOLPM3Z73CJYCPRY6ACZ72KAUW/


When you add the new set of bricks and rebalance, gluster will still
respect your current replica value of 3. So every file that gets added
will get two copies placed on other bricks as well. You don't know which
bricks will get a copy of a given file, so the redundancy is exactly the
same: you can lose 2 hosts out of all hosts in the cluster.
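
You can verify that after the expansion with something like this (a sketch;
the volume name is the one used earlier in the thread):

# the type should become Distributed-Replicate with "2 x 3 = 6" bricks, still replica 3
gluster volume info data1 | grep -E 'Type|Number of Bricks'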


If that is not enough for you, you can increase the replica value at the 
cost of storage space.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y3M5W2E4C774RUOHS2HZN6N4HB4CKDVU/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter

On 2019-04-22 14:48, adrianquint...@gmail.com wrote:

Hello,
I have a 3 node Hyperconverged setup with gluster and added 3 new
nodes to the cluster for a total of 6 servers.

I am now taking advantage of more compute power but can't scale out my
storage volumes.

Current Hyperconverged setup:
- host1.mydomain.com ---> Bricks: engine data1 vmstore1
- host2.mydomain.com ---> Bricks: engine data1 vmstore1
- host3.mydomain.com ---> Bricks: engine data1 vmstore1

From these 3 servers we get the following Volumes:
- engine(host1:engine, host2:engine, host3:engine)
- data1  (host1:data1, host2:data1, host3:data1)
- vmstore1  (host1:vmstore1, host2:vmstore1, host3:vmstore1)

The following are the newly added servers to the cluster, however as
you can see there are no gluster bricks
- host4.mydomain.com
- host5.mydomain.com
- host6.mydomain.com

I know that the bricks must be added in sets of 3 and per the first 3
hosts that is how it was deployed thru the web UI.

Questions:
- How can I extend the gluster volumes engine, data1 and vmstore1 using
host4, host5 and host6?
- Do I need to configure gluster volumes manually through the OS CLI in
order for them to span all 6 servers?
- If I configure the fail storage scenario manually, will oVirt know about
it? Will it still be hyperconverged?


I have only seen 3-host hyperconverged setup examples with gluster,
but have not found examples for a 6-, 9- or 12-host cluster with gluster.
I know it might be a lack of understanding on my end of how oVirt
and gluster integrate with one another.

If you can point me in the right direction, that would be great.

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJJ2JVIXTGLCHUSGEUMHSIWPKVREPTEJ/


Compute > Hosts > [host] > Storage Devices > Create Brick (do all three)
Storage > Volumes > [volume] > Bricks > Add (add all three at once)
Storage > Volumes > [volume] > Rebalance


Rebalance is needed to take advantage of the new storage immediately and
make the most efficient use of it, but it is very IO heavy, so only do this
during downtime. You can go ahead and add the bricks and wait for a
maintenance window to do the rebalance.
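
The equivalent from the CLI would look roughly like this (a sketch; the brick
paths are assumptions based on the usual gluster_bricks layout, adjust them to
what "Create Brick" actually produced):

# add one new brick per new host; with 3 bricks the volume stays replica 3
# and becomes distributed-replicate
gluster volume add-brick data1 \
  host4.mydomain.com:/gluster_bricks/data1/data1 \
  host5.mydomain.com:/gluster_bricks/data1/data1 \
  host6.mydomain.com:/gluster_bricks/data1/data1

# rebalance during a maintenance window
gluster volume rebalance data1 start
gluster volume rebalance data1 status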

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IEUMXSH332GS2W5WNJ5NYPHZJ472XXU7/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread adrianquintero
Found the following and answered part of my own questions; however, I think this
sets up a new set of replica 3 bricks, so if I have 2 hosts fail from the first 3
hosts, do I lose my hyperconverged setup?

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/scaling#task-cockpit-gluster_mgmt-expand_cluster

thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YVTQHOOLPM3Z73CJYCPRY6ACZ72KAUW/