Re: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot.

2018-05-07 Thread Charles Lam
Dear Mr. Zanni: I have had what I believe to be similar issues. I am in no way an expert or even knowledgeable, but from experience I have found this to work: dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb This command assumes that you are on CentOS or similar; assumes
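As a point of reference, a minimal sketch of writing the installer ISO to a USB stick. The target device /dev/sdb is only the example used in the message above; confirm it with lsblk first, since dd overwrites the device unconditionally:

    # Identify the USB stick before writing anything
    lsblk -o NAME,SIZE,MODEL
    # Write the image and flush it to the device before unplugging
    dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb bs=4M status=progress conv=fsync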

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-04 Thread Charles Lam
ovirtmgmt > network. > > You must create the file. The contents of the file were in my first email; > adapt it to your needs and then run the vdsm-tool command. > > Sent from my iPhone > > On 4 Feb 2020, at 19:29, Charles Lam wrote: > >  > Thank you so very much Vin
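The file contents referenced above are not included in this excerpt. As an illustration only, a persisted vdsm network definition carrying a VLAN tag usually lives under /var/lib/vdsm/persistence/netconf/nets/ and looks roughly like the sketch below; the interface name and VLAN ID are placeholders, and the real keys should be taken from the earlier message:

    # /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt  (illustrative only)
    {
      "nic": "eno1",
      "vlan": 100,
      "bootproto": "dhcp",
      "bridged": true,
      "defaultRoute": true
    }
    # Apply the persisted configuration
    vdsm-tool restore-nets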

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-05 Thread Charles Lam
this fix? Thank you very much, Diamond Tours, Inc. Charles Lam 13100 Westlinks Terrace, Suite 1, Fort Myers, FL 33913-8625 O: 239.437.7117 | F: 239.790.1130 | Cell: 239.227.7474 c...@diamondtours.com

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
Dear Strahil and Ritesh, Thank you both. I am back where I started with: "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please
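When the deploy reports bricks down, it can help to confirm what gluster itself sees on one of the nodes; the volume name "engine" below is just a placeholder for whichever HCI volume is failing:

    # Which brick processes are online, and on which ports
    gluster volume status
    # Any entries still pending heal on a given volume
    gluster volume heal engine info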

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
> On Tue, Jan 12, 2021, 2:04 AM Charles Lam wrote: > >> Dear Strahil and Ritesh, >> >> Thank you both. I am back where I started with: >> >> "One or more bricks could be down. Please execute the command again after >> bringing all bricks online a

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-12 Thread Charles Lam
I will check ‘/var/log/gluster’. I had commented out the filter in ‘/etc/lvm/lvm.conf’ - if I don’t, the creation of volume groups fails because the LVM drives are excluded by the filter. Should I not be commenting it out, but modifying it in some way? Thanks! Charles On Tue, Jan 12, 2021 at 12:11 AM
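Rather than commenting the filter out entirely, one alternative (a sketch only; the device patterns assume NVMe-backed bricks) is to accept the NVMe devices explicitly in /etc/lvm/lvm.conf so volume group creation still succeeds:

    # /etc/lvm/lvm.conf, devices section -- illustrative filter only
    filter = [ "a|^/dev/nvme|", "a|^/dev/mapper/|", "r|.*|" ]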

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Still not able to deploy Gluster on oVirt Node Hyperconverged - same error ("kvdo not installed") after upgrading to v4.4.4. Tried the suggestion, and per https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table I also tried "gluster volume
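The "kvdo not installed" message generally means the VDO kernel module is missing on the node. A quick check, plus an install attempt assuming the packages are reachable from the node's repositories, might look like:

    # Is the kvdo module available/loaded?
    lsmod | grep kvdo
    modinfo kvdo
    # On EL8-based nodes the module ships as kmod-kvdo, alongside the vdo userspace tools
    dnf install kmod-kvdo vdo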

[ovirt-users] New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-18 Thread Charles Lam
Dear friends, Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by disabling multipath on the nvme drives. The Gluster deployment is now failing on the three-node hyperconverged oVirt v4.4.3 deployment at: TASK [gluster.features/roles/gluster_hci : Set
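For context, that Ansible task appears to wrap a gluster CLI call; assuming that reading is right, enabling the option by hand on one volume (the name "engine" is a placeholder) would look roughly like this, and it only succeeds while every brick is up:

    # Enable granular entry self-heal on a volume
    gluster volume heal engine granular-entry-heal enable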

[ovirt-users] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
I have been asked if multipath has been disabled for the cluster's nvme drives. I have not enabled or disabled multipath for the nvme drives. In Gluster deploy Step 4 - Bricks I have checked "Multipath Configuration: Blacklist Gluster Devices." I have not performed any custom setup of nvme
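If the cockpit checkbox does not take effect, the NVMe drives can also be kept out of multipath by hand; a minimal sketch, with an arbitrary file name:

    # /etc/multipath/conf.d/blacklist-nvme.conf
    blacklist {
        devnode "^nvme.*"
    }
    # Pick up the change and confirm no nvme maps remain
    systemctl restart multipathd
    multipath -ll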

[ovirt-users] Re: [EXT] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
Thank you Donald! Your and Strahil's suggested solutions regarding disabling multipath for the nvme drives were correct. The Gluster deployment progressed much further but stalled at TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] ** task path:

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Thanks so very much Strahil for your continued assistance! [root@fmov1n1 conf.d]# gluster pool list
UUID                                    Hostname        State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca    host2.fqdn.tld  Connected
d4488961-c854-449a-a211-1593810df52f    host3.fqdn.tld

[ovirt-users] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
Hi Strahil, Yes, on each node before deploy I have - dmsetup remove for each drive - wipefs --all --force /dev/nvmeXn1 for each drive - nvme format -s 1 /dev/nvmeX for each drive (ref: https://nvmexpress.org/open-source-nvme-management-utility-nvme-command-line-interface-nvme-cli/) Then test
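Folding those per-drive steps into a single pass (device names are examples, and this destroys all data on the drives):

    # Wipe signatures and low-level format each NVMe drive used for bricks
    for d in nvme0 nvme1 nvme2; do
        wipefs --all --force /dev/${d}n1   # clear old filesystem/LVM/RAID signatures
        nvme format -s 1 /dev/$d           # secure erase via nvme-cli
    done
    # plus 'dmsetup remove <map>' for any leftover device-mapper maps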

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Charles Lam
Dear Strahil, I have rebuilt everything fresh, switches, hosts, cabling - PHY-SEC shows 512 for all nvme drives being used as bricks. Name resolution via /etc/hosts for direct connect storage network works for all hosts to all hosts. I am still blocked by the same "vdo: ERROR - Kernel
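For anyone repeating the sector-size check, lsblk can report it directly; the device name is an example:

    # Physical and logical sector sizes per device
    lsblk -o NAME,PHY-SEC,LOG-SEC,SIZE /dev/nvme0n1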

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Thank you Strahil. I have installed/updated: dnf install --enablerepo="baseos" --enablerepo="appstream" --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" centos-release-gluster8.noarch centos-release-storage-common.noarch dnf upgrade --enablerepo="baseos" --enablerepo="appstream"
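The install command from that message, broken across lines for readability (the upgrade command is cut off in the archive, so only the install step is reproduced):

    dnf install --enablerepo="baseos" --enablerepo="appstream" --enablerepo="extras" \
        --enablerepo="ha" --enablerepo="plus" \
        centos-release-gluster8.noarch centos-release-storage-common.noarch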

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Dear Friends, Resolved! Gluster just deployed for me successfully. It turns out it was two typos in my /etc/hosts file. Why or how ping still resolved the names properly, I am not sure. Special thanks to Ritesh and most especially Strahil Nikolov for their assistance in resolving other issues along
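A quick way to catch hosts-file typos like that before a deploy, since gluster needs every node to resolve the exact inventory names consistently (the FQDNs below are placeholders):

    # Verify each storage FQDN resolves the same way on every host
    for h in host1.fqdn.tld host2.fqdn.tld host3.fqdn.tld; do
        getent hosts "$h"
    done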

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-13 Thread Charles Lam
Dear Friends: I am still stuck at task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67 "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed." I refined