Dear Mr. Zanni:
I have had what I believe to be similar issues. I am in no way an expert
or even knowledgeable, but from experience I have found this to work:
dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb
This command assumes that you are on CentOS or similar; assumes
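If it helps, a slightly more verbose variant of the same write is below - only a sketch, and /dev/sdb is just an example target, so double-check the device name first:

lsblk                       # confirm which device is actually the USB stick
dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb bs=4M status=progress conv=fsync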
rtmgmt
> network.
>
> You must create the file. The contents of the file were in my first email;
> adapt them to your needs and then run the vdsm-tool command.
>
> Sent from my iPhone
>
> On 4 Feb 2020, at 19:29, Charles Lam wrote:
>
>
> Thank you so very much Vin
this fix?
Thank you very much,
Diamond Tours, Inc.
Charles Lam
13100 Westlinks Terrace, Suite 1, Fort Myers, FL 33913-8625
O: 239. 437.7117 | F: 239.790.1130 | Cell: 239.227.7474
c...@diamondtours.com
Dear Strahil and Ritesh,
Thank you both. I am back where I started with:
"One or more bricks could be down. Please execute the command again after
bringing all bricks online and finishing any pending heals\nVolume heal
failed.", "stdout_lines": ["One or more bricks could be down. Please
> On Tue, Jan 12, 2021, 2:04 AM Charles Lam wrote:
>
>> Dear Strahil and Ritesh,
>>
>> Thank you both. I am back where I started with:
>>
>> "One or more bricks could be down. Please execute the command again after
>> bringing all bricks online a
I will check ‘/var/log/gluster’. I had commented out the filter in
‘/etc/lvm/lvm.conf’ - if I don’t, volume group creation fails because the
LVM drives are excluded by the filter. Should I not be commenting it out
but modifying it in some way?
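For example, would keeping the filter but adding an accept rule for the
bricks be the right idea? Something like this in the devices section
(the pattern is only a guess for the NVMe drives):

# /etc/lvm/lvm.conf, devices section - accept the NVMe bricks, reject everything else (example pattern)
filter = [ "a|^/dev/nvme|", "r|.*|" ]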
Thanks!
Charles
On Tue, Jan 12, 2021 at 12:11 AM
Still not able to deploy Gluster on oVirt Node Hyperconverged - same error;
upgraded to v4.4.4 and "kvdo not installed"
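(One thing worth verifying for the "kvdo not installed" message - a sketch
only, package names assume a CentOS / oVirt Node 8 host - is that the VDO
kernel module packages are present and actually loadable:)

rpm -q vdo kmod-kvdo           # are the VDO userspace and kernel module packages installed?
dnf install -y vdo kmod-kvdo   # install them if missing
modprobe kvdo                  # confirm the module loads against the running kernel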
Tried suggestion and per
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table
I also tried "gluster volume
Dear friends,
Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by
disabling multipath on the nvme drives. The Gluster deployment is now failing
on the three node hyperconverged oVirt v4.3.3 deployment at:
TASK [gluster.features/roles/gluster_hci : Set
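(For anyone hitting the same multipath problem: "disabling multipath on the
nvme drives" can be done with a blacklist entry along these lines - a sketch
only, the exact entries depend on the drive names:)

# /etc/multipath.conf - keep multipathd away from the NVMe bricks (example pattern)
blacklist {
    devnode "^nvme.*"
}
# then restart the daemon so the new blacklist takes effect
systemctl restart multipathd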
I have been asked if multipath has been disabled for the cluster's nvme drives.
I have not enabled or disabled multipath for the nvme drives. In Gluster
deploy Step 4 - Bricks I have checked "Multipath Configuration: Blacklist
Gluster Devices." I have not performed any custom setup of nvme
Thank you Donald! Your and Strahil's suggested solutions regarding disabling
multipath for the nvme drives were correct. The Gluster deployment progressed
much further but stalled at
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on]
**
task path:
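(That step appears to map to the granular-entry-heal command, which can be
run by hand to see the underlying error - "engine" is just an example volume
name:)

gluster volume heal engine granular-entry-heal enable
gluster volume heal engine info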
Thanks so very much Strahil for your continued assistance!
[root@fmov1n1 conf.d]# gluster pool list
UUID                                  Hostname        State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca  host2.fqdn.tld  Connected
d4488961-c854-449a-a211-1593810df52f  host3.fqdn.tld
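(gluster peer status on each node usually shows which peer is disconnected
and in what state, e.g.:)

gluster peer status   # run on every node; look for peers not in 'Peer in Cluster (Connected)'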
Hi Strahil,
Yes, on each node before deploy I have run:
- dmsetup remove for each drive
- wipefs --all --force /dev/nvmeXn1 for each drive
- nvme format -s 1 /dev/nvmeX for each drive (ref:
https://nvmexpress.org/open-source-nvme-management-utility-nvme-command-line-interface-nvme-cli/)
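Roughly, per node, those steps look like this (a sketch - drive names are
examples, and any stale device-mapper entries are removed with dmsetup first):

for d in nvme0 nvme1 nvme2; do            # one entry per brick drive
    wipefs --all --force "/dev/${d}n1"    # clear any old signatures
    nvme format -s 1 "/dev/${d}"          # secure-erase the drive
done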
Then test
Dear Strahil,
I have rebuilt everything fresh, switches, hosts, cabling - PHY-SEC shows 512
for all nvme drives being used as bricks. Name resolution via /etc/hosts for
direct connect storage network works for all hosts to all hosts. I am still
blocked by the same
"vdo: ERROR - Kernel
Thank you Strahil. I have installed/updated:
dnf install --enablerepo="baseos" --enablerepo="appstream" \
    --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" \
    centos-release-gluster8.noarch centos-release-storage-common.noarch
dnf upgrade --enablerepo="baseos" --enablerepo="appstream"
Dear Friends,
Resolved! Gluster just deployed for me successfully. It turns out it was two
typos in my /etc/hosts file. Why or how ping still resolved the names properly
and worked, I am not sure.
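(One way to catch this sort of /etc/hosts slip is to check what the resolver
actually returns on every node - the hostnames below are examples:)

for h in host1.fqdn.tld host2.fqdn.tld host3.fqdn.tld; do
    getent hosts "$h"   # shows the entry the libc resolver (and thus most services) will use
done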
Special thanks to Ritesh and most especially Strahil Nikolov for their
assistance in resolving other issues along
Dear Friends:
I am still stuck at
task path:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
"One or more bricks could be down. Please execute the command again after
bringing all bricks online and finishing any pending heals", "Volume heal
failed."
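(Before re-running the deploy, the brick and heal state can be checked by
hand - "engine" is just an example volume name:)

gluster volume status engine    # every brick should show Online 'Y' with a port and PID
gluster volume heal engine info # lists any entries still pending heal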
I refined