From: Strahil Nikolov
Sent: Wednesday, January 27, 2021 7:11 AM
To: Robert Tongue; users
Subject: Re: [ovirt-users] Re: VM templates

You should create a file like mine, because vdsm manages /etc/multipath.conf:

# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
    wwid nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001
    wwid TOSHIBA-TR200_Z7KB600SK46S
}
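A drop-in under /etc/multipath/conf.d/ only takes effect after multipathd rereads its configuration. A minimal way to apply and verify it, assuming standard multipath-tools commands run as root on the host:

```shell
# Reload the merged configuration without restarting the daemon
multipathd reconfigure
# Confirm the blacklist section from the drop-in is now active
multipathd show config | grep -A4 '^blacklist'
```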
Correction: the issue came back, but I fixed it again. The actual issue was multipathd; I had to set up device filters in /etc/multipath.conf:

blacklist {
    protocol "(scsi:adt|scsi:sbp)"
    devnode "^hd[a-z]"
    devnode "^sd[a-z]$"
    devnode "^sd[a-z]"
    devnode "^nvme0n1"
}
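As a quick sanity check of those devnode patterns, the same extended regexes can be exercised against typical device names with grep (illustrative only; multipathd applies them itself to udev device nodes):

```shell
# Which device names would the blacklist's devnode regexes catch?
for dev in sda sdb hda nvme0n1 dm-0; do
  if printf '%s\n' "$dev" | grep -qE '^hd[a-z]|^sd[a-z]|^nvme0n1'; then
    echo "$dev: blacklisted"
  else
    echo "$dev: kept"
  fi
done
# dm-0 is the only name here left for multipath to manage
```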
I fixed my own issue, and for everyone else who may run into this: the problem was that I created the first oVirt node VM inside VMware, got it fully configured with all the software/disks/partitioning/settings, then cloned it to two more VMs. Then I ran the hosted-engine
Thanks for the reply. Here are my GlusterFS options for the volume; am I missing anything critical?
[root@cluster1-vm ~]# gluster volume info storage
Volume Name: storage
Type: Distributed-Disperse
Volume ID: 67112b70-e319-4629-b768-03df9d9a0e84
Status: Started
Snapshot Count: 0
Number of
First of all, verify the gluster volume options (gluster volume info <VOLNAME>; gluster volume status <VOLNAME>). When you use HCI, oVirt sets up a lot of optimized options in order to get the maximum out of the Gluster storage.
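A sketch of how those HCI-tuned options can be inspected; "storage" is the volume name from earlier in the thread, `gluster volume get` dumps every option (tuned or defaulted), and the grep pattern just picks out a few commonly tuned names:

```shell
# Dump all options for the volume and pick out some oVirt/HCI-relevant ones
gluster volume get storage all | grep -E 'performance|network.remote-dio|cluster.granular-entry-heal'
# Overall health: bricks online, no pending heals
gluster volume status storage
gluster volume heal storage info summary
```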
Best Regards,
Strahil Nikolov
At 15:03 + on Mon, 25.01.2021, Robert Tongue wrote:
>