Should I assign an IP address to the physical network interface on my host enfs21?
___
Can someone help me assign the gluster network?
This is my current setup: hosted-engine with standalone Gluster.
oVirt host 1, host 2, host 3 (and so on):
eno1: 192.168.0.10, ovirtmgmt
enfs20: VM networks dmz_1, dmz_2, dmz_3 (and so on)
enfs21: <--- I want to assign the gluster network to this physical interface
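A rough outline of how this usually works in 4.4, in case it saves someone the search: create the logical network under Network > Networks with the "VM network" box unchecked, optionally give it the Gluster role under the cluster's Logical Networks > Manage Networks, then open Compute > Hosts > your host > Network Interfaces > Setup Host Networks, drag the network onto enfs21, and set a static IP in its properties dialog. A quick host-side sanity check, using made-up values (network "gluster", address 10.10.10.21/24):
# ip -br addr show                       # the new address should be up on enfs21 or its bridge device
# ping -c 3 -I 10.10.10.21 10.10.10.22   # 10.10.10.22 = hypothetical peer on the storage network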
Found it, thanks.
___
Hi guys, could someone help me or guide me on how to assign a logical network
to a physical interface?
Our last oVirt engine was version 4.0, but in this latest release (4.4) some
of the options seem to have gone missing.
___
I got this sorted already. I think it was a DNS issue. I just put all the
GlusterFS hosts and nodes in /etc/hosts using their hostnames, and all is good.
Thanks
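For reference, that workaround is just static name resolution on every host; the entries look something like this (hostnames and addresses here are made up):
# cat >> /etc/hosts <<'EOF'
10.10.10.21  gluster1.example.local gluster1
10.10.10.22  gluster2.example.local gluster2
10.10.10.23  gluster3.example.local gluster3
EOF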
___
Dear friends,
Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by
disabling multipath on the nvme drives. The Gluster deployment is now failing
on the three-node hyperconverged oVirt v4.3.3 deployment at:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on]
Thank you Donald! Your and Strahil's suggested solutions regarding disabling
multipath for the nvme drives were correct. The Gluster deployment progressed
much further but stalled at
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on]
task path:
/etc/ansible/r
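In case it helps with debugging the stall: that role task appears to wrap Gluster's granular entry-heal toggle, so the same step can be tried by hand on a storage node to see the underlying error. The volume name below is hypothetical:
# gluster volume heal engine granular-entry-heal enable
# gluster volume heal engine info    # check heal state afterwards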
When I deployed my gluster hyperconverged setup using nvme drives I had to
disable multipath for all my drives. I'm not sure if this is your issue, but
here are the instructions I followed to disable it.
Create a custom multipath configuration file.
# mkdir /etc/multipath/conf.d
# touch /etc/mul
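The commands above are cut off in the archive; the remainder, as I understand it, is a blacklist file in that directory plus a multipathd restart. The file name and regex below are only the usual pattern for blacklisting NVMe devices, so verify them against your setup:
# cat > /etc/multipath/conf.d/blacklist.conf <<'EOF'   # hypothetical file name
blacklist {
    devnode "^nvme.*"
}
EOF
# systemctl restart multipathd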
I have been asked if multipath has been disabled for the cluster's nvme drives.
I have not enabled or disabled multipath for the nvme drives. In Gluster
deploy Step 4 - Bricks I have checked "Multipath Configuration: Blacklist
Gluster Devices." I have not performed any custom setup of nvme dri
Hi Strahil,
Yes, on each node before deploying I have run:
- dmsetup remove for each drive
- wipefs --all --force /dev/nvmeXn1 for each drive
- nvme format -s 1 /dev/nvmeX for each drive (ref:
https://nvmexpress.org/open-source-nvme-management-utility-nvme-command-line-interface-nvme-cli/)
Then test u
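Scripted, those per-drive steps come out roughly like this (drive names and count are hypothetical, and this destroys all data, so adjust before running):
# dmsetup remove_all                       # or dmsetup remove <map> per leftover multipath map
# for i in 0 1 2 3; do
>   wipefs --all --force /dev/nvme${i}n1   # drop old filesystem/RAID signatures
>   nvme format -s 1 /dev/nvme${i}         # secure erase (ses=1), as in the steps above
> done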
On Thursday, December 17, 2020, 22:32:14 GMT+2, Alex K
wrote:
On Thu, Dec 17, 2020, 14:43 Strahil Nikolov wrote:
> Sadly no. I have used it on test clusters with KVM VMs.
You mean clusters managed with pacemaker?
Yes, with pacemaker.
>
> If you manage to use oVirt as a n
Gianluca Cecchi writes:
> On Thu, Dec 17, 2020 at 5:30 PM Milan Zamazal wrote:
>
>> Gianluca Cecchi writes:
>>
>> > On Wed, Dec 16, 2020 at 8:59 PM Milan Zamazal
>> wrote:
>> >
>> >>
>> >> If the checkbox is unchecked, the migration shouldn't be prevented.
>> >> I think the TSC frequency shoul