Just an update that my routing issues are now resolved on the 192.168.0.0/24 
network, and that I was able to add the 2nd host to the cluster. I think part 
of my issue was that I had the management network (10.1.0.0/24) on a separate 
switch VLAN from the 192.168 network. Out of desperation, I decided to put both 
networks onto the same switch VLAN, at which point everything seemed to work 
fine.
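For anyone else debugging this kind of thing, a quick way to see which 
interface outbound traffic actually leaves on is `ip route get`; here's a tiny 
helper I use to pull out the device name (the sample output in the comment is 
made up, not from my hosts):

```shell
#!/bin/sh
# Pull the outgoing device name out of `ip route get` output.
# Usage on a host:  ip route get 8.8.8.8 | route_dev
route_dev() {
    sed -n 's/.* dev \([^ ]*\).*/\1/p' | head -n 1
}

# Example with canned output (addresses and interface name are made up):
#   echo "8.8.8.8 via 192.168.0.1 dev ovirtmgmt src 192.168.0.11" | route_dev
```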

The Engine now sees both hosts.

However, I did run into a strange issue: the Engine was running on the 2nd 
host while I upgraded the first host (I put the first host into maintenance 
mode before the upgrade).

When the first host was rebooted, the engine appears to have been stopped on 
the 2nd host.
I wound up doing this on the 2nd host:

[root@cha2-storage ~]# hosted-engine --check-liveliness
Hosted Engine is not up!
[root@cha2-storage ~]# hosted-engine --vm-start
VM exists and its status is Up
[root@cha2-storage ~]# hosted-engine --check-liveliness
Hosted Engine is up!

After I did this, the engine came right up and was responsive, and I was able 
to manage both hosts from within the engine.
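If this happens again, I'll probably wrap the liveness check in a small retry 
loop, something like the sketch below. (CHECK_CMD is a variable here only so 
the loop can be exercised off-cluster; on a real hosted-engine host it would 
just be `hosted-engine --check-liveliness`.)

```shell
#!/bin/sh
# Retry loop around the hosted-engine liveness check.
# CHECK_CMD defaults to the real command on an HE host; it is
# parameterized so the loop itself can be tried out anywhere.
CHECK_CMD=${CHECK_CMD:-"hosted-engine --check-liveliness"}

wait_for_engine() {
    tries=0
    while [ "$tries" -lt 10 ]; do
        if $CHECK_CMD | grep -q "is up"; then
            echo "engine is up"
            return 0
        fi
        tries=$((tries + 1))
        sleep "${RETRY_DELAY:-10}"
    done
    echo "engine still not up after $tries checks" >&2
    return 1
}
```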

Before I add the 3rd host to the engine, I want to finish setting up my VLANs, 
make sure all of my traffic is segmented properly, and get the management 
network and VM network separated back out onto their own VLANs.
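For the record, the re-separation I'm planning looks roughly like this with 
nmcli. This is only a sketch: the interface name (eno1), VLAN IDs (10 for 
management, 20 for the VM network), and addresses are placeholders, not my 
real values.

```shell
# Tagged sub-interface for the management network (10.1.0.0/24).
# Interface name, VLAN IDs, and addresses below are placeholders.
nmcli con add type vlan con-name mgmt-vlan10 dev eno1 id 10 \
    ipv4.method manual ipv4.addresses 10.1.0.11/24

# Tagged sub-interface for the VM/front-end network (192.168.0.0/24),
# which carries the default route in this sketch.
nmcli con add type vlan con-name vm-vlan20 dev eno1 id 20 \
    ipv4.method manual ipv4.addresses 192.168.0.11/24 \
    ipv4.gateway 192.168.0.1

nmcli con up mgmt-vlan10
nmcli con up vm-vlan20
```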

Sent with ProtonMail Secure Email.

------- Original Message -------
On Friday, March 26, 2021 9:38 PM, David White via Users <[email protected]> 
wrote:

> Hi all,
> I used the oVirt installer via cockpit to set up a hyperconverged cluster on 
> 3 physical hosts running Red Hat 8.3. I used the following two resources to 
> guide my work:
> 

> - https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
> - https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
> 

> As I mentioned in a previous email a week ago, it appears as if the wizard 
> successfully set up gluster on all 3 of the servers, and the wizard also 
> successfully set up the ovirt engine on the first host.
> 

> However, the oVirt engine only recognizes the first host, and only the first 
> host's physical resources are available to the cluster. 
> So I have Gluster 8 installed on the 3 hosts (RHEL 8), and oVirt 4.4.5 
> installed on the first host, along with the ovirt engine VM.
> 

> When I run `gluster peer status` from the first node, I can confirm that the 
> other two physical hosts are healthy:
> (example.com replaces the actual domain below)
> [root@cha1-storage ~]# gluster peer status
> Number of Peers: 2
> 

> Hostname: cha2-storage.mgt.example.com
> Uuid: 240a7ab1-ab52-4e5b-98ed-d978f848835e
> State: Peer in Cluster (Connected)
> 

> Hostname: cha3-storage.mgt.example.com
> Uuid: 0563c3e8-237d-4409-a09a-ec51719b0da6
> State: Peer in Cluster (Connected)
> 

> I am now trying to get the other two hosts added to the Engine. I navigate 
> to Compute -> Hosts, click New, fill in the details (hostname, root 
> password, etc.), and begin the installation on the additional hosts.
> It keeps failing.
> 

> Checking the error logs in /var/log/ovirt-engine/host-deploy on the Engine 
> VM, I see the following near the bottom:
> 

> "msg" : "Failed to download metadata for repo 'ovirt-4.4-epel': Cannot 
> prepare internal mirrorlist: Curl error (7): Couldn't connect to server for 
> https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir
>  [Failed to connect to mirrors.fedoraproject.org port 443: No route to host]",
> 

> I can confirm that the Engine and each of the hosts are able to get to 
> mirrors.fedoraproject.org:
> 

> I've run the following on both the Engine, as well as each of the hosts (the 
> first host where everything is installed, as well as the 2nd host where I'm 
> trying to get it installed):
> [root@ovirt-engine1 host-deploy]# curl -i https://mirrors.fedoraproject.org
> HTTP/2 302
> 

> Note that this traffic is going out the management network.
> That may be an important distinction -- keep reading.
> 

> This leads me to another issue that may or may not be related.
> I've discovered that, from the RHEL 8 hosts' perspective, the public-facing 
> network is unable to get out to the internet.
> 

> Management (Gluster & the Engine VM) is on: 10.1.0.0/24
> The hosts are able to ping and communicate with each other.
> Each host is able to ping 8.8.8.8 whenever the frontend network 
> interface is disabled (see below).
> 

> The frontend network is on: 192.168.0.0/24
> The hosts are able to ping each other on this network,
> but this network isn't able to get out to the internet (yet).
> 

> Obviously I need to fix the routing and figure out why the 192.168.0.0/24 
> network is unable to reach the internet.
> But shouldn't all the traffic to install the ovirt functionality onto the 2nd 
> host go out the management network?
> 

> So to summarize, I have a few questions:
> 

> -   Why didn't the wizard properly install the ovirt functionality onto all 3 
> hosts to begin with when I did the initial installation?
> -   From the physical host's perspective, what should be the default route? 
> The internal management, or the front-end network?
> -   Is the following statement accurate?
> 

> -   The ovirt-engine's default route should be management -- it doesn't even 
> have a front-end IP address.
> 

> -   Why would the ovirt engine fail to install, and indicate that it cannot 
> get to mirrors.fedoraproject.org, when clearly it can? 
> 

> -   Again, the ovirt-engine VM is able to curl that URL and gets a valid http 
> response.
> 

> -   Any other tips or suggestions on how to troubleshoot this?
> 



_______________________________________________
Users mailing list -- [email protected]
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/ZQ3O7IRQEKCRV45VCKT3ZVL5G6VCMDXM/