[ovirt-users] Unable to add hosts to cluster & possible related routing issue
Hi all,

I used the oVirt installer via cockpit to set up a hyperconverged cluster on 3 physical hosts running Red Hat 8.3. I used the following two resources to guide my work:

- https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
- https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/

As I mentioned in a previous email a week ago, the wizard appears to have successfully set up Gluster on all 3 of the servers, and it also successfully set up the oVirt engine on the first host. However, the oVirt engine only recognizes the first host, and only the first host's physical resources are available to the cluster.

So I have Gluster 8 installed on the 3 hosts (RHEL 8), and oVirt 4.4.5 installed on the first host, along with the ovirt engine VM. Running `gluster peer status` from the first node confirms that the other two physical hosts are healthy (example.com replaces the actual domain below):

[root@cha1-storage ~]# gluster peer status
Number of Peers: 2

Hostname: cha2-storage.mgt.example.com
Uuid: 240a7ab1-ab52-4e5b-98ed-d978f848835e
State: Peer in Cluster (Connected)

Hostname: cha3-storage.mgt.example.com
Uuid: 0563c3e8-237d-4409-a09a-ec51719b0da6
State: Peer in Cluster (Connected)

I am now trying to get the other two hosts added to the engine. I navigate to Compute -> Hosts, click New, fill in the details (hostname, root password, etc.), and begin the installation on the additional hosts. It keeps failing.
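When a host install keeps failing on repo downloads, one early check is whether each host can actually reach the mirror over TCP 443, and from which network. A minimal probe sketch (the mirror hostname comes from the error logs later in this thread; the idea of binding a source address, and the example IPs, are hypothetical):

```python
# TCP reachability probe: can we reach host:port, optionally forcing the
# connection out a specific source address (i.e. a specific network)?
import socket
from typing import Optional

def can_connect(host: str, port: int, source_ip: Optional[str] = None,
                timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection(
            (host, port),
            timeout=timeout,
            source_address=(source_ip, 0) if source_ip else None,
        ):
            return True
    except OSError:
        return False

# Hypothetical usage -- 10.1.0.11 as a management IP, 192.168.0.11 as frontend:
# can_connect("mirrors.fedoraproject.org", 443, source_ip="10.1.0.11")
# can_connect("mirrors.fedoraproject.org", 443, source_ip="192.168.0.11")
```

Running it once per source address shows directly which of the two networks can complete the TCP handshake to the mirror.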
Checking the error logs in /var/log/ovirt-engine/host-deploy on the engine VM, I see the following near the bottom:

"msg" : "Failed to download metadata for repo 'ovirt-4.4-epel': Cannot prepare internal mirrorlist: Curl error (7): Couldn't connect to server for https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir [Failed to connect to mirrors.fedoraproject.org port 443: No route to host]",

I can confirm that the engine and each of the hosts are able to reach mirrors.fedoraproject.org. I've run the following on the engine, as well as on each of the hosts (the first host where everything is installed, and the 2nd host where I'm trying to get it installed):

[root@ovirt-engine1 host-deploy]# curl -i https://mirrors.fedoraproject.org
HTTP/2 302

Note that this traffic is going out the management network. That may be an important distinction -- keep reading.

This leads me to another issue that may or may not be related. I've discovered that, from the RHEL 8 hosts' perspective, the public-facing network is unable to reach the internet.

Management (Gluster & the engine VM) is on 10.1.0.0/24:
- Each of the hosts is able to ping and communicate with the others.
- Each of the hosts is able to ping 8.8.8.8 whenever the frontend network interface is disabled (see below).

The frontend network is on 192.168.0.0/24:
- Each of the hosts is able to ping the others on this network.
- But this network isn't able to reach the internet (yet).

Obviously I need to fix the routing and figure out why the 192.168.0.0/24 network is unable to reach the internet. But shouldn't all the traffic to install the oVirt functionality onto the 2nd host go out the management network?

To summarize, I have a few questions:

- Why didn't the wizard properly install the oVirt functionality onto all 3 hosts to begin with, when I did the initial installation?
- From the physical hosts' perspective, what should be the default route: the internal management network, or the front-end network?
- Is the following statement accurate? The ovirt-engine's default route should be management -- it doesn't even have a front-end IP address.
- Why would the oVirt engine installation fail and report that it cannot reach mirrors.fedoraproject.org, when clearly it can? Again, the ovirt-engine VM is able to curl that URL and gets a valid HTTP response.
- Any other tips or suggestions on how to troubleshoot this?

Sent with ProtonMail Secure Email.

___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YL23P7CLRHEEDHSNHW76VPAHT2AOYEOW/
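On the default-route question above: the kernel always picks the most specific matching route, and only falls back to the default route when no connected network matches, so host-to-host traffic and internet-bound traffic can legitimately leave different interfaces. A toy longest-prefix-match sketch using the subnets from this thread (the route labels are made up for illustration):

```python
# Longest-prefix match: which route a host picks for a given destination.
# Subnets mirror the ones in this thread; labels are illustrative only.
import ipaddress

ROUTES = [
    ("10.1.0.0/24", "mgmt"),        # management / gluster network
    ("192.168.0.0/24", "frontend"), # front-end network
    ("0.0.0.0/0", "default-gw"),    # default route
]

def pick_route(dest: str) -> str:
    """Return the label of the most specific route matching dest."""
    ip = ipaddress.ip_address(dest)
    best = max(
        (ipaddress.ip_network(net) for net, _ in ROUTES
         if ip in ipaddress.ip_network(net)),
        key=lambda n: n.prefixlen,
    )
    return dict(ROUTES)[str(best)]

print(pick_route("10.1.0.42"))  # mgmt
print(pick_route("8.8.8.8"))    # default-gw
```

The practical upshot: only traffic to addresses outside both /24s (such as the fedoraproject mirrors) uses the default route, so that is the route that must point at a network with internet access.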
[ovirt-users] Re: oVirt 4.4.4 not displaying Sign-on Screen
Sorry for the long delay. I've run the log collector (ovirt-log-collector) and created a file with a size of 30.9MB. My ISP severely restricts my upload speed to DropBox, and the bug reporting system for Red Hat won't let me upload a file larger than 19.2MB. Do you have any suggestions for how I might get the log collector file to oVirt Support? Thanks
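One common workaround for an upload size cap is to split the archive into chunks under the limit and reassemble them on the receiving end, either with `split`/`cat` on the shell or with a sketch like this (the 15 MiB chunk size is an arbitrary value under the stated 19.2MB cap):

```python
# Split a large file into fixed-size chunks and reassemble them.
# CHUNK (15 MiB) is arbitrary; anything under the upload cap works.
import os
from typing import List

CHUNK = 15 * 1024 * 1024

def split_file(path: str, chunk: int = CHUNK) -> List[str]:
    """Write path out as path.part000, path.part001, ...; return the part names."""
    parts = []
    with open(path, "rb") as src:
        for i, block in enumerate(iter(lambda: src.read(chunk), b"")):
            part = "%s.part%03d" % (path, i)
            with open(part, "wb") as dst:
                dst.write(block)
            parts.append(part)
    return parts

def join_files(parts: List[str], out: str) -> None:
    """Concatenate the parts back into a single file."""
    with open(out, "wb") as dst:
        for part in parts:
            with open(part, "rb") as src:
                dst.write(src.read())
```

The shell equivalent is `split -b 15m sosreport.tar.xz` on one side and `cat x* > sosreport.tar.xz` on the other; comparing checksums afterwards confirms the reassembly.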
[ovirt-users] Re: Upgrade to release for oVirt 4.4.5 failing
Sorry if this appears twice. This was exactly the problem that sent me down my rabbit hole. Has anyone found a way to fix this?
[ovirt-users] Re: Deployment issues
Forgot to add: it's oVirt 4.4.5, and the issues seem to have started after I upgraded from 4.4.1. Should I try to roll back?

On 3/26/21 2:44 PM, Valerio Luccio wrote:

Hello all,

Last September I deployed oVirt on a CentOS 8 server, with storage on our gluster (replica 3). I then added some VMs, etc. A few days ago I managed to screw everything up and, after banging my head for a couple of days, decided to start from scratch. I made a copy of all the data under the storage to a safe space, then ran ovirt-hosted-engine-cleanup, deleted everything under the storage, and tried to create a new hosted engine (I tried both from the cockpit and from the command line).

Everything seems to work fine (I can ssh to the engine) until it tries to save the engine to storage, where it fails with the error:

FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain's metadata]\". HTTP response code is 400."}

I don't get any more details. I'm using exactly the same parameters I used before. I have no problems reaching the gluster storage, and the process does create the top-level directory and /dom_md/ids with the correct ownership. I looked at the glusterfs log files, including the rhev-data-center-mnt-glusterSD-.log file, but I don't spot any specific error.

What am I doing wrong? Is there something else I need to clean up before trying a new deployment? Should I just try to delete all of the oVirt configuration files? Which ones?

Thanks,

--
As a result of Coronavirus-related precautions, NYU and the Center for Brain Imaging operations will be managed remotely until further notice. All telephone calls and e-mail correspondence are being monitored remotely during our normal business hours of 9am-5pm, Monday through Friday.

For MRI scanner-related emergencies, please contact: Keith Sanzenbach at keith.sanzenb...@nyu.edu and/or Pablo Velasco at pablo.vela...@nyu.edu
For computer/hardware/software emergencies, please contact: Valerio Luccio at valerio.luc...@nyu.edu
For TMS/EEG-related emergencies, please contact: Chrysa Papadaniil at chr...@nyu.edu
For CBI-related administrative emergencies, please contact: Jennifer Mangan at jennifer.man...@nyu.edu

Valerio Luccio (212) 998-8736
Center for Brain Imaging
4 Washington Place, Room 158
New York University, New York, NY 10003

"In an open world, who needs windows or gates?"
[ovirt-users] Deployment issues
Hello all,

Last September I deployed oVirt on a CentOS 8 server, with storage on our gluster (replica 3). I then added some VMs, etc. A few days ago I managed to screw everything up and, after banging my head for a couple of days, decided to start from scratch. I made a copy of all the data under the storage to a safe space, then ran ovirt-hosted-engine-cleanup, deleted everything under the storage, and tried to create a new hosted engine (I tried both from the cockpit and from the command line).

Everything seems to work fine (I can ssh to the engine) until it tries to save the engine to storage, where it fails with the error:

FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain's metadata]\". HTTP response code is 400."}

I don't get any more details. I'm using exactly the same parameters I used before. I have no problems reaching the gluster storage, and the process does create the top-level directory and /dom_md/ids with the correct ownership. I looked at the glusterfs log files, including the rhev-data-center-mnt-glusterSD-.log file, but I don't spot any specific error.

What am I doing wrong? Is there something else I need to clean up before trying a new deployment? Should I just try to delete all of the oVirt configuration files? Which ones?

Thanks,
Valerio Luccio
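On the "correct ownership" point above: vdsm is strict about the storage domain path being owned by uid/gid 36:36 (vdsm:kvm), which is a common place for hosted-engine deploys on gluster to fall over. A small check along those lines (the 36:36 values are the stock oVirt defaults, an assumption here; the path to check would be the mounted domain directory):

```python
# Verify that a path is owned by the uid/gid that vdsm expects.
# 36:36 (vdsm:kvm) is the common oVirt default; adjust if your setup differs.
import os

VDSM_UID = 36
KVM_GID = 36

def owned_by_vdsm(path: str, uid: int = VDSM_UID, gid: int = KVM_GID) -> bool:
    """True if path's owner and group match the expected uid/gid."""
    st = os.stat(path)
    return st.st_uid == uid and st.st_gid == gid
```

On a gluster volume it is also worth confirming that `gluster volume get <volname> storage.owner-uid` and `storage.owner-gid` report 36, since those options override ownership at the brick level.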
[ovirt-users] Re: Is it possible to upgrade 3 node HCI from 4.3 to 4.4?
If you've got a backup of /etc/glusterfs and /var/lib/glusterd, you should be able to restart gluster on that node (so no syncing will be needed). Keep in mind that I haven't migrated yet, so I can't guarantee it will work ;)

Best Regards,
Strahil Nikolov

On Thu, Mar 25, 2021 at 20:39, Jayme wrote:
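The backup step this advice relies on can be as simple as tarring the gluster config and state directories before touching the node. A minimal sketch (the directory list reflects stock gluster installs and is an assumption; this would need to run as root on a real node):

```python
# Archive the gluster state/config directories into a timestamped tarball.
# The directory list is an assumption based on stock gluster installs.
import tarfile
import time

GLUSTER_DIRS = ["/var/lib/glusterd", "/etc/glusterfs"]

def backup_gluster(dirs=GLUSTER_DIRS, dest=None):
    """Tar up the given directories; return the archive filename."""
    dest = dest or time.strftime("gluster-backup-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(dest, "w:gz") as tar:
        for d in dirs:
            tar.add(d)
    return dest
```

Restoring is the reverse (extract over the same paths with gluster stopped, then restart glusterd), which matches the "no syncing needed" point above since the node keeps its peer UUID and volume definitions.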
[ovirt-users] Re: Restored engine backup: The provided authorization grant for the auth code has expired.
Please, any help with this?

On 24/3/21 at 10:05, Nicolás wrote:

Hi,

I'm restoring a full oVirt engine backup, taken with the --scope=all option, for oVirt 4.3. I restored the backup on a fresh CentOS 7 machine. The process went well, but when trying to log into the restored authentication system I get the following message, which won't allow me to log in:

The provided authorization grant for the auth code has expired.

What does that mean, and how can it be fixed?

Thanks,
Nicolás