[ovirt-users] oVirt 4.5.1 Hyperconverged Gluster install fails

2022-07-17 Thread david . lennox
Hello all, I am hoping someone can help me with an oVirt installation that has just gotten the better of me after weeks of trying. After setting up SSH keys and making sure each host is known to the primary host (sr-svr04), I go through Cockpit and "Configure Gluster storage and oVirt hosted
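A minimal sketch of that key-distribution step, run from the primary host; the peer hostnames below are placeholders, since only sr-svr04 is named here:

  # Run on sr-svr04; sr-svr05 and sr-svr06 are assumed peer names.
  for h in sr-svr05 sr-svr06; do
      ssh-copy-id root@"$h"                        # passwordless root login for the deployment
      ssh-keyscan -H "$h" >> ~/.ssh/known_hosts    # avoid host-key prompts during deployment
  done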

[ovirt-users] Grafana login

2022-07-17 Thread markeczzz
Hi! I have installed oVirt 4.5.1 and I am using Keycloak to log in. I have created users in Keycloak and with them I can log in to the oVirt Manager, but I can't log in to Grafana. Also, I can log in to Keycloak itself only with the user "admin" and not with the newly created users. I have tried to add @ovirt or

[ovirt-users] Gluster volume "deleted" by accident --- Is it possible to recover?

2022-07-17 Thread itforums51
Hi everyone, I have a 3-node oVirt 4.4.6 cluster in an HC setup. Today I intended to extend the data and vmstore volumes by adding another brick to each; then by accident I pressed the "cleanup" button. It looks like the volumes were deleted. I am wondering whether there is a process of
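Before any recovery attempt, it is worth confirming whether the brick data itself survived; a minimal check, with the brick path assumed rather than taken from this thread:

  mount | grep gluster_bricks           # are the brick filesystems still mounted?
  ls /gluster_bricks/vmstore/vmstore    # assumed brick path; is the data still there?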

[ovirt-users] Re: oVirt 4.5.1 Hyperconverged Gluster install fails

2022-07-17 Thread Strahil Nikolov via Users
Can you cat this file: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml? It seems that the VG creation is not idempotent. As a workaround, delete the VG 'gluster_vg_sdb' on all Gluster nodes: vgremove gluster_vg_sdb Best Regards, Strahil Nikolov  Hello all, I am
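A sketch of that workaround applied to every node before retrying the deployment; the node names are placeholders, and it assumes the VG holds nothing worth keeping:

  # Remove the half-created VG so a fresh deployment run can recreate it.
  for h in node1 node2 node3; do
      ssh root@"$h" vgremove -y gluster_vg_sdb
  done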

[ovirt-users] Engine storage Domain path change

2022-07-17 Thread michael
Hello, I installed oVirt on 3 servers (hv1, hv2, hv3) with a Self-Hosted Engine a couple of years ago. Gluster is used as storage for the VMs. The engine has its own storage volume. Then I added 3 more servers (hv4, hv5, hv6). Now I would like to replace the first 3 servers. I added 3 more

[ovirt-users] Re: Engine storage Domain path change

2022-07-17 Thread Strahil Nikolov via Users
You can check the existing settings like this:
[root@ovirt2 ~]# hosted-engine --get-shared-config mnt_options --type=he_shared
mnt_options : backup-volfile-servers=gluster2:ovirt3, type : he_shared
[root@ovirt2 ~]# hosted-engine --get-shared-config storage --type=he_local
storage :
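The matching write operation is --set-shared-config; a hedged sketch of repointing the mount options at a replacement server set (the gluster5/gluster6 names are invented, not from this thread):

  # Update the hosted-engine storage domain mount options, then verify.
  hosted-engine --set-shared-config mnt_options \
      backup-volfile-servers=gluster5:gluster6 --type=he_shared
  hosted-engine --get-shared-config mnt_options --type=he_shared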

[ovirt-users] Re: Gluster volume "deleted" by accident --- Is it possible to recover?

2022-07-17 Thread Strahil Nikolov via Users
Check if the cleanup has unmounted the volume bricks. If they are still mounted, you can use a backup of the system to retrieve the definitions of the Gluster volumes (/var/lib/glusterd). Once you copy the volume dir, stop glusterd (this is just the management layer) on all nodes and then start them
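A sketch of that recovery flow on one node, assuming the bricks are still mounted and the backup sits at a hypothetical /backup path; the vmstore volume name comes from the original question:

  systemctl stop glusterd                  # management layer only; brick data is untouched
  cp -a /backup/var/lib/glusterd/vols/vmstore /var/lib/glusterd/vols/
  systemctl start glusterd
  gluster volume info vmstore              # confirm the volume definition is back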

[ovirt-users] oVirt over gluster: Replacing a dead host

2022-07-17 Thread Gilboa Davara
Hello all, I'm attempting to replace a dead host in a replica 2 + arbiter Gluster setup with a new host. I've already set up the new host (same hostname..localdomain) and got it into the cluster.
$ gluster peer status
Number of Peers: 2
Hostname: office-wx-hv3-lab-gfs
Uuid:

[ovirt-users] Re: oVirt 4.5.1 Hyperconverged Gluster install fails

2022-07-17 Thread david . lennox
> Can you cat this file
> /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
>
> It seems that the VG creation is not idempotent. As a workaround, delete the
> VG 'gluster_vg_sdb' on all Gluster nodes:
> vgremove gluster_vg_sdb
> Best Regards, Strahil Nikolov
>

[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Patrick Hibbs
What you are missing is the fact that Gluster requires more than one set of bricks to recover from a dead host, i.e., in your setup you'd need 6 hosts: 4x replicas and 2x arbiters, with at least one set (2x replicas and 1x arbiter) operational as a bare minimum. Automated commands to fix the volume do
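One common manual repair once a replacement host has been peered in, sketched with placeholder volume and brick paths (none of these names come from the thread):

  gluster peer probe new-host
  gluster volume replace-brick myvol \
      dead-host:/gluster_bricks/myvol/brick \
      new-host:/gluster_bricks/myvol/brick \
      commit force
  gluster volume heal myvol full           # let self-heal repopulate the new brick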

[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Gilboa Davara
Hello, Many thanks for your email. I should add that this is a test environment we set up in preparation for a planned upgrade from CentOS 7 / oVirt 4.3 to CentOS 8 Streams / oVirt 4.5 in one of our old(er) oVirt clusters. In this case, we blew up the software RAID during the OS replacement