[ovirt-users] How to Fix: oVirt 4.4.10 Internal server error 500 after engine-setup (vdsm-jsonrpc-java)

2022-07-17 Thread Andrei Verovski
Hi,


Finally I managed to migrate 4.4.7 to a fresh installation of 4.4.10.
However, after a successful engine-setup I got a 500 - Internal Server Error.

I found this:
https://bugzilla.redhat.com/show_bug.cgi?id=1918022

Bug 1918022 - oVirt Manager is not loading after engine-setup 

The article suggests downgrading vdsm-jsonrpc-java to 1.5.x.

However, this is not possible:
dnf --showduplicates list vdsm-jsonrpc-java
dnf install vdsm-jsonrpc-java-1.5.7-1.el8

Last metadata expiration check: 0:39:52 ago on Fri 15 Jul 2022 02:32:36 PM EEST.
Error: 
Problem: problem with installed package 
ovirt-engine-backend-4.4.10.7-1.el8.noarch
 - package ovirt-engine-backend-4.4.10.7-1.el8.noarch requires 
vdsm-jsonrpc-java >= 1.6.0, but none of the providers can be installed
 - cannot install both vdsm-jsonrpc-java-1.5.7-1.el8.noarch and 
vdsm-jsonrpc-java-1.6.0-1.el8.noarch
 - cannot install both vdsm-jsonrpc-java-1.6.0-1.el8.noarch and 
vdsm-jsonrpc-java-1.5.7-1.el8.noarch


How to fix this?
Thanks in advance.


*   SERVER LOG *

2022-07-15 14:45:44,969+03 ERROR [org.jboss.as.controller.management-operation] 
(Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: 
([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed 
services" => 
{"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" 
=> "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component 
instance
   Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct 
component instance
   Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: 
WELD-49: Unable to invoke protected void 
org.ovirt.engine.core.bll.TagsDirector.init() on 
org.ovirt.engine.core.bll.TagsDirector@648487d3
   Caused by: org.jboss.weld.exceptions.WeldException: WELD-49: Unable to 
invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on 
org.ovirt.engine.core.bll.TagsDirector@648487d3
   Caused by: java.lang.reflect.InvocationTargetException
   Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: 
Unable to determine the correct call signature - no 
procedure/function/signature for 'gettagsbyparent_id'"}}
2022-07-15 14:45:44,981+03 INFO  [org.jboss.as.server] (ServerService Thread 
Pool -- 25) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-07-15 14:45:44,982+03 INFO  [org.jboss.as.server] (ServerService Thread 
Pool -- 25) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-07-15 14:45:44,982+03 INFO  [org.jboss.as.server] (ServerService Thread 
Pool -- 25) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-07-15 14:45:44,982+03 INFO  [org.jboss.as.server] (ServerService Thread 
Pool -- 25) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : 
"ovirt-web-ui.war")
2022-07-15 14:45:45,015+03 INFO  [org.jboss.as.controller] (Controller Boot 
Thread) WFLYCTL0183: Service status report
WFLYCTL0186:   Services which failed to start:  service 
jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: 
java.lang.IllegalStateException: WFLYEE0042: Failed to construct component 
instance
WFLYCTL0448: 2 additional services are down due to their dependencies being 
missing or failed
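
The failed service points at a missing database stored procedure ('gettagsbyparent_id') rather than at vdsm-jsonrpc-java itself. A quick check, assuming the default engine database name 'engine' (an assumption, adjust if yours differs):

# run on the engine machine; an empty result means the schema upgrade did not complete
su - postgres -c 'psql engine -c "\df gettagsbyparent_id"'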

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KT4STN5M6HXXW4WNRI2YUX7RGLZRV2T/


[ovirt-users] Re: Grafana login

2022-07-17 Thread markeczzz
OK, I finally found a way to log in, using the username "admin" and the password I 
created when installing the hosted engine.

I thought the Grafana login was connected to Keycloak or the oVirt internal SSO 
database, but as far as I can see, the Grafana account is created during the 
installation of the hosted engine, and subsequent password changes in Keycloak or 
the internal SSO database do not affect the Grafana login credentials; you have 
to use the ones you created during installation.
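
If that initial password is ever lost, Grafana's own CLI can usually reset it on the 
engine machine; the homepath/config paths below are assumptions for a stock packaged 
Grafana and may differ on the engine VM:

# hedged sketch: reset the local Grafana admin password
grafana-cli --homepath /usr/share/grafana --config /etc/grafana/grafana.ini admin reset-admin-password 'NewStrongPassword'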
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D6Q2JV65JM3Q3NVTRDDHU532V75652VL/


[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Gilboa Davara
Hello,

Many thanks for your email.
I should add that this is a test environment we set up in preparation for a
planned CentOS 7 / oVirt 4.3 to CentOS 8 Stream / oVirt 4.5 upgrade in one of
our old(er) oVirt clusters.
In this case, we blew up the software RAID during the OS replacement
(CentOS 7 -> 8), so we have a host, but no storage.
As an added bonus, the FS locations are a bit different (due to MD changes we
made during the blowup).

So, essentially the host is alive, but we need to create a new brick using
a known good brick.
A couple of questions:
Assuming I have a known good brick to copy, but the FS location is different,
and given that I cannot simply remove/add the brick, how do I change the
brick path?
Old location:
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick
New location:
office-wx-hv1-lab-gfs.localdomain:/gluster/brick/data/brick
Thanks again,
Gilboa
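
For reference, one possible shortcut (untested in this exact setup): gluster's
replace-brick can swap in a brick at a new path in a single step, though it
generally expects the volume to be started and the new brick directory to be empty.

gluster volume replace-brick GV2Data \
    office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick \
    office-wx-hv1-lab-gfs.localdomain:/gluster/brick/data/brick \
    commit force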

On Mon, Jul 18, 2022 at 1:32 AM Patrick Hibbs wrote:

> What you are missing is the fact that gluster requires more than one set
> of bricks to recover from a dead host. I.e. In your set up, you'd need 6
> hosts. 4x replicas and 2x arbiters with at least one set (2x replicas and
> 1x arbiter) operational bare minimum.
> Automated commands to fix the volume do not exist otherwise. (It's a
> Gluster limitation.) This can be fixed manually however.
>
> Standard Disclaimer: Back up your data first! Fixing this issue requires
> manual intervention. Reader assumes all responsibility for any action
> resulting from the instructions below. Etc.
>
> If it's just a dead brick, (i.e. the host is still functional), all you
> really need to do is replace the underlying storage:
>
> 1. Take the gluster volume offline.
> 2. Remove the bad storage device, and attach the replacement.
> 3. rsync / scp / etc. the data from a known good brick (be sure to include
> hidden files / preserve file times and ownership / SELinux labels / etc. ).
> 4. Restart the gluster volume.
>
> Gluster *might* still need to heal everything after all of that, but it
> should start the volume and get it running again.
>
> If the host itself is dead, (and the underlying storage is still
> functional), you can just move the underlying storage over to the new host:
>
> 1. Take the gluster volume offline.
> 2. Attach the old storage.
> 3. Fix up the ids on the volume file. (
> https://serverfault.com/questions/631365/rename-a-glusterfs-peer)
> 4. Restart the gluster volume.
>
> If both the host and underlying storage are dead, you'll need to do both
> tasks:
>
> 1. Take the gluster volume offline.
> 2. Attach the new storage.
> 3. rsync / scp / etc. the data from a known good brick (be sure to
> include hidden files / preserve file times and ownership / SELinux labels /
> etc. ).
> 4. Fix up the ids on the volume file.
> 5. Restart the gluster volume.
>
> Keep in mind one thing however: If the gluster host you are replacing is
> used by oVirt to connect to the volume (I.e. It's the host named in the
> volume config in the Admin portal). The new host will need to retain the
> old hostname / IP, or you'll need to update oVirt's config. Otherwise the
> VM hosts will wind up in Unassigned / Non-functional status.
>
> - Patrick Hibbs
>
> On Sun, 2022-07-17 at 22:15 +0300, Gilboa Davara wrote:
>
> Hello all,
>
> I'm attempting to replace a dead host in a replica 2 + arbiter gluster
> setup and replace it with a new host.
> I've already set up a new host (same hostname..localdomain) and got into
> the cluster.
>
> $ gluster peer status
> Number of Peers: 2
>
> Hostname: office-wx-hv3-lab-gfs
> Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
> State: Peer in Cluster (Connected)
>
> Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new host.
> Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
> State: Peer in Cluster (Connected)
>
> $ gluster volume info GV2Data
>  Volume Name: GV2Data
> Type: Replicate
> Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is the
> dead host.
> Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
> Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
> ...
>
> Looking at the docs, it seems that I need to remove the dead brick.
>
> $ gluster volume remove-brick GV2Data
> office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result in
> data corruption. It is safer to disable this option so that files that
> receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: Removing bricks from replicate
> configuration is not allowed without reducing replica count explicitly
>
> So I guess I need to drop from replica 2 + ar

[ovirt-users] Re: oVirt 4.5.1 Hyperconverge Gluster install fails

2022-07-17 Thread david . lennox
> Can you cat this file
> /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
>  
> It seems that the VG creation is not idempotent. As a workaround, delete the
> VG 'gluster_vg_sdb' on all Gluster nodes:
> vgremove gluster_vg_sdb
> Best Regards, Strahil Nikolov
> 
>  

Strahil,

Thank you for your response. I have added the file for you, but with my 
continued efforts over the past day I have finally managed to get oVirt to 
install, though not without issues. Over a period of 2 weeks I have kicked off 
the install process over 20 times. Occasionally it would return the error 
posted here, but most of the time the install process would hang on this step. 
I posted the error as that was the only log/error I could get; when it hung, no 
log files or any other footprint were created, it just sat there until I either 
rebooted the host or cancelled the install.

The problem I have found is that nodectl doesn't return while the installer is 
in progress, so I assume that when the installer tries to SSH to localhost it 
never gets to a shell because nodectl is waiting indefinitely for something. 
So I removed nodectl-motd.sh and nodectl-run-banner.sh from /etc/profile.d and 
now the Gluster install wizard works perfectly.
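
For anyone hitting the same hang, a sketch of that workaround (moving the scripts 
aside rather than deleting them; the backup directory name is just an example):

mkdir -p /root/profile.d-disabled
mv /etc/profile.d/nodectl-motd.sh /etc/profile.d/nodectl-run-banner.sh /root/profile.d-disabled/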

Next, the Cockpit wizard refused to identify any of my network devices, but the 
command-line installer was fine, so I now have a self-hosted engine running 
on one node via: hosted-engine --deploy

However, my next issue is that when I try to log in to the Administration 
Portal with the admin user, it gives me "Invalid username or password". I 
can log in to the Monitoring Portal just fine, but the Administration and VM 
Portals don't like the admin credentials. So now to work out why 
authentication isn't working.

- Dave.

- /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml -
---
# We have to set the dataalignment for physical volumes, and physicalextentsize
# for volume groups. For JBODs we use a constant alignment value of 256K
# however, for RAID we calculate it by multiplying the RAID stripe unit size
# with the number of data disks. Hence in case of RAID stripe_unit_size and data
# disks are mandatory parameters.

- name: Check if valid disktype is provided
  fail:
    msg: "Unknown disktype. Allowed disktypes: JBOD, RAID6, RAID10, RAID5."
  when: gluster_infra_disktype not in [ 'JBOD', 'RAID6', 'RAID10', 'RAID5' ]


# Set data alignment for JBODs, by default it is 256K. This set_fact is not
# needed if we can always assume 256K for JBOD, however we provide this extra
# variable to override it.
- name: Set PV data alignment for JBOD
  set_fact:
    pv_dataalign: "{{ gluster_infra_dalign | default('256K') }}"
  when: gluster_infra_disktype == 'JBOD'

# Set data alignment for RAID
# We need KiB: ensure to keep the trailing `K' in the pv_dataalign calculation.
- name: Set PV data alignment for RAID
  set_fact:
    pv_dataalign: >
      {{ gluster_infra_diskcount|int *
         gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

- name: Set VG physical extent size for RAID
  set_fact:
    vg_pesize: >
      {{ gluster_infra_diskcount|int *
         gluster_infra_stripe_unit_size|int }}K
  when: >
    gluster_infra_disktype == 'RAID6' or
    gluster_infra_disktype == 'RAID10' or
    gluster_infra_disktype == 'RAID5'

- include_tasks: get_vg_groupings.yml
  vars:
    volume_groups: "{{ gluster_infra_volume_groups }}"
  when: gluster_infra_volume_groups is defined and gluster_infra_volume_groups is not none and gluster_infra_volume_groups|length > 0

- name: Record for missing devices for phase 2
  set_fact:
    gluster_phase2_has_missing_devices: true
  loop: "{{ vg_device_exists.results }}"
  when: item.stdout_lines is defined and "0" in item.stdout_lines

- name: Print the gateway for each host when defined
  ansible.builtin.debug:
    msg: vg names {{ gluster_volumes_by_groupname }}

# Tasks to create a volume group
# The devices in `pvs' can be a regular device or a VDO device
# Please take note; only the first item per volume group will define the actual configuration!
# TODO: fix pesize // {{ ((item.value | first).vg_pesize || vg_pesize) | default(4) }}
- name: Create volume groups
  register: gluster_changed_vgs
  command: vgcreate --dataalignment {{ item.value.pv_dataalign | default(pv_dataalign) }} -s {{ vg_pesize | default(4) }} {{ (item.value | first).vgname }} {{ item.value | ovirt.ovirt.json_query('[].pvname') | unique | join(',') }}
  # lvg:
  #   state: present
  #   vg: "{{ (item.value | first).vgname }}"
  #   pvs: "{{ item.value | json_query('[].pvname') | unique | join(',') }}"
  #   pv_options: "--dataalignment {{ item.value.pv_dataalign | default(pv_dataalign) }}"
  #   # pesize is 4m by default for JBODs
  #   pesize: "{{ vg_pesize | default(4) }}"
  loop: "{{gluster_volumes_by_

[ovirt-users] Re: oVirt over gluster: Replacing a dead host

2022-07-17 Thread Patrick Hibbs
What you are missing is the fact that gluster requires more than one
set of bricks to recover from a dead host. I.e. In your set up, you'd
need 6 hosts. 4x replicas and 2x arbiters with at least one set (2x
replicas and 1x arbiter) operational bare minimum.
Automated commands to fix the volume do not exist otherwise. (It's a
Gluster limitation.) This can be fixed manually however.

Standard Disclaimer: Back up your data first! Fixing this issue
requires manual intervention. Reader assumes all responsibility for any
action resulting from the instructions below. Etc.

If it's just a dead brick, (i.e. the host is still functional), all you
really need to do is replace the underlying storage:

1. Take the gluster volume offline.
2. Remove the bad storage device, and attach the replacement.
3. rsync / scp / etc. the data from a known good brick (be sure to
include hidden files / preserve file times and ownership / SELinux
labels / etc. ). 
4. Restart the gluster volume.

Gluster *might* still need to heal everything after all of that, but it
should start the volume and get it running again.
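
A hedged example of the rsync in step 3 above (host and brick paths are placeholders; 
run as root so ownership, times, Gluster's trusted.* xattrs and SELinux labels come along):

rsync -aAXSH --numeric-ids root@goodhost:/gluster/bricks/data/brick/ /gluster/bricks/data/brick/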

If the host itself is dead, (and the underlying storage is still
functional), you can just move the underlying storage over to the new
host:

1. Take the gluster volume offline.
2. Attach the old storage.
3. Fix up the ids on the volume file.
(https://serverfault.com/questions/631365/rename-a-glusterfs-peer)
4. Restart the gluster volume.
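
For step 3, on a stock install the node UUID and peer list live under 
/var/lib/glusterd (hedged pointers only; see the serverfault link for the full procedure):

grep UUID /var/lib/glusterd/glusterd.info    # UUID of this node
ls /var/lib/glusterd/peers/                  # one file per peer, named by its UUID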

If both the host and underlying storage are dead, you'll need to do
both tasks:

1. Take the gluster volume offline.
2. Attach the new storage.
3. rsync / scp / etc. the data from a known good brick (be sure to
include hidden files / preserve file times and ownership / SELinux
labels / etc. ).
4. Fix up the ids on the volume file.
5. Restart the gluster volume.

Keep in mind one thing however: if the gluster host you are replacing
is used by oVirt to connect to the volume (i.e. it's the host named in
the volume config in the Admin portal), the new host will need to
retain the old hostname / IP, or you'll need to update oVirt's config.
Otherwise the VM hosts will wind up in Unassigned / Non-functional
status.

- Patrick Hibbs

On Sun, 2022-07-17 at 22:15 +0300, Gilboa Davara wrote:
> Hello all,
> 
> I'm attempting to replace a dead host in a replica 2 + arbiter
> gluster setup and replace it with a new host.
> I've already set up a new host (same hostname..localdomain) and got
> into the cluster.
> 
> $ gluster peer status
> Number of Peers: 2
> 
> Hostname: office-wx-hv3-lab-gfs
> Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
> State: Peer in Cluster (Connected)
> 
> Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new
> host.
> Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
> State: Peer in Cluster (Connected)
> 
> $ gluster volume info GV2Data
>  Volume Name: GV2Data
> Type: Replicate
> Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is
> the dead host.
> Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
> Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
> ...
> 
> Looking at the docs, it seems that I need to remove the dead brick.
> 
> $ gluster volume remove-brick GV2Data office-wx-hv1-lab-
> gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result
> in data corruption. It is safer to disable this option so that files
> that receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: Removing bricks from replicate
> configuration is not allowed without reducing replica count
> explicitly
> 
> So I guess I need to drop from replica 2 + arbiter to replica 1 +
> arbiter (?).
> 
> $ gluster volume remove-brick GV2Data replica 1 office-wx-hv1-lab-
> gfs:/mnt/LogGFSData/brick start
> Running remove-brick with cluster.force-migration enabled can result
> in data corruption. It is safer to disable this option so that files
> that receive writes during migration are not migrated.
> Files that are not migrated can then be manually copied after the
> remove-brick commit operation.
> Do you want to continue with your current cluster.force-migration
> settings? (y/n) y
> volume remove-brick start: failed: need 2(xN) bricks for reducing
> replica count of the volume from 3 to 1
> 
> ... What am I missing?
> 
> - Gilboa
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org

[ovirt-users] oVirt over gluster: Replacing a dead host

2022-07-17 Thread Gilboa Davara
Hello all,

I'm attempting to replace a dead host in a replica 2 + arbiter gluster
setup and replace it with a new host.
I've already set up a new host (same hostname..localdomain) and got into
the cluster.

$ gluster peer status
Number of Peers: 2

Hostname: office-wx-hv3-lab-gfs
Uuid: 4e13f796-b818-4e07-8523-d84eb0faa4f9
State: Peer in Cluster (Connected)

Hostname: office-wx-hv1-lab-gfs.localdomain <-- This is a new host.
Uuid: eee17c74-0d93-4f92-b81d-87f6b9c2204d
State: Peer in Cluster (Connected)

$ gluster volume info GV2Data
 Volume Name: GV2Data
Type: Replicate
Volume ID: c1946fc2-ed94-4b9f-9da3-f0f1ee90f303
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick  <-- This is the
dead host.
Brick2: office-wx-hv2-lab-gfs:/mnt/LogGFSData/brick
Brick3: office-wx-hv3-lab-gfs:/mnt/LogGFSData/brick (arbiter)
...

Looking at the docs, it seems that I need to remove the dead brick.

$ gluster volume remove-brick GV2Data
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the
remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings?
(y/n) y
volume remove-brick start: failed: Removing bricks from replicate
configuration is not allowed without reducing replica count explicitly

So I guess I need to drop from replica 2 + arbiter to replica 1 + arbiter
(?).

$ gluster volume remove-brick GV2Data replica 1
office-wx-hv1-lab-gfs:/mnt/LogGFSData/brick start
Running remove-brick with cluster.force-migration enabled can result in
data corruption. It is safer to disable this option so that files that
receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the
remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings?
(y/n) y
volume remove-brick start: failed: need 2(xN) bricks for reducing replica
count of the volume from 3 to 1

... What am I missing?

- Gilboa
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OIXTFTJREUAHGP3WUW7DFL3VJNEMFJLF/


[ovirt-users] Re: Gluster volume "deleted" by accident --- Is it possible to recover?

2022-07-17 Thread Strahil Nikolov via Users
Check if the cleanup has unmounted the volume bricks. If they are still mounted, 
you can use a backup of the system to retrieve the definition of the gluster 
volumes (/var/lib/glusterd). Once you copy the volume dir, stop glusterd (this 
is just the management layer) on all nodes and then start them one by one. Keep 
in mind that the nodes sync the configuration between each other, so a rolling 
restart (one node at a time, without stopping glusterd everywhere first) is 
useless.
You can also try to create the volume (keep the name) on the same bricks via 
the force flag and just hope it will still have the data in the bricks (never 
done that).
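
A rough sketch of that restore approach (the backup path and volume name are 
placeholders, not a tested procedure):

systemctl stop glusterd        # on every node first
cp -a /backup/var/lib/glusterd/vols/vmstore /var/lib/glusterd/vols/
systemctl start glusterd       # then bring the nodes back up one by one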
Best Regards, Strahil Nikolov
 
 
hi everyone,

I have a 3x node ovirt 4.4.6 cluster in HC setup.

Today I was intending to extend the data and vmstore volumes by adding another 
brick each; then by accident I pressed the "cleanup" button. Basically it looks 
like the volumes were deleted.

I am wondering whether there is a process of trying to recover these volumes 
and therefore all VMs (including the Hosted-Engine).

```
lvs
  LV                               VG              Attr       LSize   Pool                             Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data                  gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_data-brick1           gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.45
  gluster_lv_engine                gluster_vg_sda4 -wi-a----- 100.00g
  gluster_lv_vmstore               gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_vmstore-brick1        gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.33
  gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot---  <7.07t                                         11.46  0.89
```  
I would appreciate any advice. 

TIA
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ITF5IYLWGG2MPAPG2JBD2GWA5QZDPVSA/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OJOPZARTB7F3BCZ3NRSQMGHEJRJXDTC/


[ovirt-users] Re: Engine storage Domain path change

2022-07-17 Thread Strahil Nikolov via Users
You can check existing settings like this:
[root@ovirt2 ~]# hosted-engine --get-shared-config mnt_options --type=he_shared
mnt_options : backup-volfile-servers=gluster2:ovirt3, type : he_shared
[root@ovirt2 ~]# hosted-engine --get-shared-config storage --type=he_local   
storage : ovirt2:/engine44, type : he_local
[root@ovirt2 ~]# hosted-engine --get-shared-config storage --type=he_shared
storage : ovirt2:/engine44, type : he_shared
You have to use the --set-shared-config to point to the new Gluster host and 
new backup-volfile-servers in both he_local & he_shared.
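
For example (hostnames are placeholders based on the setup quoted below; adjust the 
keys and --type values to match what --get-shared-config reports):

hosted-engine --set-shared-config storage hv7:/engine --type=he_local
hosted-engine --set-shared-config storage hv7:/engine --type=he_shared
hosted-engine --set-shared-config mnt_options backup-volfile-servers=hv8:hv9 --type=he_shared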
Best Regards, Strahil Nikolov
 
Hello

I installed oVirt on 3 servers(hv1,hv2,hv3) with Self Hosted Engine couple 
years ago. Gluster is used as storage for VMs. The engine has its own storage 
volume.

Then I added 3 more servers(hv4,hv5,hv6).

Now I would like to replace the first 3 servers. I added 3 more 
servers(hv7,hv8,hv9). I created new gluster volumes on hv7-hv9, moved disks 
from old volumes.

Now the question is how to migrate the engine storage? I decided to do it at the 
Gluster level by replacing each of the bricks from the old servers (hv1-hv3) with 
new ones (hv7-hv9). I successfully replaced the bricks from hv1 and hv2 with hv7 
and hv8. In oVirt, the engine storage Domain is created with the path hv3:/engine. 
I am afraid to replace the hv3 brick with hv9, so as not to break the HostedEngine 
VM. To change the storage Domain path I need to move it to Maintenance, but that 
will unmount it from all the hosts and the HostedEngine will stop working. Is there 
any other way to change a storage Domain path?

I already changed storage value in /etc/ovirt-hosted-engine/hosted-engine.conf. 
I did something like this:
# on hv7
hosted-engine --vm-shutdown

# on all hosts
systemctl stop ovirt-ha-agent
systemctl stop ovirt-ha-broker
hosted-engine --disconnect-storage
sed -i 's/hv3/hv7/g' /etc/ovirt-hosted-engine/hosted-engine.conf
hosted-engine --connect-storage
systemctl restart ovirt-ha-broker
systemctl status ovirt-ha-broker
systemctl restart ovirt-ha-agent

# on hv7
hosted-engine --vm-start

and now the Hosted Engine is up and running and Gluster engine volume is 
mounted like hv7:/engine

So the main question remains - how to change the engine storage Domain path?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HYCWQVKNTPRJVIVTGGPW7WJJJRM5CGN/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/57TRC6RL5YBWEFR3VJD3HJLU4IKHWFZ6/


[ovirt-users] Re: oVirt 4.5.1 Hyperconverge Gluster install fails

2022-07-17 Thread Strahil Nikolov via Users
Can you cat this file 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
 
It seems that the VG creation is not idempotent. As a workaround, delete the VG 
'gluster_vg_sdb' on all Gluster nodes:
vgremove gluster_vg_sdb
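
A hedged cleanup sketch, run on each Gluster node before re-running the deployment:

vgs gluster_vg_sdb          # confirm the leftover VG is there
vgremove -y gluster_vg_sdb
pvremove /dev/sdb           # optional: also clear the PV label so the next run starts clean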
Best Regards, Strahil Nikolov

 
Hello all,

I am hoping someone can help me with an oVirt installation that has just gotten 
the better of me after weeks of trying.

After setting up SSH keys and making sure each host is known to the primary 
host (sr-svr04), I go through Cockpit and "Configure Gluster storage and oVirt 
hosted engine", and enter all of the details with 
.san.lennoxconsulting.com.au for the storage network FQDN and 
.core.lennoxconsulting.com.au for the public interfaces. Connectivity on 
each of the VLANs tests out as basically working (everything is pingable and 
ssh connections work) and the hosts are generally usable on the network. 
But the install ultimately dies with the following ansible error:

--- gluster-deployment.log -
:
:

TASK [gluster.infra/roles/backend_setup : Create volume groups] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:63
failed: [sr-svr04.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.058528", "end": "2022-07-16 16:18:37.018563", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:36.960035", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr05.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.057186", "end": "2022-07-16 16:18:37.784063", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:37.726877", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr06.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.062212", "end": "2022-07-16 16:18:37.250371", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:37.188159", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
sr-svr04.san.lennoxconsulting.com.au : ok=32  changed=13  unreachable=0    
failed=1    skipped=27  rescued=0    ignored=1  
sr-svr05.san.lennoxconsulting.com.au : ok=31  changed=12  unreachable=0    
failed=1    skipped=27  rescued=0    ignored=1  
sr-svr06.san.lennoxconsulting.com.au : ok=31  changed=12  unreachable=0    
failed=1    skipped=27  rescued=0    ignored=1  

--- gluster-deployment.log -


A "gluster v status" gives me no volumes present and that is where I am stuck! 
Any ideas of what I start trying next?

I have tried this with oVirt Node 4.5.1 el8 and el9 images as well as 4.5 el8 
images so it has got to be somewhere in my infrastructure configuration but I 
am out of ideas.

My hardware configuration is 3 x HP DL360s with oVirt Node 4.5.1 el8 installed 
on a 2x146 GB RAID1 array and a 6x900 GB RAID5 Gluster array.

Network configuration is:

root@sr-svr04 ~]# ip addr show up
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
      valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc mq master bond0 
state UP group default qlen 1000
    link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff
3: eno2:  mtu 1500 qdisc mq master bond0 
state UP group default qlen 1000
    link

[ovirt-users] Gluster volume "deleted" by accident --- Is it possible to recover?

2022-07-17 Thread itforums51
hi everyone,

I have a 3x node ovirt 4.4.6 cluster in HC setup.

Today I was intending to extend the data and vmstore volumes by adding another 
brick each; then by accident I pressed the "cleanup" button. Basically it looks 
like the volumes were deleted.

I am wondering whether there is a process of trying to recover these volumes 
and therefore all VMs (including the Hosted-Engine).

```
lvs
  LV                               VG              Attr       LSize   Pool                             Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data                  gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_data-brick1           gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.45
  gluster_lv_engine                gluster_vg_sda4 -wi-a----- 100.00g
  gluster_lv_vmstore               gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_vmstore-brick1        gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.33
  gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot---  <7.07t                                         11.46  0.89
```  
I would appreciate any advice. 

TIA
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ITF5IYLWGG2MPAPG2JBD2GWA5QZDPVSA/


[ovirt-users] Engine storage Domain path change

2022-07-17 Thread michael
Hello

I installed oVirt on 3 servers(hv1,hv2,hv3) with Self Hosted Engine couple 
years ago. Gluster is used as storage for VMs. The engine has its own storage 
volume.

Then I added 3 more servers(hv4,hv5,hv6).

Now I would like to replace the first 3 servers. I added 3 more 
servers(hv7,hv8,hv9). I created new gluster volumes on hv7-hv9, moved disks 
from old volumes.

Now the question is how to migrate the engine storage? I decided to do it at the 
Gluster level by replacing each of the bricks from the old servers (hv1-hv3) with 
new ones (hv7-hv9). I successfully replaced the bricks from hv1 and hv2 with hv7 
and hv8. In oVirt, the engine storage Domain is created with the path hv3:/engine. 
I am afraid to replace the hv3 brick with hv9, so as not to break the HostedEngine 
VM. To change the storage Domain path I need to move it to Maintenance, but that 
will unmount it from all the hosts and the HostedEngine will stop working. Is there 
any other way to change a storage Domain path?

I already changed storage value in /etc/ovirt-hosted-engine/hosted-engine.conf. 
I did something like this:
# on hv7
hosted-engine --vm-shutdown

# on all hosts
systemctl stop ovirt-ha-agent
systemctl stop ovirt-ha-broker
hosted-engine --disconnect-storage
sed -i 's/hv3/hv7/g' /etc/ovirt-hosted-engine/hosted-engine.conf
hosted-engine --connect-storage
systemctl restart ovirt-ha-broker
systemctl status ovirt-ha-broker
systemctl restart ovirt-ha-agent

# on hv7
hosted-engine --vm-start

and now the Hosted Engine is up and running and Gluster engine volume is 
mounted like hv7:/engine

So the main question remains - how to change the engine storage Domain path?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HYCWQVKNTPRJVIVTGGPW7WJJJRM5CGN/


[ovirt-users] Grafana login

2022-07-17 Thread markeczzz
Hi!

I have installed oVirt 4.5.1 and I am using Keycloak to log in.
I have created users in Keycloak and with them I can log in to the oVirt Manager, 
but I can't log in to Grafana.

Also, I can log in to Keycloak itself only with the user "admin" and not with 
newly created users.
I have tried adding @ovirt or @ovirt@internal, but every time I get "Invalid 
username or password".

I haven't really changed the default options in Keycloak and can't find a good 
manual for oVirt + Keycloak.

Can anyone tell me how to create a user in Keycloak that can log in to Grafana?

Regards,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABQL4ALXDBNWYFIC3SZCOXLJTVHOAN4S/


[ovirt-users] oVirt 4.5.1 Hyperconverge Gluster install fails

2022-07-17 Thread david . lennox
Hello all,

I am hoping someone can help me with an oVirt installation that has just gotten 
the better of me after weeks of trying.

After setting up SSH keys and making sure each host is known to the primary 
host (sr-svr04), I go through Cockpit and "Configure Gluster storage and oVirt 
hosted engine", and enter all of the details with 
.san.lennoxconsulting.com.au for the storage network FQDN and 
.core.lennoxconsulting.com.au for the public interfaces. Connectivity on 
each of the VLANs tests out as basically working (everything is pingable and 
ssh connections work) and the hosts are generally usable on the network. 

But the install ultimately dies with the following ansible error:

--- gluster-deployment.log -
:
:

TASK [gluster.infra/roles/backend_setup : Create volume groups] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:63
failed: [sr-svr04.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.058528", "end": "2022-07-16 16:18:37.018563", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:36.960035", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr05.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.057186", "end": "2022-07-16 16:18:37.784063", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:37.726877", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr06.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 
'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", 
"--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], 
"delta": "0:00:00.062212", "end": "2022-07-16 16:18:37.250371", "item": {"key": 
"gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": 
"gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": 
"2022-07-16 16:18:37.188159", "stderr": "  A volume group called gluster_vg_sdb 
already exists.", "stderr_lines": ["  A volume group called gluster_vg_sdb 
already exists."], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
sr-svr04.san.lennoxconsulting.com.au : ok=32   changed=13   unreachable=0
failed=1skipped=27   rescued=0ignored=1   
sr-svr05.san.lennoxconsulting.com.au : ok=31   changed=12   unreachable=0
failed=1skipped=27   rescued=0ignored=1   
sr-svr06.san.lennoxconsulting.com.au : ok=31   changed=12   unreachable=0
failed=1skipped=27   rescued=0ignored=1   

--- gluster-deployment.log -


A "gluster v status" gives me no volumes present and that is where I am stuck! 
Any ideas of what I start trying next?

I have tried this with oVirt Node 4.5.1 el8 and el9 images as well as 4.5 el8 
images so it has got to be somewhere in my infrastructure configuration but I 
am out of ideas.

My hardware configuration is 3 x HP DL360s with oVirt Node 4.5.1 el8 installed 
on a 2x146 GB RAID1 array and a 6x900 GB RAID5 Gluster array.

Network configuration is:

root@sr-svr04 ~]# ip addr show up
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eno1:  mtu 1500 qdisc mq master bond0 
state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff
3: eno2:  mtu 1500 qdisc mq master bond0 
state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff permaddr 
1c:98:ec:29:41:69
4: eno3:  mtu 1500 qdisc mq state UP group 
default qlen 1000
link/ether 1c:98:ec:29:41:6a brd ff:ff:ff:ff:ff:ff
5: eno4:  mtu 1500 qdisc mq state DOWN group 
default qlen 1000
link/