[ovirt-users] After restored hosted engine as a new vms, all the existing & live hosts and storage domain are showing offline

2022-01-10 Thread dhanaraj.ramesh--- via Users
Hi Team

For some reason one of our self-hosted engines went down, so we 
provisioned a new CentOS VM and restored the hosted engine backup file. After 
restoring we can see all the data center, cluster, storage, host and VM 
entities in the restored oVirt engine, but they are all offline. 

Please help us reconnect all the running hosts, storage domains and VMs 
back to the restored hosted engine. Do we need downtime for all the VMs 
to restore the hosts? How do I activate all the existing storage domains as-is? 



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JU57GHJOVFOLLWRDZPLC4ITVWA4NMX42/



[ovirt-users] Re: Error while removing snapshot: Unable to get volume info

2022-01-10 Thread Nir Soffer
On Mon, Jan 10, 2022 at 5:22 PM Francesco Lorenzini via Users <
users@ovirt.org> wrote:

> My problem should be the same as the one filed here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1948599
>
> So, if I'm correct, I must edit DB entries to fix the situation. Although
> I don't like operating directly on the DB, I'll try that and let you know if
> I resolve it.
>

It looks like the volume on the vdsm side was already removed, so when the
engine tries to merge, the merge fails.

This is an engine bug - it should handle this case and remove the illegal
snapshot in the db. But since it does not, you have to do this manually.

Please file an engine bug for this issue.


>
> In the meanwhile, if anyone has any tips or suggestions that don't
> involve editing the DB, I'd much appreciate it.
>

I don't think there is another way.

Nir


>
> Regards,
> Francesco
>
> On 10/01/2022 10:33, francesco--- via Users wrote:
>
> Hi all,
>
> I'm trying to remove a snapshot from a HA VM in a setup with glusterfs (2 
> nodes C8 stream oVirt 4.4 + 1 arbiter C8). The error that appears in the vdsm 
> log of the host is:
>
> 2022-01-10 09:33:03,003+0100 ERROR (jsonrpc/4) [api] FINISH merge error=Merge 
> failed: {'top': '441354e7-c234-4079-b494-53fa99cdce6f', 'base': 
> 'fdf38f20-3416-4d75-a159-2a341b1ed637', 'job': 
> '50206e3a-8018-4ea8-b191-e4bc859ae0c7', 'reason': 'Unable to get volume info 
> for domain 574a3cd1-5617-4742-8de9-4732be4f27e0 volume 
> 441354e7-c234-4079-b494-53fa99cdce6f'} (api:131)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 285, 
> in merge
> drive.domainID, drive.poolID, drive.imageID, job.top)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5988, in 
> getVolumeInfo
> (domainID, volumeID))
> vdsm.virt.errors.StorageUnavailableError: Unable to get volume info for 
> domain 574a3cd1-5617-4742-8de9-4732be4f27e0 volume 
> 441354e7-c234-4079-b494-53fa99cdce6f
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 776, in merge
> drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5833, in merge
> driveSpec, baseVolUUID, topVolUUID, bandwidth, jobUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 288, 
> in merge
> str(e), top=top, base=job.base, job=job_id)
>
> The volume list in the host differs from the engine one:
>
> HOST:
>
> vdsm-tool dump-volume-chains 574a3cd1-5617-4742-8de9-4732be4f27e0 | grep -A10 
> 0b995271-e7f3-41b3-aff7-b5ad7942c10d
>image:0b995271-e7f3-41b3-aff7-b5ad7942c10d
>
>  - fdf38f20-3416-4d75-a159-2a341b1ed637
>status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
> type: SPARSE, capacity: 53687091200, truesize: 44255387648
>
>  - 10df3adb-38f4-41d1-be84-b8b5b86e92cc
>status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: 
> SPARSE, capacity: 53687091200, truesize: 7335407616
>
> ls -1 0b995271-e7f3-41b3-aff7-b5ad7942c10d
> 10df3adb-38f4-41d1-be84-b8b5b86e92cc
> 10df3adb-38f4-41d1-be84-b8b5b86e92cc.lease
> 10df3adb-38f4-41d1-be84-b8b5b86e92cc.meta
> fdf38f20-3416-4d75-a159-2a341b1ed637
> fdf38f20-3416-4d75-a159-2a341b1ed637.lease
> fdf38f20-3416-4d75-a159-2a341b1ed637.meta
>
>
> ENGINE:
>
> engine=# select * from images where 
> image_group_id='0b995271-e7f3-41b3-aff7-b5ad7942c10d';
> -[ RECORD 1 ]-+-
> image_guid| 10df3adb-38f4-41d1-be84-b8b5b86e92cc
> creation_date | 2022-01-07 11:23:43+01
> size  | 53687091200
> it_guid   | ----
> parentid  | 441354e7-c234-4079-b494-53fa99cdce6f
> imagestatus   | 1
> lastmodified  | 2022-01-07 11:23:39.951+01
> vm_snapshot_id| bd2291a4-8018-4874-a400-8d044a95347d
> volume_type   | 2
> volume_format | 4
> image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
> _create_date  | 2022-01-07 11:23:41.448463+01
> _update_date  | 2022-01-07 11:24:10.414777+01
> active| t
> volume_classification | 0
> qcow_compat   | 2
> -[ RECORD 2 ]-+-
> image_guid| 441354e7-c234-4079-b494-53fa99cdce6f
> creation_date | 2021-12-15 07:16:31.647+01
> size  | 53687091200
> it_guid   | ----
> parentid  | fdf38f20-3416-4d75-a159-2a341b1ed637
> imagestatus   | 1
> lastmodified  | 2022-01-07 11:23:41.448+01
> vm_snapshot_id| 2d610958-59e3-4685-b209-139b4266012f
> volume_type   | 2
> 
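The diagnosis above can be made mechanical: take the volume UUIDs that vdsm reports for the image on the host, take the image_guid values the engine holds for the same image group, and diff the two sets; whatever is engine-only is the stale row that has to be cleaned up manually (after taking a DB backup). A minimal Python sketch, using the UUIDs from this thread as sample data (the function name is mine, not part of any oVirt tool):

```python
# Sketch: find volumes the engine still tracks but the host no longer has.
# The UUID lists are sample data from this thread; in practice they would come
# from `vdsm-tool dump-volume-chains <domain>` on the host and from the
# engine's `images` table.

def stale_engine_volumes(host_volumes, engine_volumes):
    """Return volume UUIDs present in the engine DB but missing on the host."""
    return sorted(set(engine_volumes) - set(host_volumes))

# From `vdsm-tool dump-volume-chains` on the host:
host = [
    "fdf38f20-3416-4d75-a159-2a341b1ed637",
    "10df3adb-38f4-41d1-be84-b8b5b86e92cc",
]

# image_guid values from the engine's images table for the same image group:
engine = [
    "10df3adb-38f4-41d1-be84-b8b5b86e92cc",
    "441354e7-c234-4079-b494-53fa99cdce6f",
    "fdf38f20-3416-4d75-a159-2a341b1ed637",
]

print(stale_engine_volumes(host, engine))
# ['441354e7-c234-4079-b494-53fa99cdce6f']
```

Here the single leftover UUID is exactly the 'top' volume of the failed merge, matching the scenario described in bug 1948599.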

[ovirt-users] Re: Error while removing snapshot: Unable to get volume info

2022-01-10 Thread Francesco Lorenzini via Users
My problem should be the same as the one filed here: 
https://bugzilla.redhat.com/show_bug.cgi?id=1948599


So, if I'm correct, I must edit DB entries to fix the situation. 
Although I don't like operating directly on the DB, I'll try that and let 
you know if I resolve it.


In the meanwhile, if anyone has any tips or suggestions that don't 
involve editing the DB, I'd much appreciate it.


Regards,
Francesco

On 10/01/2022 10:33, francesco--- via Users wrote:

Hi all,

I'm trying to remove a snapshot from a HA VM in a setup with glusterfs (2 nodes 
C8 stream oVirt 4.4 + 1 arbiter C8). The error that appears in the vdsm log of 
the host is:

2022-01-10 09:33:03,003+0100 ERROR (jsonrpc/4) [api] FINISH merge error=Merge 
failed: {'top': '441354e7-c234-4079-b494-53fa99cdce6f', 'base': 
'fdf38f20-3416-4d75-a159-2a341b1ed637', 'job': 
'50206e3a-8018-4ea8-b191-e4bc859ae0c7', 'reason': 'Unable to get volume info 
for domain 574a3cd1-5617-4742-8de9-4732be4f27e0 volume 
441354e7-c234-4079-b494-53fa99cdce6f'} (api:131)
Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 285, in 
merge
 drive.domainID, drive.poolID, drive.imageID, job.top)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5988, in 
getVolumeInfo
 (domainID, volumeID))
vdsm.virt.errors.StorageUnavailableError: Unable to get volume info for domain 
574a3cd1-5617-4742-8de9-4732be4f27e0 volume 441354e7-c234-4079-b494-53fa99cdce6f

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
method
 ret = func(*args, **kwargs)
   File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 776, in merge
 drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5833, in merge
 driveSpec, baseVolUUID, topVolUUID, bandwidth, jobUUID)
   File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 288, in 
merge
 str(e), top=top, base=job.base, job=job_id)

The volume list in the host differs from the engine one:

HOST:

vdsm-tool dump-volume-chains 574a3cd1-5617-4742-8de9-4732be4f27e0 | grep -A10 
0b995271-e7f3-41b3-aff7-b5ad7942c10d
image:0b995271-e7f3-41b3-aff7-b5ad7942c10d

  - fdf38f20-3416-4d75-a159-2a341b1ed637
status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
type: SPARSE, capacity: 53687091200, truesize: 44255387648

  - 10df3adb-38f4-41d1-be84-b8b5b86e92cc
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: 
SPARSE, capacity: 53687091200, truesize: 7335407616

ls -1 0b995271-e7f3-41b3-aff7-b5ad7942c10d
10df3adb-38f4-41d1-be84-b8b5b86e92cc
10df3adb-38f4-41d1-be84-b8b5b86e92cc.lease
10df3adb-38f4-41d1-be84-b8b5b86e92cc.meta
fdf38f20-3416-4d75-a159-2a341b1ed637
fdf38f20-3416-4d75-a159-2a341b1ed637.lease
fdf38f20-3416-4d75-a159-2a341b1ed637.meta


ENGINE:

engine=# select * from images where 
image_group_id='0b995271-e7f3-41b3-aff7-b5ad7942c10d';
-[ RECORD 1 ]-+-
image_guid| 10df3adb-38f4-41d1-be84-b8b5b86e92cc
creation_date | 2022-01-07 11:23:43+01
size  | 53687091200
it_guid   | ----
parentid  | 441354e7-c234-4079-b494-53fa99cdce6f
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:39.951+01
vm_snapshot_id| bd2291a4-8018-4874-a400-8d044a95347d
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2022-01-07 11:23:41.448463+01
_update_date  | 2022-01-07 11:24:10.414777+01
active| t
volume_classification | 0
qcow_compat   | 2
-[ RECORD 2 ]-+-
image_guid| 441354e7-c234-4079-b494-53fa99cdce6f
creation_date | 2021-12-15 07:16:31.647+01
size  | 53687091200
it_guid   | ----
parentid  | fdf38f20-3416-4d75-a159-2a341b1ed637
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:41.448+01
vm_snapshot_id| 2d610958-59e3-4685-b209-139b4266012f
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2021-12-15 07:16:32.37005+01
_update_date  | 2022-01-07 11:23:41.448463+01
active| f
volume_classification | 1
qcow_compat   | 0
-[ RECORD 3 ]-+-
image_guid| fdf38f20-3416-4d75-a159-2a341b1ed637
creation_date | 2020-08-12 17:16:07+02
size  | 53687091200
it_guid   | ----
parentid  | ----
imagestatus   | 

[ovirt-users] Re: Gluster Hook differences between fresh and old clusters

2022-01-10 Thread Strahil Nikolov via Users
Hi Ritesh,

I'm 90% confident it is a problem introduced by the latest migration (4.3 to 
4.4) or an older one (4.2 to 4.3).

[root@ovirt2 ~]# ll /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
lrwxrwxrwx. 1 root root 64 Jan  9 23:55 /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post -> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# ll -Z /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
lrwxrwxrwx. 1 root root unconfined_u:object_r:glusterd_var_lib_t:s0 64 Jan  9 23:55 /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post -> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# ls -lZ /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
-rwxr-xr-x. 1 root root system_u:object_r:bin_t:s0 1883 Oct 12 13:09 /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# file /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py: Python script, ASCII text executable


I've tried with SELinux in permissive mode, so it's not related to SELinux. 
Also, the sync works on the new cluster.

Any idea how to debug it and find out why it doesn't like it?

Best Regards,
Strahil Nikolov
 On Monday, 10 January 2022, 08:12:17 GMT+2, Ritesh Chikatwar 
 wrote:  
 
 Hello Strahil,
I have a setup with version 4.4.9.3 but I don't see the issue there; maybe it 
appears after migrating/upgrading. Since you are seeing this issue, can you 
share the content of this hook (delete-POST-57glusterfind-delete-post)?
On Mon, Jan 10, 2022 at 3:55 AM Strahil Nikolov via Users  
wrote:

Hi All,

recently I migrated from 4.3.10 to 4.4.9 and something odd seems to be 
happening.

Symptoms:
- A lot of warnings for Gluster hook discrepancies
- Trying to refresh the hooks via the sync button fails (engine error: 
https://justpaste.it/827zo )
- Existing "Default" cluster tracks more hooks than a fresh new cluster 
New cluster hooks: http://i.imgur.com/FEL2Z1D.png
Migrated cluster: https://i.imgur.com/L8dWYZY.png

What can I do to resolve the issue? I've tried resyncing the hooks, moving away 
/var/lib/glusterd/hooks/1/ and reinstalling the gluster packages, and resolving 
via "Resolve Conflicts" in the UI, but nothing has helped so far.


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYSNQTAGXEAX2O677ELEAYRXDAUX52IQ/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WDH6EQFSPHVMZLHXBVIQ3DCCZZDXIL23/
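As far as I can tell, the conflict warnings are raised when a hook's content or status differs between hosts in the cluster, so a quick way to see which files actually diverge is to checksum the hook tree on each node and diff the outputs. A minimal Python sketch (it builds a throwaway directory for the example; on a real node you would point it at /var/lib/glusterd/hooks/1, and hook_checksums is an illustrative helper, not an oVirt or gluster tool):

```python
# Sketch: md5 every hook file under a hooks tree so two nodes' outputs can be
# diffed. The sample tree below is throwaway; on a real node, walk
# /var/lib/glusterd/hooks/1 instead (as root).
import hashlib
import os
import tempfile

def hook_checksums(root):
    """Map hook path (relative to root) -> md5 hex digest of its content."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.md5(f.read()).hexdigest()
    return sums

# Throwaway example tree standing in for /var/lib/glusterd/hooks/1:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "delete", "post"))
with open(os.path.join(root, "delete", "post", "S57glusterfind-delete-post"), "w") as f:
    f.write("#!/usr/bin/python3\n")

for path, digest in sorted(hook_checksums(root).items()):
    print(path, digest)
```

Running this on each migrated node and on the fresh cluster's node, then diffing the sorted output, should pinpoint exactly which hooks the engine sees as conflicting.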


[ovirt-users] Error while removing snapshot: Unable to get volume info

2022-01-10 Thread francesco--- via Users
Hi all,

I'm trying to remove a snapshot from a HA VM in a setup with glusterfs (2 nodes 
C8 stream oVirt 4.4 + 1 arbiter C8). The error that appears in the vdsm log of 
the host is:

2022-01-10 09:33:03,003+0100 ERROR (jsonrpc/4) [api] FINISH merge error=Merge 
failed: {'top': '441354e7-c234-4079-b494-53fa99cdce6f', 'base': 
'fdf38f20-3416-4d75-a159-2a341b1ed637', 'job': 
'50206e3a-8018-4ea8-b191-e4bc859ae0c7', 'reason': 'Unable to get volume info 
for domain 574a3cd1-5617-4742-8de9-4732be4f27e0 volume 
441354e7-c234-4079-b494-53fa99cdce6f'} (api:131)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 285, in 
merge
drive.domainID, drive.poolID, drive.imageID, job.top)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5988, in 
getVolumeInfo
(domainID, volumeID))
vdsm.virt.errors.StorageUnavailableError: Unable to get volume info for domain 
574a3cd1-5617-4742-8de9-4732be4f27e0 volume 441354e7-c234-4079-b494-53fa99cdce6f

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in 
method
ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 776, in merge
drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5833, in merge
driveSpec, baseVolUUID, topVolUUID, bandwidth, jobUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/livemerge.py", line 288, in 
merge
str(e), top=top, base=job.base, job=job_id)

The volume list in the host differs from the engine one:

HOST:

vdsm-tool dump-volume-chains 574a3cd1-5617-4742-8de9-4732be4f27e0 | grep -A10 
0b995271-e7f3-41b3-aff7-b5ad7942c10d
   image:0b995271-e7f3-41b3-aff7-b5ad7942c10d

 - fdf38f20-3416-4d75-a159-2a341b1ed637
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
type: SPARSE, capacity: 53687091200, truesize: 44255387648

 - 10df3adb-38f4-41d1-be84-b8b5b86e92cc
   status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: 
SPARSE, capacity: 53687091200, truesize: 7335407616

ls -1 0b995271-e7f3-41b3-aff7-b5ad7942c10d
10df3adb-38f4-41d1-be84-b8b5b86e92cc
10df3adb-38f4-41d1-be84-b8b5b86e92cc.lease
10df3adb-38f4-41d1-be84-b8b5b86e92cc.meta
fdf38f20-3416-4d75-a159-2a341b1ed637
fdf38f20-3416-4d75-a159-2a341b1ed637.lease
fdf38f20-3416-4d75-a159-2a341b1ed637.meta


ENGINE:

engine=# select * from images where 
image_group_id='0b995271-e7f3-41b3-aff7-b5ad7942c10d';
-[ RECORD 1 ]-+-
image_guid| 10df3adb-38f4-41d1-be84-b8b5b86e92cc
creation_date | 2022-01-07 11:23:43+01
size  | 53687091200
it_guid   | ----
parentid  | 441354e7-c234-4079-b494-53fa99cdce6f
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:39.951+01
vm_snapshot_id| bd2291a4-8018-4874-a400-8d044a95347d
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2022-01-07 11:23:41.448463+01
_update_date  | 2022-01-07 11:24:10.414777+01
active| t
volume_classification | 0
qcow_compat   | 2
-[ RECORD 2 ]-+-
image_guid| 441354e7-c234-4079-b494-53fa99cdce6f
creation_date | 2021-12-15 07:16:31.647+01
size  | 53687091200
it_guid   | ----
parentid  | fdf38f20-3416-4d75-a159-2a341b1ed637
imagestatus   | 1
lastmodified  | 2022-01-07 11:23:41.448+01
vm_snapshot_id| 2d610958-59e3-4685-b209-139b4266012f
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2021-12-15 07:16:32.37005+01
_update_date  | 2022-01-07 11:23:41.448463+01
active| f
volume_classification | 1
qcow_compat   | 0
-[ RECORD 3 ]-+-
image_guid| fdf38f20-3416-4d75-a159-2a341b1ed637
creation_date | 2020-08-12 17:16:07+02
size  | 53687091200
it_guid   | ----
parentid  | ----
imagestatus   | 4
lastmodified  | 2021-12-15 07:16:32.369+01
vm_snapshot_id| 603811ba-3cdd-4388-a971-05e300ced0c3
volume_type   | 2
volume_format | 4
image_group_id| 0b995271-e7f3-41b3-aff7-b5ad7942c10d
_create_date  | 2020-08-12 17:16:07.506823+02
_update_date  | 2021-12-15 07:16:32.37005+01
active| f
volume_classification | 1
qcow_compat   | 2

However, in the engine GUI I see only two snapshot IDs:

1- 

[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-10 Thread Yedidyah Bar David
On Fri, Jan 7, 2022 at 1:09 AM  wrote:
>
> Hi Didi
>
> downgrading qemu-kvm fixed the issue.

Thanks for the update!

> What is the reason it does not work with version 6.1.0?

https://bugzilla.redhat.com/show_bug.cgi?id=2024605


> Currently this is the version installed on my host
>
> #yum info qemu-kvm
> Last metadata expiration check: 2:03:58 ago on Thu 06 Jan 2022 03:18:40 PM 
> UTC.
> Installed Packages
> Name : qemu-kvm
> Epoch: 15
> Version  : 6.0.0
> Release  : 33.el8s
> Architecture : x86_64
> Size : 0.0
> Source   : qemu-kvm-6.0.0-33.el8s.src.rpm
> Repository   : @System
> From repo: ovirt-4.4-centos-stream-advanced-virtualization
> Summary  : QEMU is a machine emulator and virtualizer
> URL  : http://www.qemu.org/
> License  : GPLv2 and GPLv2+ and CC-BY
> Description  : qemu-kvm is an open source virtualizer that provides hardware
>  : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
>  : machine monitor together with the KVM kernel modules, and 
> emulates the
>  : hardware for a full system such as a PC and its associated 
> peripherals.
>
> Available Packages
> Name : qemu-kvm
> Epoch: 15
> Version  : 6.1.0
> Release  : 5.module_el8.6.0+1040+0ae94936
> Architecture : x86_64
> Size : 156 k
> Source   : qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.src.rpm
> Repository   : appstream
> Summary  : QEMU is a machine emulator and virtualizer
> URL  : http://www.qemu.org/
> License  : GPLv2 and GPLv2+ and CC-BY
> Description  : qemu-kvm is an open source virtualizer that provides hardware
>  : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
>  : machine monitor together with the KVM kernel modules, and 
> emulates the
>  : hardware for a full system such as a PC and its associated 
> peripherals.
>
> Many thanks for your help
>
> Regards
> Sohail
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/P6S4CYOYQLD3M5YBGPPWB7Z7OK5BKVHE/



--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAYMBAS2PECY4WZ6AQGN5RUXD42GV7XZ/
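Until a fixed qemu-kvm build is available, one way to keep dnf from upgrading back to the broken 6.1.0 from AppStream is an `exclude` in the repo definition. A hedged config sketch - the repo file path and existing settings are illustrative and vary by system:

```
# /etc/yum.repos.d/CentOS-Stream-AppStream.repo (path and section illustrative)
[appstream]
name=CentOS Stream 8 - AppStream
# ... existing repo settings unchanged ...
# Keep the broken qemu-kvm 6.1.0 build from being pulled in from this repo:
exclude=qemu-kvm*
```

Alternatively, the dnf versionlock plugin (`python3-dnf-plugin-versionlock`) can pin the working 6.0.0-33.el8s build in place with `dnf versionlock add qemu-kvm`.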