[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-13 Thread Darrell Budic
Well, now that I’ve gone and read through that bug again in detail, I’m not 
sure I’ve worked around it after all. I do seem to recall additional discussion 
on the original bug for HA engine libgfapi mentioning that RR-DNS would work to 
resolve the issue, but I can’t remember the bug ID at the moment. I will test 
thoroughly the next time I update my glusterfs servers. But I firmly believe 
that I’ve never encountered that issue in over 3 years of running gluster with 
libgfapi enabled. 

I use round-robin DNS, and in theory QEMU retries until it gets a working 
server. I also have that same DNS set up in the hosts files on all my hosts and 
gluster servers, having discovered the hard way that when your DNS server runs 
on an oVirt-managed VM, you have a bootstrap problem when things break badly :) 
Somewhere around gluster 3.12, I added backup servers to the mount options for 
my gluster storage volumes as well, and haven’t had any issues with that.
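For reference, a minimal sketch of what that looks like, with placeholder hostnames and volume name (it goes either in the storage domain's Mount Options field in oVirt or on a manual fuse mount):

    backup-volfile-servers=gluster2.example.com:gluster3.example.com

    # equivalent manual fuse mount, for testing
    mount -t glusterfs -o backup-volfile-servers=gluster2.example.com:gluster3.example.com \
        gluster1.example.com:/vmstore /mnt/vmstore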

And to be frank, the significant performance bonus from libgfapi is still 
absolutely worth it to me even if it means automatic HA won’t work if one 
particular server is down. I can always intervene in the DNS on my hosts if I 
have to, and it just hasn’t come up yet. 

  -Darrell


> On Feb 13, 2020, at 5:19 PM, Strahil Nikolov  wrote:
> 
> On February 13, 2020 11:51:41 PM GMT+02:00, Stephen Panicho 
> <s.pani...@gmail.com> wrote:
>> Darrell, would you care to elaborate on your HA workaround?
>> 
>> As far as I understand, only the primary Gluster host is visible to
>> libvirt
>> when using gfapi, so if that host goes down, all VMs break. I imagine
>> you're using a round-robin DNS entry for the primary Gluster host, but
>> I'd
>> like to be sure.
>> 
>> On Wed, Feb 12, 2020 at 11:01 AM Darrell Budic 
>> wrote:
>> 
>>> Yes. I’m using libgfapi access on gluster 6.7 with oVirt 4.3.8 just
>> fine,
>>> but I don’t use snapshots. You can work around the HA issue with DNS
>> and
>>> backup server entries on the storage domain as well. Worth it to me
>> for the
>>> performance, YMMV.
>>> 
>>> On Feb 12, 2020, at 8:04 AM, Jayme  wrote:
>>> 
>>> From my understanding it's not a default option but many users are
>> still
>>> using libgfapi successfully. I'm not sure about its status in the
>> latest
>>> 4.3.8 release but I know it is/was working for people in previous
>> versions.
>>> The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but
>> it
>>> should still be working otherwise, unless like I said something
>> changed in
>>> more recent releases of oVirt.
>>> 
>>> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
>>> guillaume.pav...@interactiv-group.com> wrote:
>>> 
 Libgfapi is not supported because of an old bug in qemu. That qemu
>> bug is
 slowly getting fixed, but the bugs about Libgfapi support in ovirt
>> have
 since been closed as WONTFIX and DEFERRED
 
 See :
 https://bugzilla.redhat.com/show_bug.cgi?id=1465810
 https://bugzilla.redhat.com/show_bug.cgi?id=1484660 : "No plans to
 enable libgfapi in RHHI-V for now. Closing this bug"
 https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to
 enable libgfapi in RHHI-V for now. Closing this bug"
 https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as
 no action taken from long back. Please reopen if required."
 
 Would be nice if someone could reopen the closed bugs so this
>> feature
 doesn't get forgotten
 
 Guillaume Pavese
 Ingénieur Système et Réseau
 Interactiv-Group
 
 
 On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho
>> <s.pani...@gmail.com>
 wrote:
 
> I used the cockpit-based hc setup and "option
>> rpc-auth-allow-insecure"
> is absent from /etc/glusterfs/glusterd.vol.
> 
> I'm going to redo the cluster this week and report back. Thanks for
>> the
> tip!
> 
> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic
>> <bu...@onholyground.com>
> wrote:
> 
>> The hosts will still mount the volume via FUSE, but you might
>> double
>> check you set the storage up as Gluster and not NFS.
>> 
>> Then gluster used to need some config in glusterd.vol to set
>> 
>>option rpc-auth-allow-insecure on
>> 
>> I’m not sure if that got added to a hyper converged setup or not,
>> but
>> I’d check it.
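For context, a sketch of where that option sits in /etc/glusterfs/glusterd.vol (the surrounding lines are whatever your existing file already contains; this is not taken from the thread):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option rpc-auth-allow-insecure on
    end-volume

Restarting glusterd after the change is assumed; some setups also pair it with 'gluster volume set <volume> server.allow-insecure on', but check the docs for your gluster version.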
>> 
>> On Feb 10, 2020, at 4:41 PM, Stephen Panicho
>> wrote:
>> 
>> No, this was a relatively new cluster-- only a couple days old.
>> Just a
>> handful of VMs including the engine.
>> 
>> On Mon, Feb 10, 2020 at 5:26 PM Jayme wrote:
>> 
>>> Curious do the vms have active snapshots?

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-13 Thread Strahil Nikolov
On February 13, 2020 11:51:41 PM GMT+02:00, Stephen Panicho 
 wrote:
>Darrell, would you care to elaborate on your HA workaround?
>
>As far as I understand, only the primary Gluster host is visible to
>libvirt
>when using gfapi, so if that host goes down, all VMs break. I imagine
>you're using a round-robin DNS entry for the primary Gluster host, but
>I'd
>like to be sure.
>
>On Wed, Feb 12, 2020 at 11:01 AM Darrell Budic 
>wrote:
>
>> Yes. I’m using libgfapi access on gluster 6.7 with oVirt 4.3.8 just
>fine,
>> but I don’t use snapshots. You can work around the HA issue with DNS
>and
>> backup server entries on the storage domain as well. Worth it to me
>for the
>> performance, YMMV.
>>
>> On Feb 12, 2020, at 8:04 AM, Jayme  wrote:
>>
>> From my understanding it's not a default option but many users are
>still
>> using libgfapi successfully. I'm not sure about its status in the
>latest
>> 4.3.8 release but I know it is/was working for people in previous
>versions.
>> The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but
>it
>> should still be working otherwise, unless like I said something
>changed in
>> more recent releases of oVirt.
>>
>> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> Libgfapi is not supported because of an old bug in qemu. That qemu
>bug is
>>> slowly getting fixed, but the bugs about Libgfapi support in ovirt
>have
>>> since been closed as WONTFIX and DEFERRED
>>>
>>> See :
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1484660 : "No plans to
>>> enable libgfapi in RHHI-V for now. Closing this bug"
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to
>>> enable libgfapi in RHHI-V for now. Closing this bug"
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this
>as
>>> no action taken from long back. Please reopen if required."
>>>
>>> Would be nice if someone could reopen the closed bugs so this
>feature
>>> doesn't get forgotten
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho
>
>>> wrote:
>>>
 I used the cockpit-based hc setup and "option
>rpc-auth-allow-insecure"
 is absent from /etc/glusterfs/glusterd.vol.

 I'm going to redo the cluster this week and report back. Thanks for
>the
 tip!

 On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic
>
 wrote:

> The hosts will still mount the volume via FUSE, but you might
>double
> check you set the storage up as Gluster and not NFS.
>
> Then gluster used to need some config in glusterd.vol to set
>
> option rpc-auth-allow-insecure on
>
> I’m not sure if that got added to a hyper converged setup or not,
>but
> I’d check it.
>
> On Feb 10, 2020, at 4:41 PM, Stephen Panicho 
> wrote:
>
> No, this was a relatively new cluster-- only a couple days old.
>Just a
> handful of VMs including the engine.
>
> On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:
>
>> Curious do the vms have active snapshots?
>>
>> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>>
>>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster
>>> running on CentOS 7.7 hosts. I was investigating poor Gluster
>performance
>>> and heard about libgfapi, so I thought I'd give it a shot.
>Looking through
>>> the documentation, followed by lots of threads and BZ reports,
>I've done
>>> the following to enable it:
>>>
>>> First, I shut down all VMs except the engine. Then...
>>>
>>> On the hosts:
>>> 1. setsebool -P virt_use_glusterfs on
>>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>>
>>> On the engine VM:
>>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>>> 2. systemctl restart ovirt-engine
>>>
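As a quick sanity check (a sketch; the exact output format may differ by version), the engine-side flag can be read back with engine-config's get mode:

    # on the engine VM
    engine-config -g LibgfApiSupported
    # expect something like: LibgfApiSupported: true version: 4.3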
>>> VMs now fail to launch. Am I doing this correctly? I should also
>note
>>> that the hosts still have the Gluster domain mounted via FUSE.
>>>
>>> Here's a relevant bit from engine.log:
>>>
>>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>>>
>node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>>> Could not read qcow2 header: Invalid argument.
>>>
>>> The full engine.log from one of the attempts:
>>>
>>> 2020-02-06 16:38:24,909Z INFO
>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>> (ForkJoinPool-1-worker-12) [] add VM
>>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun
>treatment
>>> 2020-02-06 

[ovirt-users] Re: Enabling Libgfapi in 4.3.8 - VMs won't start

2020-02-13 Thread Stephen Panicho
Darrell, would you care to elaborate on your HA workaround?

As far as I understand, only the primary Gluster host is visible to libvirt
when using gfapi, so if that host goes down, all VMs break. I imagine
you're using a round-robin DNS entry for the primary Gluster host, but I'd
like to be sure.

On Wed, Feb 12, 2020 at 11:01 AM Darrell Budic 
wrote:

> Yes. I’m using libgfapi access on gluster 6.7 with oVirt 4.3.8 just fine,
> but I don’t use snapshots. You can work around the HA issue with DNS and
> backup server entries on the storage domain as well. Worth it to me for the
> performance, YMMV.
>
> On Feb 12, 2020, at 8:04 AM, Jayme  wrote:
>
> From my understanding it's not a default option but many users are still
> using libgfapi successfully. I'm not sure about its status in the latest
> 4.3.8 release but I know it is/was working for people in previous versions.
> The libgfapi bugs affect HA and snapshots (on 3 way replica HCI) but it
> should still be working otherwise, unless like I said something changed in
> more recent releases of oVirt.
>
> On Wed, Feb 12, 2020 at 9:43 AM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Libgfapi is not supported because of an old bug in qemu. That qemu bug is
>> slowly getting fixed, but the bugs about Libgfapi support in ovirt have
>> since been closed as WONTFIX and DEFERRED
>>
>> See :
>> https://bugzilla.redhat.com/show_bug.cgi?id=1465810
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484660 : "No plans to
>> enable libgfapi in RHHI-V for now. Closing this bug"
>> https://bugzilla.redhat.com/show_bug.cgi?id=1484227 : "No plans to
>> enable libgfapi in RHHI-V for now. Closing this bug"
>> https://bugzilla.redhat.com/show_bug.cgi?id=1633642 : "Closing this as
>> no action taken from long back. Please reopen if required."
>>
>> Would be nice if someone could reopen the closed bugs so this feature
>> doesn't get forgotten
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Tue, Feb 11, 2020 at 9:58 AM Stephen Panicho 
>> wrote:
>>
>>> I used the cockpit-based hc setup and "option rpc-auth-allow-insecure"
>>> is absent from /etc/glusterfs/glusterd.vol.
>>>
>>> I'm going to redo the cluster this week and report back. Thanks for the
>>> tip!
>>>
>>> On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic 
>>> wrote:
>>>
 The hosts will still mount the volume via FUSE, but you might double
 check you set the storage up as Gluster and not NFS.

 Then gluster used to need some config in glusterd.vol to set

 option rpc-auth-allow-insecure on

 I’m not sure if that got added to a hyper converged setup or not, but
 I’d check it.

 On Feb 10, 2020, at 4:41 PM, Stephen Panicho 
 wrote:

 No, this was a relatively new cluster-- only a couple days old. Just a
 handful of VMs including the engine.

 On Mon, Feb 10, 2020 at 5:26 PM Jayme  wrote:

> Curious do the vms have active snapshots?
>
> On Mon, Feb 10, 2020 at 5:59 PM  wrote:
>
>> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster
>> running on CentOS 7.7 hosts. I was investigating poor Gluster performance
>> and heard about libgfapi, so I thought I'd give it a shot. Looking 
>> through
>> the documentation, followed by lots of threads and BZ reports, I've done
>> the following to enable it:
>>
>> First, I shut down all VMs except the engine. Then...
>>
>> On the hosts:
>> 1. setsebool -P virt_use_glusterfs on
>> 2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
>>
>> On the engine VM:
>> 1. engine-config -s LibgfApiSupported=true --cver=4.3
>> 2. systemctl restart ovirt-engine
>>
>> VMs now fail to launch. Am I doing this correctly? I should also note
>> that the hosts still have the Gluster domain mounted via FUSE.
>>
>> Here's a relevant bit from engine.log:
>>
>> 2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://
>> node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native:
>> Could not read qcow2 header: Invalid argument.
>>
>> The full engine.log from one of the attempts:
>>
>> 2020-02-06 16:38:24,909Z INFO
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>> (ForkJoinPool-1-worker-12) [] add VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
>> 2020-02-06 16:38:25,010Z ERROR
>> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
>> (ForkJoinPool-1-worker-12) [] Rerun VM
>> 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS '
>> 

[ovirt-users] Re: Can't install virtio-win with EL7.7/Ovirt-4.3.8 -- rpm error

2020-02-13 Thread Dominic Coulombe
Confirmed as working.

Thanks.

On Thu, Feb 13, 2020 at 5:00 AM Cole Robinson  wrote:

> Thanks for the cc Gal. Latest published virtio-win RPMs, 0.1.173-7, are
> back to using xz compression now. Seems like the new compression got
> picked up automatically by building on Fedora 31.
>
> Thanks,
> Cole
>
> On 2/9/20 3:20 AM, Gal Zaidman wrote:
> > Forwarding this to virtio-win developers and packagers.
> > Notice that virtio-win is a package in Fedora/Centos/RHEL and it is not
> > an "ovirt/RHV" package so ovirt doesn't package it.
> >
> > On Sun, Feb 9, 2020 at 4:59 AM  > > wrote:
> >
> > Same problem.  Looks like the virtio rpm is now built with the new
> > compression method, but rpm for EL7 hasn't been updated to support
> it.
> > ___
> > Users mailing list -- users@ovirt.org 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5Q4AQYIVCAQY6JWFTNJWOHNXZPQD4IEI/
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUBGCDBSILOEAMS3XFQ43IZVE3OHYPNB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IWD3OBHCZQ24SX4RPRKJTZ5XKMGAK5FA/


[ovirt-users] Re: hosted engine storage does not heal

2020-02-13 Thread George Vasilopoulos

It seems that this was exactly the problem.

Thank you so much!

On 13/2/20 5:33 PM, Strahil Nikolov wrote:

On February 13, 2020 5:20:18 PM GMT+02:00, g.vasilopou...@uoc.gr wrote:

Hello
We have a problem with hosted engine storage after updating one host
which serves as a gluster server for the engine (the setup is gluster
replica 3 with local disks from 3 hypervisors)
Volume heal command shows
[root@o5-car0118 engine]# gluster volume heal engine info
Brick o5-car0118.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta

/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta

Status: Connected
Number of entries: 2

Brick o2-car0121.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta

/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta

Status: Connected
Number of entries: 2

Brick o9-car0114.gfs-int.uoc.gr:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

On all the gluster servers
I notice that the affected directories have date in 1970.
[root@o5-car0118 images]# ls -al
σύνολο 24
drwxr-xr-x. 23 vdsm kvm 8192 Σεπ  24 12:07 .
drwxr-xr-x.  6 vdsm kvm   64 Σεπ  19  2018 ..
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
2bac658f-70ce-4adb-ab68-a0f0c205c70c
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
3034a69c-b5b5-46fa-a393-59ea46635142
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
5538ae6b-ccc6-4861-b71b-6b2c7af2e0ab
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
66dbce25-8863-42b5-904a-484f8e9c225a
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
6c049108-28f7-47d9-8d54-4ac2697dcba8
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
72702607-1896-420d-931a-42c9f01d37f1
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
7c617da4-ab6b-4791-80be-541f5be60dd8
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
902a16d3-6494-4840-a528-b49972f9c332
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
96fd6116-7983-4385-bca6-e6ca8edc94ca
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
abd875cd-96b6-47a6-b6a3-ae35300a21cc
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
add7bc92-1a40-474d-9255-53ac861b75ed
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
b7b06df7-465f-4fc7-a214-033b7dca6bc7
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
c0ecacac-26c6-40d9-87da-af17d9de8d21
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
c4d2d5da-2a15-4735-8919-324ae8372064
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
c7e0c784-bb8e-4024-95df-b6f4267b0208
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
d1f1ff5a-387d-442c-9240-1c58e4d6f8a7
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
d3e172cb-b6dd-4867-a9cd-f4fa006648bc
drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970
e3a3ef50-56b6-48b0-a9f8-2d6382e2286e  <-
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
e477ec02-11ab-4d92-b5fd-44e91fbde7f9
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
e839485b-b0be-47f6-9847-b691e02ce9a4
drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970
f5e576d4-eea7-431b-a0f0-f8a557006471 <-

I think this has something to do with a gluster bug.
Is there a way to correct this and heal the volume?
Thank you!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHC5TPN6DU5ZV7UY6ZYB4YMEVE6JO3SY/

For the meta files - check on the bricks  which is newest. Usually only the 
timestamp inside  is different.

If data is the same -> rsync the newest file from the brick to the other bricks and 
use 'gluster volume heal  full' (for me it  never worked  without 'full').

We are expecting fix for the meta files  with different gfid very soon.


For the timestamp - write an e-mail to the gluster-users mail list, as they can 
help better.

Best Regards,
Strahil Nikolov


--
George Vasilopoulos
Electrical Engineer T.E.
Systems Administrator

University of Crete
Κ.Υ.Υ.Τ.Π.Ε.
Communications and Networks Department
Voutes, Heraklion 70013
Tel   : 2810393310
email : g.vasilopou...@uoc.gr
http://www.ucnet.uoc.gr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E4NI3QNGY7S32IVJ4LURHYHSKV5RP6ZM/


[ovirt-users] Re: Reimport disks

2020-02-13 Thread Robert Webb
Meant to add this link:

https://www.ovirt.org/documentation/admin-guide/chap-Storage.html#importing-existing-storage-domains


From: Christian Reiss 
Sent: Thursday, February 13, 2020 9:30 AM
To: users
Subject: [ovirt-users] Reimport disks

Hey folks,

I created a new cluster with a new engine, everything is green and
running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7 hosts).

I do have a backup of the /images/ directory from the old installation.
I tried copying (and preserving user/ permissions) into the new images
gluster dir and trying a domain -> scan to no avail.

What is the correct way to introduce oVirt to "new" (or unknown) images?

-Chris.

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V7QZ5UZZYTYWGVHXFDKVPSELBRYOCM7Z/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4BV7FUDAYY45SHWBLC6G5DGKSQ5JYOH/


[ovirt-users] Re: Reimport disks

2020-02-13 Thread Vinícius Ferrão
Import domain will work. The VM metadata is available on the OVF_STORE 
container, inside the domain. So even the names and settings come back.

Then you gradually start moving the VMs to the Gluster storage.

Sent from my iPhone

> On 13 Feb 2020, at 11:42, Robert Webb  wrote:
> 
> Off the top of my head, would you use the "Import Domain" option?
> 
> 
> From: Christian Reiss 
> Sent: Thursday, February 13, 2020 9:30 AM
> To: users
> Subject: [ovirt-users] Reimport disks
> 
> Hey folks,
> 
> I created a new cluster with a new engine, everything is green and
> running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7 hosts).
> 
> I do have a backup of the /images/ directory from the old installation.
> I tried copying (and preserving user/ permissions) into the new images
> gluster dir and trying a domain -> scan to no avail.
> 
> What is the correct way to introduce oVirt to "new" (or unknown) images?
> 
> -Chris.
> 
> --
> with kind regards,
> mit freundlichen Gruessen,
> 
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/V7QZ5UZZYTYWGVHXFDKVPSELBRYOCM7Z/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2TV6CGLZYBHC4ZEWUIXIUJOBNQOMWUN/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TLETAKMXSFELOUXCSPIFWMCZH3QPL2EI/


[ovirt-users] Re: hosted engine storage does not heal

2020-02-13 Thread Strahil Nikolov
On February 13, 2020 5:20:18 PM GMT+02:00, g.vasilopou...@uoc.gr wrote:
>Hello 
>We have a problem with hosted engine storage after updating one host
>which serves as a gluster server for the engine (the setup is gluster
>replica 3 with local disks from 3 hypervisors)
>Volume heal command shows 
>[root@o5-car0118 engine]# gluster volume heal engine info
>Brick o5-car0118.gfs-int.uoc.gr:/gluster_bricks/engine/engine
>/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
>
>/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
>
>Status: Connected
>Number of entries: 2
>
>Brick o2-car0121.gfs-int.uoc.gr:/gluster_bricks/engine/engine
>/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
>
>/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
>
>Status: Connected
>Number of entries: 2
>
>Brick o9-car0114.gfs-int.uoc.gr:/gluster_bricks/engine/engine
>Status: Connected
>Number of entries: 0
>
>On all the gluster servers 
>I notice that the affected directories have date in 1970. 
>[root@o5-car0118 images]# ls -al
>σύνολο 24
>drwxr-xr-x. 23 vdsm kvm 8192 Σεπ  24 12:07 .
>drwxr-xr-x.  6 vdsm kvm   64 Σεπ  19  2018 ..
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>2bac658f-70ce-4adb-ab68-a0f0c205c70c
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>3034a69c-b5b5-46fa-a393-59ea46635142
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>5538ae6b-ccc6-4861-b71b-6b2c7af2e0ab
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>66dbce25-8863-42b5-904a-484f8e9c225a
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>6c049108-28f7-47d9-8d54-4ac2697dcba8
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>72702607-1896-420d-931a-42c9f01d37f1
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>7c617da4-ab6b-4791-80be-541f5be60dd8
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>902a16d3-6494-4840-a528-b49972f9c332
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>96fd6116-7983-4385-bca6-e6ca8edc94ca
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>abd875cd-96b6-47a6-b6a3-ae35300a21cc
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>add7bc92-1a40-474d-9255-53ac861b75ed
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>b7b06df7-465f-4fc7-a214-033b7dca6bc7
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>c0ecacac-26c6-40d9-87da-af17d9de8d21
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>c4d2d5da-2a15-4735-8919-324ae8372064
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>c7e0c784-bb8e-4024-95df-b6f4267b0208
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>d1f1ff5a-387d-442c-9240-1c58e4d6f8a7
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>d3e172cb-b6dd-4867-a9cd-f4fa006648bc
>drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970
>e3a3ef50-56b6-48b0-a9f8-2d6382e2286e  <-
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>e477ec02-11ab-4d92-b5fd-44e91fbde7f9
>drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019
>e839485b-b0be-47f6-9847-b691e02ce9a4
>drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970
>f5e576d4-eea7-431b-a0f0-f8a557006471 <-
>
>I think this has something to do with a gluster bug. 
>Is there a way to correct this and heal the volume?
>Thank you!
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHC5TPN6DU5ZV7UY6ZYB4YMEVE6JO3SY/

For the meta files - check on the bricks which copy is the newest. Usually only the 
timestamp inside is different.

If the data is the same -> rsync the newest file from that brick to the other bricks 
and use 'gluster volume heal  full' (for me it never worked without 'full').
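A sketch of that procedure with placeholder paths (the brick root matches the heal output above; the source node and file paths are whatever the comparison shows is newest):

    # 1. compare the reported .meta file on all three bricks and pick the newest copy
    # 2. copy it over the stale copies (example uses rsync over ssh; adjust node and paths)
    rsync -av NEWEST-NODE:/gluster_bricks/engine/engine/<image-path>/<file>.meta \
          /gluster_bricks/engine/engine/<image-path>/<file>.meta
    # 3. trigger a full heal and re-check
    gluster volume heal engine full
    gluster volume heal engine info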

We are expecting a fix for the meta files with different gfid very soon.


For the timestamp - write an e-mail to the gluster-users mail list, as they can 
help better.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FUVBJF5TJLF542V7VYHRGDCNUN6VGWYG/


[ovirt-users] Re: Reimport disks

2020-02-13 Thread Strahil Nikolov
On February 13, 2020 4:38:06 PM GMT+02:00, Robert Webb  
wrote:
>Off the top of my head, would you use the "Import Domain" option?
>
>
>From: Christian Reiss 
>Sent: Thursday, February 13, 2020 9:30 AM
>To: users
>Subject: [ovirt-users] Reimport disks
>
>Hey folks,
>
>I created a new cluster with a new engine, everything is green and
>running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7
>hosts).
>
>I do have a backup of the /images/ directory from the old installation.
>I tried copying (and preserving user/ permissions) into the new images
>gluster dir and trying a domain -> scan to no avail.
>
>What is the correct way to introduce oVirt to "new" (or unknown)
>images?
>
>-Chris.
>
>--
>with kind regards,
>mit freundlichen Gruessen,
>
>Christian Reiss
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/V7QZ5UZZYTYWGVHXFDKVPSELBRYOCM7Z/
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2TV6CGLZYBHC4ZEWUIXIUJOBNQOMWUN/

Create a temp storage domain, then detach and remove it.
Manually copy the backup to the same directory.

Attach the storage domain again and there should be 'Import VMs' and 'Import
Templates' tabs in that domain.

Then you just need to select a VM, pick a cluster and import.
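If copying by hand, something along these lines preserves ownership and permissions (paths are placeholders; gluster domains are typically mounted by vdsm under /rhev/data-center/mnt/glusterSD/ on the hosts):

    # run on a host that has the new domain mounted; keeps vdsm:kvm ownership intact
    rsync -aH --numeric-ids /backup/images/ \
          /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<domain-uuid>/images/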

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJL7JNTJS3BOTRXU6YOOMVJY7JB5NRYV/


[ovirt-users] Re: Unable to create VMs under ovirt 4.4

2020-02-13 Thread Strahil Nikolov
On February 13, 2020 12:51:51 PM GMT+02:00, Shani Leviim  
wrote:
>Hi Dirk,
>
>The hle and rtm seem to be CPU flags.
>In order to see the flags, you can run lscpu.
>(I got this clue from here: [1])
>
>In case you're using the virt-manager, this one may help: (taken from
>[2])
>1.Open virt-manager
>2.Go to parameters of VM
>3.Go to cpu section
>4.Check "Copy host CPU configuration" and click Apply
>
>[1] https://bugzilla.redhat.com/show_bug.cgi?id=1467599
>[2] https://bugzilla.redhat.com/show_bug.cgi?id=1609818
>
>
>*Regards,*
>
>*Shani Leviim*
>
>
>On Thu, Feb 13, 2020 at 12:36 PM Dirk Streubel
>
>wrote:
>
>> Hi,
>>
>> i am using for my engine this version and this packages with the
>latest
>> updates:
>>
>> [root@engine ~]# cat /etc/redhat-release
>> CentOS Linux release 7.7.1908 (Core)
>>
>> [root@engine ~]# rpm -qa | grep ovirt*
>> ovirt-web-ui-1.6.1-0.20191208.git0112715.el7.noarch
>>
>>
>ovirt-engine-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-ansible-cluster-upgrade-1.2.1-1.el7.noarch
>> ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
>> ovirt-ansible-manageiq-1.2.1-1.el7.noarch
>> ovirt-engine-ui-extensions-1.0.13-1.el7.noarch
>>
>ovirt-engine-restapi-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-host-deploy-common-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
>> ovirt-vmconsole-1.0.7-3.el7.noarch
>> ovirt-provider-ovn-1.2.29-1.el7.noarch
>>
>>
>ovirt-engine-extensions-api-impl-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-engine-wildfly-18.0.0-1.el7.x86_64
>> python2-ovirt-setup-lib-1.3.0-1.el7.noarch
>> ovirt-vmconsole-proxy-1.0.7-3.el7.noarch
>> ovirt-imageio-proxy-setup-1.6.3-0.el7.noarch
>> ovirt-engine-dwh-4.4.0-0.0.master.20191119095914.el7.noarch
>>
>>
>ovirt-engine-setup-plugin-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-tools-backup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-ansible-image-template-1.2.1-1.el7.noarch
>> ovirt-ansible-engine-setup-1.2.1-1.el7.noarch
>> ovirt-ansible-hosted-engine-setup-1.0.35-1.el7.noarch
>> ovirt-ansible-repositories-1.2.1-1.el7.noarch
>> ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
>>
>ovirt-engine-setup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-webadmin-portal-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-dbscripts-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>ovirt-engine-tools-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>ovirt-engine-setup-plugin-ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>>
>python2-ovirt-host-deploy-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
>> ovirt-cockpit-sso-0.1.2-1.el7.noarch
>> ovirt-engine-dwh-setup-4.4.0-1.el7.noarch
>>
>>
>python2-ovirt-engine-lib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-imageio-common-1.6.3-0.el7.x86_64
>> python-ovirt-engine-sdk4-4.4.1-2.el7.x86_64
>>
>>
>ovirt-engine-setup-plugin-ovirt-engine-common-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-imageio-proxy-1.6.3-0.el7.noarch
>> ovirt-iso-uploader-4.4.0-1.el7.noarch
>> ovirt-engine-extension-aaa-jdbc-1.1.90-1.el7.noarch
>> ovirt-ansible-vm-infra-1.2.1-1.el7.noarch
>>
>ovirt-engine-metrics-1.3.5-0.0.master.20191124152203.git2a61c41.el7.noarch
>> ovirt-ansible-infra-1.2.1-1.el7.noarch
>> ovirt-ansible-roles-1.2.1-1.el7.noarch
>>
>ovirt-engine-backend-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>>
>ovirt-engine-api-explorer-0.0.6-0.alpha.1.20190917git98bf54c.el7.noarch
>>
>>
>ovirt-engine-setup-plugin-cinderlib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> ovirt-engine-wildfly-overlay-18.0.0-1.el7.noarch
>>
>ovirt-release44-pre-4.4.0-0.4.alpha.20200212093606.git5bce5d1.el7.noarch
>>
>>
>ovirt-engine-setup-base-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>> [root@engine ~]#
>>
>>
>> and this for my engine:
>>
>> [root@hypervisor ~]# cat /etc/redhat-release
>> CentOS Linux release 8.1.1911 (Core)
>>
>> [root@hypervisor ~]# rpm -qa | grep ovirt*
>> ovirt-host-4.4.0-0.3.alpha.el8.x86_64
>> python3-ovirt-setup-lib-1.3.0-1.el8.noarch
>> ovirt-provider-ovn-driver-1.2.29-1.el8.noarch
>> ovirt-vmconsole-host-1.0.7-3.el8.noarch
>> ovirt-imageio-daemon-1.6.3-0.el8.noarch
>> cockpit-ovirt-dashboard-0.14.1-1.el8.noarch
>>
>>
>ovirt-host-deploy-common-1.9.0-0.0.master.20191125083550.gitd2b9fa5.el8.noarch
>> ovirt-imageio-common-1.6.3-0.el8.x86_64
>> ovirt-vmconsole-1.0.7-3.el8.noarch
>> ovirt-ansible-engine-setup-1.2.1-1.el8.noarch
>> ovirt-host-dependencies-4.4.0-0.3.alpha.el8.x86_64
>> ovirt-hosted-engine-setup-2.4.1-1.el8.noarch
>>

[ovirt-users] hosted engine storage does not heal

2020-02-13 Thread g . vasilopoulos
Hello 
We have a problem with hosted engine storage after updating one host which 
serves as a gluster server for the engine (the setup is gluster replica 3 with 
local disks from 3 hypervisors)
Volume heal command shows 
[root@o5-car0118 engine]# gluster volume heal engine info
Brick o5-car0118.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
 
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
 
Status: Connected
Number of entries: 2

Brick o2-car0121.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
 
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
 
Status: Connected
Number of entries: 2

Brick o9-car0114.gfs-int.uoc.gr:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

On all the gluster servers 
I notice that the affected directories have a date in 1970. 
[root@o5-car0118 images]# ls -al
σύνολο 24
drwxr-xr-x. 23 vdsm kvm 8192 Σεπ  24 12:07 .
drwxr-xr-x.  6 vdsm kvm   64 Σεπ  19  2018 ..
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 2bac658f-70ce-4adb-ab68-a0f0c205c70c
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 3034a69c-b5b5-46fa-a393-59ea46635142
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 5538ae6b-ccc6-4861-b71b-6b2c7af2e0ab
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 66dbce25-8863-42b5-904a-484f8e9c225a
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 6c049108-28f7-47d9-8d54-4ac2697dcba8
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 72702607-1896-420d-931a-42c9f01d37f1
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 7c617da4-ab6b-4791-80be-541f5be60dd8
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 902a16d3-6494-4840-a528-b49972f9c332
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 96fd6116-7983-4385-bca6-e6ca8edc94ca
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 abd875cd-96b6-47a6-b6a3-ae35300a21cc
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 add7bc92-1a40-474d-9255-53ac861b75ed
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 b7b06df7-465f-4fc7-a214-033b7dca6bc7
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 c0ecacac-26c6-40d9-87da-af17d9de8d21
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 c4d2d5da-2a15-4735-8919-324ae8372064
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 c7e0c784-bb8e-4024-95df-b6f4267b0208
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 d1f1ff5a-387d-442c-9240-1c58e4d6f8a7
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 d3e172cb-b6dd-4867-a9cd-f4fa006648bc
drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970 e3a3ef50-56b6-48b0-a9f8-2d6382e2286e 
 <-
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 e477ec02-11ab-4d92-b5fd-44e91fbde7f9
drwxr-xr-x.  2 vdsm kvm  149 Αύγ   2  2019 e839485b-b0be-47f6-9847-b691e02ce9a4
drwxr-xr-x.  2 vdsm kvm 8192 Ιαν   1  1970 f5e576d4-eea7-431b-a0f0-f8a557006471 
<-

I think this has something to do with a gluster bug. 
Is there a way to correct this and heal the volume?
Thank you!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EHC5TPN6DU5ZV7UY6ZYB4YMEVE6JO3SY/


[ovirt-users] Re: Reimport disks

2020-02-13 Thread Robert Webb
Off the top of my head, would you use the "Import Domain" option?


From: Christian Reiss 
Sent: Thursday, February 13, 2020 9:30 AM
To: users
Subject: [ovirt-users] Reimport disks

Hey folks,

I created a new cluster with a new engine, everything is green and
running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7 hosts).

I do have a backup of the /images/ directory from the old installation.
I tried copying (and preserving user/ permissions) into the new images
gluster dir and trying a domain -> scan to no avail.

What is the correct way to introduce oVirt to "new" (or unknown) images?

-Chris.

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V7QZ5UZZYTYWGVHXFDKVPSELBRYOCM7Z/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2TV6CGLZYBHC4ZEWUIXIUJOBNQOMWUN/


[ovirt-users] Reimport disks

2020-02-13 Thread Christian Reiss

Hey folks,

I created a new cluster with a new engine, everything is green and 
running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7 hosts).


I do have a backup of the /images/ directory from the old installation. 
I tried copying (and preserving user/ permissions) into the new images 
gluster dir and trying a domain -> scan to no avail.


What is the correct way to introduce oVirt to "new" (or unknown) images?

-Chris.

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V7QZ5UZZYTYWGVHXFDKVPSELBRYOCM7Z/


[ovirt-users] Re: Unable to create VMs under ovirt 4.4

2020-02-13 Thread Shani Leviim
Hi Dirk,

The hle and rtm seem to be CPU flags.
In order to see the flags, you can run lscpu.
(I got this clue from here: [1])

In case you're using the virt-manager, this one may help: (taken from [2])
1.Open virt-manager
2.Go to parameters of VM
3.Go to cpu section
4.Check "Copy host CPU configuration" and click Apply

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1467599
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1609818
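A quick way to check on the hypervisor (a sketch):

    # see whether the host CPU exposes the hle and rtm (Intel TSX) flags
    lscpu | grep -wo -e hle -e rtm | sort -u
    # or straight from the kernel
    grep -wo -e hle -e rtm /proc/cpuinfo | sort -u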


*Regards,*

*Shani Leviim*


On Thu, Feb 13, 2020 at 12:36 PM Dirk Streubel 
wrote:

> Hi,
>
> i am using for my engine this version and this packages with the latest
> updates:
>
> [root@engine ~]# cat /etc/redhat-release
> CentOS Linux release 7.7.1908 (Core)
>
> [root@engine ~]# rpm -qa | grep ovirt*
> ovirt-web-ui-1.6.1-0.20191208.git0112715.el7.noarch
>
> ovirt-engine-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-ansible-cluster-upgrade-1.2.1-1.el7.noarch
> ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
> ovirt-ansible-manageiq-1.2.1-1.el7.noarch
> ovirt-engine-ui-extensions-1.0.13-1.el7.noarch
> ovirt-engine-restapi-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-host-deploy-common-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
> ovirt-vmconsole-1.0.7-3.el7.noarch
> ovirt-provider-ovn-1.2.29-1.el7.noarch
>
> ovirt-engine-extensions-api-impl-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-engine-wildfly-18.0.0-1.el7.x86_64
> python2-ovirt-setup-lib-1.3.0-1.el7.noarch
> ovirt-vmconsole-proxy-1.0.7-3.el7.noarch
> ovirt-imageio-proxy-setup-1.6.3-0.el7.noarch
> ovirt-engine-dwh-4.4.0-0.0.master.20191119095914.el7.noarch
>
> ovirt-engine-setup-plugin-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-tools-backup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-ansible-image-template-1.2.1-1.el7.noarch
> ovirt-ansible-engine-setup-1.2.1-1.el7.noarch
> ovirt-ansible-hosted-engine-setup-1.0.35-1.el7.noarch
> ovirt-ansible-repositories-1.2.1-1.el7.noarch
> ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
> ovirt-engine-setup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-webadmin-portal-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-dbscripts-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-engine-tools-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> ovirt-engine-setup-plugin-ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
>
> python2-ovirt-host-deploy-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
> ovirt-cockpit-sso-0.1.2-1.el7.noarch
> ovirt-engine-dwh-setup-4.4.0-1.el7.noarch
>
> python2-ovirt-engine-lib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-imageio-common-1.6.3-0.el7.x86_64
> python-ovirt-engine-sdk4-4.4.1-2.el7.x86_64
>
> ovirt-engine-setup-plugin-ovirt-engine-common-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-imageio-proxy-1.6.3-0.el7.noarch
> ovirt-iso-uploader-4.4.0-1.el7.noarch
> ovirt-engine-extension-aaa-jdbc-1.1.90-1.el7.noarch
> ovirt-ansible-vm-infra-1.2.1-1.el7.noarch
> ovirt-engine-metrics-1.3.5-0.0.master.20191124152203.git2a61c41.el7.noarch
> ovirt-ansible-infra-1.2.1-1.el7.noarch
> ovirt-ansible-roles-1.2.1-1.el7.noarch
> ovirt-engine-backend-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-engine-api-explorer-0.0.6-0.alpha.1.20190917git98bf54c.el7.noarch
>
> ovirt-engine-setup-plugin-cinderlib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> ovirt-engine-wildfly-overlay-18.0.0-1.el7.noarch
> ovirt-release44-pre-4.4.0-0.4.alpha.20200212093606.git5bce5d1.el7.noarch
>
> ovirt-engine-setup-base-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
> [root@engine ~]#
>
>
> and this for my engine:
>
> [root@hypervisor ~]# cat /etc/redhat-release
> CentOS Linux release 8.1.1911 (Core)
>
> [root@hypervisor ~]# rpm -qa | grep ovirt*
> ovirt-host-4.4.0-0.3.alpha.el8.x86_64
> python3-ovirt-setup-lib-1.3.0-1.el8.noarch
> ovirt-provider-ovn-driver-1.2.29-1.el8.noarch
> ovirt-vmconsole-host-1.0.7-3.el8.noarch
> ovirt-imageio-daemon-1.6.3-0.el8.noarch
> cockpit-ovirt-dashboard-0.14.1-1.el8.noarch
>
> ovirt-host-deploy-common-1.9.0-0.0.master.20191125083550.gitd2b9fa5.el8.noarch
> ovirt-imageio-common-1.6.3-0.el8.x86_64
> ovirt-vmconsole-1.0.7-3.el8.noarch
> ovirt-ansible-engine-setup-1.2.1-1.el8.noarch
> ovirt-host-dependencies-4.4.0-0.3.alpha.el8.x86_64
> ovirt-hosted-engine-setup-2.4.1-1.el8.noarch
> ovirt-release44-pre-4.4.0-0.4.alpha.20200212093729.git5bce5d1.el8.noarch
> ovirt-ansible-hosted-engine-setup-1.0.35-1.el8.noarch
> python3-ovirt-engine-sdk4-4.4.1-1.el8.x86_64
>
> 

[ovirt-users] Re: Unable to create VMs under ovirt 4.4

2020-02-13 Thread Dirk Streubel
Hi,

I am using this version and these packages for my engine, with the latest
updates:

[root@engine ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@engine ~]# rpm -qa | grep ovirt*
ovirt-web-ui-1.6.1-0.20191208.git0112715.el7.noarch
ovirt-engine-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-ansible-cluster-upgrade-1.2.1-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-manageiq-1.2.1-1.el7.noarch
ovirt-engine-ui-extensions-1.0.13-1.el7.noarch
ovirt-engine-restapi-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-host-deploy-common-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
ovirt-vmconsole-1.0.7-3.el7.noarch
ovirt-provider-ovn-1.2.29-1.el7.noarch
ovirt-engine-extensions-api-impl-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-wildfly-18.0.0-1.el7.x86_64
python2-ovirt-setup-lib-1.3.0-1.el7.noarch
ovirt-vmconsole-proxy-1.0.7-3.el7.noarch
ovirt-imageio-proxy-setup-1.6.3-0.el7.noarch
ovirt-engine-dwh-4.4.0-0.0.master.20191119095914.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-tools-backup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-ansible-image-template-1.2.1-1.el7.noarch
ovirt-ansible-engine-setup-1.2.1-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.35-1.el7.noarch
ovirt-ansible-repositories-1.2.1-1.el7.noarch
ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
ovirt-engine-setup-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-webadmin-portal-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-dbscripts-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-tools-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
python2-ovirt-host-deploy-1.9.0-0.0.master.20191125083355.gitd2b9fa5.el7.noarch
ovirt-cockpit-sso-0.1.2-1.el7.noarch
ovirt-engine-dwh-setup-4.4.0-1.el7.noarch
python2-ovirt-engine-lib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-imageio-common-1.6.3-0.el7.x86_64
python-ovirt-engine-sdk4-4.4.1-2.el7.x86_64
ovirt-engine-setup-plugin-ovirt-engine-common-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-imageio-proxy-1.6.3-0.el7.noarch
ovirt-iso-uploader-4.4.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.90-1.el7.noarch
ovirt-ansible-vm-infra-1.2.1-1.el7.noarch
ovirt-engine-metrics-1.3.5-0.0.master.20191124152203.git2a61c41.el7.noarch
ovirt-ansible-infra-1.2.1-1.el7.noarch
ovirt-ansible-roles-1.2.1-1.el7.noarch
ovirt-engine-backend-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-api-explorer-0.0.6-0.alpha.1.20190917git98bf54c.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
ovirt-engine-wildfly-overlay-18.0.0-1.el7.noarch
ovirt-release44-pre-4.4.0-0.4.alpha.20200212093606.git5bce5d1.el7.noarch
ovirt-engine-setup-base-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
[root@engine ~]#


and this for my hypervisor:

[root@hypervisor ~]# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)

[root@hypervisor ~]# rpm -qa | grep ovirt*
ovirt-host-4.4.0-0.3.alpha.el8.x86_64
python3-ovirt-setup-lib-1.3.0-1.el8.noarch
ovirt-provider-ovn-driver-1.2.29-1.el8.noarch
ovirt-vmconsole-host-1.0.7-3.el8.noarch
ovirt-imageio-daemon-1.6.3-0.el8.noarch
cockpit-ovirt-dashboard-0.14.1-1.el8.noarch
ovirt-host-deploy-common-1.9.0-0.0.master.20191125083550.gitd2b9fa5.el8.noarch
ovirt-imageio-common-1.6.3-0.el8.x86_64
ovirt-vmconsole-1.0.7-3.el8.noarch
ovirt-ansible-engine-setup-1.2.1-1.el8.noarch
ovirt-host-dependencies-4.4.0-0.3.alpha.el8.x86_64
ovirt-hosted-engine-setup-2.4.1-1.el8.noarch
ovirt-release44-pre-4.4.0-0.4.alpha.20200212093729.git5bce5d1.el8.noarch
ovirt-ansible-hosted-engine-setup-1.0.35-1.el8.noarch
python3-ovirt-engine-sdk4-4.4.1-1.el8.x86_64
python3-ovirt-host-deploy-1.9.0-0.0.master.20191125083550.gitd2b9fa5.el8.noarch
ovirt-hosted-engine-ha-2.4.1-1.el8.noarch
[root@hypervisor ~]#


And yes, I know this is a test version, but I think it is very helpful for you
that somebody tests the release and gives you feedback ;)

Regards

Dirk


Am 13.02.20 um 11:03 schrieb Yedidyah Bar David:

> On Thu, Feb 13, 2020 at 11:52 AM Dirk Streubel  
> wrote:
>> Hello List,
>>
>> after a few Problems with the package "java-client-kubevirt" and some
>> updates Problems on my Hypervisor and my ovirt-engine  i make a totally
>> fresh installation of ovirt 4.4.
>>
>> So, after the new installation i can't create a Linux VM.  I get a
>> message that i don't understand:
>>
>> Feb 13 10:44:12 hypervisor vdsm[6644]: WARN 

[ovirt-users] Re: Unable to create VMs under ovirt 4.4

2020-02-13 Thread Yedidyah Bar David
On Thu, Feb 13, 2020 at 11:52 AM Dirk Streubel  wrote:
>
> Hello List,
>
> after a few Problems with the package "java-client-kubevirt" and some
> updates Problems on my Hypervisor and my ovirt-engine  i make a totally
> fresh installation of ovirt 4.4.
>
> So, after the new installation i can't create a Linux VM.  I get a
> message that i don't understand:
>
> Feb 13 10:44:12 hypervisor vdsm[6644]: WARN Attempting to add an
> existing net user: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7
> Feb 13 10:44:14 hypervisor journal[2698]: the CPU is incompatible with
> host CPU: Host-CPU stellt nicht die erforderlichen Funktionen bereit:
> hle, rtm
> Feb 13 10:44:15 hypervisor vdsm[6644]: WARN Attempting to remove a non
> existing network: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7
> Feb 13 10:44:15 hypervisor vdsm[6644]: WARN Attempting to remove a non
> existing net user: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7
>
> So, which function are missing? I don't know what "hle" and "rtm" stands
> for? And searching with Google get no Results.
>
> And it is interesting before the new Installation i was possible to
> create a Debian and CentOS VM under ovirt 4.4

Hi,

Which version exactly are you using?

Please note that the current build for el7 is out of date, and that work
is ongoing on moving to el8.

If you want to try current code, the best way is to build from source.

Generally speaking, 4.4 is not released yet, and can be broken at any
given point in time.

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2BMMY326QKLPRZF66NP34LCUBUKI74XX/


[ovirt-users] Re: Can't install virtio-win with EL7.7/Ovirt-4.3.8 -- rpm error

2020-02-13 Thread Cole Robinson
Thanks for the cc Gal. Latest published virtio-win RPMs, 0.1.173-7, are
back to using xz compression now. Seems like the new compression got
picked up automatically by building on Fedora 31.
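For anyone who wants to verify before installing, the payload compressor of a downloaded RPM can be queried directly (the filename here is just an example; EL7's rpm handles xz payloads but not the newer zstd ones):

    rpm -qp --qf '%{PAYLOADCOMPRESSOR}\n' virtio-win-0.1.173-7.noarch.rpm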

Thanks,
Cole

On 2/9/20 3:20 AM, Gal Zaidman wrote:
> Forwarding this to virtio-win developers and packagers.
> Notice that virtio-win is a package in Fedora/Centos/RHEL and it is not
> an "ovirt/RHV" package so ovirt doesn't package it.
> 
> On Sun, Feb 9, 2020 at 4:59 AM  > wrote:
> 
> Same problem.  Looks like the virtio rpm is now built with the new
> compression method, but rpm for EL7 hasn't been updated to support it.
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5Q4AQYIVCAQY6JWFTNJWOHNXZPQD4IEI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUBGCDBSILOEAMS3XFQ43IZVE3OHYPNB/


[ovirt-users] Unable to create VMs under ovirt 4.4

2020-02-13 Thread Dirk Streubel
Hello List,

after a few problems with the package "java-client-kubevirt" and some
update problems on my hypervisor and my ovirt-engine, I made a totally
fresh installation of oVirt 4.4.

So, after the new installation I can't create a Linux VM. I get a
message that I don't understand:

Feb 13 10:44:12 hypervisor vdsm[6644]: WARN Attempting to add an
existing net user: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7
Feb 13 10:44:14 hypervisor journal[2698]: the CPU is incompatible with
host CPU: Host-CPU stellt nicht die erforderlichen Funktionen bereit:
hle, rtm
Feb 13 10:44:15 hypervisor vdsm[6644]: WARN Attempting to remove a non
existing network: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7
Feb 13 10:44:15 hypervisor vdsm[6644]: WARN Attempting to remove a non
existing net user: ovirtmgmt/9ab115dd-1c21-476d-955f-d4164e4bc5c7

So, which functions are missing? I don't know what "hle" and "rtm" stand
for, and searching with Google gets no results.

Interestingly, before the new installation I was able to
create a Debian and a CentOS VM under oVirt 4.4.

Regards

Dirk





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EKIPSV5LF3277GYRTY2ZSJNJU7I457RZ/