[ovirt-users] Re: Failing to restore a backup

2020-07-15 Thread Yedidyah Bar David
On Wed, Jul 15, 2020 at 6:21 PM Andrea Chierici
 wrote:
>
> Dear all,
> I think I finally understood the issue, even if I don't know how to fix it.
>
> Trying to install a new HE from a backup I get the error:
>  "The host has been set in non_operational status, please check engine logs, 
> more info can be found in the engine logs, fix accordingly and re-deploy."
>
> The host, not the hosted engine. This is more clear in another log:
> Host  is set to Non-Operational, it is missing the 
> following networks: 'iscsi_net,sgsi_iscsi,sgsi_priv,sgsi_vpn'
>
> The fact is that those networks are present on the host:
> # ip addr
> 
> 26: sgsi_priv:  mtu 1500 qdisc noqueue state 
> UP group default qlen 1000
> link/ether 90:e2:ba:63:2e:bc brd ff:ff:ff:ff:ff:ff
> inet6 fe80::92e2:baff:fe63:2ebc/64 scope link
> 28: sgsi_vpn:  mtu 1500 qdisc noqueue state 
> UP group default qlen 1000
> link/ether 90:e2:ba:63:2e:bc brd ff:ff:ff:ff:ff:ff
> inet6 fe80::92e2:baff:fe63:2ebc/64 scope link
>valid_lft forever preferred_lft forever
>
> The other two are configured in oVirt but not configured on the bare-metal 
> system; indeed, if I issue "ip addr" on a production host I don't see those 
> nets at all, so I am puzzled. The problem is definitely this one, can anyone 
> provide any suggestion on how to proceed?
> Why is it complaining about sgsi_priv and sgsi_vpn that are not missing at 
> all?

If you pass --restore-from-file, you should be prompted, at some
point, IMO (copying from the code, didn't test recently):

'Pause the execution after adding this host to the '
'engine?\n'
'You will be able to iteratively connect to '
'the restored engine in order to manually '
'review and remediate its configuration before '
'proceeding with the deployment:\nplease ensure that '
'all the datacenter hosts and storage domain are '
'listed as up or in maintenance mode before '
'proceeding.\nThis is normally not required when '
'restoring an up to date and coherent backup. '
'(@VALUES@)[@DEFAULT@]: '

Were you? If so, you can reply 'Yes', and then, later on, you should
get a message:

  - name: Pause the execution to let the user interactively reconfigure the host

  - name: Let the user connect to the bootstrap engine to manually fix host configuration
    msg: >-
      You can now connect to {{ bootstrap_engine_url }} and
      check the status of this host and eventually remediate it,
      please continue only when the host is listed as 'up'

  - name: Pause execution until {{ he_setup_lock_file.path }} is removed, delete it once ready to proceed

At this point, the deploy process will wait until you remove this file
before continuing. Then you can log in to the engine admin UI and change
whatever is needed on the host - including configuring networks - until you
manage to bring it 'Up'. Then remove the file.
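
Putting it together, the flow is roughly this (a sketch - the exact prompt
wording and the lock file path come from your own deploy run, so treat them
as placeholders):

    # deploy a new hosted engine from the backup file
    hosted-engine --deploy --restore-from-file=engine-backup.tar.gz

    # answer 'Yes' when asked whether to pause the execution after adding
    # this host to the engine; the deploy then prints the lock file path

    # connect to the bootstrap engine admin UI, fix the host until it is
    # 'Up', then remove the lock file (use the path printed by the deploy):
    rm /path/printed/by/the/deploy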

Good luck and best regards,

>
> Andrea
>
>
> On 15/07/2020 08:33, Yedidyah Bar David wrote:
>
> On Tue, Jul 14, 2020 at 6:04 PM Andrea Chierici
>  wrote:
>
> Hi,
> thank you for your help.
>
>
> I think this is not a critical failure, and is not what failed the restore.
>
>
>
> Recently I tried the 4.3.11 beta and 4.4.1 and the error now is different:
>
> [ INFO  ] Upgrading CA\n[ ERROR ] Failed to execute stage 'Misc 
> configuration': (2, 'No such file or directory')\n[ INFO  ] DNF Performing 
> DNF transaction rollback\n
>
> This is part of 'engine-setup' output, which 'hosted-engine' runs inside the 
> engine VM. If you can access the engine VM, you can try finding more 
> information in /var/log/ovirt-engine/setup/* there. Otherwise, the 
> hosted-engine deploy script might have managed to get a copy to 
> /var/log/ovirt-hosted-engine-setup/engine-logs*. Please check/share these. 
> Thanks.
>
>
> Unfortunately the installation procedure, when exiting, deletes the VM, 
> hence I can't log in.
>
> Are you sure? Did you check with 'ps', searching qemu processes?
>
> If it's still up, but still using a local IP address, you can find it
> by searching the hosted-engine logs for 'local_vm_ip' and login there
> from the host.
>
> Here are the ERROR messages I got on the logs copied on the host:
>
> engine.log:2020-07-08 15:05:04,178+02 ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) 
> [45a7e7f3] Can not run fence action on host '', no 
> suitable proxy host was found.
>
> That's ok.
>
> server.log:2020-07-08 15:09:23,081+02 ERROR 
> [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: 
> Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
> server.log:2020-07-08 15:14:19,804+02 ERROR 
> 

[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Strahil Nikolov via Users
You first add the node (assign a Datacenter and Cluster).
Then you define the storage volume and give details and options (for example, 
an option like 'backup-volfile-servers=host2:host3').
Then, during the host activation phase, all storage domains are mounted on the 
host, and when it goes into maintenance they are unmounted.
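
As a rough sketch, the storage domain definition in the admin UI would then
contain something along these lines (host names and volume path are
placeholders, and the exact field labels may differ between versions):

    Storage Type:  GlusterFS
    Path:          host1:/data
    Mount Options: backup-volfile-servers=host2:host3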

Best Regards,
Strahil Nikolov

On 16 July 2020 at 1:21:33 GMT+03:00, Philip Brown wrote:
>Awesome thats good news.
>
>So... does that happen automatically?
>
>ie: install ovirt "node" image, then tell ovirt hosted engine "go add
>that node to cluster", and it automagically gets done?
>
>Kinda sounds likely, but I'd like to set my expectations appropriately
>
>
>- Original Message -
>From: "Jayme" 
>To: "Philip Brown" 
>Cc: "Strahil Nikolov" , "users"
>
>Sent: Wednesday, July 15, 2020 3:03:39 PM
>Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
>Your other hosts that aren’t participating in gluster storage would
>just
>mount the gluster storage domains.
>
>On Wed, Jul 15, 2020 at 6:44 PM Philip Brown  wrote:
>
>> Hmm...
>>
>>
>> Are you then saying, that YES, all host nodes need to be able to talk
>to
>> the glusterfs filesystem?
>>
>>
>> on a related note, I'd like to have as few nodes actually holding
>> glusterfs data as possible, since I want that data on SSD.
>> Rather than multiple "replication set" hosts, and one arbiter.. is it
>> instead possible to have only 2 replication set hosts, and multiple
>> (arbitrariliy many) arbiter nodes?
>>
>>
>> - Original Message -
>> From: "Strahil Nikolov" 
>> To: "users" , "Philip Brown" 
>> Sent: Wednesday, July 15, 2020 1:59:40 PM
>> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>>
>> You can use   a distributed replicated volume of type 'replica 3
>arbiter
>> 1'.
>> For example, NodeA  and NodeB are contain  replica  set 1  with NodeC
>as
>> their arbiter and NodeD and NodeE as the second  replica set  2  with
>NodeC
>> as thir arbiter also.
>>
>> In such case you got only 2 copies of a single shard, but you are
>fully
>> "supported" from gluster perspective.
>> Also, all  hosts can have an external storage like  your  NAS.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
>> >arg. when I said "add 2 more nodes that arent part of the cluster",
>I
>> >meant,
>> >"part of the glusterfs cluster".
>> >
>> >or at minimum, maybe some kind of client-only setup, if they need
>> >access?
>> >
>> >
>> >- Original Message -
>> >From: "Philip Brown" 
>> >To: "users" 
>> >Sent: Wednesday, July 15, 2020 10:37:48 AM
>> >Subject: [ovirt-users] mixed hyperconverged?
>> >
>> >I'm thinking of doing an SSD based hyperconverged setup (for 4.3),
>but
>> >am wondering about certain design issues.
>> >
>> >seems like the optimal number is 3 nodes for the glusterfs.
>> >but.. I want 5 host nodes, not 3
>> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
>> >Is it possible to have 3 nodes be the hyperconverged stuff.. but
>then
>> >add in 2 "regular" nodes, that dont store anything and arent part of
>> >the cluster?
>> >
>> >is it required to be part of the gluster cluster, to also be part of
>> >the ovirt cluster, if thats where the hosted-engine lives?
>> >or can I just have the hosted engine be switchable between the 3
>nodes,
>> >and the other 2 be VM-only hosts?
>> >
>> >Any recommendations here?
>> >
>> >I dont what 5 way replication going on. Nor do I want to have to pay
>> >for large SSDs on all my host nodes.
>> >(Im planning to run them with the ovirt 3.4 node image)
>> >
>> >
>> >
>> >--
>> >Philip Brown| Sr. Linux System Administrator | Medata, Inc.
>> >5 Peters Canyon Rd Suite 250
>> >Irvine CA 92606
>> >Office 714.918.1310| Fax 714.918.1325
>> >pbr...@medata.com| www.medata.com
>> >___
>> >Users mailing list -- users@ovirt.org
>> >To unsubscribe send an email to users-le...@ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >
>>
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
>> >___
>> >Users mailing list -- users@ovirt.org
>> >To unsubscribe send an email to users-le...@ovirt.org
>> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >oVirt Code of Conduct:
>> >https://www.ovirt.org/community/about/community-guidelines/
>> >List Archives:
>> >
>>
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>>

[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Strahil Nikolov via Users


On 16 July 2020 at 0:41:22 GMT+03:00, Philip Brown wrote:
>Hmm...
>
>
>Are you then saying, that YES, all host nodes need to be able to talk
>to the glusterfs filesystem?
>
No, but it sounded like you need that.
You can have 'replica 3' where 2 of the hosts don't run the gluster server 
(they are still able to mount the volumes).
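
A minimal sketch of such a client-only mount, assuming the glusterfs fuse
client is installed on the host and using placeholder host/volume names:

    mount -t glusterfs -o backup-volfile-servers=host2:host3 host1:/vmstore /mnt/test

oVirt does the equivalent mount itself when the host is activated; this is
just to illustrate that no local gluster server is needed on that host.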

>on a related note, I'd like to have as few nodes actually holding
>glusterfs data as possible, since I want that data on SSD.
>Rather than multiple "replication set" hosts, and one arbiter.. is it
>instead possible to have only 2 replication set hosts, and multiple
>(arbitrariliy many) arbiter nodes?
>
You need 2 copies of the data, but 'replica 3' is the optimal setup.


>- Original Message -
>From: "Strahil Nikolov" 
>To: "users" , "Philip Brown" 
>Sent: Wednesday, July 15, 2020 1:59:40 PM
>Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
>You can use   a distributed replicated volume of type 'replica 3
>arbiter 1'.
>For example, NodeA  and NodeB are contain  replica  set 1  with NodeC
>as their arbiter and NodeD and NodeE as the second  replica set  2 
>with NodeC as thir arbiter also.
>
>In such case you got only 2 copies of a single shard, but you are fully
>"supported" from gluster perspective.
>Also, all  hosts can have an external storage like  your  NAS.
>
>Best Regards,
>Strahil Nikolov
>
>On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
>>arg. when I said "add 2 more nodes that arent part of the cluster", I
>>meant,
>>"part of the glusterfs cluster".
>>
>>or at minimum, maybe some kind of client-only setup, if they need
>>access?
>>
>>
>>- Original Message -
>>From: "Philip Brown" 
>>To: "users" 
>>Sent: Wednesday, July 15, 2020 10:37:48 AM
>>Subject: [ovirt-users] mixed hyperconverged?
>>
>>I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
>>am wondering about certain design issues.
>>
>>seems like the optimal number is 3 nodes for the glusterfs.
>>but.. I want 5 host nodes, not 3
>>and I want the main storage for VMs to be separate iSCSI NAS boxes.
>>Is it possible to have 3 nodes be the hyperconverged stuff.. but then
>>add in 2 "regular" nodes, that dont store anything and arent part of
>>the cluster?
>>
>>is it required to be part of the gluster cluster, to also be part of
>>the ovirt cluster, if thats where the hosted-engine lives?
>>or can I just have the hosted engine be switchable between the 3
>nodes,
>>and the other 2 be VM-only hosts?
>>
>>Any recommendations here?
>>
>>I dont what 5 way replication going on. Nor do I want to have to pay
>>for large SSDs on all my host nodes.
>>(Im planning to run them with the ovirt 3.4 node image)
>>
>>
>>
>>--
>>Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
>>5 Peters Canyon Rd Suite 250 
>>Irvine CA 92606 
>>Office 714.918.1310| Fax 714.918.1325 
>>pbr...@medata.com| www.medata.com
>>___

Best Regards,
Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4G57WQOYI7IAPIP5KDHOCI6KJX6AR4Z5/


[ovirt-users] Re: Ovirt 4.4.1 Install failure at OVF_Store check

2020-07-15 Thread AK via Users
Didi, et al.,

I took the info Alistair referenced, changed the FQDNs of all servers to an 
XX.info domain, and the install has successfully completed. Big thanks for the 
info, Alistair. If you would like the install logs to validate anything, please 
let me know and I will attach them.

thanks

Andy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BEH2BVT2TBWGE4YLEINH5QAZV6ABH3ZU/


[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Philip Brown
Awesome, that's good news.

So... does that happen automatically?

ie: install ovirt "node" image, then tell ovirt hosted engine "go add that node 
to cluster", and it automagically gets done?

Kinda sounds likely, but I'd like to set my expectations appropriately


- Original Message -
From: "Jayme" 
To: "Philip Brown" 
Cc: "Strahil Nikolov" , "users" 
Sent: Wednesday, July 15, 2020 3:03:39 PM
Subject: Re: [ovirt-users] Re: mixed hyperconverged?

Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.

On Wed, Jul 15, 2020 at 6:44 PM Philip Brown  wrote:

> Hmm...
>
>
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
>
>
> on a related note, I'd like to have as few nodes actually holding
> glusterfs data as possible, since I want that data on SSD.
> Rather than multiple "replication set" hosts, and one arbiter.. is it
> instead possible to have only 2 replication set hosts, and multiple
> (arbitrariliy many) arbiter nodes?
>
>
> - Original Message -
> From: "Strahil Nikolov" 
> To: "users" , "Philip Brown" 
> Sent: Wednesday, July 15, 2020 1:59:40 PM
> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
> You can use   a distributed replicated volume of type 'replica 3 arbiter
> 1'.
> For example, NodeA  and NodeB are contain  replica  set 1  with NodeC as
> their arbiter and NodeD and NodeE as the second  replica set  2  with NodeC
> as thir arbiter also.
>
> In such case you got only 2 copies of a single shard, but you are fully
> "supported" from gluster perspective.
> Also, all  hosts can have an external storage like  your  NAS.
>
> Best Regards,
> Strahil Nikolov
>
> On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
> >arg. when I said "add 2 more nodes that arent part of the cluster", I
> >meant,
> >"part of the glusterfs cluster".
> >
> >or at minimum, maybe some kind of client-only setup, if they need
> >access?
> >
> >
> >- Original Message -
> >From: "Philip Brown" 
> >To: "users" 
> >Sent: Wednesday, July 15, 2020 10:37:48 AM
> >Subject: [ovirt-users] mixed hyperconverged?
> >
> >I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
> >am wondering about certain design issues.
> >
> >seems like the optimal number is 3 nodes for the glusterfs.
> >but.. I want 5 host nodes, not 3
> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
> >Is it possible to have 3 nodes be the hyperconverged stuff.. but then
> >add in 2 "regular" nodes, that dont store anything and arent part of
> >the cluster?
> >
> >is it required to be part of the gluster cluster, to also be part of
> >the ovirt cluster, if thats where the hosted-engine lives?
> >or can I just have the hosted engine be switchable between the 3 nodes,
> >and the other 2 be VM-only hosts?
> >
> >Any recommendations here?
> >
> >I dont what 5 way replication going on. Nor do I want to have to pay
> >for large SSDs on all my host nodes.
> >(Im planning to run them with the ovirt 3.4 node image)
> >
> >
> >
> >--
> >Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> >5 Peters Canyon Rd Suite 250
> >Irvine CA 92606
> >Office 714.918.1310| Fax 714.918.1325
> >pbr...@medata.com| www.medata.com
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/46IWO6CTOGJVZN2M6DMNB3AOX6B347S3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJ2FUSHYJG7WQVEPDZSYEJ5QYGYXQCMS/


[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Jayme
Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.

On Wed, Jul 15, 2020 at 6:44 PM Philip Brown  wrote:

> Hmm...
>
>
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
>
>
> on a related note, I'd like to have as few nodes actually holding
> glusterfs data as possible, since I want that data on SSD.
> Rather than multiple "replication set" hosts, and one arbiter.. is it
> instead possible to have only 2 replication set hosts, and multiple
> (arbitrariliy many) arbiter nodes?
>
>
> - Original Message -
> From: "Strahil Nikolov" 
> To: "users" , "Philip Brown" 
> Sent: Wednesday, July 15, 2020 1:59:40 PM
> Subject: Re: [ovirt-users] Re: mixed hyperconverged?
>
> You can use   a distributed replicated volume of type 'replica 3 arbiter
> 1'.
> For example, NodeA  and NodeB are contain  replica  set 1  with NodeC as
> their arbiter and NodeD and NodeE as the second  replica set  2  with NodeC
> as thir arbiter also.
>
> In such case you got only 2 copies of a single shard, but you are fully
> "supported" from gluster perspective.
> Also, all  hosts can have an external storage like  your  NAS.
>
> Best Regards,
> Strahil Nikolov
>
> On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
> >arg. when I said "add 2 more nodes that arent part of the cluster", I
> >meant,
> >"part of the glusterfs cluster".
> >
> >or at minimum, maybe some kind of client-only setup, if they need
> >access?
> >
> >
> >- Original Message -
> >From: "Philip Brown" 
> >To: "users" 
> >Sent: Wednesday, July 15, 2020 10:37:48 AM
> >Subject: [ovirt-users] mixed hyperconverged?
> >
> >I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
> >am wondering about certain design issues.
> >
> >seems like the optimal number is 3 nodes for the glusterfs.
> >but.. I want 5 host nodes, not 3
> >and I want the main storage for VMs to be separate iSCSI NAS boxes.
> >Is it possible to have 3 nodes be the hyperconverged stuff.. but then
> >add in 2 "regular" nodes, that dont store anything and arent part of
> >the cluster?
> >
> >is it required to be part of the gluster cluster, to also be part of
> >the ovirt cluster, if thats where the hosted-engine lives?
> >or can I just have the hosted engine be switchable between the 3 nodes,
> >and the other 2 be VM-only hosts?
> >
> >Any recommendations here?
> >
> >I dont what 5 way replication going on. Nor do I want to have to pay
> >for large SSDs on all my host nodes.
> >(Im planning to run them with the ovirt 3.4 node image)
> >
> >
> >
> >--
> >Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> >5 Peters Canyon Rd Suite 250
> >Irvine CA 92606
> >Office 714.918.1310| Fax 714.918.1325
> >pbr...@medata.com| www.medata.com
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/46IWO6CTOGJVZN2M6DMNB3AOX6B347S3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GGWUV4PJUL2HL6P6IW6PMVGNQZF5C35Z/


[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Philip Brown
Hmm...


Are you then saying, that YES, all host nodes need to be able to talk to the 
glusterfs filesystem?


on a related note, I'd like to have as few nodes actually holding glusterfs 
data as possible, since I want that data on SSD.
Rather than multiple "replication set" hosts and one arbiter.. is it instead 
possible to have only 2 replication set hosts, and multiple (arbitrarily many) 
arbiter nodes?


- Original Message -
From: "Strahil Nikolov" 
To: "users" , "Philip Brown" 
Sent: Wednesday, July 15, 2020 1:59:40 PM
Subject: Re: [ovirt-users] Re: mixed hyperconverged?

You can use a distributed replicated volume of type 'replica 3 arbiter 1'.
For example, NodeA and NodeB contain replica set 1 with NodeC as their 
arbiter, and NodeD and NodeE contain the second replica set 2, again with 
NodeC as their arbiter.

In that case you get only 2 copies of a single shard, but you are fully 
"supported" from a gluster perspective.
Also, all hosts can have external storage like your NAS.

Best Regards,
Strahil Nikolov

On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
>arg. when I said "add 2 more nodes that arent part of the cluster", I
>meant,
>"part of the glusterfs cluster".
>
>or at minimum, maybe some kind of client-only setup, if they need
>access?
>
>
>- Original Message -
>From: "Philip Brown" 
>To: "users" 
>Sent: Wednesday, July 15, 2020 10:37:48 AM
>Subject: [ovirt-users] mixed hyperconverged?
>
>I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
>am wondering about certain design issues.
>
>seems like the optimal number is 3 nodes for the glusterfs.
>but.. I want 5 host nodes, not 3
>and I want the main storage for VMs to be separate iSCSI NAS boxes.
>Is it possible to have 3 nodes be the hyperconverged stuff.. but then
>add in 2 "regular" nodes, that dont store anything and arent part of
>the cluster?
>
>is it required to be part of the gluster cluster, to also be part of
>the ovirt cluster, if thats where the hosted-engine lives?
>or can I just have the hosted engine be switchable between the 3 nodes,
>and the other 2 be VM-only hosts?
>
>Any recommendations here?
>
>I dont what 5 way replication going on. Nor do I want to have to pay
>for large SSDs on all my host nodes.
>(Im planning to run them with the ovirt 3.4 node image)
>
>
>
>--
>Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
>5 Peters Canyon Rd Suite 250 
>Irvine CA 92606 
>Office 714.918.1310| Fax 714.918.1325 
>pbr...@medata.com| www.medata.com
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46IWO6CTOGJVZN2M6DMNB3AOX6B347S3/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-15 Thread Strahil Nikolov via Users
I guess your only option is to edit 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and 
replace 'package' with 'dnf' (keep the same indentation, 2 "spaces" deeper 
than '- name', right where "package" starts).
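
Based on the task quoted further down in this message, the edited task would
look roughly like this (a sketch - only the module name changes from
'package' to 'dnf'):

    - name: Change to Install lvm tools for RHEL systems.
      dnf:
        name: device-mapper-persistent-data
        state: present
      when: ansible_os_family == 'RedHat'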

Best Regards,
Strahil Nikolov

On 15 July 2020 at 22:39:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you very much Strahil for your continued assistance.  I have
>tried cleaning up and redeploying four additional times and am still
>experiencing the same error.
>
>To Summarize
>
>(1)
>Attempt 1: change gluster_infra_thick_lvs --> size: 100G to size:
>'100%PVS' and change gluster_infra_thinpools --> lvsize: 500G to
>lvsize: '100%PVS'
>Result 1: deployment failed -->
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>(2)
>Attempt 2: same as Attempt 1, but substituted 99G for '100%PVS'
>Result 2: same as Result 1
>
>(3)
>Attempt 3: same as Attempt 1, but added
>vars:
>  ansible_python_interpreter: /usr/bin/python3
>Result 3: same as Result 1
>
>(4)
>Attempt 4: reboot all three nodes, same as Attempt 1 but omitted
>previously edited size arguments as I read in documentation at
>https://github.com/gluster/gluster-ansible-infra that the size/lvsize
>arguements for variables gluster_infra_thick_lvs and
>gluster_infra_lv_logicalvols are optional and default to 100% size of
>LV.
>
>At the end of this post are the latest version of the playbook and log
>output.  As best I can tell the nodes are fully updated, default
>installs using verified images of v4.4.1.1.
>
>From /var/log/cockpit/ovirt-dashboard/gluster-deployment.log I see that
>line 33 in task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml is
>what is causing the deployment to fail at this point
>
>- name: Change to Install lvm tools for RHEL systems.
>  package:
>name: device-mapper-persistent-data
>state: present
>  when: ansible_os_family == 'RedHat'
>
>But package device-mapper-persistent-data is installed:
>
>[root@fmov1n1 ~]# dnf install device-mapper-persistent-data
>Last metadata expiration check: 0:32:10 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Package device-mapper-persistent-data-0.8.5-3.el8.x86_64 is already
>installed.
>Dependencies resolved.
>Nothing to do.
>Complete!
>
>[root@fmov1n1 ~]# dnf info device-mapper-persistent-data
>Last metadata expiration check: 0:31:44 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Installed Packages
>Name : device-mapper-persistent-data
>Version  : 0.8.5
>Release  : 3.el8
>Architecture : x86_64
>Size : 1.4 M
>Source   : device-mapper-persistent-data-0.8.5-3.el8.src.rpm
>Repository   : @System
>Summary  : Device-mapper Persistent Data Tools
>URL  : https://github.com/jthornber/thin-provisioning-tools
>License  : GPLv3+
>Description  : thin-provisioning-tools contains
>check,dump,restore,repair,rmap
>: and metadata_size tools to manage device-mapper thin provisioning
>  : target metadata devices; cache check,dump,metadata_size,restore
>  : and repair tools to manage device-mapper cache metadata devices
>   : are included and era check, dump, restore and invalidate to manage
> : snapshot eras
>
>I can't figure out why Ansible v2.9.10 is not calling DNF.  Ansible DNF
>package is installed:
>
>[root@fmov1n1 modules]# ansible-doc -t module dnf
>> DNF   
>(/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py)
>
>Installs, upgrade, removes, and lists packages and groups with the
>`dnf' package
>manager.
>
>  * This module is maintained by The Ansible Core Team
>...
>
>
>I am unsure how to further troubleshoot from here!
>
>Thank you again!!!
>Charles
>
>---
>Latest Gluster Playbook (edited from Wizard output)
>
>hc_nodes:
>  hosts:
>fmov1n1.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>

[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Strahil Nikolov via Users
You can use a distributed replicated volume of type 'replica 3 arbiter 1'.
For example, NodeA and NodeB contain replica set 1 with NodeC as their 
arbiter, and NodeD and NodeE contain the second replica set 2, again with 
NodeC as their arbiter.

In that case you get only 2 copies of a single shard, but you are fully 
"supported" from a gluster perspective.
Also, all hosts can have external storage like your NAS.
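
As a sketch, creating such a layout from the command line could look like
this (volume name, host names and brick paths are placeholders; the two
arbiter bricks on NodeC must use different paths):

    gluster volume create vmstore replica 3 arbiter 1 \
        nodeA:/bricks/vmstore/brick nodeB:/bricks/vmstore/brick nodeC:/bricks/vmstore/arb1 \
        nodeD:/bricks/vmstore/brick nodeE:/bricks/vmstore/brick nodeC:/bricks/vmstore/arb2

With 'replica 3 arbiter 1' every third brick becomes the arbiter of its
replica set, so this gives two replica sets of 2 data bricks + 1 arbiter each.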

Best Regards,
Strahil Nikolov

On 15 July 2020 at 21:11:34 GMT+03:00, Philip Brown wrote:
>arg. when I said "add 2 more nodes that arent part of the cluster", I
>meant,
>"part of the glusterfs cluster".
>
>or at minimum, maybe some kind of client-only setup, if they need
>access?
>
>
>- Original Message -
>From: "Philip Brown" 
>To: "users" 
>Sent: Wednesday, July 15, 2020 10:37:48 AM
>Subject: [ovirt-users] mixed hyperconverged?
>
>I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but
>am wondering about certain design issues.
>
>seems like the optimal number is 3 nodes for the glusterfs.
>but.. I want 5 host nodes, not 3
>and I want the main storage for VMs to be separate iSCSI NAS boxes.
>Is it possible to have 3 nodes be the hyperconverged stuff.. but then
>add in 2 "regular" nodes, that dont store anything and arent part of
>the cluster?
>
>is it required to be part of the gluster cluster, to also be part of
>the ovirt cluster, if thats where the hosted-engine lives?
>or can I just have the hosted engine be switchable between the 3 nodes,
>and the other 2 be VM-only hosts?
>
>Any recommendations here?
>
>I dont what 5 way replication going on. Nor do I want to have to pay
>for large SSDs on all my host nodes.
>(Im planning to run them with the ovirt 3.4 node image)
>
>
>
>--
>Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
>5 Peters Canyon Rd Suite 250 
>Irvine CA 92606 
>Office 714.918.1310| Fax 714.918.1325 
>pbr...@medata.com| www.medata.com
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EA7FHVUTBLHQTBRJOYELUIAR4TCODUV6/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 7:54 PM Arsène Gschwind
 wrote:
>
> On Wed, 2020-07-15 at 17:46 +0300, Nir Soffer wrote:
>
> What we see in the data you sent:
>
>
> Qemu chain:
>
>
> $ qemu-img info --backing-chain
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> file format: qcow2
>
> virtual size: 150G (161061273600 bytes)
>
> disk size: 0
>
> cluster_size: 65536
>
> backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)
>
> backing file format: qcow2
>
> Format specific information:
>
> compat: 1.1
>
> lazy refcounts: false
>
> refcount bits: 16
>
> corrupt: false
>
>
> image: 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> file format: qcow2
>
> virtual size: 150G (161061273600 bytes)
>
> disk size: 0
>
> cluster_size: 65536
>
> Format specific information:
>
> compat: 1.1
>
> lazy refcounts: false
>
> refcount bits: 16
>
> corrupt: false
>
>
> Vdsm chain:
>
>
> $ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta
>
> CAP=161061273600
>
> CTIME=1594060718
>
> DESCRIPTION=
>
> DISKTYPE=DATA
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
> FORMAT=COW
>
> GEN=0
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
> LEGALITY=ILLEGAL
>
>
> ^^
>
> This is the issue, the top volume is illegal.
>
>
> PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
>
> TYPE=SPARSE
>
> VOLTYPE=LEAF
>
>
> $ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta
>
> CAP=161061273600
>
> CTIME=1587646763
>
> DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM
>
> H11 HDB D13"}
>
> DISKTYPE=DATA
>
> DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
>
> FORMAT=COW
>
> GEN=0
>
> IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
>
> LEGALITY=LEGAL
>
> PUUID=----
>
> TYPE=SPARSE
>
> VOLTYPE=INTERNAL
>
>
> We set volume to ILLEGAL when we merge the top volume into the parent volume,
>
> and both volumes contain the same data.
>
>
> After we mark the volume as ILLEGAL, we pivot to the parent volume
>
> (8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).
>
>
> If the pivot was successful, the parent volume may have new data, and starting
>
> the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state
>
> prevent this.
>
>
> If the pivot was not successful, the vm must be started using the top
>
> volume, but it
>
> will always fail if the volume is ILLEGAL.
>
>
> If the volume is ILLEGAL, trying to merge again when the VM is not running 
> will
>
> always fail, since vdsm does not know if the pivot succeeded or not, and cannot 
> merge
>
> the volume in a safe way.
>
>
> Do you have the vdsm from all merge attempts on this disk?
>
> This is an extract of the vdsm logs, i may provide the complete log if it 
> would help.

Yes, this is only the start of the merge. We see the success message, but
this only means the merge job was started.

Please share the complete log, and if needed the next log. The
important messages we look for are:

Requesting pivot to complete active layer commit ...

Follow by:

Pivot completed ...

If pivot failed, we expect to see this message:

Pivot failed: ...

After these messages we may find very important logs that explain why your
disk was left in an inconsistent state.

Since this looks like a bug and may be useful to others, I think it is time
to file a vdsm bug, and attach the logs to the bug.
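
To locate these messages quickly, a grep along these lines should work on the
host that ran the VM (assuming the default vdsm log location; rotated logs
are usually compressed, so use xzgrep for those):

    grep -E "Requesting pivot|Pivot completed|Pivot failed" /var/log/vdsm/vdsm.log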

> 2020-07-13 11:18:30,257+0200 INFO  (jsonrpc/5) [api.virt] START 
> merge(drive={u'imageID': u'6c1445b3-33ac-4ec4-8e43-483d4a6da4e3', 
> u'volumeID': u'6172a270-5f73-464d-bebd-8bf0658c1de0', u'domainID': 
> u'a6f2625d-0f21-4d81-b98c-f545d5f86f8e', u'poolID': 
> u'0002-0002-0002-0002-0289'}, ba
> seVolUUID=u'a9d5fe18-f1bd-462e-95f7-42a50e81eb11', 
> topVolUUID=u'6172a270-5f73-464d-bebd-8bf0658c1de0', bandwidth=u'0', 
> jobUUID=u'5059c2ce-e2a0-482d-be93-2b79e8536667') 
> from=:::10.34.38.31,39226, flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
> vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:4
> 8)
> 2020-07-13 11:18:30,271+0200 INFO  (jsonrpc/5) [vdsm.api] START 
> getVolumeInfo(sdUUID='a6f2625d-0f21-4d81-b98c-f545d5f86f8e', 
> spUUID='0002-0002-0002-0002-0289', 
> imgUUID='6c1445b3-33ac-4ec4-8e43-483d4a6da4e3', 
> volUUID=u'a9d5fe18-f1bd-462e-95f7-42a50e81eb11', options=None) from=::fff
> f:10.34.38.31,39226, flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
> task_id=877c30b3-660c-4bfa-a215-75df8d03657e (api:48)
> 2020-07-13 11:18:30,281+0200 INFO  (jsonrpc/6) [api.virt] START 
> merge(drive={u'imageID': u'b8e8b8b6-edd1-4d40-b80b-259268ff4878', 
> u'volumeID': u'28ed1acb-9697-43bd-980b-fe4317a06f24', u'domainID': 
> u'6b82f31b-fa2a-406b-832d-64d9666e1bcc', u'poolID': 
> u'0002-0002-0002-0002-0289'}, ba
> seVolUUID=u'29f99f8d-d8a6-475a-928c-e2ffdba76d80', 
> 

[ovirt-users] Re: VM Portal on a stand alone server

2020-07-15 Thread Michal Skrivanek


> On 15 Jul 2020, at 17:12, Gal Villaret  wrote:
> 
> I have the engine running on a dedicated server.
> 
> I would like to separate the VM Portal to another server or maybe have 
> another instance of VM portal on another server.
> 
> The idea is to be able to put the VM Portal on a different subnet and to put 
> a firewall between it and the engine.

it’s possible in devel mode, see docker container at 
https://github.com/oVirt/ovirt-web-ui
but we’re not really updating it very often, I wouldn’t recommend it for 
production
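
For completeness, a hypothetical way to run that container (the image name,
port and ENGINE_URL variable are assumptions on my side - check the README in
the repository above for the actual instructions):

    docker run --rm -it -p 3000:3000 \
        -e ENGINE_URL=https://engine.example.com/ovirt-engine/ \
        ovirtwebui/ovirt-web-ui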

What’s the concern if you limit access to :443 and use a proxy? There’s not 
much more a random Joe can do without permission to log into webadmin; 
there’s just a static landing page with a few links.

> 
> Thanks. 
> 
> On Wed, Jul 15, 2020, 17:18 Sandro Bonazzola  > wrote:
> 
> 
> On Wed, 15 Jul 2020 at 12:26,  wrote:
> Hi Folks, 
> 
> I'm rather new to oVirt and loving it. 
> running 4.4.1. 
> I would like to be able to run the VM Portal on a stand-alone server for 
> security concerns. 
> 
> Can anyone point in the right direction for achieving this? 
> 
> Can you please elaborate?
> Are you asking for having the admin portal and the VM user portal running on 
> 2 different servers?
> Or running the engine on a dedicated server instead of on self hosted engine 
> VM?
> 
> 
>  
> 
> Thanks, 
> 
> Gal Villaret
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BE6CNDNLDKZQW5ANOC3UFT3BQZZFGHC/
>  
> 
> 
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
> sbona...@redhat.com    
>  
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EDCHL6QLGDWU2GASCOXHC4B3DFHJNPBC/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LRELPGG4GFRDX4NAYARVT4IFBT6QXBL2/


[ovirt-users] Re: video virtio (instead of qxl)

2020-07-15 Thread Michal Skrivanek


> On 15 Jul 2020, at 19:05, Michael Lipp  wrote:
> 
>> On 15.07.20 at 17:36, Michal Skrivanek wrote:
>> 
>>> On 14 Jul 2020, at 16:44, Michael Lipp  wrote:
>>> 
>>> On 14.07.20 at 15:27, Michal Skrivanek wrote:
> This works perfectly with my Fedora 32 and Arch guests and the change is 
> really worth it.
 Hi,
 what kind of performance benefits you’ve seen? 
 
 It’s not currently in near term roadmap, but if anyone wants to contribute 
 patches I don’t see a problem including it as an option for newer guests 
 indeed.
>>> The display (spice) updates much faster and more "smoothly". Most
>>> notibly when using Arch VMs. With QXL, I have an extreme delay when
>>> typing. This vanishes completey with virtio-vga.
>> using which client? remote-viewer? on which platform?
>> qxl needs drivers, not sure if Arch has that…it should have, but for Windows 
>> you definitely need to install them. If they’re not all right it falls back 
>> to vga emulation which is then slow indeed
> 
> Using ... interesting question. I always assumed that virt-manager
> starts a viewer, but I cannot find one in my process list. So: using
> virt-manager.

for vnc and spice it embeds the same component as remote-viewer. There’s 
“graphics”, which is the console protocol (spice, vnc), and “video”, which is 
the emulated guest video card. You could use both for graphics (that’s what we 
default to since ~4.3) and then you can choose. I’m not sure what virt-manager 
does in this case. Either way, I would suspect it uses VNC.
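
One way to see both on the host running the VM (a sketch; the VM name is a
placeholder) is to look at the libvirt domain XML:

    # graphics type shows the console protocol (spice/vnc),
    # model type shows the emulated video card (qxl, vga, virtio, ...)
    virsh -r dumpxml myvm | grep -E "<graphics |<model type"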


> 
> My Arch system has "xf86-video-qxl" installed and loads the "qxl" kernel
> module automatically when the guest configuration is set to using QXL.
> It does not automatically load "bochs_drm" as mentioned here
> (https://wiki.archlinux.org/index.php/QEMU#qxl 
> ). However, adding it
> "manually" doesn't make any difference.

it could be just a misconfiguration, I’m really not familiar with how this is 
supposed to be configured on Arch, sorry, but it generally works ok elsewhere.

> 
> I know about Windows. I'm using these guests with QXL and the windows
> drivers installed (AFAIK there are no virtio-vga drivers for Windows).
> Contrary to Arch, Windows+QXL works satisfactory.

ok. that’s another indication it’s rather on the guest side.

The only reason we’re not adding it just yet is that it lacks wider support 
(e.g. those Windows drivers) and there’s not much difference. It may be that on 
Arch it’s already useful, but it’s still not widespread enough so not on the 
list yet.
If anyone wants to contribute a patch it would be welcome (it’s not exactly 
trivial, but not too complex either)

Thanks,
michal

> 
>  - Michael
> 
>> 
>>> - Michael
>>> 
 Thanks,
 michal
 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IZQHW3NZB4BFGMP4LMJE2VTJ6H2OSWSB/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKA22EZKWUEUA4ZPGQF6PODQMRDSHL7X/


[ovirt-users] Re: mixed hyperconverged?

2020-07-15 Thread Philip Brown
arg. when I said "add 2 more nodes that arent part of the cluster", I meant,
"part of the glusterfs cluster".

or at minimum, maybe some kind of client-only setup, if they need access?


- Original Message -
From: "Philip Brown" 
To: "users" 
Sent: Wednesday, July 15, 2020 10:37:48 AM
Subject: [ovirt-users] mixed hyperconverged?

I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but am 
wondering about certain design issues.

seems like the optimal number is 3 nodes for the glusterfs.
but.. I want 5 host nodes, not 3
and I want the main storage for VMs to be separate iSCSI NAS boxes.
Is it possible to have 3 nodes be the hyperconverged stuff.. but then add in 2 
"regular" nodes, that dont store anything and arent part of the cluster?

is it required to be part of the gluster cluster, to also be part of the ovirt 
cluster, if thats where the hosted-engine lives?
or can I just have the hosted engine be switchable between the 3 nodes, and the 
other 2 be VM-only hosts?

Any recommendations here?

I dont what 5 way replication going on. Nor do I want to have to pay for large 
SSDs on all my host nodes.
(Im planning to run them with the ovirt 3.4 node image)



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbr...@medata.com| www.medata.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RINQKWPRCQD5KYPFJYA75HFIUVJVTZXC/


[ovirt-users] Re: video virtio (instead of qxl)

2020-07-15 Thread Michael Lipp
On 15.07.20 at 17:36, Michal Skrivanek wrote:
>
>> On 14 Jul 2020, at 16:44, Michael Lipp  wrote:
>>
>> On 14.07.20 at 15:27, Michal Skrivanek wrote:
 This works perfectly with my Fedora 32 and Arch guests and the change is 
 really worth it.
>>> Hi,
>>> what kind of performance benefits you’ve seen? 
>>>
>>> It’s not currently in near term roadmap, but if anyone wants to contribute 
>>> patches I don’t see a problem including it as an option for newer guests 
>>> indeed.
>> The display (spice) updates much faster and more "smoothly". Most
>> notibly when using Arch VMs. With QXL, I have an extreme delay when
>> typing. This vanishes completey with virtio-vga.
> using which client? remote-viewer? on which platform?
> qxl needs drivers, not sure if Arch has that…it should have, but for Windows 
> you definitely need to install them. If they’re not all right it falls back 
> to vga emulation which is then slow indeed

Using ... interesting question. I always assumed that virt-manager
starts a viewer, but I cannot find one in my process list. So: using
virt-manager.

My Arch system has "xf86-video-qxl" installed and loads the "qxl" kernel
module automatically when the guest configuration is set to using QXL.
It does not automatically load "bochs_drm" as mentioned here
(https://wiki.archlinux.org/index.php/QEMU#qxl). However, adding it
"manually" doesn't make any difference.

I know about Windows. I'm using these guests with QXL and the windows
drivers installed (AFAIK there are no virtio-vga drivers for Windows).
Contrary to Arch, Windows+QXL works satisfactory.

 - Michael

>
>>  - Michael
>>
>>> Thanks,
>>> michal
>>>
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/IZQHW3NZB4BFGMP4LMJE2VTJ6H2OSWSB/
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIX7QDZERFY7IYO5J6DUAJ7Y6K3QBSOF/


[ovirt-users] mixed hyperconverged?

2020-07-15 Thread Philip Brown

I'm thinking of doing an SSD based hyperconverged setup (for 4.3), but am 
wondering about certain design issues.

seems like the optimal number is 3 nodes for the glusterfs.
but.. I want 5 host nodes, not 3
and I want the main storage for VMs to be separate iSCSI NAS boxes.
Is it possible to have 3 nodes be the hyperconverged stuff.. but then add in 2 
"regular" nodes, that dont store anything and arent part of the cluster?

is it required to be part of the gluster cluster to also be part of the ovirt 
cluster, if that's where the hosted-engine lives?
or can I just have the hosted engine be switchable between the 3 nodes, and the 
other 2 be VM-only hosts?

Any recommendations here?

I don't want 5-way replication going on. Nor do I want to have to pay for large 
SSDs on all my host nodes.
(I'm planning to run them with the ovirt 3.4 node image)



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbr...@medata.com| www.medata.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZMXMGRGOMYE4UIQH32R6GCCHTABTGSX/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Arsène Gschwind
On Wed, 2020-07-15 at 17:46 +0300, Nir Soffer wrote:

What we see in the data you sent:


Qemu chain:


$ qemu-img info --backing-chain

/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

file format: qcow2

virtual size: 150G (161061273600 bytes)

disk size: 0

cluster_size: 65536

backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:

/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)

backing file format: qcow2

Format specific information:

compat: 1.1

lazy refcounts: false

refcount bits: 16

corrupt: false


image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

file format: qcow2

virtual size: 150G (161061273600 bytes)

disk size: 0

cluster_size: 65536

Format specific information:

compat: 1.1

lazy refcounts: false

refcount bits: 16

corrupt: false


Vdsm chain:


$ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta

CAP=161061273600

CTIME=1594060718

DESCRIPTION=

DISKTYPE=DATA

DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca

FORMAT=COW

GEN=0

IMAGE=d7bd480d-2c51-4141-a386-113abf75219e

LEGALITY=ILLEGAL


^^

This is the issue, the top volume is illegal.


PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

TYPE=SPARSE

VOLTYPE=LEAF


$ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta

CAP=161061273600

CTIME=1587646763

DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM

H11 HDB D13"}

DISKTYPE=DATA

DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca

FORMAT=COW

GEN=0

IMAGE=d7bd480d-2c51-4141-a386-113abf75219e

LEGALITY=LEGAL

PUUID=----

TYPE=SPARSE

VOLTYPE=INTERNAL


We set volume to ILLEGAL when we merge the top volume into the parent volume,

and both volumes contain the same data.


After we mark the volume as ILLEGAL, we pivot to the parent volume

(8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).


If the pivot was successful, the parent volume may have new data, and starting

the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state

prevent this.


If the pivot was not successful, the vm must be started using the top

volume, but it

will always fail if the volume is ILLEGAL.


If the volume is ILLEGAL, trying to merge again when the VM is not running will

always fail, since vdsm does not know if the pivot succeeded or not, and cannot merge

the volume in a safe way.


Do you have the vdsm from all merge attempts on this disk?

This is an extract of the vdsm logs; I may provide the complete log if it would 
help.

2020-07-13 11:18:30,257+0200 INFO  (jsonrpc/5) [api.virt] START 
merge(drive={u'imageID': u'6c1445b3-33ac-4ec4-8e43-483d4a6da4e3', u'volumeID': 
u'6172a270-5f73-464d-bebd-8bf0658c1de0', u'domainID': 
u'a6f2625d-0f21-4d81-b98c-f545d5f86f8e', u'poolID': 
u'0002-0002-0002-0002-0289'}, ba
seVolUUID=u'a9d5fe18-f1bd-462e-95f7-42a50e81eb11', 
topVolUUID=u'6172a270-5f73-464d-bebd-8bf0658c1de0', bandwidth=u'0', 
jobUUID=u'5059c2ce-e2a0-482d-be93-2b79e8536667') from=:::10.34.38.31,39226, 
flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:4
8)
2020-07-13 11:18:30,271+0200 INFO  (jsonrpc/5) [vdsm.api] START 
getVolumeInfo(sdUUID='a6f2625d-0f21-4d81-b98c-f545d5f86f8e', 
spUUID='0002-0002-0002-0002-0289', 
imgUUID='6c1445b3-33ac-4ec4-8e43-483d4a6da4e3', 
volUUID=u'a9d5fe18-f1bd-462e-95f7-42a50e81eb11', options=None) from=::fff
f:10.34.38.31,39226, flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
task_id=877c30b3-660c-4bfa-a215-75df8d03657e (api:48)
2020-07-13 11:18:30,281+0200 INFO  (jsonrpc/6) [api.virt] START 
merge(drive={u'imageID': u'b8e8b8b6-edd1-4d40-b80b-259268ff4878', u'volumeID': 
u'28ed1acb-9697-43bd-980b-fe4317a06f24', u'domainID': 
u'6b82f31b-fa2a-406b-832d-64d9666e1bcc', u'poolID': 
u'0002-0002-0002-0002-0289'}, ba
seVolUUID=u'29f99f8d-d8a6-475a-928c-e2ffdba76d80', 
topVolUUID=u'28ed1acb-9697-43bd-980b-fe4317a06f24', bandwidth=u'0', 
jobUUID=u'241dfab0-2ef2-45a6-a22f-c7122e9fc193') from=:::10.34.38.31,39226, 
flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:4
8)
2020-07-13 11:18:30,282+0200 INFO  (jsonrpc/7) [api.virt] START 
merge(drive={u'imageID': u'd7bd480d-2c51-4141-a386-113abf75219e', u'volumeID': 
u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', u'domainID': 
u'33777993-a3a5-4aad-a24c-dfe5e473faca', u'poolID': 
u'0002-0002-0002-0002-0289'}, ba
seVolUUID=u'8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8', 
topVolUUID=u'6197b30d-0732-4cc7-aef0-12f9f6e9565b', bandwidth=u'0', 
jobUUID=u'720410c3-f1a0-4b25-bf26-cf40aa6b1f97') from=:::10.34.38.31,39226, 
flow_id=4a8b9527-06a3-4be6-9bb9-88630febc227, 
vmId=b5534254-660f-44b1-bc83-d616c98ba0ba (api:4
8)
2020-07-13 11:18:30,299+0200 INFO  (jsonrpc/6) [vdsm.api] START 
getVolumeInfo(sdUUID='6b82f31b-fa2a-406b-832d-64d9666e1bcc', 

[ovirt-users] Re: video virtio (instead of qxl)

2020-07-15 Thread Michal Skrivanek


> On 14 Jul 2020, at 16:44, Michael Lipp  wrote:
> 
> Am 14.07.20 um 15:27 schrieb Michal Skrivanek:
>> 
>>> This works perfectly with my Fedora 32 and Arch guests and the change is 
>>> really worth it.
>> Hi,
>> what kind of performance benefits you’ve seen? 
>> 
>> It’s not currently in near term roadmap, but if anyone wants to contribute 
>> patches I don’t see a problem including it as an option for newer guests 
>> indeed.
> 
> The display (spice) updates much faster and more "smoothly". Most
> notably when using Arch VMs. With QXL, I have an extreme delay when
> typing. This vanishes completely with virtio-vga.

Using which client? remote-viewer? On which platform?
QXL needs guest drivers. I'm not sure whether Arch ships them - it should, but for 
Windows you definitely need to install them. If they are not set up correctly it 
falls back to VGA emulation, which is then indeed slow.

> 
>  - Michael
> 
>> Thanks,
>> michal
>> 
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IZQHW3NZB4BFGMP4LMJE2VTJ6H2OSWSB/
> 
> 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MMSHCWHO74MXAGYPRZHHNXZTJJKFCB6R/


[ovirt-users] Re: What permission do I need to get API access

2020-07-15 Thread miguel . garcia
I already figured out that the problem was how to type the user account. I realized 
the correct format by going to the About section in the admin portal, where my account 
was displayed in the following way: miguel.gar...@email.com@DOMAIN

In the end I was able to log in to the API using that same user account.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EKSWWQPDCDIMF2ZES2QORXDJLBLC6RLO/


[ovirt-users] Re: Failing to restore a backup

2020-07-15 Thread Andrea Chierici

Dear all,
I think I finally understood the issue, even if I don't know how to fix it.

Trying to install a new HE from a backup I get the error:
 "The host has been set in non_operational status, please check engine 
logs, more info can be found in the engine logs, fix accordingly and 
re-deploy."


*The host, not the hosted engine*. This is clearer in another log:
Host  is set to Non-Operational, it is missing the 
following networks: 'iscsi_net,sgsi_iscsi,sgsi_priv,sgsi_vpn'


The fact is that those networks are present on the host:
# ip addr

26: sgsi_priv:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000

    link/ether 90:e2:ba:63:2e:bc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::92e2:baff:fe63:2ebc/64 scope link
28: sgsi_vpn:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000

    link/ether 90:e2:ba:63:2e:bc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::92e2:baff:fe63:2ebc/64 scope link
   valid_lft forever preferred_lft forever

The other two are configured in oVirt but are not configurable on a bare metal 
system; indeed, if I issue "ip addr" on a production host I don't see 
those networks at all, so I am puzzled. The problem is definitely this one; can 
anyone suggest how to proceed?
Why is it complaining about sgsi_priv and sgsi_vpn, which are not missing 
at all?


Andrea


On 15/07/2020 08:33, Yedidyah Bar David wrote:

On Tue, Jul 14, 2020 at 6:04 PM Andrea Chierici
 wrote:

Hi,
thank you for your help.


I think this is not a critical failure, and is not what failed the restore.




Recently I tried the 4.3.11 beta and 4.4.1 and the error now is different:

[ INFO  ] Upgrading CA\n[ ERROR ] Failed to execute stage 'Misc configuration': 
(2, 'No such file or directory')\n[ INFO  ] DNF Performing DNF transaction 
rollback\n


This is part of 'engine-setup' output, which 'hosted-engine' runs inside the 
engine VM. If you can access the engine VM, you can try finding more 
information in /var/log/ovirt-engine/setup/* there. Otherwise, the 
hosted-engine deploy script might have managed to get a copy to 
/var/log/ovirt-hosted-engine-setup/engine-logs*. Please check/share these. 
Thanks.


Unfortunately the installation procedure deletes the VM when exiting, hence I 
can't log in.

Are you sure? Did you check with 'ps', searching for qemu processes?

If it's still up, but still using a local IP address, you can find it
by searching the hosted-engine logs for 'local_vm_ip' and log in there
from the host.
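
For example, something along these lines (illustrative commands; the IP is a
hypothetical value, use whatever local_vm_ip turns up in your logs):

# is the local bootstrap engine VM still running?
ps aux | grep [q]emu

# find the temporary IP assigned to the local engine VM during deploy
grep -r local_vm_ip /var/log/ovirt-hosted-engine-setup/

# then log in from the host using the address found above
ssh root@192.168.222.65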


Here are the ERROR messages I got on the logs copied on the host:

engine.log:2020-07-08 15:05:04,178+02 ERROR 
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) [45a7e7f3] 
Can not run fence action on host '', no suitable proxy host 
was found.

That's ok.


server.log:2020-07-08 15:09:23,081+02 ERROR 
[org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: 
Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
server.log:2020-07-08 15:14:19,804+02 ERROR 
[org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: 
Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found

This probably indicates a problem, but I agree it's not very helpful.


grep: setup: Is a directory

Right - so please search inside it.

Also please check the hosted-engine deploy logs themselves.


Not very helpful.





I simply can't figure out what file is missing.
If, as a test, I try to install the HE without restoring the backup, the 
installation goes smoothly to the end, but at that point I can't restore the 
backup, as far as I can understand.


Another option is to do the restore manually. To find relevant information, search the 
net for "enginevm_before_engine_setup".


Later I will give it a try.

Good luck and best regards,


--
Andrea Chierici - INFN-CNAF 
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463 
SkypeID ataruz
--

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IH2ISMET6PKCP3VUNNTOJZH74F5AEHF2/


[ovirt-users] Re: VM Portal on a stand alone server

2020-07-15 Thread Gal Villaret
I have the engine running on a dedicated server.

I would like to separate the VM Portal to another server or maybe have
another instance of VM portal on another server.

The idea is to be able to put the VM Portal on a different subnet and to
put a firewall between it and the engine.

Thanks.

On Wed, Jul 15, 2020, 17:18 Sandro Bonazzola  wrote:

>
>
> Il giorno mer 15 lug 2020 alle ore 12:26  ha
> scritto:
>
>> Hi Folks,
>>
>> I'm rather new to oVirt and loving it.
>> running 4.4.1.
>> I would like to be able to run the VM Portal on a stand-alone server for
>> security concerns.
>>
>> Can anyone point in the right direction for achieving this?
>>
>
> Can you please elaborate?
> Are you asking for having the admin portal and the VM user portal running
> on 2 different servers?
> Or running the engine on a dedicated server instead of on self hosted
> engine VM?
>
>
>
>
>>
>> Thanks,
>>
>> Gal Villaret
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BE6CNDNLDKZQW5ANOC3UFT3BQZZFGHC/
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.
> *
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EDCHL6QLGDWU2GASCOXHC4B3DFHJNPBC/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
What we see in the data you sent:

Qemu chain:

$ qemu-img info --backing-chain
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
file format: qcow2
virtual size: 150G (161061273600 bytes)
disk size: 0
cluster_size: 65536
backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path:
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
file format: qcow2
virtual size: 150G (161061273600 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

Vdsm chain:

$ cat 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta
CAP=161061273600
CTIME=1594060718
DESCRIPTION=
DISKTYPE=DATA
DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
FORMAT=COW
GEN=0
IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
LEGALITY=ILLEGAL

^^
This is the issue, the top volume is illegal.

PUUID=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8
TYPE=SPARSE
VOLTYPE=LEAF

$ cat 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta
CAP=161061273600
CTIME=1587646763
DESCRIPTION={"DiskAlias":"cpslpd01_Disk1","DiskDescription":"SAP SLCM
H11 HDB D13"}
DISKTYPE=DATA
DOMAIN=33777993-a3a5-4aad-a24c-dfe5e473faca
FORMAT=COW
GEN=0
IMAGE=d7bd480d-2c51-4141-a386-113abf75219e
LEGALITY=LEGAL
PUUID=----
TYPE=SPARSE
VOLTYPE=INTERNAL

We set a volume to ILLEGAL when we merge the top volume into the parent volume,
and both volumes contain the same data.

After we mark the volume as ILLEGAL, we pivot to the parent volume
(8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8).

If the pivot was successful, the parent volume may have new data, and starting
the vm using the top volume may corrupt the vm filesystem. The ILLEGAL state
prevents this.

If the pivot was not successful, the vm must be started using the top volume,
but it will always fail if the volume is ILLEGAL.

If the volume is ILLEGAL, trying to merge again when the VM is not running will
always fail, since vdsm does not know if the pivot succeeded or not, and cannot
merge the volume in a safe way.
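
If you want to double-check this on the host, looking at the LEGALITY field of
the two .meta files you already shared is enough (illustrative commands):

grep -H LEGALITY *.meta
# 6197b30d-0732-4cc7-aef0-12f9f6e9565b.meta:LEGALITY=ILLEGAL   <- top volume
# 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8.meta:LEGALITY=LEGAL     <- parent volume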

Do you have the vdsm logs from all merge attempts on this disk?

The most important log is the one showing the original merge. If the merge
succeeded, we should see a log showing the new libvirt chain, which
should contain
only the parent volume.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LPUSJIQMK2JG2IC5EIODOR3S2JPNLIKS/


[ovirt-users] Re: Strange SD problem

2020-07-15 Thread Arsène Gschwind
Hi Ahmad,

I was running 4.3.9 and yesterday evening I updated everything to 4.3.10.
I put this SD into maintenance mode several times and activated it back without any 
luck. The message appears every time.
When you look at the log I've posted you will see that there seems to be a 
problem with a duplicate key. It looks like the engine isn't able to add this 
SD since it already exists.
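
A read-only query like the following can confirm the duplicate row (just an
illustrative sketch; it assumes the default 'engine' database name and local
psql access on the engine machine):

sudo -u postgres psql engine -c \
  "SELECT lun_id, volume_group_id, serial FROM luns WHERE lun_id = 'repl_HanaLogs_osd_01';"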

Thanks,
Arsene

On Wed, 2020-07-15 at 13:24 +0300, Ahmad Khiet wrote:
Hi Arsène,

can you please tell us which version you are referring to?

as shown in the log: Storage domains with IDs 
[6b82f31b-fa2a-406b-832d-64d9666e1bcc] could not be synchronized. To 
synchronize them, please move them to maintenance and then activate.
can you put them in maintenance and then activate them back so it will be 
synced?
I guess that it is out of sync; that's why the "Add" button appears next to 
already-added LUNs.



On Tue, Jul 14, 2020 at 4:58 PM Arsène Gschwind 
mailto:arsene.gschw...@unibas.ch>> wrote:
Hi Ahmad,

I did the following:

1. Storage -> Storage Domains
2. Click the existing Storage Domain and click "Manage Domain",
and then I see the "Add" button next to the LUN which is already part of the SD.

I do not want to click add since it may destroy the existing SD or the content 
of the LUNs.
In the Engine Log I see the following:


020-07-14 09:57:45,131+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-20) [277145f2] EVENT_ID: 
STORAGE_DOMAINS_COULD_NOT_BE_SYNCED(1,046), Storage domains with IDs 
[6b82f31b-fa2a-406b-832d-64d9666e1bcc] could not be synchronized. To 
synchronize them, please move them to maintenance and then activate.

Thanks a lot

On Tue, 2020-07-14 at 16:07 +0300, Ahmad Khiet wrote:
Hi Arsène Gschwind,

it's really strange that you see "Add" on a LUN that already has been added to 
the database.
to verify the steps you did make at first,
1- Storage -> Storage Domains
2- New Domain - [ select iSCSI ]
3- click on "+" on the iscsi target, then you see the "Add" button is available
4- after clicking add and ok, then this error will be shown in the logs
is that right?

can you also attach vdsm log?




On Tue, Jul 14, 2020 at 1:15 PM Arsène Gschwind 
mailto:arsene.gschw...@unibas.ch>> wrote:
Hello all,

I've checked all my multipath configuration and everything seems correct.
Is there a way to correct this, may be in the DB?

I really need some help, thanks a lot.
Arsène

On Tue, 2020-07-14 at 00:29 +, Arsène Gschwind wrote:
HI,

I'm having a strange behavior with an SD. When trying to manage the SD I see 
the "Add" button for the LUN which should already be the one used for that SD.
In the Logs I see the following:

2020-07-13 17:48:07,292+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.BatchProcedureExecutionConnectionCallback] 
(EE-ManagedThreadFactory-engine-Thread-95) [51091853] Can't execute batch: 
Batch entry 0 select * from public.insertluns(CAST ('repl_HanaLogs_osd_01' AS 
varchar),CAST ('DPUtaW-Q5zp-aZos-HriP-5Z0v-hiWO-w7rmwG' AS varchar),CAST 
('4TCXZ7-R1l1-xkdU-u0vx-S3n4-JWcE-qksPd1' AS varchar),CAST 
('SHUAWEI_XSG1_2102350RMG10HC200035' AS varchar),CAST (7 AS int4),CAST 
('HUAWEI' AS varchar),CAST ('XSG1' AS varchar),CAST (2548 AS int4),CAST 
(268435456 AS int8)) as result was aborted: ERROR: duplicate key value violates 
unique constraint "pk_luns"
  Detail: Key (lun_id)=(repl_HanaLogs_osd_01) already exists.
  Where: SQL statement "INSERT INTO LUNs (
LUN_id,
physical_volume_id,
volume_group_id,
serial,
lun_mapping,
vendor_id,
product_id,
device_size,
discard_max_size
)
VALUES (
v_LUN_id,
v_physical_volume_id,
v_volume_group_id,
v_serial,
v_lun_mapping,
v_vendor_id,
v_product_id,
v_device_size,
v_discard_max_size
)"
PL/pgSQL function insertluns(character varying,character varying,character 
varying,character varying,integer,character varying,character 
varying,integer,bigint) line 3 at SQL statement  Call getNextException to see 
other errors in the batch.
2020-07-13 17:48:07,292+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.BatchProcedureExecutionConnectionCallback] 
(EE-ManagedThreadFactory-engine-Thread-95) [51091853] Can't execute batch. Next 
exception is: ERROR: duplicate key value violates unique constraint "pk_luns"
  Detail: Key (lun_id)=(repl_HanaLogs_osd_01) already exists.
  Where: SQL statement "INSERT INTO LUNs (
LUN_id,
physical_volume_id,
volume_group_id,
serial,
lun_mapping,
vendor_id,
product_id,
device_size,
discard_max_size
)
VALUES (
v_LUN_id,
v_physical_volume_id,
v_volume_group_id,
v_serial,
v_lun_mapping,
v_vendor_id,
v_product_id,
v_device_size,
v_discard_max_size
)"
PL/pgSQL function 

[ovirt-users] Slow ova export performance

2020-07-15 Thread francesco--- via Users
Hi All,

I'm facing a really slow export of VMs hosted on a single-node cluster, on local 
storage. The VM disk is 600 GB and the effective usage is around 300 GB. 
I estimated that the following process would take about 15 hours to complete:

vdsm 25338 25332 99 04:14 pts/007:40:09 qemu-img measure -O qcow2 
/rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697

A strace -p of the pid shows a slow progression to reach the effective size.

lseek(11, 3056795648, SEEK_DATA)= 3056795648
lseek(11, 3056795648, SEEK_HOLE)= 13407092736
lseek(14, 128637468672, SEEK_DATA)  = 128637468672
lseek(14, 128637468672, SEEK_HOLE)  = 317708828672
lseek(14, 128646250496, SEEK_DATA)  = 128646250496
lseek(14, 128646250496, SEEK_HOLE)  = 317708828672
lseek(14, 128637730816, SEEK_DATA)  = 128637730816
lseek(14, 128637730816, SEEK_HOLE)  = 317708828672
lseek(14, 128646774784, SEEK_DATA)  = 128646774784
lseek(14, 128646774784, SEEK_HOLE)  = 317708828672
lseek(14, 128646709248, SEEK_DATA)  = 128646709248

The process takes up a single full core, but I don't think this is the problem. The 
I/O is almost nothing.
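
For what it's worth, the scan can be re-run and timed by hand with the same
command the process above is using (same path; purely an illustrative check,
not a fix):

time qemu-img measure -O qcow2 \
  /rhev/data-center/mnt/_data/6775c41c-7d67-451b-8beb-4fd086eade2e/images/a084fa36-0f93-45c2-a323-ea9ca2d16677/55b3eac5-05b2-4bae-be50-37cde7050697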

Any idea/suggestion?

Thank you for your time
Regards
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QF2QIA4ZRQIE6HQNJSNRBCPM25I3O5D3/


[ovirt-users] Re: Ovirt 4.4.1 Install failure at OVF_Store check

2020-07-15 Thread Yedidyah Bar David
On Wed, Jul 15, 2020 at 3:55 PM Andy  wrote:
>
> Didi,
>
> thanks for the reply and appreciate the help.  I was/am unable to get to the 
> engine however I have collected and attached all the setup logs from the 
> host.  After looking at the thread you suggested, I will change the hostname 
> of the engine, try an install and report back.  Again thanks for the help .

Your engine.log indeed has the same error:

2020-07-15 02:34:08,515-04 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-67)
[b9d5d45] Command 'UploadStreamVDSCommand(HostName = hm3svr03.hm3.loc,
UploadStreamVDSCommandParameters:{hostId='67cd09ed-cb27-48f6-bceb-b9bd997543ac'})'
execution failed: javax.net.ssl.SSLPeerUnverifiedException:
Certificate for  doesn't match any of the subject
alternative names: [hm3svr03.hm3.loc]

I do not see any "logged out" line before it which, as I noted in the
other thread, might be relevant.

Adding Martin and Artur:

Please also check an earlier reply by Alistair on the current thread.

Thanks,

>
> Andy
>
> On Wednesday, July 15, 2020, 3:40:31 AM EDT, Yedidyah Bar David 
>  wrote:
>
>
> On Wed, Jul 15, 2020 at 8:14 AM AK via Users  wrote:
>
> >
> > Yes sir,  I run the clean up script after each failure, clean out the 
> > gluster volume, and remove any network the deploy scripts create.  I just 
> > conducted the deployment on different hardware (different drives, different 
> > CPU, raid controller, SSD's) and it produced the same result (failure at 
> > OVF_STore_check).  The only deployment items that are consistent are 
> > creating the physical network bonds and gluster volumes which can be 
> > mounted across the network and have been tested as storage pools for other 
> > virtualization and storage platforms.
>
>
> Can you please check engine-side logs? If you can access the engine VM
> (search hosted-engine logs for local_vm_ip if it's still on the local
> network), check /var/log/ovirt-engine/*, otherwise, on the host,
> /var/log/ovirt-hosted-engine-setup/engine-logs*/*.
>
> That said, we also have (what seems like) a very similar failure on
> CI, for some time now - check e.g. the latest nightly run:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-20200714225605-ueg6k8.log
>
> 2020-07-14 22:59:42,414-0400 INFO ansible task start {'status': 'OK',
> 'ansible_type': 'task', 'ansible_playbook':
> '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml',
> 'ansible_task': 'ovirt.hosted_engine_setup : Check OVF_STORE volume
> status'}
>
> It tries some time, eventually fails, like your case. engine log has:
>
> https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/engine-logs-2020-07-15T03%3A04%3A29Z/ovirt-engine/engine.log
>
> 2020-07-14 22:57:03,197-04 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [4abbccc] EVENT_ID: USER_VDC_LOGOUT(31), User
> admin@internal-authz connected from '192.168.222.1' using session
> 'W5qdcPNyRLHmMnbMz7i+ZP85De1GjKq7+V1hqbKEeD+QJtpcFGpITEVFIHbUvz+2wF+GTAB6qnCY1gHxBHkGLA=='
> logged out.
> 2020-07-14 22:57:03,242-04 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29)
> [313eed07] Command 'UploadStreamVDSCommand(HostName =
> lago-he-basic-suite-master-host-0.lago.local,
> UploadStreamVDSCommandParameters:{hostId='c6d33fd9-5137-49fc-815a-94baf2d58b93'})'
> execution failed: javax.net.ssl.SSLPeerUnverifiedException:
> Certificate for  doesn't
> match any of the subject alternative names:
> [lago-he-basic-suite-master-host-0.lago.local]
>
> This is currently discussed on the devel list, in thread:
>
> execution failed: javax.net.ssl.SSLPeerUnverifiedException (was:
> [ovirt-devel] vdsm.storage.exception.UnknownTask: Task id unknown
> (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build
> # 1641 - Still Failing!))
>
> We are still not sure about the exact cause, but I have a feeling that
> it's somehow related to naming/name resolution/hostname/etc.
>
> In any case, I didn't manage to reproduce this locally on my own machine.
>
> I suggest checking everything you can think of related to this -
> dhcp/dns, output of 'hostname' on the host, etc.
>
> Good luck and best regards,
> --
> Didi
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy 

[ovirt-users] Re: VM Portal on a stand alone server

2020-07-15 Thread Sandro Bonazzola
Il giorno mer 15 lug 2020 alle ore 12:26  ha
scritto:

> Hi Folks,
>
> I'm rather new to oVirt and loving it.
> running 4.4.1.
> I would like to be able to run the VM Portal on a stand-alone server for
> security concerns.
>
> Can anyone point in the right direction for achieving this?
>

Can you please elaborate?
Are you asking for having the admin portal and the VM user portal running
on 2 different servers?
Or running the engine on a dedicated server instead of on self hosted
engine VM?




>
> Thanks,
>
> Gal Villaret
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3BE6CNDNLDKZQW5ANOC3UFT3BQZZFGHC/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3BV5XHRODSS3WN7ZIY7B4XMBSCBCXMR/


[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Arsène Gschwind
On Wed, 2020-07-15 at 16:28 +0300, Nir Soffer wrote:

On Wed, Jul 15, 2020 at 4:00 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:

On Wed, 2020-07-15 at 15:42 +0300, Nir Soffer wrote:

On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:



Hi Nir,



I've followed your guide, please find attached the informations.


Thanks a lot for your help.



Thanks, looking at the data.



A quick look at the pdf shows that one qemu-img info command failed:



---


lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b



lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert


6197b30d-0732-4cc7-aef0-12f9f6e9565b


33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g



qemu-img info --backing-chain


/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


qemu-img: Could not open


'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':


It is clear now - qemu could not open the backing file:


lv=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8


You must activate all the volumes in this image. I think my

instructions was not clear enough.


1. Find all lvs related to this image


2. Activate all of them


for lv_name in lv-name-1 lv-name-2 lv-name-3; do

lvchange -ay vg-name/$lv_name

done


3. Run qemu-img info on the LEAF volume


4. Deactivate the lvs activated in step 2.


Ouups, sorry .

Now it should be correct.


qemu-img info --backing-chain 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

file format: qcow2

virtual size: 150G (161061273600 bytes)

disk size: 0

cluster_size: 65536

backing file: 8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8 (actual path: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8)

backing file format: qcow2

Format specific information:

compat: 1.1

lazy refcounts: false

refcount bits: 16

corrupt: false


image: 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

file format: qcow2

virtual size: 150G (161061273600 bytes)

disk size: 0

cluster_size: 65536

Format specific information:

compat: 1.1

lazy refcounts: false

refcount bits: 16

corrupt: false



---



Maybe this lv was deactivated by vdsm after you activate it? Please


try to activate it again and


run the command again.



Sending all the info in text format in the mail message would make it


easier to respond.


I did it again with the same result, and the LV was still activated.


lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


  LV   VG   
Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert


  6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
-wi-a- 5.00g


qemu-img info --backing-chain 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


qemu-img: Could not open 
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
 Could not open 
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
 No such file or directory


lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


  LV   VG   
Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert


  6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
-wi-a- 5.00g



Sorry for the PDF, it was easier for me, but I will post everything in the mail 
from now on.




Arsene



On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:



On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:

On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:

On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:





Hi,





I running oVirt 4.3.9 with FC based storage.




I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
VM Snapshot and that task failed after a while and since then the Snapshot is 
inconsistent.




disk1 : Snapshot still visible in DB and on Storage using LVM commands




disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
merge did run correctly)




disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
merge did run correctly)





When I try to delete the snapshot again it runs forever and 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 4:00 PM Arsène Gschwind
 wrote:
>
> On Wed, 2020-07-15 at 15:42 +0300, Nir Soffer wrote:
>
> On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> Hi Nir,
>
>
> I've followed your guide, please find attached the informations.
>
> Thanks a lot for your help.
>
>
> Thanks, looking at the data.
>
>
> Quick look in the pdf show that one qemu-img info command failed:
>
>
> ---
>
> lvchange -ay 
> 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
>
> 6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> 33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g
>
>
> qemu-img info --backing-chain
>
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> qemu-img: Could not open
>
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':

It is clear now - qemu could not open the backing file:

lv=8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8

You must activate all the volumes in this image. I think my
instructions was not clear enough.

1. Find all lvs related to this image

2. Activate all of them

for lv_name in lv-name-1 lv-name-2 lv-name-3; do
lvchange -ay vg-name/$lv_name
done

3. Run qemu-img info on the LEAF volume

4. Deactivate the lvs activated in step 2.
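
As a minimal sketch, using the VG (storage domain), image and LEAF UUIDs from
the data you already posted (illustrative, adjust if needed):

vg=33777993-a3a5-4aad-a24c-dfe5e473faca
img=d7bd480d-2c51-4141-a386-113abf75219e
leaf=6197b30d-0732-4cc7-aef0-12f9f6e9565b

# 1. find all lvs whose tags reference this image
lvs -o lv_name,tags --noheadings $vg | grep $img | awk '{print $1}'

# 2. activate all of them
for lv_name in $(lvs -o lv_name,tags --noheadings $vg | grep $img | awk '{print $1}'); do
    lvchange -ay $vg/$lv_name
done

# 3. run qemu-img info on the LEAF volume
qemu-img info --backing-chain /dev/$vg/$leaf

# 4. deactivate them again when done
for lv_name in $(lvs -o lv_name,tags --noheadings $vg | grep $img | awk '{print $1}'); do
    lvchange -an $vg/$lv_name
done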

>
> ---
>
>
> Maybe this lv was deactivated by vdsm after you activate it? Please
>
> try to activate it again and
>
> run the command again.
>
>
> Sending all the info in text format in the mail message would make it
>
> easier to respond.
>
> I did it again with the same result, and the LV was still activated.
>
> lvchange -ay 
> 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>   LV   VG   
> Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>
>   6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
> -wi-a- 5.00g
>
> qemu-img info --backing-chain 
> /dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
> qemu-img: Could not open 
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
>  Could not open 
> '/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
>  No such file or directory
>
> lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
>
>   LV   VG   
> Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>
>   6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
> -wi-a- 5.00g
>
>
> Sorry for the PDF, it was easier for me, but I will post everything in the 
> mail from now on.
>
>
>
> Arsene
>
>
> On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:
>
>
> On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
>
>
> <
>
>
> arsene.gschw...@unibas.ch
>
>
>
> wrote:
>
>
>
> On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:
>
>
>
> On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
>
>
>
> <
>
>
>
> arsene.gschw...@unibas.ch
>
>
>
>
>
> wrote:
>
>
>
>
> Hi,
>
>
>
>
> I running oVirt 4.3.9 with FC based storage.
>
>
>
> I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
> VM Snapshot and that task failed after a while and since then the Snapshot is 
> inconsistent.
>
>
>
> disk1 : Snapshot still visible in DB and on Storage using LVM commands
>
>
>
> disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
> merge did run correctly)
>
>
>
> disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
> merge did run correctly)
>
>
>
>
> When I try to delete the snapshot again it runs forever and nothing happens.
>
>
>
>
> Did you try also when the vm is not running?
>
>
>
> Yes I've tried that without success
>
>
>
>
> In general the system is designed so trying again a failed merge will complete
>
>
>
> the merge.
>
>
>
>
> If the merge does complete, there may be some bug that the system cannot
>
>
>
> handle.
>
>
>
>
> Is there a way to suppress that snapshot?
>
>
>
> Is it possible to merge disk1 with its snapshot using LVM commands and then 
> cleanup the Engine DB?
>
>
>
>
> Yes but it is complicated. You need to understand the qcow2 chain
>
>
>
> on storage, complete the merge manually using qemu-img commit,
>
>
>
> update the metadata manually (even harder), then update engine db.
>
>
>
>
> The best way - if the system cannot recover, is to fix the bad metadata
>
>
>
> that cause the system to fail, and the let the system recover itself.
>
>
>
>
> Which storage domain format are you using? V5? V4?
>
>
>
> I'm using storage format V5 on FC.
>
>
>
> Fixing the 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Arsène Gschwind
On Wed, 2020-07-15 at 15:42 +0300, Nir Soffer wrote:

On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:


Hi Nir,


I've followed your guide, please find attached the informations.

Thanks a lot for your help.


Thanks, looking at the data.


A quick look at the pdf shows that one qemu-img info command failed:


---

lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b


lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert

6197b30d-0732-4cc7-aef0-12f9f6e9565b

33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g


qemu-img info --backing-chain

/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

qemu-img: Could not open

'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':

Could not open '/dev/33777993-

a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8': No

such file or directory

---


Maybe this lv was deactivated by vdsm after you activate it? Please

try to activate it again and

run the command again.


Sending all the info in text format in the mail message would make it

easier to respond.

I did it again with the same result, and the LV was still activated.

lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

  LV   VG   
Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

  6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
-wi-a- 5.00g

qemu-img info --backing-chain 
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

qemu-img: Could not open 
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
 Could not open 
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
 No such file or directory

lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

  LV   VG   
Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert

  6197b30d-0732-4cc7-aef0-12f9f6e9565b 33777993-a3a5-4aad-a24c-dfe5e473faca 
-wi-a- 5.00g


Sorry for the PDF, it was easier for me, but I will post everything in the mail 
from now on.



Arsene


On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:


On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:

On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:

On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:




Hi,




I running oVirt 4.3.9 with FC based storage.



I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
VM Snapshot and that task failed after a while and since then the Snapshot is 
inconsistent.



disk1 : Snapshot still visible in DB and on Storage using LVM commands



disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
merge did run correctly)



disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
merge did run correctly)




When I try to delete the snapshot again it runs forever and nothing happens.




Did you try also when the vm is not running?



Yes I've tried that without success




In general the system is designed so trying again a failed merge will complete



the merge.




If the merge does complete, there may be some bug that the system cannot



handle.




Is there a way to suppress that snapshot?



Is it possible to merge disk1 with its snapshot using LVM commands and then 
cleanup the Engine DB?




Yes but it is complicated. You need to understand the qcow2 chain



on storage, complete the merge manually using qemu-img commit,



update the metadata manually (even harder), then update engine db.




The best way - if the system cannot recover - is to fix the bad metadata
that causes the system to fail, and then let the system recover itself.




Which storage domain format are you using? V5? V4?



I'm using storage format V5 on FC.



Fixing the metadata is not easy.



First you have to find the volumes related to this disk. You can find


the disk uuid and storage


domain uuid in engine ui, and then you can find the volumes like this:



lvs -o vg_name,lv_name,tags | grep disk-uuid



For every lv, you will have a tag MD_N where n is a number. This is


the slot number


in the metadata volume.



You need to calculate the offset of the metadata area for every volume using:



offset = 1024*1024 + 8192 * N



Then you can copy the metadata block using:



dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset



[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Nir Soffer
On Wed, Jul 15, 2020 at 3:12 PM Arsène Gschwind
 wrote:
>
> Hi Nir,
>
> I've followed your guide, please find attached the informations.
> Thanks a lot for your help.

Thanks, looking at the data.

A quick look at the pdf shows that one qemu-img info command failed:

---
lvchange -ay 
33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b

lvs 33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
6197b30d-0732-4cc7-aef0-12f9f6e9565b
33777993-a3a5-4aad-a24c-dfe5e473faca -wi-a- 5.00g

qemu-img info --backing-chain
/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/6197b30d-0732-4cc7-aef0-12f9f6e9565b
qemu-img: Could not open
'/dev/33777993-a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8':
Could not open '/dev/33777993-
a3a5-4aad-a24c-dfe5e473faca/8e412b5a-85ec-4c53-a5b8-dfb4d6d987b8': No
such file or directory
---

Maybe this lv was deactivated by vdsm after you activate it? Please
try to activate it again and
run the command again.

Sending all the info in text format in the mail message would make it
easier to respond.

>
> Arsene
>
> On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:
>
> On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
>
> <
>
> arsene.gschw...@unibas.ch
>
> > wrote:
>
>
> On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:
>
>
> On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
>
>
> <
>
>
> arsene.gschw...@unibas.ch
>
>
>
> wrote:
>
>
>
> Hi,
>
>
>
> I running oVirt 4.3.9 with FC based storage.
>
>
> I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
> VM Snapshot and that task failed after a while and since then the Snapshot is 
> inconsistent.
>
>
> disk1 : Snapshot still visible in DB and on Storage using LVM commands
>
>
> disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
> merge did run correctly)
>
>
> disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
> merge did run correctly)
>
>
>
> When I try to delete the snapshot again it runs forever and nothing happens.
>
>
>
> Did you try also when the vm is not running?
>
>
> Yes I've tried that without success
>
>
>
> In general the system is designed so trying again a failed merge will complete
>
>
> the merge.
>
>
>
> If the merge does complete, there may be some bug that the system cannot
>
>
> handle.
>
>
>
> Is there a way to suppress that snapshot?
>
>
> Is it possible to merge disk1 with its snapshot using LVM commands and then 
> cleanup the Engine DB?
>
>
>
> Yes but it is complicated. You need to understand the qcow2 chain
>
>
> on storage, complete the merge manually using qemu-img commit,
>
>
> update the metadata manually (even harder), then update engine db.
>
>
>
> The best way - if the system cannot recover, is to fix the bad metadata
>
>
> that cause the system to fail, and the let the system recover itself.
>
>
>
> Which storage domain format are you using? V5? V4?
>
>
> I'm using storage format V5 on FC.
>
>
> Fixing the metadata is not easy.
>
>
> First you have to find the volumes related to this disk. You can find
>
> the disk uuid and storage
>
> domain uuid in engine ui, and then you can find the volumes like this:
>
>
> lvs -o vg_name,lv_name,tags | grep disk-uuid
>
>
> For every lv, you will have a tag MD_N where n is a number. This is
>
> the slot number
>
> in the metadata volume.
>
>
> You need to calculate the offset of the metadata area for every volume using:
>
>
> offset = 1024*1024 + 8192 * N
>
>
> Then you can copy the metadata block using:
>
>
> dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset \
>     iflag=skip_bytes > lv-name.meta
>
>
> Please share these files.
>
>
> This part is not needed in 4.4, we have a new StorageDomain dump API,
>
> that can find the same
>
> info in one command:
>
>
> vdsm-client StorageDomain dump sd_id=storage-domain-uuid | \
>
> jq '.volumes | .[] | select(.image=="disk-uuid")'
>
>
> The second step is to see what is the actual qcow2 chain. Find the
>
> volume which is the LEAF
>
> by grepping the metadata files. In some cases you may have more than
>
> one LEAF (which may
>
> be the problem).
>
>
> Then activate all volumes using:
>
>
> lvchange -ay vg-name/lv-name
>
>
> Now you can get the backing chain using qemu-img and the LEAF volume.
>
>
> qemu-img info --backing-chain /dev/vg-name/lv-name
>
>
> If you have more than one LEAF, run this on all LEAFs. Only one of them
>
> will be correct.
>
>
> Please share also output of qemu-img.
>
>
> Once we finished with the volumes, deactivate them:
>
>
> lvchange -an vg-name/lv-name
>
>
> Based on the output, we can tell what is the real chain, and what is
>
> the chain as seen by
>
> vdsm metadata, and what is the required fix.
>
>
> Nir
>
>
>
> Thanks.
>
>
>
> Thanks for any hint or help.
>
>
> rgds , arsene
>
>
>
> --
>
>
>
> Arsène Gschwind <
>
>
> arsene.gschw...@unibas.ch
>
>
>
>
>
> 

[ovirt-users] Re: VM Snapshot inconsistent

2020-07-15 Thread Arsène Gschwind
Hi Nir,

I've followed your guide, please find attached the informations.
Thanks a lot for your help.

Arsene

On Tue, 2020-07-14 at 23:47 +0300, Nir Soffer wrote:

On Tue, Jul 14, 2020 at 7:51 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:

On Tue, 2020-07-14 at 19:10 +0300, Nir Soffer wrote:

On Tue, Jul 14, 2020 at 5:37 PM Arsène Gschwind
<arsene.gschw...@unibas.ch> wrote:



Hi,



I running oVirt 4.3.9 with FC based storage.


I'm running several VM with 3 disks on 3 different SD. Lately we did delete a 
VM Snapshot and that task failed after a while and since then the Snapshot is 
inconsistent.


disk1 : Snapshot still visible in DB and on Storage using LVM commands


disk2: Snapshot still visible in DB but not on storage anymore (It seems the 
merge did run correctly)


disk3: Snapshot still visible in DB but not on storage ansmore (It seems the 
merge did run correctly)



When I try to delete the snapshot again it runs forever and nothing happens.



Did you try also when the vm is not running?


Yes I've tried that without success



In general the system is designed so trying again a failed merge will complete


the merge.



If the merge does complete, there may be some bug that the system cannot


handle.



Is there a way to suppress that snapshot?


Is it possible to merge disk1 with its snapshot using LVM commands and then 
cleanup the Engine DB?



Yes but it is complicated. You need to understand the qcow2 chain


on storage, complete the merge manually using qemu-img commit,


update the metadata manually (even harder), then update engine db.



The best way - if the system cannot recover - is to fix the bad metadata
that causes the system to fail, and then let the system recover itself.



Which storage domain format are you using? V5? V4?


I'm using storage format V5 on FC.


Fixing the metadata is not easy.


First you have to find the volumes related to this disk. You can find

the disk uuid and storage

domain uuid in engine ui, and then you can find the volumes like this:


lvs -o vg_name,lv_name,tags | grep disk-uuid


For every lv, you will have a tag MD_N where n is a number. This is

the slot number

in the metadata volume.


You need to calculate the offset of the metadata area for every volume using:


offset = 1024*1024 + 8192 * N


Then you can copy the metadata block using:


dd if=/dev/vg-name/metadata bs=512 count=1 skip=$offset \
    iflag=skip_bytes > lv-name.meta


Please share these files.
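
To make the offset arithmetic concrete, a small sketch (illustrative values:
the VG name is the storage domain UUID from your data, and N must be taken
from the MD_N tag of each lv - N=7 here is only an example):

vg=33777993-a3a5-4aad-a24c-dfe5e473faca
lv_name=6197b30d-0732-4cc7-aef0-12f9f6e9565b
N=7   # example slot number, read it from the lv's MD_N tag

# metadata offset in bytes for slot N
offset=$((1024*1024 + 8192 * N))

# copy the 512-byte metadata block of this volume
dd if=/dev/$vg/metadata bs=512 count=1 skip=$offset iflag=skip_bytes > $lv_name.meta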


This part is not needed in 4.4, we have a new StorageDomain dump API,

that can find the same

info in one command:


vdsm-client StorageDomain dump sd_id=storage-domain-uuid | \

jq '.volumes | .[] | select(.image=="disk-uuid")'


The second step is to see what is the actual qcow2 chain. Find the

volume which is the LEAF

by grepping the metadata files. In some cases you may have more than

one LEAF (which may

be the problem).


Then activate all volumes using:


lvchange -ay vg-name/lv-name


Now you can get the backing chain using qemu-img and the LEAF volume.


qemu-img info --backing-chain /dev/vg-name/lv-name


If you have more than one LEAF, run this on all LEAFs. Only one of them

will be correct.


Please share also output of qemu-img.


Once we finished with the volumes, deactivate them:


lvchange -an vg-name/lv-name


Based on the output, we can tell what is the real chain, and what is

the chain as seen by

vdsm metadata, and what is the required fix.


Nir



Thanks.



Thanks for any hint or help.


rgds , arsene



--
Arsène Gschwind <arsene.gschw...@unibas.ch>
Universitaet Basel


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WZ6KO2LVD3ZA2JNNIHJRCXG65HO4LMZ/





--
Arsène Gschwind
Fa. Sapify AG im Auftrag der universitaet Basel
IT Services
Klinelbergstr. 70 | CH-4056 Basel | Switzerland
Tel: +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11

--
Arsène Gschwind <arsene.gschw...@unibas.ch>
Universitaet Basel


cpslpd01.pdf
Description: cpslpd01.pdf
CAP=354334801920
CTIME=158765
DESCRIPTION={"DiskAlias":"cpslpd01_HANADB_Disk1","DiskDescription":"SAP SLCM H11 HDB D13 data"}

[ovirt-users] Re: Ovirt 4.4.1 Install failure at OVF_Store check

2020-07-15 Thread Alistair Cloete

On 7/15/20 9:40 AM, Yedidyah Bar David wrote:

On Wed, Jul 15, 2020 at 8:14 AM AK via Users  wrote:


Yes sir,  I run the clean up script after each failure, clean out the gluster 
volume, and remove any network the deploy scripts create.   I just conducted 
the deployment on different hardware (different drives, different CPU, raid 
controller, SSD's) and it produced the same result (failure at 
OVF_STore_check).  The only deployment items that are consistent are creating 
the physical network bonds and gluster volumes which can be mounted across the 
network and have been tested as storage pools for other virtualization and 
storage platforms.


Can you please check engine-side logs? If you can access the engine VM
(search hosted-engine logs for local_vm_ip if it's still on the local
network), check /var/log/ovirt-engine/*, otherwise, on the host,
/var/log/ovirt-hosted-engine-setup/engine-logs*/*.

That said, we also have (what seems like) a very similar failure on
CI, for some time now - check e.g. the latest nightly run:

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-20200714225605-ueg6k8.log

2020-07-14 22:59:42,414-0400 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml',
'ansible_task': 'ovirt.hosted_engine_setup : Check OVF_STORE volume
status'}

It tries some time, eventually fails, like your case. engine log has:

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/engine-logs-2020-07-15T03%3A04%3A29Z/ovirt-engine/engine.log

2020-07-14 22:57:03,197-04 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [4abbccc] EVENT_ID: USER_VDC_LOGOUT(31), User
admin@internal-authz connected from '192.168.222.1' using session
'W5qdcPNyRLHmMnbMz7i+ZP85De1GjKq7+V1hqbKEeD+QJtpcFGpITEVFIHbUvz+2wF+GTAB6qnCY1gHxBHkGLA=='
logged out.
2020-07-14 22:57:03,242-04 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29)
[313eed07] Command 'UploadStreamVDSCommand(HostName =
lago-he-basic-suite-master-host-0.lago.local,
UploadStreamVDSCommandParameters:{hostId='c6d33fd9-5137-49fc-815a-94baf2d58b93'})'
execution failed: javax.net.ssl.SSLPeerUnverifiedException:
Certificate for  doesn't
match any of the subject alternative names:
[lago-he-basic-suite-master-host-0.lago.local]

This is currently discussed on the devel list, in thread:

 execution failed: javax.net.ssl.SSLPeerUnverifiedException (was:
[ovirt-devel] vdsm.storage.exception.UnknownTask: Task id unknown
(was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build
# 1641 - Still Failing!))

We are still not sure about the exact cause, but I have a feeling that
it's somehow related to naming/name resolution/hostname/etc.

In any case, I didn't manage to reproduce this locally on my own machine.

I suggest checking everything you can think of related to this -
dhcp/dns, output of 'hostname' on the host, etc.

Good luck and best regards,



The javax.net.ssl.SSLPeerUnverifiedException may be caused by a recent 
regression in HttpClient:

  https://issues.apache.org/jira/browse/HTTPCLIENT-2047
This only affects domains which are not publicly accessible, e.g. .local 
or .test. According to the bug report it was fixed upstream.
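
A quick way to check both the name resolution and the subject alternative
names the host certificate actually presents (illustrative; run on the deploy
host and adjust the FQDN and port for the service you are checking):

# does the host name resolve consistently?
hostname -f
getent hosts hm3svr03.hm3.loc

# show the SANs of the certificate vdsm presents (by default on port 54321)
openssl s_client -connect hm3svr03.hm3.loc:54321 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'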


Best regards,
Alistair.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7V27ZBSIFIE3WW6KGGI5HHWDKT7MPYJA/


[ovirt-users] Re: Strange SD problem

2020-07-15 Thread Ahmad Khiet
Hi Arsène,

can you please tell us which version you are referring to?

as shown in the log: Storage domains with IDs
[6b82f31b-fa2a-406b-832d-64d9666e1bcc]
could not be synchronized. To synchronize them, please move them to
maintenance and then activate.
can you put them in maintenance and then activate them back so it will be
synced?
I guess that it is out of sync; that's why the "Add" button appears next to
already-added LUNs.



On Tue, Jul 14, 2020 at 4:58 PM Arsène Gschwind 
wrote:

> Hi Ahmad,
>
> I did the following:
>
> 1. Storage -> Storage Domains
> 2 Click the existing Storage Domain and click "Manage Domain"
> and then I see next to the LUN which is already part of the SD the "Add"
> button
>
> I do not want to click add since it may destroy the existing SD or the
> content of the LUNs.
> In the Engine Log I see the following:
>
> 020-07-14 09:57:45,131+02 WARN  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-20) [277145f2] EVENT_ID: 
> STORAGE_DOMAINS_COULD_NOT_BE_SYNCED(1,046), Storage domains with IDs 
> [6b82f31b-fa2a-406b-832d-64d9666e1bcc] could not be synchronized. To 
> synchronize them, please move them to maintenance and then activate.
>
>
> Thanks a lot
>
> On Tue, 2020-07-14 at 16:07 +0300, Ahmad Khiet wrote:
>
> Hi Arsène Gschwind,
>
> it's really strange that you see "Add" on a LUN that already has been
> added to the database.
> to verify the steps you did make at first,
> 1- Storage -> Storage Domains
> 2- New Domain - [ select iSCSI ]
> 3- click on "+" on the iscsi target, then you see the "Add" button is
> available
> 4- after clicking add and ok, then this error will be shown in the logs
> is that right?
>
> can you also attach vdsm log?
>
>
>
>
> On Tue, Jul 14, 2020 at 1:15 PM Arsène Gschwind 
> wrote:
>
> Hello all,
>
> I've checked all my multipath configuration and everything seems correct.
> Is there a way to correct this, may be in the DB?
>
> I really need some help, thanks a lot.
> Arsène
>
> On Tue, 2020-07-14 at 00:29 +, Arsène Gschwind wrote:
>
> HI,
>
> I'm having a strange behavior with an SD. When trying to manage the SD I
> see the "Add" button for the LUN which should already be the one used for
> that SD.
> In the Logs I see the following:
>
> 2020-07-13 17:48:07,292+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.BatchProcedureExecutionConnectionCallback]
> (EE-ManagedThreadFactory-engine-Thread-95) [51091853] Can't execute batch:
> Batch entry 0 select * from public.insertluns(CAST ('repl_HanaLogs_osd_01'
> AS varchar),CAST ('DPUtaW-Q5zp-aZos-HriP-5Z0v-hiWO-w7rmwG' AS varchar),CAST
> ('4TCXZ7-R1l1-xkdU-u0vx-S3n4-JWcE-qksPd1' AS varchar),CAST
> ('SHUAWEI_XSG1_2102350RMG10HC200035' AS varchar),CAST (7 AS int4),CAST
> ('HUAWEI' AS varchar),CAST ('XSG1' AS varchar),CAST (2548 AS int4),CAST
> (268435456 AS int8)) as result was aborted: ERROR: duplicate key value
> violates unique constraint "pk_luns"
>   Detail: Key (lun_id)=(repl_HanaLogs_osd_01) already exists.
>   Where: SQL statement "INSERT INTO LUNs (
> LUN_id,
> physical_volume_id,
> volume_group_id,
> serial,
> lun_mapping,
> vendor_id,
> product_id,
> device_size,
> discard_max_size
> )
> VALUES (
> v_LUN_id,
> v_physical_volume_id,
> v_volume_group_id,
> v_serial,
> v_lun_mapping,
> v_vendor_id,
> v_product_id,
> v_device_size,
> v_discard_max_size
> )"
> PL/pgSQL function insertluns(character varying,character varying,character
> varying,character varying,integer,character varying,character
> varying,integer,bigint) line 3 at SQL statement  Call getNextException to
> see other errors in the batch.
> 2020-07-13 17:48:07,292+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.BatchProcedureExecutionConnectionCallback]
> (EE-ManagedThreadFactory-engine-Thread-95) [51091853] Can't execute batch.
> Next exception is: ERROR: duplicate key value violates unique constraint
> "pk_luns"
>   Detail: Key (lun_id)=(repl_HanaLogs_osd_01) already exists.
>   Where: SQL statement "INSERT INTO LUNs (
> LUN_id,
> physical_volume_id,
> volume_group_id,
> serial,
> lun_mapping,
> vendor_id,
> product_id,
> device_size,
> discard_max_size
> )
> VALUES (
> v_LUN_id,
> v_physical_volume_id,
> v_volume_group_id,
> v_serial,
> v_lun_mapping,
> v_vendor_id,
> v_product_id,
> v_device_size,
> v_discard_max_size
> )"
> PL/pgSQL function insertluns(character varying,character varying,character
> varying,character varying,integer,character varying,character
> varying,integer,bigint) line 3 at SQL statement
> 2020-07-13 17:48:07,293+02
> INFO  [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engine-Thread-95) 

[ovirt-users] VM Portal on a stand alone server

2020-07-15 Thread gal . villaret
Hi Folks, 

I'm rather new to oVirt and loving it.
I'm running 4.4.1.
I would like to be able to run the VM Portal on a stand-alone server for
security reasons.

Can anyone point me in the right direction for achieving this?

Thanks, 

Gal Villaret


[ovirt-users] Re: Problem with backuping ovirt 4.4 with SDK

2020-07-15 Thread Eyal Shenitzky
On Tue, 14 Jul 2020 at 11:27, Nir Soffer  wrote:

> On Tue, Jul 14, 2020 at 9:33 AM Łukasz Kołaciński <
> l.kolacin...@storware.eu> wrote:
>
>> Hello,
>>
>
> Hi Łukasz,
>
> Let's move the discussion to de...@ovirt.org, I think it will be more
> productive.
>
> Also, always CC me and Eyal on incremental backup questions for a quicker
> response.
>
>
>> I am trying to do full backup on ovirt 4.4 using sdk.
>>
>
> Which version of oVirt? libvirt?
>
>
>> I used steps from this YouTube video:
>> https://www.youtube.com/watch?v=E2VWUVcycj4 and I got an error after
>> running backup_vm.py. I see that the SDK has imported disks and created
>> the backup entity, and then I got an sdk.NotFoundError exception.
>>
>
> This means that starting the backup failed. Unfortunately the API does not
> have a good way to get the error that caused the backup to fail.
>
> You should be able to see the error in the event log in the UI, and in the
> engine log.
>
>
>> I also tried to do a full backup with the API, and after finalizing the
>> backup disappeared (I think)
>>
>
> So the backup from the API was successful?
>
> Backups are expected to disappear; they are temporary objects used to
> manage the backup process. Once the backup process is finished you cannot
> do anything with the backup object, and you cannot fetch the same backup
> data again.
>
>
>> and I couldn't try incremental.
>>
>
> The fact that the backup disappeared should not prevent the next backup.
>
> After you create a backup, you need to poll the backup status until the
> backup is ready.
>
> while backup.phase != BackupPhase.READY:
>     time.sleep(1)
>     backup = backup_service.get()
>
> If the backup does not end in the READY state, it failed, and you cannot
> do anything with this backup.
>
> When the backup is ready, you can fetch the to_checkpoint_id created for
> this backup.
>
> checkpoint_id = backup.to_checkpoint_id
>
> At this point you need to persist the checkpoint id. This will be used to
> create the incremental
> backup.
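
Putting Nir's pieces together, a rough sketch of the whole flow with the
Python SDK; the engine URL, credentials, VM ID and disk ID are placeholders,
and error handling is omitted:

    import time

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details - adjust to your engine.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='CHANGE_ME',
        ca_file='ca.pem',
    )

    vm_service = connection.system_service().vms_service().vm_service('VM-UUID')
    backups_service = vm_service.backups_service()

    # Start a full backup of one disk (pass from_checkpoint_id for incremental).
    backup = backups_service.add(
        types.Backup(disks=[types.Disk(id='DISK-UUID')]))
    backup_service = backups_service.backup_service(backup.id)

    # Poll until the backup is ready; if it never reaches READY, it failed
    # (check the engine event log for the reason).
    while backup.phase != types.BackupPhase.READY:
        time.sleep(1)
        backup = backup_service.get()

    # Persist this ID - it is needed for the next incremental backup.
    checkpoint_id = backup.to_checkpoint_id
    print('checkpoint:', checkpoint_id)

    # ... download the disk data here, as backup_vm.py does ...

    backup_service.finalize()
    connection.close()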
>
>
> [   0.0 ] Starting full backup for VM
>> '51708c8e-6671-480b-b2d8-199a1af9cbdc'
>> Password:
>> [   4.2 ] Waiting until backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 is
>> ready
>> Traceback (most recent call last):
>>   File "./backup_vm.py", line 343, in start_backup
>> backup = backup_service.get()
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/services.py", line
>> 32333, in get
>> return self._internal_get(headers, query, wait)
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 211, in _internal_get
>> return future.wait() if wait else future
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 55, in wait
>> return self._code(response)
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 208, in callback
>> self._check_fault(response)
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 130, in _check_fault
>> body = self._internal_read_body(response)
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 312, in _internal_read_body
>> self._raise_error(response)
>>   File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line
>> 118, in _raise_error
>> raise error
>> ovirtsdk4.NotFoundError: HTTP response code is 404.
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>>   File "./backup_vm.py", line 476, in 
>> main()
>>   File "./backup_vm.py", line 173, in main
>> args.command(args)
>>   File "./backup_vm.py", line 230, in cmd_start
>> backup = start_backup(connection, args)
>>   File "./backup_vm.py", line 345, in start_backup
>> raise RuntimeError("Backup {} failed".format(backup.id))
>> RuntimeError: Backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 failed
>>
>
> This is correct, the backup has failed.
>
> Please check the event log to understand the failure.
>
> Eyal, can you show how to get the error from the backup using the SDK,
> in a way that can be used by a program?
>
> e.g. a public error code that can be used to decide on the next step, and
> an error message that can be used for displaying the error to users of the
> backup application.
>
> This should be added to the backup_vm.py example.
>
> Nir
>
I added an example of how to fetch the event from the engine using the SDK.
You can find it here:
https://gerrit.ovirt.org/#/c/110307/1/sdk/examples/backup_vm.py
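
For reference, the basic idea is to list recent audit-log events from the
engine and filter them. A minimal sketch, assuming 'connection' is an
already-authenticated ovirtsdk4 Connection and using a simple text filter
(a real application would key on the numeric event code instead):

    # 'connection' is an already-authenticated ovirtsdk4 Connection.
    events_service = connection.system_service().events_service()

    # List recent events and keep the backup-related ones.
    for event in events_service.list(max=100):
        description = (event.description or '').lower()
        if 'backup' in description:
            print(event.time, event.code, event.description)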



-- 
Regards,
Eyal Shenitzky


[ovirt-users] How does ovirt collect data from foreman gui

2020-07-15 Thread The Gaming Ant
Trying to initiate a build from Foreman to oVirt. Data entered in the Foreman 
GUI is not seen as accepted in oVirt. Is there anything to be set up for oVirt 
to accept the data from Foreman?


[ovirt-users] Re: What permission do I need to get API access

2020-07-15 Thread Martin Perina
Hi Miguel,

So could you please share your playbook with us and the exact error you are
getting during its execution?

On Tue, Jul 14, 2020 at 4:08 PM  wrote:

> We are trying to create VMs using Ansible scripts. However, I also tried to
> log into the API at https://master-server/ovirt-engine/api and got
> authentication error messages. I think the problem is the authentication
> method, since we are using LDAP accounts; to access the VM Portal or the
> API URL we use the email address too.
>

There should be no difference in the usernames provided to the UI or to the
RESTAPI/SDK/Ansible modules. The only thing which differs is how you provide
them:

1. In the UI you provide the username and then select a profile (for example
the username can be 'admin' and the profile 'internal')
2. For RESTAPI/SDK/Ansible you enter it in the format username@profile
(for example 'admin@internal'), as in the sketch below
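
For example, with the Python SDK (the engine URL, password and CA file are
placeholders; the same username string is what you would pass to the Ansible
ovirt_auth module):

    import ovirtsdk4 as sdk

    # UI: username 'admin' + profile 'internal' -> SDK/Ansible: 'admin@internal'.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='CHANGE_ME',
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )

    # Raises an exception describing the problem if authentication fails.
    connection.test(raise_exception=True)
    connection.close()

The same rule applies to an LDAP/AAA profile: use 'someuser@your-ldap-profile',
where the profile name is the one you would select in the UI login drop-down.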

Thanks,
Martin



-- 
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: Ovirt 4.4.1 Install failure at OVF_Store check

2020-07-15 Thread Yedidyah Bar David
On Wed, Jul 15, 2020 at 8:14 AM AK via Users  wrote:
>
> Yes sir,  I run the clean up script after each failure, clean out the gluster 
> volume, and remove any network the deploy scripts create.   I just conducted 
> the deployment on different hardware (different drives, different CPU, raid 
> controller, SSD's) and it produced the same result (failure at 
> OVF_STore_check).  The only deployment items that are consistent are creating 
> the physical network bonds and gluster volumes which can be mounted across 
> the network and have been tested as storage pools for other virtualization 
> and storage platforms.

Can you please check engine-side logs? If you can access the engine VM
(search hosted-engine logs for local_vm_ip if it's still on the local
network), check /var/log/ovirt-engine/*, otherwise, on the host,
/var/log/ovirt-hosted-engine-setup/engine-logs*/*.

That said, we also have (what seems like) a very similar failure on
CI, for some time now - check e.g. the latest nightly run:

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-20200714225605-ueg6k8.log

2020-07-14 22:59:42,414-0400 INFO ansible task start {'status': 'OK',
'ansible_type': 'task', 'ansible_playbook':
'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml',
'ansible_task': 'ovirt.hosted_engine_setup : Check OVF_STORE volume
status'}

It retries for some time and eventually fails, like in your case. The engine log has:

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1672/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/engine-logs-2020-07-15T03%3A04%3A29Z/ovirt-engine/engine.log

2020-07-14 22:57:03,197-04 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-1) [4abbccc] EVENT_ID: USER_VDC_LOGOUT(31), User
admin@internal-authz connected from '192.168.222.1' using session
'W5qdcPNyRLHmMnbMz7i+ZP85De1GjKq7+V1hqbKEeD+QJtpcFGpITEVFIHbUvz+2wF+GTAB6qnCY1gHxBHkGLA=='
logged out.
2020-07-14 22:57:03,242-04 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-29)
[313eed07] Command 'UploadStreamVDSCommand(HostName =
lago-he-basic-suite-master-host-0.lago.local,
UploadStreamVDSCommandParameters:{hostId='c6d33fd9-5137-49fc-815a-94baf2d58b93'})'
execution failed: javax.net.ssl.SSLPeerUnverifiedException:
Certificate for  doesn't
match any of the subject alternative names:
[lago-he-basic-suite-master-host-0.lago.local]

This is currently discussed on the devel list, in thread:

execution failed: javax.net.ssl.SSLPeerUnverifiedException (was:
[ovirt-devel] vdsm.storage.exception.UnknownTask: Task id unknown
(was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build
# 1641 - Still Failing!))

We are still not sure about the exact cause, but I have a feeling that
it's somehow related to naming/name resolution/hostname/etc.

In any case, I didn't manage to reproduce this locally on my own machine.

I suggest checking everything you can think of related to this -
dhcp/dns, output of 'hostname' on the host, etc.
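
As a quick, read-only example of the kind of check I mean, something like
this (the host name is a placeholder; use the exact name the engine uses for
the host):

    import socket

    # Placeholder: the host name as the engine knows it.
    name = 'host-0.example.com'

    addr = socket.gethostbyname(name)          # forward lookup
    rev_name = socket.gethostbyaddr(addr)[0]   # reverse lookup
    print(name, '->', addr, '->', rev_name)
    # Forward and reverse results should agree with each other and with the
    # output of 'hostname -f' on the host itself.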

Good luck and best regards,
-- 
Didi


[ovirt-users] Re: Failing to restore a backup

2020-07-15 Thread Yedidyah Bar David
On Tue, Jul 14, 2020 at 6:04 PM Andrea Chierici
 wrote:
>
> Hi,
> thank you for your help.
>
>
> I think this is not a critical failure, and is not what failed the restore.
>
>>
>>
>>
>> Recently I tried the 4.3.11 beta and 4.4.1 and the error now is different:
>>
>> [ INFO  ] Upgrading CA\n[ ERROR ] Failed to execute stage 'Misc 
>> configuration': (2, 'No such file or directory')\n[ INFO  ] DNF Performing 
>> DNF transaction rollback\n
>
>
> This is part of 'engine-setup' output, which 'hosted-engine' runs inside the 
> engine VM. If you can access the engine VM, you can try finding more 
> information in /var/log/ovirt-engine/setup/* there. Otherwise, the 
> hosted-engine deploy script might have managed to get a copy to 
> /var/log/ovirt-hosted-engine-setup/engine-logs*. Please check/share these. 
> Thanks.
>
>
> Unfortunately the installation procedure, when exiting, deletes the VM, hence
> I can't log in.

Are you sure? Did you check with 'ps', searching for qemu processes?

If it's still up, but still using a local IP address, you can find it
by searching the hosted-engine logs for 'local_vm_ip' and log in there
from the host.
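
If it helps, a tiny sketch for pulling those lines out of the logs (the
directory below is the default one; adjust if yours differs):

    import glob

    # Search all hosted-engine-setup logs for the engine VM's temporary address.
    for path in sorted(glob.glob('/var/log/ovirt-hosted-engine-setup/*.log')):
        with open(path, errors='replace') as log:
            for line in log:
                if 'local_vm_ip' in line:
                    print(path, ':', line.strip())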

> Here are the ERROR messages I got on the logs copied on the host:
>
> engine.log:2020-07-08 15:05:04,178+02 ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-89) 
> [45a7e7f3] Can not run fence action on host '', no 
> suitable proxy host was found.

That's ok.

>
> server.log:2020-07-08 15:09:23,081+02 ERROR 
> [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: 
> Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
> server.log:2020-07-08 15:14:19,804+02 ERROR 
> [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: 
> Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found

This probably indicates a problem, but I agree it's not very helpful.

> grep: setup: Is a directory

Right - so please search inside it.

Also please check the hosted-engine deploy logs themselves.

>
> Not very helpful.
>
>
>
>>
>>
>> I simply can't figure out what file is missing.
>> If, as a test, I try to install the HE without restoring the backup, the 
>> installation goes smoothly to the end, but at that point I can't restore the 
>> backup, as far as I can understand.
>
>
> Another option is to do the restore manually. To find relevant information, 
> search the net for "enginevm_before_engine_setup".
>
>
> Later I will give it a try.

Good luck and best regards,
-- 
Didi


[ovirt-users] oVirt 2020 online conference - free registration now open

2020-07-15 Thread Sandro Bonazzola
It is our pleasure to renew our invitation to the oVirt 2020 online conference
and announce that free registration is now open.
The conference, organized by the oVirt community, will take place online on
Monday, September 7th 2020!
You can read updated information about the conference at
https://blogs.ovirt.org/ovirt-2020-online-conference/

A kind reminder that if you'd like to present at the conference, the
deadline for the presentation proposal is July 26th 2020.

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*