[ovirt-users] Re: Continuing HE install from lost connection

2020-06-15 Thread Yedidyah Bar David
On Mon, Jun 15, 2020 at 6:43 PM Glenn Marcy  wrote:
>
> I ran the Prepare VM stage of the GUI HostedEngine install and when I came 
> back it had disconnected.  Looking at the logs and the current status 
> everything appears to be just fine, but there doesn't seem to be any way in 
> the GUI to continue where it left off.  I was wondering if there was some way 
> to provide the hosted_storage information requested in the next panel and 
> complete the installation, perhaps with the command line, instead of redoing 
> all of the work that has already been completed successfully?

Sorry, but there is no way.

There is an open bug about this:

https://bugzilla.redhat.com/show_bug.cgi?id=1422544

Hopefully we'll soon get to handle it somehow. It should be easier now
than when the bug was opened, because everything is just ansible,
unattended, so we can run it in the background and check the logs.

For now, these are your options:

1. Just do what you did, and make sure the browser session is not disconnected.
2. Same, but run the browser on some server in the same LAN as the
hosted-engine host, and connect to it using some screen-sharing tool.
This way, if your laptop disconnects, you can reconnect to the screen
share and hopefully find your browser still connected.
3. Do not use the GUI, but run the deployment from the command line
('hosted-engine --deploy'), and do that inside screen or tmux, so that
you can reconnect if needed.
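
For example, a minimal sketch of option 3 (assuming tmux is available on
the host):

  tmux new -s hedeploy          # start a named session
  hosted-engine --deploy        # run the deployment inside it
  # if the connection drops, ssh back in and reattach:
  tmux attach -t hedeploy

You can follow progress from another shell via the logs under
/var/log/ovirt-hosted-engine-setup/.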

Best regards,
-- 
Didi


[ovirt-users] Re: Advise Needed - Hosted Engine Minor Update

2020-06-15 Thread jrbdeguzman05
Hi Strahil,
Thank you for the suggestion. I'll take note of it and will definitely try
those steps once we have Gluster configured. :)

Hi Didi,
Thanks for your inputs as well. I've tried the steps and was able to revert
to 4.3.8.

Here are the complete steps performed for the update/rollback:
1. Take a backup of the current environment (1 node in maintenance mode).
2. Enable global maintenance.
3. Update the HostedEngine following the documentation available for minor
updates.
4. Update/install other packages using yum update.
5. Disable global maintenance, then reboot the HE.

If there's a need to roll back (a command-level sketch follows below):
1. Run engine-cleanup.
2. yum history undo ($ID of the update transactions from steps 3 and 4).
3. Perform a restore using the backup.
4. Run engine-setup to complete the setup.

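A command-level sketch of that rollback path (assuming the backup from step
1 was taken with engine-backup; the transaction IDs and file name are
placeholders, and 'engine-backup --help' has the restore flags that fit
your setup):

  engine-cleanup
  yum history list                # find the IDs of the update transactions
  yum history undo <ID>           # once per update transaction
  engine-backup --mode=restore --file=engine-backup.tar.bz2 \
      --log=restore.log --restore-permissions
  engine-setup
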
Will try these steps for updating to 4.3.10. Thank you again! :)


[ovirt-users] Re: Ansible ovirt_disk module: how to attach disk without activation

2020-06-15 Thread Gianluca Cecchi
On Mon, Jun 15, 2020 at 11:47 PM Gianluca Cecchi 
wrote:

>
>   register: disk_attach_info
>
> But looking at the registered variable, it contains "active: true":
>
> "disk_activate_info": {
> "changed": false,
> "id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
> "diskattachment": {
> "href":
> "/ovirt-engine/api/vms/b5c67c93-bd5d-42b6-a873-05f69cece2f1/diskattachments/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
> "id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
> "active": true,
> "bootable": false,
> "disk": {
> 
>
>
Unfortunately I pasted the disk_activate_info registered value twice in
the previous e-mail.
Here below are the correct disk_attach_info var contents, which contain
"active: true":

"disk_attach_info": {
"changed": true,
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"diskattachment": {
"href":
"/ovirt-engine/api/vms/b5c67c93-bd5d-42b6-a873-05f69cece2f1/diskattachments/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"active": true,
"bootable": false,
"disk": {
"href":
"/ovirt-engine/api/disks/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827"
},
...

Gianluca


[ovirt-users] Ansible ovirt_disk module: how to attach disk without activation

2020-06-15 Thread Gianluca Cecchi
Hello,
The root problem, in a 4.3.8 environment, is that sometimes when I hot-add a
floating disk to a VM in the web admin GUI, it attaches and also activates it
by default (the flag is checked by default, but you can uncheck it), and
this two-phase action fails as a single step.
What I get is:

VDSM ovhost command HotPlugDiskVDS failed: Requested operation is not
valid: Domain already contains a disk with that address

And then the VM event reporting the hot-add disk failure.

I'm going to investigate more: in theory it should be a past bug already
fixed in my libvirt version (I have to cross-check its id)... but I still
get the error sometimes.
What programmatically works is to attach the disk without activating it
(this step always completes successfully) and then activate the disk as a
separate operation (this one typically fails the first time when I have the
"problematic" situation, but then succeeds the second time).
So I would like to use that as a workaround in the meantime.

The problem also shows up with the ansible module ovirt_disk, which by
default activates the disk. So I have tried to specify "activate: no"
(introduced in 2.8, reading the docs) when I attach it, but it seems not
to work as expected.
I'm testing using AWX, and its info says:
AWX 9.0.1.0
Ansible 2.8.5

I tried this on a VM that (as usual when you want to reproduce... ;-)
doesn't show the problem:

  ovirt_disk:
auth: "{{ ovirt_auth }}"
state: attached
activate: no
name: "{{ tobe_sw_disk_name }}"
vm_name: "{{ ansible_hostname }}"
size: "{{ current_sw_disk_size }}"
interface: virtio_scsi
  delegate_to: localhost
  register: disk_attach_info

But looking at the registered variable, it contains "active: true":

"disk_activate_info": {
"changed": false,
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"diskattachment": {
"href":
"/ovirt-engine/api/vms/b5c67c93-bd5d-42b6-a873-05f69cece2f1/diskattachments/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"active": true,
"bootable": false,
"disk": {



I then followed with the step to activate the disk:

  ovirt_disk:
auth: "{{ ovirt_auth }}"
state: present
activate: yes
name: "{{ tobe_sw_disk_name }}"
vm_name: "{{ ansible_hostname }}"
size: "{{ current_sw_disk_size }}"
interface: virtio_scsi
  delegate_to: localhost
  register: disk_activate_info

But in the playbook run I got:

ok: [target_vm -> localhost]

And inside the registered variable, "changed: false":

"disk_activate_info": {
"changed": false,
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"diskattachment": {
"href":
"/ovirt-engine/api/vms/b5c67c93-bd5d-42b6-a873-05f69cece2f1/diskattachments/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"id": "6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827",
"active": true,
...

Any tips on how to achieve what I need: a task that only attaches the disk
without activating it, and then a task that activates the disk?
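
In the meantime, one possible fallback is to create the attachment with
active=false via the REST API directly, bypassing the module. A minimal
sketch (the engine URL and credentials are placeholders; the VM and disk
ids are the ones from the output above):

  curl -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' \
    -d '<disk_attachment><active>false</active><interface>virtio_scsi</interface><disk id="6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827"/></disk_attachment>' \
    https://engine.example.com/ovirt-engine/api/vms/b5c67c93-bd5d-42b6-a873-05f69cece2f1/diskattachments

A later PUT of <disk_attachment><active>true</active></disk_attachment> to
.../diskattachments/6f7cdf02-cf8b-4fa8-ac00-6b47f6e0c827 should then
activate it as a separate step.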

Thanks in advance,
Gianluca


[ovirt-users] Re: Host has time-drift of xxx seconds

2020-06-15 Thread Eli Mesika
Hi,

Looking at the code, I realized that the date/time retrieved from the host
is cached and not refreshed again until the RHV manager engine is restarted.
Please open a bug on that; the engine should then be able to notice that
the problem was fixed.
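
Until that is fixed, a stop-gap that follows from the above (a sketch; do
it in a quiet window, since it restarts the management service) is to
restart the engine so the cached host time is re-read:

  # on the engine machine
  systemctl restart ovirt-engine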

Thanks
Eli

On Thu, Jun 11, 2020 at 6:02 AM Strahil Nikolov via Users 
wrote:

> Hello All,
>
> I have a strange error that should already be fixed, but the event log is
> still filling with the following after the latest patching (4.3.10):
>
> Host ovirt2.localdomain has time-drift of 2909848 seconds while maximum
> configured value is 300 seconds.
> Host ovirt3.localdomain has time-drift of 2909848 seconds while maximum
> configured value is 300 seconds.
>
> As it blamed only 2 out of 3 systems, I checked what had happened on
> ovirt1, and that one was far behind the other servers.
>
> Once I fixed the issue, I kept receiving those errors despite the fact
> that I fixed the time drift on ovirt1 several days ago.
> Currently the hosts and the engine are OK, but I have no idea how to 'fix'
> the issue.
>
> I have also noticed that the 2 errors had a date of 2 PM, which is not
> possible with my current timezone.
>
> Here is a one-shot query from all nodes:
>
> [root@ovirt1 ~]# for i in ovirt{1..3}; do ssh $i "ntpdate -q
> office.ipacct.com"; done
> server 195.85.215.8, stratum 1, offset 0.001233, delay 0.03105
> 11 Jun 05:48:16 ntpdate[5224]: adjust time server 195.85.215.8 offset
> 0.001233 sec
> server 195.85.215.8, stratum 1, offset -0.000200, delay 0.02821
> 11 Jun 05:48:23 ntpdate[6637]: adjust time server 195.85.215.8 offset
> -0.000200 sec
> server 195.85.215.8, stratum 1, offset 0.000243, delay 0.02914
> 11 Jun 05:48:30 ntpdate[14215]: adjust time server 195.85.215.8 offset
> 0.000243 sec
> [root@ovirt1 ~]# ssh engine 'ntpdate -q office.ipacct.com'
> root@engine's password:
> server 195.85.215.8, stratum 1, offset 0.000291, delay 0.02888
> 11 Jun 05:49:15 ntpdate[13911]: adjust time server 195.85.215.8 offset
> 0.000291 sec
>
> Any ideas ?
>
> Best Regards,
> Strahil Nikolov


[ovirt-users] Re: LDAP setup fails on 4.4 reading PEM file

2020-06-15 Thread Eli Mesika
IMO your first error
[ ERROR ] Failed to execute stage 'Environment customization': a
bytes-like object is required, not 'str'

seems to me related to the python2 => python3 upgrade, and is worth filing
a bug with all the relevant details.


On Thu, Jun 11, 2020 at 8:38 PM Stack Korora 
wrote:

> Greetings,
> I'm having some issues getting LDAP working on CentOS 8 with oVirt 4.4.
> I would appreciate some help please.
>
> When I run ovirt-engine-extension-aaa-ldap-setup I choose "11 - RFC-2307
> Schema (Generic)" because that's what my LDAP guy said I should do. :-)
>
> Next I select the default Yes for "Use DNS".
>
> I select 4 for "Failover between multiple hosts".
>
> I put in my two hosts "svr1.my.domain srv2.my.domain".
>
> To select the protocol I type "ldaps".
>
> To select the method to obtain the PEM I type "File".
>
> Then the "File path". A full path to the file. Not quoted. Yes, I
> checked that I typed it correctly. I can copy-paste into "ls" and it's
> fine, with the correct read permissions and everything. (I can't
> copy-paste into the script, but that's another issue.)
>
> It immediately fails with:
> [ ERROR ] Failed to execute stage 'Environment customization': a
> bytes-like object is required, not 'str'
>
> There is a log file; here is the snippet at the point where it goes wrong.
>
> 2020-06-11 11:35:49,915-0500 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND File path:
> 2020-06-11 11:36:24,373-0500 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:RECEIVE
> /etc/pki/ca-trust/source/anchors/Infrastructure.pem
> 2020-06-11 11:36:24,375-0500 DEBUG otopi.context
> context._executeMethod:145 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in
> _executeMethod
> method['method']()
>   File
>
> "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
> line 781, in _customization_late
> cacert, cacertfile, insecure = self._getCACert()
>   File
>
> "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py",
> line 357, in _getCACert
> _cacertfile.write('\n'.join(cacert) + '\n')
>   File "/usr/lib64/python3.6/tempfile.py", line 485, in func_wrapper
> return func(*args, **kwargs)
> TypeError: a bytes-like object is required, not 'str'
> 2020-06-11 11:36:24,376-0500 ERROR otopi.context
> context._executeMethod:154 Failed to execute stage 'Environment
> customization': a bytes-like object is required, not 'str'
> 2020-06-11 11:36:24,376-0500 DEBUG otopi.context
> context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
> 2020-06-11 11:36:24,376-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV BASE/error=bool:'True'
> 2020-06-11 11:36:24,376-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV BASE/exceptionInfo=list:'[(<class 'TypeError'>, TypeError("a bytes-like object is required, not 'str'",),
> )]'
> 2020-06-11 11:36:24,377-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV OVAAALDAP_LDAP/hosts=str:'svr1.my.domain
> srv2.my.domain'
> 2020-06-11 11:36:24,377-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV OVAAALDAP_LDAP/protocol=str:'ldaps'
> 2020-06-11 11:36:24,377-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV OVAAALDAP_LDAP/serverset=str:'failover'
> 2020-06-11 11:36:24,377-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV OVAAALDAP_LDAP/useDNS=bool:'True'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV
>
> QUESTION/1/OVAAALDAP_LDAP_CACERT_FILE=str:'/etc/pki/ca-trust/source/anchors/Infrastructure.pem'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV
> QUESTION/1/OVAAALDAP_LDAP_CACERT_METHOD=str:'file'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV
> QUESTION/1/OVAAALDAP_LDAP_PROTOCOL=str:'ldaps'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV QUESTION/1/OVAAALDAP_LDAP_SERVERSET=str:'4'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV QUESTION/1/OVAAALDAP_LDAP_USE_DNS=str:'yes'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:775 ENV
> QUESTION/2/OVAAALDAP_LDAP_SERVERSET=str:'svr1.my.domain srv2.my.domain'
> 2020-06-11 11:36:24,378-0500 DEBUG otopi.context
> context.dumpEnvironment:779 ENVIRONMENT DUMP - END
>
>
> Can someone help please?
> Thanks!

[ovirt-users] Re: It looks like big changes are happening, CentOS moving from 8.1.1911 to 8.2.2004 perhaps

2020-06-15 Thread Dominik Holler
Yes, this is because of CentOS moving from 8.1.1911 to 8.2.2004.

On Mon, Jun 15, 2020 at 10:11 PM Glenn Marcy  wrote:

> I hope this situation settles down before rc5
>
>
This will be fixed quickly.


> Trying to run the 4.4.1-rc4 hosted engine setup, I got:
>
> [ ERROR ] fatal: [localhost -> ovirt-engine.example.com]: FAILED! =>
> {"changed": false, "failures": [], "msg": "Depsolve Error occured:
>  Problem: package ovirt-engine-4.4.1.3-1.el8.noarch requires
> ovirt-provider-ovn >= 1.2.1, but none of the providers can be installed
>  - package ovirt-provider-ovn-1.2.30-1.el8.noarch requires openvswitch >=
> 2.7, but none of the providers can be installed
>  - cannot install the best candidate for the job
>  - nothing provides librte_bitratestats.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_bus_pci.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_bus_vdev.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_bus_vmbus.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_cmdline.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_common_cpt.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_eal.so.9()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_eal.so.9(DPDK_17.08)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_eal.so.9(DPDK_18.11)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_eal.so.9(DPDK_2.0)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_16.07)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_17.05)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_18.05)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_18.08)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_18.11)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ethdev.so.11(DPDK_2.2)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_gro.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_gso.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_hash.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_ip_frag.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_kvargs.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_latencystats.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mbuf.so.4()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mbuf.so.4(DPDK_2.1)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_member.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool.so.5()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool.so.5(DPDK_16.07)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool.so.5(DPDK_2.0)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool_bucket.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool_ring.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_mempool_stack.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_meter.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_meter.so.2(DPDK_18.08)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_meter.so.2(DPDK_2.0)(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_metrics.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_net.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pci.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pdump.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pmd_bnxt.so.2()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pmd_e1000.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pmd_enic.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pmd_failsafe.so.1()(64bit) needed by
> openvswitch-2.11.1-5.el8.x86_64
>  - nothing provides librte_pmd_i40e.so.2()(64bit) needed by
> openvswitch-2.11.1-

[ovirt-users] oVirt-4.4 on CentOS 8.2

2020-06-15 Thread Dominik Holler
Hello,
CentOS 8.2 was released before oVirt was prepared for CentOS 8.2.
Currently oVirt-4.4 fails to install on CentOS 8.2.
This will be fixed soon.
Dominik


[ovirt-users] It looks like big changes are happening, CentOS moving from 8.1.1911 to 8.2.2004 perhaps

2020-06-15 Thread Glenn Marcy
I hope this situation settles down before rc5

Trying to run the 4.4.1-rc4 hosted engine setup, I got:

[ ERROR ] fatal: [localhost -> ovirt-engine.example.com]: FAILED! => 
{"changed": false, "failures": [], "msg": "Depsolve Error occured: 
 Problem: package ovirt-engine-4.4.1.3-1.el8.noarch requires ovirt-provider-ovn 
>= 1.2.1, but none of the providers can be installed
 - package ovirt-provider-ovn-1.2.30-1.el8.noarch requires openvswitch >= 2.7, 
but none of the providers can be installed
 - cannot install the best candidate for the job
 - nothing provides librte_bitratestats.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_bus_pci.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_bus_vdev.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_bus_vmbus.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_cmdline.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_common_cpt.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_eal.so.9()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_eal.so.9(DPDK_17.08)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_eal.so.9(DPDK_18.11)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_eal.so.9(DPDK_2.0)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_16.07)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_17.05)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_18.05)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_18.08)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_18.11)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ethdev.so.11(DPDK_2.2)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_gro.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_gso.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_hash.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_ip_frag.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_kvargs.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_latencystats.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mbuf.so.4()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mbuf.so.4(DPDK_2.1)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_member.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool.so.5()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool.so.5(DPDK_16.07)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool.so.5(DPDK_2.0)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool_bucket.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool_ring.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_mempool_stack.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_meter.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_meter.so.2(DPDK_18.08)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_meter.so.2(DPDK_2.0)(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_metrics.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_net.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pci.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pdump.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_bnxt.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_e1000.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_enic.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_failsafe.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_i40e.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_ixgbe.so.2()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_mlx4.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - nothing provides librte_pmd_mlx5.so.1()(64bit) needed by 
openvswitch-2.11.1-5.el8.x86_64
 - noth

[ovirt-users] Re: oVirt 4.4 Self-hosted Engine and Intel Skylake CPUs

2020-06-15 Thread Anton Gonzalez
Hey. Yup, this is a known issue. You can reference the following threads:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/KZHDCDE6JYADDMFSZD6AXYBP6SPV4TGA/
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/5LBCJGWTVRVTEWC5VSDQ2OINQ3OHKQ7K/


[ovirt-users] Continuing HE install from lost connection

2020-06-15 Thread Glenn Marcy
I ran the Prepare VM stage of the GUI HostedEngine install and when I came back 
it had disconnected.  Looking at the logs and the current status everything 
appears to be just fine, but there doesn't seem to be any way in the GUI to 
continue where it left off.  I was wondering if there was some way to provide 
the hosted_storage information requested in the next panel and complete the 
installation, perhaps with the command line, instead of redoing all of the work 
that has already been completed successfully?

Regards,
Glenn Marcy


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Nir Soffer
On Mon, Jun 15, 2020 at 5:58 PM Marco Fais  wrote:
>
> Hi Nir,
>
> I have raised the same issue here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1840414
>
> It is indeed working with the libvirt and qemu versions you mention below, 
> however in order to use them I had to add the testing repository for AV which 
> could bring other issues...
> Also ovirt-node-4.4.0 ships with libvirt 5.6/qemu 4.1 -- if the recommended 
> versions are the more recent ones, should they not be included in 
> ovirt-node-4.4.x as well?

At this point, yes, you should use the testing repos, since libvirt and qemu
are too old in CentOS 8.1.

Once CentOS 8.2 is released, we should be good with the official packages.
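
To check what a host is actually running in the meantime (a sketch; the
release package providing the Advanced Virtualization repo is an
assumption, verify the exact name for your distro):

  rpm -q libvirt-daemon qemu-kvm
  # to pull the newer stack on CentOS 8.1, something like:
  yum install -y centos-release-advanced-virtualization
  yum update -y libvirt qemu-kvm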

> Thanks,
> Marco
>
> On Mon, 15 Jun 2020 at 13:00, Nir Soffer  wrote:
>>
>> On Mon, Jun 15, 2020 at 2:38 PM Yedidyah Bar David  wrote:
>> >
>> > On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>> >  wrote:
>> > >
>> > > Hi,
>> > >
>> > > I tried to send the log to you by email, but it fails. So I have sent 
>> > > them to Google Drive. Please go to the link below to get them:
>> > >
>> > > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
>> > > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>> >
>> > I did get them, but not engine logs. Can you please attach them as well? 
>> > Thanks.
>> >
>> > vdsm.log.61 has:
>> >
>> > 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
>> > (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
>> > 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
>> > Traceback (most recent call last):
>> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in 
>> > merge
>> > bandwidth, flags)
>> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 
>> > 101, in f
>> > ret = attr(*args, **kwargs)
>> >   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
>> > line 131, in wrapper
>> > ret = f(*args, **kwargs)
>> >   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
>> > line 94, in wrapper
>> > return func(inst, *args, **kwargs)
>> >   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
>> > blockCommit
>> > if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
>> > dom=self)
>> > libvirt.libvirtError: internal error: qemu block name
>> > 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
>> > "filename": 
>> > "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
>> > "driver": "qcow2", "file": {"driver": "file", "filename":
>> > "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
>> > doesn't match expected
>> > '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>>
>> This is a known issue in libvirt 5.6/qemu 4.1.
>>
>> It works in libvirt >= 6.0 and qemu >= 4.2, which are the versions
>> needed for 4.4.
>>
>> > Adding Eyal. Eyal, can you please have a look? Thanks.
>> >
>> > >
>> > > Best regards,
>> > >
>> > > Minnie Du--Presales & Business Development
>> > >
>> > > Mob  : +86-15244932162
>> > > Tel: +86-28-85530156
>> > > Skype :minnie...@vinchin.com
>> > > Email: minnie...@vinchin.com
>> > > Website: www.vinchin.com
>> > >
>> > > F5, Building 8, National Information Security Industry Park, No.333 
>> > > YunHua Road, Hi-Tech Zone, Chengdu, China
>> > >
>> > >
>> > > From: Yedidyah Bar David
>> > > Date: 2020-06-15 15:42
>> > > To: minnie.du
>> > > CC: users
>> > > Subject: Re: [ovirt-users] Problem with oVirt 4.4
>> > > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
>> > > >
>> > > > We have met a problem when testing oVirt 4.4.
>> > > >
>> > > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
>> > > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
>> > > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
>> > > > state of the corresponding disk became illegal. Removing the snapshot in
>> > > > this state requires a lot of risky work in the background, leading to 
>> > > > the inability to free up snapshot space. Long-term backups will cause 
>> > > > the target VM to create a large number of unrecoverable snapshots, 
>> > > > thus taking up a large amount of production storage. So we need your 
>> > > > help.
>> > >
>> > > Can you please share relevant parts of engine and vdsm logs? Perhaps
>> > > open a bug and attach all of them, just in case.
>> > >
>> > > Thanks!
>> > > --
>> > > Didi
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> > Didi

[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Marco Fais
Hi Nir,

I have raised the same issue here:
https://bugzilla.redhat.com/show_bug.cgi?id=1840414

It is indeed working with the libvirt and qemu versions you mention below,
however in order to use them I had to add the testing repository for AV
which could bring other issues...
Also ovirt-node-4.4.0 ships with libvirt 5.6/qemu 4.1 -- if the recommended
versions are the more recent ones, should they not be included in
ovirt-node-4.4.x as well?

Thanks,
Marco

On Mon, 15 Jun 2020 at 13:00, Nir Soffer  wrote:

> On Mon, Jun 15, 2020 at 2:38 PM Yedidyah Bar David 
> wrote:
> >
> > On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
> >  wrote:
> > >
> > > Hi,
> > >
> > > I tried to send the log to you by email, but it fails. So I have sent
> them to Google Drive. Please go to the link below to get them:
> > >
> > >
> https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > >
> https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
> >
> > I did get them, but not engine logs. Can you please attach them as well?
> Thanks.
> >
> > vdsm.log.61 has:
> >
> > 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> > (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> > 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in
> merge
> > bandwidth, flags)
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line
> 101, in f
> > ret = attr(*args, **kwargs)
> >   File
> "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> > line 131, in wrapper
> > ret = f(*args, **kwargs)
> >   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> > line 94, in wrapper
> > return func(inst, *args, **kwargs)
> >   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in
> blockCommit
> > if ret == -1: raise libvirtError ('virDomainBlockCommit() failed',
> dom=self)
> > libvirt.libvirtError: internal error: qemu block name
> > 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> > "filename": "/rhev/data-center/mnt/192.168.67.8:
> _root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> > "driver": "qcow2", "file": {"driver": "file", "filename":
> > "/rhev/data-center/mnt/192.168.67.8:
> _root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> > doesn't match expected
> > '/rhev/data-center/mnt/192.168.67.8:
> _root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>
> This is a known issue in libvirt 5.6/qemu 4.1.
>
> It works in libvirt >= 6.0 and qemu >= 4.2, which are the versions
> needed for 4.4.
>
> > Adding Eyal. Eyal, can you please have a look? Thanks.
> >
> > >
> > > Best regards,
> > >
> > > Minnie Du--Presales & Business Development
> > >
> > > Mob  : +86-15244932162
> > > Tel: +86-28-85530156
> > > Skype :minnie...@vinchin.com
> > > Email: minnie...@vinchin.com
> > > Website: www.vinchin.com
> > >
> > > F5, Building 8, National Information Security Industry Park, No.333
> YunHua Road, Hi-Tech Zone, Chengdu, China
> > >
> > >
> > > From: Yedidyah Bar David
> > > Date: 2020-06-15 15:42
> > > To: minnie.du
> > > CC: users
> > > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > > >
> > > > We have met a problem when testing oVirt 4.4.
> > > >
> > > > Our VM is on NFS storage. When testing the snapshot function of
> oVirt 4.4, we created snapshot 1 and then snapshot 2, but after clicking
> the delete button of snapshot 1, snapshot 1 failed to be deleted and the
> state of the corresponding disk became illegal. Removing the snapshot in this
> state requires a lot of risky work in the background, leading to the
> inability to free up snapshot space. Long-term backups will cause the
> target VM to create a large number of unrecoverable snapshots, thus taking
> up a large amount of production storage. So we need your help.
> > >
> > > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > > open a bug and attach all of them, just in case.
> > >
> > > Thanks!
> > > --
> > > Didi
> > >
> > >
> >
> >
> >
> > --
> > Didi

[ovirt-users] Re: [ANN] oVirt 4.4.1 Fourth Release Candidate is now available for testing

2020-06-15 Thread Sandro Bonazzola
On Mon, Jun 15, 2020 at 07:57 Glenn Marcy wrote:

> This candidate changes the version of the OpenStack Java API from 3.2.8 to
> 3.2.9, which isn't available in the repositories.
> Installation produces the error:
>
> [ ERROR ] fatal: [localhost -> ovirt-engine.example.com]: FAILED! =>
> {"changed": false, "failures": [], "msg": "Depsolve Error occured:
>  Problem: cannot install the best candidate for the job
>  - nothing provides openstack-java-cinder-client >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-cinder-model >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-client >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-glance-client >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-glance-model >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-keystone-client >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-keystone-model >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-quantum-client >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-quantum-model >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch
>  - nothing provides openstack-java-resteasy-connector >= 3.2.9 needed by
> ovirt-engine-4.4.1.3-1.el8.noarch", "rc": 1, "results": []}
>
> Regards,
>
> Glenn Marcy
>
>
Thanks for reporting; tagged openstack-java-sdk-3.2.9-1.el8 for release.
It should land on mirrors in a couple of hours.
+Dominik Holler  FYI.
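
Once it lands, refreshing the DNF metadata on the deployment host should be
enough to pick it up (a sketch):

  dnf clean metadata
  dnf makecache
  dnf info openstack-java-client   # should now show 3.2.9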







-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*


[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-15 Thread Yedidyah Bar David
On Mon, Jun 15, 2020 at 4:54 PM Gilboa Davara  wrote:
>
> On Mon, Jun 15, 2020 at 11:46 AM Yedidyah Bar David  wrote:
> >
> > On Mon, Jun 15, 2020 at 11:21 AM Gilboa Davara  wrote:
> > >
> > > On Mon, Jun 15, 2020 at 9:13 AM Yedidyah Bar David  
> > > wrote:
> > > >
> > > > On Fri, Jun 12, 2020 at 1:49 PM Gilboa Davara  wrote:
> > > > >
> > > > > Hello,
> > > > >
> > > > > I'm trying to deploy a hosted engine on one of my test setups.
> > > > > No matter how I tried to deploy the hosted engine, either via command 
> > > > > line or via "Hosted Engine" deployment from the cockpit web console, 
> > > > > it always fails with the same error message. [1]
> > > > > Manually trying to download RPMs via dnf from the host works just
> > > > > fine.
> > > > > Firewall log files are clean.
> > > > >
> > > > > Any idea what's going on?
> > > > >
> > > > > [1]  2020-06-12 06:09:38,609-0400 DEBUG 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > > ansible_utils._process_output:103 {'msg': "Failed to download 
> > > > > metadata for repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': 
> > > > > {'module_args': {'name': ['ovirt-engine'], 'state': 'present', 
> > > > > 'allow_downgrade': False, 'autoremove': False, 'bugfix': False, 
> > > > > 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 
> > > > > 'down  load_only': False, 'enable_plugin': [], 'enablerepo': [], 
> > > > > 'exclude': [], 'installroot': '/', 'install_repoquery': True, 
> > > > > 'install_weak_deps': True, 'security': False, 'skip_broken': False, 
> > > > > 'update_cache': False, 'update_only': False, 'validate_certs': True, 
> > > > > 'lock_timeout': 30, 'conf_file': None, 'disable_excludes': None, 
> > > > > 'download_dir': None, 'list': None, 'releasever': None}}, 
> > > > > '_ansible_no_log': False, 'changed  ': False, 
> > > > > '_ansible_delegated_vars': {'ansible_host': 
> > > > > 'test-vmengine.localdomain'}}
> > > > >   2020-06-12 06:09:38,709-0400 ERROR 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > > ansible_utils._process_output:107 fatal: [localhost -> 
> > > > > gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg": 
> > > > > "Failed to download metadata for repo 'AppStream'", "rc": 1, 
> > > > > "results": []}
> > > > >   2020-06-12 06:09:39,711-0400 DEBUG 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > > ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 
> > > > > changed: 57 unreachable: 0 skipped: 77 failed: 1
> > > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215 
> > > > > ansible-playbook rc: 2
> > > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222 
> > > > > ansible-playbook stdout:
> > > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225 
> > > > > ansible-playbook stderr:
> > > > >   2020-06-12 06:09:39,812-0400 DEBUG otopi.context 
> > > > > context._executeMethod:145 method exception
> > > > >   Traceback (most recent call last):
> > > > > File "/usr/lib/python3.6/site-packages/otopi/context.py", line 
> > > > > 132, in _executeMethod
> > > > >   method['method']()
> > > > > File 
> > > > > "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
> > > > >  line 403, in _closeup
> > > > >   r = ah.run()
> > > > > File 
> > > > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
> > > > >  line 229, in run
> > > > >   raise RuntimeError(_('Failed executing ansible-playbook'))
> > > >
> > > > This snippet does not reveal the cause for failure, or the exact place
> > > > where it happened. Can you please check/share the full file, as long
> > > > as perhaps other files in /var/log/ovirt-hosted-engine-setup (and
> > > > maybe others in /var/log)? Thanks!
> > > >
> > > > Best regards,
> > > > --
> > > > Didi
> > > >
> > >
> > > Hi,
> > >
> > > Compressed tar.bz2 of ovirt-hosted-engine-setup attached.
> > > Please let me know if you need additional log files.
> > > (/var/log/messages seems rather empty)
> >
> > Ok, it's failing in the task "Install oVirt Engine package", which
> > tries to install/upgrade the package 'ovirt-engine' on the engine VM.
> > Can you try to do this manually and see if it works?
> >
> > At this stage, the engine VM is on libvirt's default network
> > (private), you can find the temporary address by searching the log for
> > local_vm_ip, which is 192.168.1.173, in your log.
> >
> > Good luck and best regards,
> > --
> > Didi
> >
>
> You are correct.
> $ dnf install -y ovirt-engine
>  Problem: package ovirt-engine-4.4.0.3-1.el8.noarch requires
> apache-commons-jxpath, but none of the providers can be installed
>   - conflicting requests
>   - package apache-commons-jxpath-1.3-29.module_el8.0.0+30+832da3a1.noarc

[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-15 Thread Gilboa Davara
On Mon, Jun 15, 2020 at 11:46 AM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 11:21 AM Gilboa Davara  wrote:
> >
> > On Mon, Jun 15, 2020 at 9:13 AM Yedidyah Bar David  wrote:
> > >
> > > On Fri, Jun 12, 2020 at 1:49 PM Gilboa Davara  wrote:
> > > >
> > > > Hello,
> > > >
> > > > I'm trying to deploy a hosted engine on one of my test setups.
> > > > No matter how I tried to deploy the hosted engine, either via command 
> > > > line or via "Hosted Engine" deployment from the cockpit web console, it
> > > > always fails with the same error message. [1]
> > > > Manually trying to download RPMs via dnf from the host works just fine.
> > > > Firewall log files are clean.
> > > >
> > > > Any idea what's going on?
> > > >
> > > > [1]  2020-06-12 06:09:38,609-0400 DEBUG 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > ansible_utils._process_output:103 {'msg': "Failed to download metadata 
> > > > for repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': 
> > > > {'module_args': {'name': ['ovirt-engine'], 'state': 'present', 
> > > > 'allow_downgrade': False, 'autoremove': False, 'bugfix': False, 
> > > > 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 
> > > > 'down  load_only': False, 'enable_plugin': [], 'enablerepo': [], 
> > > > 'exclude': [], 'installroot': '/', 'install_repoquery': True, 
> > > > 'install_weak_deps': True, 'security': False, 'skip_broken': False, 
> > > > 'update_cache': False, 'update_only': False, 'validate_certs': True, 
> > > > 'lock_timeout': 30, 'conf_file': None, 'disable_excludes': None, 
> > > > 'download_dir': None, 'list': None, 'releasever': None}}, 
> > > > '_ansible_no_log': False, 'changed  ': False, 
> > > > '_ansible_delegated_vars': {'ansible_host': 
> > > > 'test-vmengine.localdomain'}}
> > > >   2020-06-12 06:09:38,709-0400 ERROR 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > ansible_utils._process_output:107 fatal: [localhost -> 
> > > > gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg": 
> > > > "Failed to download metadata for repo 'AppStream'", "rc": 1, "results": 
> > > > []}
> > > >   2020-06-12 06:09:39,711-0400 DEBUG 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > > ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 
> > > > changed: 57 unreachable: 0 skipped: 77 failed: 1
> > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215 
> > > > ansible-playbook rc: 2
> > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222 
> > > > ansible-playbook stdout:
> > > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225 
> > > > ansible-playbook stderr:
> > > >   2020-06-12 06:09:39,812-0400 DEBUG otopi.context 
> > > > context._executeMethod:145 method exception
> > > >   Traceback (most recent call last):
> > > > File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, 
> > > > in _executeMethod
> > > >   method['method']()
> > > > File 
> > > > "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
> > > >  line 403, in _closeup
> > > >   r = ah.run()
> > > > File 
> > > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
> > > >  line 229, in run
> > > >   raise RuntimeError(_('Failed executing ansible-playbook'))
> > >
> > > This snippet does not reveal the cause for failure, or the exact place
> > > where it happened. Can you please check/share the full file, as long
> > > as perhaps other files in /var/log/ovirt-hosted-engine-setup (and
> > > maybe others in /var/log)? Thanks!
> > >
> > > Best regards,
> > > --
> > > Didi
> > >
> >
> > Hi,
> >
> > Compressed tar.bz2 of ovirt-hosted-engine-setup attached.
> > Please let me know if you need additional log files.
> > (/var/log/messages seems rather empty)
>
> Ok, it's failing in the task "Install oVirt Engine package", which
> tries to install/upgrade the package 'ovirt-engine' on the engine VM.
> Can you try to do this manually and see if it works?
>
> At this stage, the engine VM is on libvirt's default network
> (private), you can find the temporary address by searching the log for
> local_vm_ip, which is 192.168.1.173, in your log.
>
> Good luck and best regards,
> --
> Didi
>

You are correct.
$ dnf install -y ovirt-engine
 Problem: package ovirt-engine-4.4.0.3-1.el8.noarch requires
apache-commons-jxpath, but none of the providers can be installed
  - conflicting requests
  - package apache-commons-jxpath-1.3-29.module_el8.0.0+30+832da3a1.noarch
is excluded
(try to add '--skip-broken' to skip uninstallable packages or
'--nobest' to use not only best candidate packages)
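
The module_el8 suffix on the excluded package suggests it lives in a DNF
module stream that isn't enabled on the engine VM. Something like the
following may unblock it (a sketch; the module names are an assumption
based on the oVirt 4.4 engine install notes, so verify against the release
notes):

  dnf module enable javapackages-tools pki-deps
  dnf install ovirt-engine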

- Gilboa

[ovirt-users] oVirt 4.4 Self-hosted Engine and Intel Skylake CPUs

2020-06-15 Thread Erez Zarum
Hey,
I was trying to install oVirt with SE on a node that has an Intel Skylake
CPU (an Intel Xeon Gold 6238R CPU, to be precise), which according to Intel
supports TSX.
When the SE was provisioned as a local VM all was working well; it was
using a different CPU type for local provisioning.
After the local SE VM was migrated to the shared storage (iSCSI) and was
configured, it failed to start.
When checking the XML (and vm.conf) that was created and provided to
libvirt, I noticed it uses the "Secure Intel Skylake" CPU type
with +tsx-ctrl as a required flag.
My assumption, as this is a fresh install of oVirt with SE, is that the
newly created Cluster was set to this CPU compatibility.
This specific CPU does not expose any tsx flags; while it does indeed
support TSX, libvirt has no way of knowing that. More strangely, some
other CPUs from the same range/models do expose the tsx flag.
I have tried to set the kernel cmdline to tsx=yes|auto|off, and none of
those helped.
The quick solution was to start the engine manually by editing the XML file
(hosted-engine --vm-shutdown, then start it with libvirt), change the
Cluster CPU type and the HostedEngine CPU type to "Intel Skylake",
and then start it (hosted-engine --vm-start).
Another solution, which I haven't tried, is to get the correct string for
the non-secure Intel Skylake CPU from the oVirt-Engine API and hardcode it
into the cpu_model fact in the SE ansible role (task).
In the end I opted not to try any workaround and decided to go on with
oVirt 4.3, which went smoothly; it chose "Intel Skylake Server IBRS SSBD MDS
Family" as the Cluster CPU compatibility, and the installation completed
without any errors/issues.
1) What will happen if I decide to upgrade to 4.4? I will first have to
reinstall a node with CentOS 8 and then migrate the HostedEngine there
as well. Will it keep the current cluster CPU type, or will it try to
upgrade and then fail the upgrade?
2) Are you aware of this situation? I understand this is a new approach,
because previously you had to update the CPU databases every time, but on
the other hand Intel is not helping here by not being strict about exposing
the tsx flags. Perhaps the best option would be to let the user choose
which CPU type to use for the first Cluster created by the SE ansible role?
(As far as I remember, this was available in previous versions of oVirt.)
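
For reference, a quick way to check both sides of the mismatch (a sketch;
rtm/hle are the flag names Linux reports for TSX, and tsx-ctrl is the
feature the engine requested from libvirt):

  grep -o -w -E 'rtm|hle' /proc/cpuinfo | sort -u    # what the host exposes
  virsh -r dumpxml HostedEngine | grep -A 5 '<cpu'   # what was requested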

Thanks!


[ovirt-users] Re: Ansible ovirt.hosts has no port setting

2020-06-15 Thread Martin Necas
Hi,

thank you for the request.
I have created a PR on the oVirt Ansible collection [1].
Because this is an RFE, I won't be able to get it into Ansible 2.9, only
into the collection.
I recommend opening further issues/requests on oVirt modules on the GitHub
repo.

[1] https://github.com/oVirt/ovirt-ansible-collection/pull/60

Martin Necas


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Nir Soffer
On Mon, Jun 15, 2020 at 2:38 PM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>  wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them 
> > to Google Drive. Please go to the link below to get them:
> >
> > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>
> I did get them, but not engine logs. Can you please attach them as well? 
> Thanks.
>
> vdsm.log.61 has:
>
> 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
> bandwidth, flags)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> line 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirt.libvirtError: internal error: qemu block name
> 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": 
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> "driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> doesn't match expected
> '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'

This is a known issue in libvirt 5.6/qemu 4.1.

It works with libvirt >= 6.0 and qemu >= 4.2, which are the versions
required by oVirt 4.4.
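
If you want to verify which versions the host is actually running, here is a
minimal sketch using the libvirt Python bindings (assumes libvirt-python is
installed and you can open qemu:///system):

import libvirt

conn = libvirt.open('qemu:///system')
lib_ver = conn.getLibVersion()   # libvirt version, encoded as an integer
qemu_ver = conn.getVersion()     # hypervisor (qemu) version, same encoding
conn.close()

def fmt(v):
    # libvirt encodes versions as major*1000000 + minor*1000 + release
    return '%d.%d.%d' % (v // 1000000, (v % 1000000) // 1000, v % 1000)

print('libvirt', fmt(lib_ver), 'qemu', fmt(qemu_ver))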

> Adding Eyal. Eyal, can you please have a look? Thanks.
>
> >
> > Best regards,
> >
> > Minnie Du--Presales & Business Development
> >
> > Mob  : +86-15244932162
> > Tel: +86-28-85530156
> > Skype :minnie...@vinchin.com
> > Email: minnie...@vinchin.com
> > Website: www.vinchin.com
> >
> > F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> > Road, Hi-Tech Zone, Chengdu, China
> >
> >
> > From: Yedidyah Bar David
> > Date: 2020-06-15 15:42
> > To: minnie.du
> > CC: users
> > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > >
> > > We have met a problem when testing oVirt 4.4.
> > >
> > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
> > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
> > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
> > > state of the corresponding disk became illegal. Removing the snapshot in this 
> > > state requires a lot of risky work in the background, leading to the 
> > > inability to free up snapshot space. Long-term backups will cause the 
> > > target VM to create a large number of unrecoverable snapshots, thus 
> > > taking up a large amount of production storage. So we need your help.
> >
> > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > open a bug and attach all of them, just in case.
> >
> > Thanks!
> > --
> > Didi
> >
> >
>
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SBKJTS4OSWVZB2UYEZEOM7TV2AWPXB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJ7T5B3URAV3QVC45TF6QOVEWJORUPOT/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Benny Zlotnik
looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939

On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David  wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
>  wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them 
> > to Google Drive. Please go to the link below to get them:
> >
> > https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> > https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
>
> I did get them, but not engine logs. Can you please attach them as well? 
> Thanks.
>
> vdsm.log.61 has:
>
> 2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
> (vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
> 1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
> bandwidth, flags)
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, 
> in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
> line 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
> line 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirt.libvirtError: internal error: qemu block name
> 'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
> "filename": 
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
> "driver": "qcow2", "file": {"driver": "file", "filename":
> "/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
> doesn't match expected
> '/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'
>
> Adding Eyal. Eyal, can you please have a look? Thanks.
>
> >
> > Best regards,
> >
> > Minnie Du--Presales & Business Development
> >
> > Mob  : +86-15244932162
> > Tel: +86-28-85530156
> > Skype :minnie...@vinchin.com
> > Email: minnie...@vinchin.com
> > Website: www.vinchin.com
> >
> > F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> > Road, Hi-Tech Zone, Chengdu, China
> >
> >
> > From: Yedidyah Bar David
> > Date: 2020-06-15 15:42
> > To: minnie.du
> > CC: users
> > Subject: Re: [ovirt-users] Problem with oVirt 4.4
> > On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> > >
> > > We have met a problem when testing oVirt 4.4.
> > >
> > > Our VM is on NFS storage. When testing the snapshot function of oVirt 
> > > 4.4, we created snapshot 1 and then snapshot 2, but after clicking the 
> > > delete button of snapshot 1, snapshot 1 failed to be deleted and the 
> > > state of the corresponding disk became illegal. Removing the snapshot in this 
> > > state requires a lot of risky work in the background, leading to the 
> > > inability to free up snapshot space. Long-term backups will cause the 
> > > target VM to create a large number of unrecoverable snapshots, thus 
> > > taking up a large amount of production storage. So we need your help.
> >
> > Can you please share relevant parts of engine and vdsm logs? Perhaps
> > open a bug and attach all of them, just in case.
> >
> > Thanks!
> > --
> > Didi
> >
> >
>
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SBKJTS4OSWVZB2UYEZEOM7TV2AWPXB/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYVFRUWNYE2NFRZAYSIL2WQN72TYROT3/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Yedidyah Bar David
On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
 wrote:
>
> Hi,
>
> I tried to send the log to you by email, but it fails. So I have sent them to 
> Google Drive. Please go to the link below to get them:
>
> https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
> https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing

I did get them, but not engine logs. Can you please attach them as well? Thanks.

vdsm.log.61 has:

2020-05-26 14:36:49,668+ ERROR (jsonrpc/6) [virt.vm]
(vmId='e78ce69c-94f3-416b-a4ed-257161bde4d4') Live merge failed (job:
1c308aa8-a829-4563-9c01-326199c3d28b) (vm:5381)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 5379, in merge
bandwidth, flags)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
line 94, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 728, in blockCommit
if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', dom=self)
libvirt.libvirtError: internal error: qemu block name
'json:{"backing": {"driver": "qcow2", "file": {"driver": "file",
"filename": 
"/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/08f91e3f-f37b-4434-a183-56478b732c1b"}},
"driver": "qcow2", "file": {"driver": "file", "filename":
"/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990"}}'
doesn't match expected
'/rhev/data-center/mnt/192.168.67.8:_root_nfs_ovirt44__vm/01477dfd-1f4c-43d4-b000-603c6ed839b6/images/6140b67d-9895-4ee6-90a9-5410de8b5a01/5ba0d7e5-afa8-4d75-bc5a-1b077955a990'

Adding Eyal. Eyal, can you please have a look? Thanks.

>
> Best regards,
>
> Minnie Du--Presales & Business Development
>
> Mob  : +86-15244932162
> Tel: +86-28-85530156
> Skype :minnie...@vinchin.com
> Email: minnie...@vinchin.com
> Website: www.vinchin.com
>
> F5, Building 8, National Information Security Industry Park, No.333 YunHua 
> Road, Hi-Tech Zone, Chengdu, China
>
>
> From: Yedidyah Bar David
> Date: 2020-06-15 15:42
> To: minnie.du
> CC: users
> Subject: Re: [ovirt-users] Problem with oVirt 4.4
> On Mon, Jun 15, 2020 at 10:39 AM  wrote:
> >
> > We have met a problem when testing oVirt 4.4.
> >
> > Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, 
> > we created snapshot 1 and then snapshot 2, but after clicking the delete 
> > button of snapshot 1, snapshot 1 failed to be deleted and the state of 
> > the corresponding disk became illegal. Removing the snapshot in this state 
> > requires a lot of risky work in the background, leading to the inability to 
> > free up snapshot space. Long-term backups will cause the target VM to 
> > create a large number of unrecoverable snapshots, thus taking up a large 
> > amount of production storage. So we need your help.
>
> Can you please share relevant parts of engine and vdsm logs? Perhaps
> open a bug and attach all of them, just in case.
>
> Thanks!
> --
> Didi
>
>



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U4SBKJTS4OSWVZB2UYEZEOM7TV2AWPXB/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Nir Soffer
On Mon, Jun 15, 2020 at 10:39 AM  wrote:
>
> We have met a problem when testing oVirt 4.4.
>
> Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we 
> created snapshot 1 and then snapshot 2, but after clicking the delete button 
> of snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding 
> disk became illegal.

Illegal means that you cannot restore this snapshot: it was modified by
merging data from the next layer, so it no longer represents the state of
the VM at the time of the snapshot.

> Removing the snapshot in this state requires a lot of risky work in the 
> background, leading to the inability to free up snapshot space. Long-term 
> backups will cause the target VM to create a large number of unrecoverable 
> snapshots, thus taking up a large amount of production storage. So we need 
> your help.

Failure to remove a snapshot should be recoverable by retrying the operation.

If retrying still fails, you should be able to delete the snapshot
after stopping the VM.
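
If you want to script the retry, here is a hedged sketch with the Python SDK
(ovirt-engine-sdk4); the engine URL, credentials, VM id and snapshot id are
placeholders:

import time
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    insecure=True,  # only for test setups without a trusted CA
)
# Locate the service of the snapshot that failed to be removed.
snap_service = (connection.system_service()
                .vms_service()
                .vm_service('VM_ID')           # placeholder VM id
                .snapshots_service()
                .snapshot_service('SNAP_ID'))  # placeholder snapshot id
for attempt in range(3):
    try:
        snap_service.remove()
        print('removal started on attempt', attempt + 1)
        break
    except sdk.Error as e:
        print('remove failed, retrying in 60s:', e)
        time.sleep(60)
connection.close()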

If this also fails, there may be an issue in the volume metadata that needs
to be fixed manually.

Please file a bug if this is the case.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R6QO37JKW76HGSIW5TDOH3LQMIPXKT3J/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread minnie...@vinchin.com
Hi,

I tried to send the log to you by email, but it fails. So I have sent them to 
Google Drive. Please go to the link below to get them:

https://drive.google.com/file/d/1c9dqkv7qyvH6sS9VcecJawQIg91-1HLR/view?usp=sharing
 
https://drive.google.com/file/d/1zYfr_6SLFZj_IpM2KQCf-hJv2ZR0zi1c/view?usp=sharing
 

Best regards,

Minnie Du--Presales & Business Development

Mob  : +86-15244932162   
Tel: +86-28-85530156
Skype :minnie...@vinchin.com 
Email: minnie...@vinchin.com 
Website: www.vinchin.com 

F5, Building 8, National Information Security Industry Park, No.333 YunHua 
Road, Hi-Tech Zone, Chengdu, China
 
From: Yedidyah Bar David
Date: 2020-06-15 15:42
To: minnie.du
CC: users
Subject: Re: [ovirt-users] Problem with oVirt 4.4
On Mon, Jun 15, 2020 at 10:39 AM  wrote:
>
> We have met a problem when testing oVirt 4.4.
>
> Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we 
> created snapshot 1 and then snapshot 2, but after clicking the delete button 
> of snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding 
> disk became illegal. Removing the snapshot in this state requires a lot of 
> risky work in the background, leading to the inability to free up snapshot 
> space. Long-term backups will cause the target VM to create a large number of 
> unrecoverable snapshots, thus taking up a large amount of production storage. 
> So we need your help.
 
Can you please share relevant parts of engine and vdsm logs? Perhaps
open a bug and attach all of them, just in case.
 
Thanks!
-- 
Didi
 
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CBXRHF2IKSNY2O6Y7CV2BJ5D7T2KOD25/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Роман Черкалин
Interesting, after updating to 4.4 I got the same problem.

With regards,
Roman Cherkalin
NPP Mera
+7(495)783-71-59

- Original message -
From: "Yedidyah Bar David" 
To: "minnie du" 
Cc: "users" 
Sent: Monday, 15 June 2020, 10:42:29
Subject: [ovirt-users] Re: Problem with oVirt 4.4

On Mon, Jun 15, 2020 at 10:39 AM  wrote:
>
> We have met a problem when testing oVirt 4.4.
>
> Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we 
> created snapshot 1 and then snapshot 2, but after clicking the delete button 
> of snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding 
> disk became illegal. Removing the snapshot in this state requires a lot of 
> risky work in the background, leading to the inability to free up snapshot 
> space. Long-term backups will cause the target VM to create a large number of 
> unrecoverable snapshots, thus taking up a large amount of production storage. 
> So we need your help.

Can you please share relevant parts of engine and vdsm logs? Perhaps
open a bug and attach all of them, just in case.

Thanks!
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/POOGPBP37ZQCGRGG5GVRXLTURHCPHQYY/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7QC5ZUPLNS2BBUHI7TWZR4XJGJZCOPXS/


[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-15 Thread Yedidyah Bar David
On Mon, Jun 15, 2020 at 11:21 AM Gilboa Davara  wrote:
>
> On Mon, Jun 15, 2020 at 9:13 AM Yedidyah Bar David  wrote:
> >
> > On Fri, Jun 12, 2020 at 1:49 PM Gilboa Davara  wrote:
> > >
> > > Hello,
> > >
> > > I'm trying to deploy a hosted engine on one of my test setups.
> > > No matter how I tried to deploy the hosted engine, either via command 
> > > line or via "Hosted Engine" deployment from the cockpit web console, it 
> > > always fails with the same error message. [1]
> > > Manually trying to download RPMs via dnf from the host works just fine.
> > > Firewall log files are clean.
> > >
> > > Any idea what's going on?
> > >
> > > [1]  2020-06-12 06:09:38,609-0400 DEBUG 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > ansible_utils._process_output:103 {'msg': "Failed to download metadata 
> > > for repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': 
> > > {'module_args': {'name': ['ovirt-engine'], 'state': 'present', 
> > > 'allow_downgrade': False, 'autoremove': False, 'bugfix': False, 
> > > 'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [], 
> > > 'down  load_only': False, 'enable_plugin': [], 'enablerepo': [], 
> > > 'exclude': [], 'installroot': '/', 'install_repoquery': True, 
> > > 'install_weak_deps': True, 'security': False, 'skip_broken': False, 
> > > 'update_cache': False, 'update_only': False, 'validate_certs': True, 
> > > 'lock_timeout': 30, 'conf_file': None, 'disable_excludes': None, 
> > > 'download_dir': None, 'list': None, 'releasever': None}}, 
> > > '_ansible_no_log': False, 'changed  ': False, '_ansible_delegated_vars': 
> > > {'ansible_host': 'test-vmengine.localdomain'}}
> > >   2020-06-12 06:09:38,709-0400 ERROR 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > ansible_utils._process_output:107 fatal: [localhost -> 
> > > gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg": 
> > > "Failed to download metadata for repo 'AppStream'", "rc": 1, "results": 
> > > []}
> > >   2020-06-12 06:09:39,711-0400 DEBUG 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > > ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 
> > > changed: 57 unreachable: 0 skipped: 77 failed: 1
> > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215 
> > > ansible-playbook rc: 2
> > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222 
> > > ansible-playbook stdout:
> > >   2020-06-12 06:09:39,812-0400 DEBUG 
> > > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225 
> > > ansible-playbook stderr:
> > >   2020-06-12 06:09:39,812-0400 DEBUG otopi.context 
> > > context._executeMethod:145 method exception
> > >   Traceback (most recent call last):
> > > File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, 
> > > in _executeMethod
> > >   method['method']()
> > > File 
> > > "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
> > >  line 403, in _closeup
> > >   r = ah.run()
> > > File 
> > > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
> > >  line 229, in run
> > >   raise RuntimeError(_('Failed executing ansible-playbook'))
> >
> > This snippet does not reveal the cause for failure, or the exact place
> > where it happened. Can you please check/share the full file, as long
> > as perhaps other files in /var/log/ovirt-hosted-engine-setup (and
> > maybe others in /var/log)? Thanks!
> >
> > Best regards,
> > --
> > Didi
> >
>
Hi,
>
> Compressed tar.bz2 of ovirt-hosted-engine-setup attached.
> Please let me know if you need additional log files.
> (/var/log/messages seems rather empty)

Ok, it's failing in the task "Install oVirt Engine package", which
tries to install/upgrade the package 'ovirt-engine' on the engine VM.
Can you try to do this manually and see if it works?

At this stage, the engine VM is on libvirt's default (private) network; you
can find its temporary address by searching the log for local_vm_ip, which
in your log is 192.168.1.173.
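
As a hedged sketch of that manual check, run from the host in Python — the
address below is the local_vm_ip taken from this particular log and will
differ per deployment:

import subprocess

local_vm_ip = '192.168.1.173'  # local_vm_ip from this log
# Try the step the playbook failed on, by hand, inside the engine VM.
result = subprocess.run(
    ['ssh', 'root@' + local_vm_ip, 'dnf', '-y', 'install', 'ovirt-engine'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True,
)
print(result.stdout)
print(result.stderr)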

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XL5XFIBV75SEV6IYTRZIYUVGMMTUO3SN/


[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-15 Thread Gilboa Davara
On Mon, Jun 15, 2020 at 9:13 AM Yedidyah Bar David  wrote:
>
> On Fri, Jun 12, 2020 at 1:49 PM Gilboa Davara  wrote:
> >
> > Hello,
> >
> > I'm trying to deploy a hosted engine on one of my test setups.
> > No matter how I tried to deploy the hosted engine, either via command line 
> > or via "Hosted Engine" deployment from the cockpit web console, it always 
> > fails with the same error message. [1]
> > Manually trying to download RPMs via dnf from the host works just fine.
> > Firewall log files are clean.
> >
> > Any idea what's going on?
> >
> > [1]  2020-06-12 06:09:38,609-0400 DEBUG 
> > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > ansible_utils._process_output:103 {'msg': "Failed to download metadata for 
> > repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': {'module_args': 
> > {'name': ['ovirt-engine'], 'state': 'present', 'allow_downgrade': False, 
> > 'autoremove': False, 'bugfix': False, 'disable_gpg_check': False, 
> > 'disable_plugin': [], 'disablerepo': [], 'down  load_only': False, 
> > 'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'installroot': '/', 
> > 'install_repoquery': True, 'install_weak_deps': True, 'security': False, 
> > 'skip_broken': False, 'update_cache': False, 'update_only': False, 
> > 'validate_certs': True, 'lock_timeout': 30, 'conf_file': None, 
> > 'disable_excludes': None, 'download_dir': None, 'list': None, 'releasever': 
> > None}}, '_ansible_no_log': False, 'changed  ': False, 
> > '_ansible_delegated_vars': {'ansible_host': 'test-vmengine.localdomain'}}
> >   2020-06-12 06:09:38,709-0400 ERROR 
> > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > ansible_utils._process_output:107 fatal: [localhost -> 
> > gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg": 
> > "Failed to download metadata for repo 'AppStream'", "rc": 1, "results": []}
> >   2020-06-12 06:09:39,711-0400 DEBUG 
> > otopi.ovirt_hosted_engine_setup.ansible_utils 
> > ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 changed: 
> > 57 unreachable: 0 skipped: 77 failed: 1
> >   2020-06-12 06:09:39,812-0400 DEBUG 
> > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215 
> > ansible-playbook rc: 2
> >   2020-06-12 06:09:39,812-0400 DEBUG 
> > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222 
> > ansible-playbook stdout:
> >   2020-06-12 06:09:39,812-0400 DEBUG 
> > otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225 
> > ansible-playbook stderr:
> >   2020-06-12 06:09:39,812-0400 DEBUG otopi.context 
> > context._executeMethod:145 method exception
> >   Traceback (most recent call last):
> > File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
> > _executeMethod
> >   method['method']()
> > File 
> > "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
> >  line 403, in _closeup
> >   r = ah.run()
> > File 
> > "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
> >  line 229, in run
> >   raise RuntimeError(_('Failed executing ansible-playbook'))
>
> This snippet does not reveal the cause of the failure, or the exact place
> where it happened. Can you please check/share the full file, as well
> as perhaps other files in /var/log/ovirt-hosted-engine-setup (and
> maybe others in /var/log)? Thanks!
>
> Best regards,
> --
> Didi
>

Hi,

Compressed tar.bz2 of ovirt-hosted-engine-setup attached.
Please let me know if you need additional log files.
(/var/log/messages seems rather empty)

- Gilboa


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MMULT7I7G2IWFZVUR46CSAJXMRVRMHLS/


[ovirt-users] Re: Problem with oVirt 4.4

2020-06-15 Thread Yedidyah Bar David
On Mon, Jun 15, 2020 at 10:39 AM  wrote:
>
> We have met a problem when testing oVirt 4.4.
>
> Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we 
> created snapshot 1 and then snapshot 2, but after clicking the delete button 
> of snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding 
> disk became illegal. Removing the snapshot in this state requires a lot of 
> risky work in the background, leading to the inability to free up snapshot 
> space. Long-term backups will cause the target VM to create a large number of 
> unrecoverable snapshots, thus taking up a large amount of production storage. 
> So we need your help.

Can you please share relevant parts of engine and vdsm logs? Perhaps
open a bug and attach all of them, just in case.

Thanks!
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/POOGPBP37ZQCGRGG5GVRXLTURHCPHQYY/


[ovirt-users] Problem with oVirt 4.4

2020-06-15 Thread minnie . du
We have met a problem when testing oVirt 4.4.

Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we 
created snapshot 1 and then snapshot 2, but after clicking the delete button of 
snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding disk 
became illegal. Removing the snapshot in this state requires a lot of risky 
work in the background, leading to the inability to free up snapshot space. 
Long-term backups will cause the target VM to create a large number of 
unrecoverable snapshots, thus taking up a large amount of production storage. 
So we need your help. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVTM3EVGOY7QCY2X5EPEYKWRCJZ6MP4B/