[ovirt-users] Failover IP on Hetzner

2022-08-09 Thread Tommaso - Shellrent via Users

Hi to all.

We want to configure a failover IP on a Hetzner host. Has anyone been
able to make it work?


The failover IP does not come with a MAC address, so it can't work on a
bridged network. Hetzner suggests configuring it on a routed network.
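
As a rough, hypothetical sketch only (not from this thread): in a routed
setup the host keeps the failover IP off the bridge and simply routes it
towards the guest. Here ovirtmgmt and 203.0.113.10 are placeholder names
for the VM-facing bridge and the failover IP:

# hypothetical routed setup on the oVirt host (placeholder names/addresses)
sysctl -w net.ipv4.ip_forward=1              # let the host forward the routed traffic
ip route add 203.0.113.10/32 dev ovirtmgmt   # send the failover IP towards the VM's network
# inside the guest, the failover IP is then configured on its interface as a /32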


On oVirt, how can we make this work?


Regards,
Tommaso.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5VCKIEVSFCM4W7EUKBSYIGW5PQCWAGF/


[ovirt-users] Re: Unable to migrate VMs to a newly upgraded Ovirt node host

2022-02-25 Thread Tommaso - Shellrent via Users

Hi to all, we have the same problem.

We are trying to migrate some VMs from:

qemu 6.1 /libvirt 7.8

to

qemu 6.0 /libvirt 7.10

We cannot stay on qemu 6.1, because one of these VMs crashes on snapshot removal.

How can we get a stable version of qemu (6.0?) together with working live
migration?



On 16/12/2021 12:13, Sandro Bonazzola wrote:
+Arik Hadas, can you suggest a workaround 
here? Giving context:


4.4.9.1 shipped qemu-kvm 6.1 and libvirt 7.9.0
4.4.9.2 downgraded qemu-kvm to 6.0 due to a PCIe bug
4.4.9.3 upgraded libvirt to 7.10 as it was released in CentOS Stream.

So it seems to be a migration problem between 6.1/7.9 -> 6.0/7.10.
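
(As a side note, not from the original message: a quick way to confirm which
stack each host actually runs before migrating is a standard package query on
both the source and the destination host.)

# run on both source and destination host to compare the stacks (sketch)
rpm -q qemu-kvm libvirt-daemon vdsm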

qemu-kvm 6.2 got released upstream 
https://www.qemu.org/2021/12/14/qemu-6-2-0/ but it's not yet available 
in CentOS Stream.


On Thu, Dec 16, 2021 at 11:52 Giulio Casella wrote:


Hi guys,
I just faced a problem after updating a host: I cannot migrate VMs to the
updated host.
Here's the error I see trying to migrate a VM to that host.

Dec 16 10:13:11 host01.ovn.di.unimi.it systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 16 10:13:11 host01.ovn.di.unimi.it libvirtd[5667]: Unable to read from monitor: Connection reset by peer
Dec 16 10:13:11 host01.ovn.di.unimi.it libvirtd[5667]: internal error: qemu unexpectedly closed the monitor: 2021-12-16T10:13:00.447480Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2021-12-16T10:13:11.158057Z qemu-kvm: Failed to load pckbd:kbd
2021-12-16T10:13:11.158114Z qemu-kvm: error while loading state for instance 0x0 of device 'pckbd'
2021-12-16T10:13:11.158744Z qemu-kvm: load of migration failed: No such file or directory
Dec 16 10:13:11 host01.ovn.xx.x.it kvm[35663]: 0 guests now active

I can, however, start VMs on that host, and migrate VMs away from that
host.

Rolling back to ovirt-node-ng-4.4.9.1-0.20211207.0+1 via host console
restores full functionality.

The affected version is ovirt-node-ng-4.4.9.3-0.20211215.0+1 (and also the
previous one, I don't remember precisely, it was another async
release).


Any ideas?

TIA,
gc

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJEZGEW6X77JGBMTORJ4UKTSX3BG27HX/



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

 

*Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.*

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WA6LG4SQDHIIXBG2K7N3BGNUMJMLDR5J/

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RROB77QI2BZSGQJWBPKAIF5Y23X7JU7E/


[ovirt-users] vdsm-client delete_checkpoints

2021-12-20 Thread Tommaso - Shellrent via Users

Hi, can someone give us an example of the command

vdsm-client VM delete_checkpoints

?

We have tried a lot of combinations like:

vdsm-client VM delete_checkpoints 
vmID="ce5d0251-e971-4d89-be1b-4bc28283614c" 
checkpoint_ids=["e0c56289-bfb3-4a91-9d33-737881972116"]


without success.
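
A sketch of the invocation we would expect to work, assuming vdsm-client
parses each name=value argument as JSON (so the checkpoint list has to reach
it as literal JSON, which means single-quoting it in the shell):

# sketch: single-quote the JSON list so the shell does not strip the double quotes
vdsm-client VM delete_checkpoints \
    vmID=ce5d0251-e971-4d89-be1b-4bc28283614c \
    checkpoint_ids='["e0c56289-bfb3-4a91-9d33-737881972116"]'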


Regards,

Tommaso.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VHUBTIQLQGUHMSBHX3MBNT2LU3RLMVLX/


[ovirt-users] Re: Parent checkpoint ID does not match the actual leaf checkpoint

2021-05-26 Thread Tommaso - Shellrent via Users

After some investigation we have found that:

via virsh our VM has 71 checkpoints;

in the engine's DB, the table vm_checkpoints contains ZERO checkpoints.

Is there a way to sync the checkpoints?
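
For reference, a sketch of how the two sides can be compared (standard
libvirt and PostgreSQL tooling; names in angle brackets are placeholders,
and the engine database name is assumed to be the default "engine"):

# checkpoints known to libvirt for the VM (read-only connection)
virsh -r checkpoint-list <vm-name>
# checkpoints known to the engine (run on the engine host)
sudo -u postgres psql engine -c "select count(*) from vm_checkpoints where vm_id = '<vm-uuid>';"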


Regards,
Tommaso

On 26/05/2021 10:27, Tommaso - Shellrent via Users wrote:


Hi to all. We have almost the same problem.

After restoring a snapshot to a checkpoint taken before the last 
incremental backup, we always get the error "Parent checkpoint ID does 
not match the actual leaf checkpoint".

We also have "parent_checkpoint_id: None".

Is there a way to fix it on a production environment?

Our engine version is 4.4.5.11 and vdsm is 4.40.50.





On 19/07/2020 23:27, Nir Soffer wrote:
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński <l.kolacin...@storware.eu> wrote:


Hello,
Thanks to previous answers, I was able to make backups.
Unfortunately, we had some infrastructure issues and after the
host reboots new problems appeared. I am not able to do any
backup using the commands that worked yesterday. I looked through
the logs and there is something like this:

2020-07-17 15:06:30,644+02 ERROR
[org.ovirt.engine.core.bll.StartVmBackupCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
[944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM
backup operation 'StartVmBackup': {}:
org.ovirt.engine.core.common.errors.EngineException:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
StartVmBackupVDS, error = Checkpoint Error:
{'parent_checkpoint_id': None, 'leaf_checkpoint_id':
'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
'116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent
checkpoint ID does not match the actual leaf checkpoint*'},
code = 1610 (Failed with error unexpected and code 16)


It looks like engine sent:

    parent_checkpoint_id: None

This issue was fixed in the engine a few weeks ago.

Which engine and vdsm versions are you testing?

at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)


And the last error is:

2020-07-17 15:13:45,835+02 ERROR
[org.ovirt.engine.core.bll.StartVmBackupCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
[f553c1f2-1c99-4118

[ovirt-users] Re: Parent checkpoint ID does not match the actual leaf checkpoint

2021-05-26 Thread Tommaso - Shellrent via Users

Hi to all. We have almost the same problem.

After restoring a snapshot to a checkpoint taken before the last incremental 
backup, we always get the error "Parent checkpoint ID does not match the 
actual leaf checkpoint".

We also have "parent_checkpoint_id: None".

Is there a way to fix it on a production environment?

Our engine version is 4.4.5.11 and vdsm is 4.40.50.





On 19/07/2020 23:27, Nir Soffer wrote:
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński <l.kolacin...@storware.eu> wrote:


Hello,
Thanks to previous answers, I was able to make backups.
Unfortunately, we had some infrastructure issues and after the
host reboots new problems appeared. I am not able to do any backup
using the commands that worked yesterday. I looked through the
logs and there is something like this:

2020-07-17 15:06:30,644+02 ERROR
[org.ovirt.engine.core.bll.StartVmBackupCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
[944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM
backup operation 'StartVmBackup': {}:
org.ovirt.engine.core.common.errors.EngineException:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
StartVmBackupVDS, error = Checkpoint Error:
{'parent_checkpoint_id': None, 'leaf_checkpoint_id':
'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
'116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent
checkpoint ID does not match the actual leaf checkpoint*'},
code = 1610 (Failed with error unexpected and code 16)


It looks like engine sent:

    parent_checkpoint_id: None

This issue was fixed in the engine a few weeks ago.

Which engine and vdsm versions are you testing?

at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)


And the last error is:

2020-07-17 15:13:45,835+02 ERROR
[org.ovirt.engine.core.bll.StartVmBackupCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
[f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM
backup operation 'GetVmBackupInfo': {}:
org.ovirt.engine.core.common.errors.EngineException:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
 

[ovirt-users] Maximum domains per data center

2020-10-13 Thread Tommaso - Shellrent via Users

Hi to all.

Can someone confirm the maximum number of domains per data center on 
oVirt 4.4?


We found only this for RHEV: https://access.redhat.com/articles/906543

Regards,
Tommaso.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQFGCDRW2FF7EJRD77OZ6AEN4VXRWLIN/


[ovirt-users] export to ova

2020-09-25 Thread Tommaso - Shellrent via Users

Hi to all.

I'll try asking the same question once more:

In our tests oVirt seems to be able to run only one export to OVA at a 
time, even across different hosts and data centers.


Can someone explain why? This is a big issue for us, because we 
use it in a backup script covering more than 50 VMs and counting.
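
(A sketch, assuming shell access to the engine host: the serialization is
easy to see by watching the playbook processes the engine spawns.)

# list the ansible-playbook processes spawned by the engine (sketch)
watch -n 10 'pgrep -fa ansible-playbook'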


We have also already opened a bug, without any useful response so far: 
https://bugzilla.redhat.com/show_bug.cgi?id=1855782


Regards,

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3MUV776V2EMSCNIZWUG7P2TNMLQBPYX4/


[ovirt-users] Re: Error exporting into ova

2020-08-31 Thread Tommaso - Shellrent via Users
Hi Gianluca, we have read the message from Jayme, but our problem is with 
the INTERNAL Ansible playbook.


When we run an export, oVirt internally runs the playbook 
/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml


but only one export runs at a time.

On 27/08/2020 11:15, Gianluca Cecchi wrote:
On Tue, Aug 25, 2020 at 3:29 PM Tommaso - Shellrent <tomm...@shellrent.com> wrote:


Hi Gianluca.

We have a problem with "export as ova" too.

In our env we back up all the VMs with a Python script that runs an
export.

If we run multiple exports at the same time, even on different
data centers [but the same engine], they wait for each other and do not
run in async mode.

If only one of those exports takes 10h, all of them take 10+h too.

It seems that all the exports have to finish a step of the playbook to go
on, but we see only one "ansible-playbook" process at a time.

Have you got any hint?


Regards,
Tommaso.

My post was one year old.
In the meantime you can check these two posts in archives by Jayme (I 
cc him):

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/U65CV5A6WC6SCB2R5N66Y7HPXQ3ZQT2H/#FAVZG32TPSX67DTXIHMGIQXUXNG3W3OE
and
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNSY6GYNS6LPNUJXERUO2EOG5F3P7B2M/

In the first post it appears that by default the exports were of the 
fire-and-forget type (which is what you need), so he included a wait_for 
task to make them sequential instead.
I think you can borrow from his github playbooks and adapt to your needs.
Or, Jayme, you could apply a general change where you set a variable 
(e.g. sequential_flow) that is true by default, and then modify the 
export_vm.yml playbook in the task


- name: "Wait for export"

with the wait condition becoming of kind

when: (export_result is not failed) or (sequential_flow|bool == False)

This way by default the play workflow remains unchanged, but a user 
can set sequential_flow to False

What do you think about it?
Gianluca


--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2SGXT666QAOFF5LD76KL7CXMJW2P73CG/


[ovirt-users] Re: Error exporting into ova

2020-08-25 Thread Tommaso - Shellrent via Users

Hi Gianluca.

We have a problem with "export as ova" too.

In our env we back up all the VMs with a Python script that runs an export.

If we run multiple exports at the same time, even on different 
data centers [but the same engine], they wait for each other and do not run in 
async mode.


If only one of those exports takes 10h, all of them take 10+h too.

It seems that all the exports have to finish a step of the playbook to go on, 
but we see only one "ansible-playbook" process at a time.


Have you got any hint?


Regards,
Tommaso.

On 23/07/2019 11:21, Gianluca Cecchi wrote:
On Fri, Jul 19, 2019 at 5:59 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

On Fri, Jul 19, 2019 at 4:14 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

On Fri, Jul 19, 2019 at 3:15 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:



In engine.log the first error I see is 30 minutes after start

2019-07-19 12:25:31,563+02 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engineScheduled-Thread-64)
[2001ddf4] Ansible playbook execution failed: Timeout
occurred while executing Ansible playbook.


In the meantime, since the playbook seems to be this one (I run the
job from the engine):
/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml


Based on what is described in bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1697301
I created, for the moment, the file
/etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
with
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
and restarted the engine and the Python script to verify.

Just to see if it completes; in my case, with a 30 GB preallocated disk,
the underlying problem is that the qemu-img convert command is
very slow in I/O.
It reads from iSCSI multipath (2 paths) at 2x3 MB/s and writes to NFS.
If I run a dd command from the iSCSI device-mapper device to an NFS
file I get a 140 MB/s rate, which is what I expect based on my storage
array performance and my network.

I have not understood why the qemu-img command is so slow.
The question still applies in case I have to build an appliance from
a VM with a very big disk, where the copy could potentially take
more than 30 minutes...
Gianluca


I confirm that setting  ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT was the 
solution.
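
(For reference, a minimal sketch of applying that override; the file path,
variable and value are the ones from this thread, while the restart command
assumes a standalone, systemd-managed engine.)

# raise the engine's Ansible playbook timeout to 80 minutes (sketch)
cat > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf <<'EOF'
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
EOF
systemctl restart ovirt-engine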

I got the ova completed:

Starting to export Vm enginecopy1 as a Virtual Appliance 7/19/19 
5:53:05 PM
Vm enginecopy1 was exported successfully as a Virtual Appliance to 
path /save_ova/base/dump/myvm2.ova on Host ov301 7/19/19 6:58:07 PM


I have to understand why the conversion of the preallocated disk is 
so slow, because simulating I/O from the iSCSI LUN where the VM disks live to 
the NFS share gives me about 110 MB/s.
I'm going to update to 4.3.4, just to see if any bug has been fixed. 
The same operation on vSphere takes about 5 minutes.

What is the ETA for 4.3.5?

One note:
if I manually create a snapshot of the same VM and then clone the 
snapshot, the process is this one:
vdsm      5713 20116  6 10:50 ?        00:00:04 /usr/bin/qemu-img 
convert -p -t none -T none -f raw 
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b 
-O raw -W 
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/d13a5c43-0138-4cbb-b663-f3ad5f9f5983/fd4e1b08-15fe-45ee-ab12-87dea2d29bc4


and its speed is quite a bit better (up to 100 MB/s read and 100 MB/s write), 
with a total elapsed time of 6 minutes and 30 seconds.


during the ova generation the process was instead:
 vdsm     13505 13504  3 14:24 ?        00:01:26 qemu-img convert -T 
none -O qcow2 
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b 
/dev/loop0


Could the "-O qcow2" be the reason? Why qcow2 if the origin is 
preallocated (raw)?
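
(A sketch of how the two conversion modes could be compared in isolation;
the paths are placeholders and the flags mirror the two commands quoted above.)

# raw -> raw, the fast path seen when cloning a snapshot (placeholder paths)
time qemu-img convert -p -t none -T none -f raw /path/to/source-volume -O raw /mnt/scratch/test.raw
# raw -> qcow2, the path used by the OVA export (placeholder paths)
time qemu-img convert -p -T none -f raw /path/to/source-volume -O qcow2 /mnt/scratch/test.qcow2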


Gianluca

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UFB6ZACCJIN3DSRL4WJX4JLPVM2NSQEO/

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 

[ovirt-users] Re: Max running ansible-playbook

2020-08-21 Thread Tommaso - Shellrent via Users

Hi. No, we use a simple Python script that triggers one single export.

We have a lot of single-server data centers with local storage, and in 
all of them we run this script almost simultaneously.


On 20/08/2020 11:17, Arik Hadas wrote:



On Tue, Aug 18, 2020 at 10:49 AM Tommaso - Shellrent via Users <users@ovirt.org> wrote:


Does no one have this kind of problem?

We use OVA export to make backups, but the whole task takes a
lot of time because only one ansible-playbook
"/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml" runs at
a time.


How do you trigger the export operations?
Do you select multiple VMs/templates in the webadmin and call export 
to OVA?


On 13/08/2020 10:14, Tommaso - Shellrent via Users wrote:


Hi to all.

On our engine we always see at most 1 ansible-playbook running at the
same time.

How can we increase this value?


Regards.

-- 
-- 		

Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGST4GFRRAKWZTJ5FJGBO3MGTL233WWK/
-- 
-- 		

Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KKKLWCHKCHZNJVKUEBGP4JVABREJ7NK4/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JW7MYWZJ352UPOKWFX32Y76SCB5V2IA5/

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4OENWVTME5JIPZAMPWORAKV35BSK6JA/


[ovirt-users] Re: Max running ansible-playbook

2020-08-18 Thread Tommaso - Shellrent via Users

Does no one have this kind of problem?

We use OVA export to make backups, but the whole task takes a lot of 
time because only one ansible-playbook 
"/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml" runs at the same 
time.


On 13/08/2020 10:14, Tommaso - Shellrent via Users wrote:


Hi to all.

On our engine we always see at most 1 ansible-playbook running at the same 
time.


How can we increase this value?


Regards.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGST4GFRRAKWZTJ5FJGBO3MGTL233WWK/

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KKKLWCHKCHZNJVKUEBGP4JVABREJ7NK4/


[ovirt-users] Max running ansible-playbook

2020-08-13 Thread Tommaso - Shellrent via Users

Hi to all.

On our engine we always see at most 2 ansible-playbook processes running at the same time.

How can we increase this value?


Regards.

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGST4GFRRAKWZTJ5FJGBO3MGTL233WWK/


[ovirt-users] Mount options

2020-06-01 Thread Tommaso - Shellrent via Users

Hi to all.

Is there a way to change the mount options of a running storage 
domain with Gluster, without putting everything into maintenance and shutting 
down the VMs on it?


Regards,

--
--  
Shellrent - Il primo hosting italiano Security First

*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155  | Fax 04441492177

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S3EQRIJVFBC5CQDHYLQA7AGA2EMFUC24/