On Sun, Aug 30, 2020 at 7:13 PM <tho...@hoberg.net> wrote:
> Struggling with bugs and issues on OVA export/import (my clear favorite 
> otherwise, especially when moving VMs between different types of 
> hypervisors), I've tried pretty much everything else, too.
> Export domains are deprecated and require quite a bit of manual handling. 
> Unfortunately the buttons for the various operations are all over the place 
> e.g. the activation and maintenance toggles are in different pages.

Using export domain is not a single click, but it is not that complicated.
But this is good feedback anyway.

> In the end the mechanisms underneath (qemu-img) seem very much the same and 
> suffer from the same issues (I have larger VMs that keep failing on imports).

I think the issue is gluster, not qemu-img.

> So far the only fool-proof method has been to use the imageio daemon to 
> upload and download disk images, either via the Python API or the Web-GUI.

How did you transfer the images? Transfer via the UI is completely different from transfer using the Python API.

From the UI, you get the image content as stored, without sparseness support. If you download a 500g raw sparse disk (e.g. gluster with thin allocation policy) containing 50g of data and 450g of unallocated space, you will get the 50g of data plus 450g of zeroes. This is very slow. If you then upload the image to another system you will upload the full 500g of data, which will again be very slow.
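To make the sparseness point concrete, here is a small local illustration (not the imageio client itself): a sparse file has a large apparent size but only its written data allocates real blocks, and a download that ignores sparseness transfers the full apparent size.

```python
import os
import tempfile

# Illustration only: a "raw sparse" disk, scaled down to MiB.
# The apparent size is what a naive (non-sparse) download transfers;
# the allocated size is what is actually stored.

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(500 * 1024 * 1024)       # 500 MiB apparent size, all hole
    f.seek(0)
    f.write(b"x" * (50 * 1024 * 1024))  # 50 MiB of actual data

st = os.stat(path)
apparent = st.st_size                   # 500 MiB
allocated = st.st_blocks * 512          # ~50 MiB on sparse-aware filesystems

print(f"apparent:  {apparent // 1024**2} MiB")
print(f"allocated: {allocated // 1024**2} MiB")
os.remove(path)
```

On a filesystem without sparse-file support the allocated size would equal the apparent size, which is exactly the situation a sparseness-unaware transfer recreates on the wire.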

From the Python API, download and upload support sparseness, so you will download and upload only the 50g. Both upload and download use 4 connections, so you can maximize the throughput you can get from the storage. The Python API can also convert the image format automatically during download/upload, for example downloading a raw disk to qcow2.
Gluster is a challenge (as usual): when using sharding (enabled by default for oVirt), it does not report sparseness, so even from the Python API you will download the entire 500g. We could improve this with zero detection, but that is not implemented yet.
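The zero detection mentioned above could, in principle, look something like this sketch (my illustration, not code from imageio): scan the stream in fixed-size chunks and skip chunks that are entirely zero, so holes that gluster fails to report are not transferred as data.

```python
CHUNK = 64 * 1024
ZERO_CHUNK = bytes(CHUNK)

def data_extents(buf):
    """Yield (offset, chunk) pairs only for chunks containing data."""
    for off in range(0, len(buf), CHUNK):
        chunk = buf[off:off + CHUNK]
        # Comparing against a preallocated zero block is a fast way
        # to test "all zeroes" in Python.
        if chunk != ZERO_CHUNK[:len(chunk)]:
            yield off, chunk

image = bytearray(4 * CHUNK)               # 4 chunks of zeroes
image[2 * CHUNK:2 * CHUNK + 5] = b"hello"  # one chunk holds data

extents = list(data_extents(bytes(image)))
# Only the chunk containing data would be uploaded;
# the zero chunks become holes on the destination.
```

The cost is reading and scanning every byte on the source side, which is why it helps with transfer size but not with local read time.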

> Transfer times are terrible though, 50MB/s is quite low when the network 
> below is 2.5-10Gbit and SSDs all around.

In our lab we tested an upload of a single 100 GiB image and 10 concurrent uploads of 100 GiB images, and we measured a throughput of 1 GiB/s.
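For reference, the raw line rates of the networks you mention work out as follows (simple arithmetic, ignoring protocol overhead):

```python
# Convert a network line rate in Gbit/s to MiB/s:
# divide bits/s by 8 to get bytes/s, then by 2**20 for MiB.
def line_rate_mib_per_s(gbit):
    return gbit * 1e9 / 8 / 2**20

for gbit in (2.5, 10):
    print(f"{gbit:>4} Gbit/s ~= {line_rate_mib_per_s(gbit):.0f} MiB/s")

# 50 MB/s is a small fraction of even a 2.5 Gbit link, which points
# at a bottleneck in storage or the transfer path, not the wire.
```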

I would like to understand the setup better:

- upload or download?
- disk format?
- disk storage?
- how is storage connected to host?
- how do you access the host (1g network? 10g?)
- image format?
- image storage?

> Obviously with Python as everybody's favorite GUI these days, you can also 
> copy and transfer the VMs complete definition, but I am one of those old 
> guys, who might even prefer a real GUI to mouse clicks on a browser.
> The documentation on backup domains is terrible. What's missing behind the 
> 404 link in oVirt becomes a very terse section in the RHV manuals, where 
> you're basically just told that after cloning the VM, you should then move 
> its disks to the backup domain...

The backup domain is a half-baked feature and it is not very useful. There is no reason to use it for moving VMs from one environment to another.

I already explained how to move VMs using a data domain. Check here:

I'm not sure it is documented properly; please file a documentation bug if we need to add something to the documentation.

> What you are then supposed to do with the cloned VM, if it's ok to simply 
> throw it away, because the definition is silently copied to the OVF_STORE on 
> the backup... none of that is explained or mentioned.

If you clone a VM to a data domain and then detach the data domain, there is nothing to clean up in the source system.

> There is also no procedure for restoring a machine from a backup domain, when 
> really a cloning process that allows a target domain would be pretty much 
> what I'd vote for.

We have this in 4.4: try selecting a VM and clicking "Export".

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html