Hi Yaniv,

Thanks for your detailed reply; it's very much appreciated.

> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul <yk...@redhat.com> wrote:
> 
> Indeed, greenfield deployment has its advantages.
> 
>> The down side to that is juggling iSCSI LUNs; I'll have to migrate VMs on 
>> XenServer off one LUN at a time, remove that LUN from XenServer and add it to 
>> oVirt as new storage, and continue - but if that's what has to be done, we'll 
>> do it.
> 
> The migration of VMs has three parts:
> - VM configuration data (from name to number of CPUs, memory, etc.)

That's not too much of an issue for us; we have a pretty standard set of 
configurations for performance / sizing.
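
If it comes to scripting that side ourselves, I'd imagine something like the 
SDK's add-VM call would cover our standard sizings. A minimal, untested sketch; 
the engine URL, credentials, cluster and sizes below are just placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (URL/credentials are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Create an empty VM shell with one of our standard sizings.
vms_service = connection.system_service().vms_service()
vms_service.add(
    types.Vm(
        name='db01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='Blank'),
        memory=16 * 2**30,  # 16 GiB
        cpu=types.Cpu(topology=types.CpuTopology(sockets=1, cores=8)),
    )
)

connection.close()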

> - Data - the disks themselves.

This is the big one. For most hosts, at least, the data is on a dedicated 
logical volume; for example, if it's PostgreSQL, it would be LUKS on top of a 
logical volume for /var/lib/pgsql, and so on.

> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)

Everything is currently PVHVM, which uses standard grub2; you could literally 
dd any one of our VMs to a physical disk and boot it on any x86-64 machine.

> The first item could be automated. Unfortunately, it was a bit of a challenge 
> to find a common automation platform. For example, we have great Ansible 
> support, which I could not find for XenServer (but [1], which may be a bit 
> limited). Perhaps if there aren't too many VMs, this could be done manually. 
> If you use Foreman, btw, then it could probably be used for both to provision?
> The 2nd - data movement could be done in at least two-three ways:
> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
> 2. (My preferred option), copy using 'dd' from LUN/LV/raw and upload using 
> the oVirt upload API (example in Python[2]). I think that's an easy to 
> implement option and provides the flexibility to copy from pretty much any 
> source to oVirt.

A key thing here would be how quickly the oVirt API can ingest the data. Our 
storage LUNs are 100% SSD; each LUN can easily provide at least 1,000 MB/s, 
around 2M 4k write IOPS and 2-4M 4k read IOPS, so we always find the 
hypervisor's disk virtualisation mechanisms to be the bottleneck. Adding an API 
to the mix, especially one that is single-threaded (if that is what does the 
data stream processing), could be a big performance problem.
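
To get a feel for option 2, my rough (untested) reading of the upload_disk.py 
example [2] is below: create a raw disk, start an image transfer, then stream 
the source LV to the imageio proxy over HTTPS. The engine URL, credentials, 
source LV, size and names are placeholders. Since each transfer is per-disk, 
I'd hope we could run several uploads in parallel if a single stream can't keep 
the LUN busy.

import ssl
import time
from http.client import HTTPSConnection
from urllib.parse import urlparse

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

SOURCE = '/dev/vg_guests/db01_pgsql'   # placeholder: source LV on the old side
SIZE = 100 * 2**30                     # placeholder: size of the source device
BUF_SIZE = 8 * 2**20                   # large reads to keep the SSD LUN busy

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

# Create a raw, preallocated disk the same size as the source device.
disks_service = system_service.disks_service()
disk = disks_service.add(
    types.Disk(
        name='db01_pgsql',
        format=types.DiskFormat.RAW,
        provisioned_size=SIZE,
        sparse=False,
        storage_domains=[types.StorageDomain(name='data_ssd_01')],
    )
)
disk_service = disks_service.disk_service(disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(1)

# Start an image transfer for the new disk and wait for it to become active.
transfers_service = system_service.image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id=disk.id))
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# Stream the raw device to the imageio proxy with a plain HTTPS PUT.
url = urlparse(transfer.proxy_url)
context = ssl.create_default_context(cafile='ca.pem')
proxy = HTTPSConnection(url.hostname, url.port, context=context)
proxy.putrequest('PUT', url.path)
proxy.putheader('Content-Length', '%d' % SIZE)
proxy.endheaders()

with open(SOURCE, 'rb') as src:
    sent = 0
    while sent < SIZE:
        chunk = src.read(min(BUF_SIZE, SIZE - sent))
        if not chunk:
            break
        proxy.send(chunk)
        sent += len(chunk)

proxy.close()
transfer_service.finalize()
connection.close()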

> 3. There are ways to convert XVA to qcow2 - I saw some references on the 
> Internet, never tried any.

This is something I was thinking of potentially doing. I can actually export 
each VM as an OVF/OVA package; since that's very standard, I'm assuming oVirt 
can likely import them and convert to qcow2 or raw/LVM?

> 
> As for the last item, I'm really not sure what changes are needed, if at all. 
> I don't know the disk convention, for example (/dev/sd* for SCSI disk -> 
> virtio-scsi, but are there other device types?)

Xen's virtual disks are all /dev/xvd[a-z].
Thankfully, we partition everything as LVM, and the partitions (other than 
/boot, I think) are mounted as such.
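
Before cutting a guest over I'll probably just scan it for hard-coded 
/dev/xvd* references in fstab/grub/crypttab, which should confirm nothing 
needs changing beyond /boot, if that. A quick sketch (file paths are 
assumptions; run inside the guest or against a mounted image):

import re

# Xen device names, with an optional partition number, e.g. /dev/xvda1.
# LVM paths and UUID= entries are fine after the move to virtio/virtio-scsi.
XVD = re.compile(r'/dev/xvd[a-z][0-9]*')

def scan(path):
    hits = []
    try:
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                if XVD.search(line) and not line.lstrip().startswith('#'):
                    hits.append((lineno, line.rstrip()))
    except FileNotFoundError:
        pass
    return hits

for path in ('/etc/fstab', '/etc/default/grub',
             '/boot/grub2/grub.cfg', '/etc/crypttab'):
    for lineno, line in scan(path):
        print('%s:%d: %s' % (path, lineno, line))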

> 
> I'd be happy to help with any adjustment needed for the Python script below.

Very much appreciated. When I get to the point where I'm happy with the basic 
architectural design and the POC deployment of oVirt, that's when I'll be 
testing importing VMs / data in various ways; I've made a note of these 
scripts.

> 
> Y.
> 
> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py