On Fri, Jan 5, 2018 at 12:19 AM, Sam McLeod <mailingli...@smcleod.net>
wrote:

> Thanks for your response Yaniv,
>
>
>>> Context: Investigating migration from XenServer to oVirt (4.2.0)
>>>
>>
>> A very interesting subject - would love to see the outcome!
>>
>
> I'll certainly be writing one if not many blog posts on the process and
> outcome :)
>
> We've been wanting to switch to something more 'modern' for a while, but
> XenServer has had a very low TCO for us. Sure, it doesn't perform as well as
> a Xen/KVM setup on top of CentOS/RHEL with updated kernels, tuning etc., but
> it just kept working. Meanwhile we lost some people in my team, so it hasn't
> been the right time to look at moving... until now...
>
> Citrix / XenServer recently screwed over the community (I don't use that
> term lightly) by kneecapping the free / unlicensed version of XenServer:
> https://xenserver.org/blog/entry/xenserver-7-3-changes-to-the-free-edition.html
>
> There's a large number of people very unhappy about this, as many of the
> people that contribute heavily to bug reporting, testing and rapid / modern
> deployment lifecycles were / are using the unlicensed version (like us over
> @infoxchange), so for us - this was the straw that broke the camel's back.
>
> I've been looking into various options such as oVirt, Proxmox, OpenStack
> and a roll-your-own libvirt style platform based on our CentOS (7 at
> present) SOE, so far oVirt is looking promising.
>
>
>>
>>>
>>> All our iSCSI storage is currently attached to XenServer hosts,
>>> XenServer formats those raw LUNs with LVM and VMs are stored within them.
>>>
>>
>> I suspect we need to copy the data. We might be able to do some tricks,
>> but at the end of the day I think copying the data, LV to LV, makes the
>> most sense.
>> However, I wonder what else is needed - do we need a conversion of the
>> drivers, different kernel, etc.?
>>
>
> All our Xen VMs are PVHVM, so there's no reason we couldn't export them as
> files, then import them to oVirt if we do go down the oVirt path after the
> POC.
> We run kernel-ml across our fleet (almost always running near-latest
> kernel release) and automate all configuration with Puppet.
>
> The issue I have with this is that it will be slow - XenServer's storage
> performance is *terrible* and there'd be lots of manual work involved.
>
> If this were to be the simplest option, I think we'd opt for rebuilding
> VMs from scratch, letting Puppet set up their config etc., then restoring
> data from backups / rsync etc. That way we'd still be performing the
> manual work - but we'd end up with nice clean VMs.
>

Indeed, greenfield deployment has its advantages.

>
> The down side to that is juggling iSCSI LUNs, I'll have to migrate VMs on
> XenServer off one LUN at a time, remove that LUN from XenServer and add it
> to oVirt as new storage, and continue - but if it's what has to be done,
> we'll do it.
>

The migration of VMs has three parts:
- VM configuration data (from name to number of CPUs, memory, etc.)
- Data - the disks themselves.
- Adjusting VM internal data (paths, boot kernel, grub?, etc.)

The first item could be automated. Unfortunately, it was a bit of a
challenge to find a common automation platform. For example, we have great
Ansible support, which I could not find for XenServer (apart from [1], which
may be a bit limited). Perhaps if there aren't too many VMs, this could be
done manually. If you use Foreman, btw, it could probably be used to
provision both?
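For the configuration part, a small translation layer between the two APIs may be all that's needed. A minimal sketch - the Xen-side field names follow what `xe vm-param-list` reports, and the helper itself is a hypothetical illustration, not a tested tool:

```python
# Hypothetical sketch: translate a XenServer VM record (fields as reported
# by `xe vm-param-list`) into the basic settings you would hand to the
# oVirt SDK when creating the VM. The mapping below is an assumption.

def xen_to_ovirt_vm(xen_record):
    """Map basic XenServer VM parameters to oVirt VM settings."""
    return {
        "name": xen_record["name-label"],
        # XenServer reports memory sizes in bytes.
        "memory": int(xen_record["memory-static-max"]),
        "cpu_sockets": int(xen_record["VCPUs-max"]),
        "comment": xen_record.get("name-description", ""),
    }
```

The returned dict would then feed the VM-creation call in the oVirt SDK, alongside cluster and template choices that have no XenServer counterpart.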
The 2nd item - data movement - could be done in at least two or three ways:
1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload using
the oVirt upload API (example in Python[2]). I think that's an easy-to-implement
option and provides the flexibility to copy from pretty much any
source to oVirt.
3. There are ways to convert XVA to qcow2 - I saw some references on the
Internet, but never tried any.
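Options 1 and 2 both boil down to the same loop: stream the LV contents in fixed-size chunks, which is exactly what 'dd' does. A minimal sketch of that loop, using ordinary file paths as stand-ins for the LV and the destination - in option 2 each chunk would go to the oVirt image-transfer URL (as in [2]) instead of a local write:

```python
# Sketch of the dd-style copy behind options 1 and 2. In a real migration
# src_path would be the source LV's device path, and each chunk would be
# sent to the oVirt upload API rather than written to a local file.
import hashlib

CHUNK = 4 * 1024 * 1024  # 4 MiB reads, roughly `dd bs=4M`

def stream_copy(src_path, dst_path):
    """Copy src to dst in CHUNK-sized reads; return a checksum of the data."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            digest.update(chunk)
            dst.write(chunk)  # option 2: PUT this chunk to the transfer URL
    return digest.hexdigest()
```

Keeping a running checksum, as above, gives a cheap way to verify the copied volume against the source afterwards.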

As for the last item, I'm really not sure what changes are needed, if any.
I don't know the disk naming convention, for example (/dev/sd* for a SCSI
disk -> virtio-scsi, but are there other device types?)
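Whatever the device model ends up being, mounts pinned to Xen's /dev/xvd* names will break after the move; a pre-flight pass over each guest's /etc/fstab can at least flag them. A hypothetical helper, not part of any migration tool - mounting by UUID or LABEL sidesteps the rename entirely:

```python
# Hypothetical pre-flight check: flag fstab entries that mount by Xen's
# /dev/xvd* device names, which will not exist once the disk is presented
# as virtio-scsi (/dev/sd*) or virtio-blk (/dev/vd*) under oVirt/KVM.

def risky_fstab_lines(fstab_text):
    """Return fstab lines whose device field is a /dev/xvd* path."""
    risky = []
    for line in fstab_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            device = stripped.split()[0]
            if device.startswith("/dev/xvd"):
                risky.append(line)
    return risky
```

Entries already mounted by UUID=, LABEL=, or LVM paths pass through untouched, since those names survive the change of hypervisor.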

I'd be happy to help with any adjustment needed for the Python script below.

Y.

[1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
[2]
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py


>
>
>> What are the export options Xen provides? Perhaps OVF?
>> Is there an API to stream the disks from Xen?
>> Y.
>>
>
> Yes, Xen does have an API, but TBH - it's pretty awful to work with, think
> XML and lots of UUIDs...
>
>
>>
>>>
>>>
> --
> Sam McLeod
> https://smcleod.net
> https://twitter.com/s_mcleod
>
> On 4 Jan 2018, at 7:58 pm, Yaniv Kaul <yk...@redhat.com> wrote:
>
>
>
> On Thu, Jan 4, 2018 at 4:03 AM, Sam McLeod <mailingli...@smcleod.net>
> wrote:
>
>> If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data
>> centre that contains existing data - how does oVirt behave?
>>
>> For example the LUN might be partitioned as LVM, then contain existing
>> filesystems etc...
>>
>> - Would oVirt see that there is existing data on the LUN and simply
>> attach it as any other Linux initiator (client) would, or would it try to
>> wipe the LUN clean and reinitialise it?
>>
>
> Neither - we will not import these as existing data domains, nor wipe
> them, as they have contents.
>
>
>>
>>
>> Context: Investigating migration from XenServer to oVirt (4.2.0)
>>
>
> A very interesting subject - would love to see the outcome!
>
>
>>
>> All our iSCSI storage is currently attached to XenServer hosts, XenServer
>> formats those raw LUNs with LVM and VMs are stored within them.
>>
>
> I suspect we need to copy the data. We might be able to do some tricks,
> but at the end of the day I think copying the data, LV to LV, makes the
> most sense.
> However, I wonder what else is needed - do we need a conversion of the
> drivers, different kernel, etc.?
>
> What are the export options Xen provides? Perhaps OVF?
> Is there an API to stream the disks from Xen?
> Y.
>
>
>>
>>
>>
>> *If the answer to this is already out there and I should have found it by
>> searching, I apologise, please point me to the link and I'll RTFM.*
>>
>> --
>> Sam McLeod
>> https://smcleod.net
>> https://twitter.com/s_mcleod
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>