Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-09 Thread Doron Fediuck
On 9 January 2018 at 13:54, Yaniv Kaul  wrote:

>
>
> On Mon, Jan 8, 2018 at 11:52 PM, Sam McLeod 
> wrote:
>
>> Hi Yaniv,
>>
>> Thanks for your detailed reply, it's very much appreciated.
>>
>> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul  wrote:
>>
>> Indeed, greenfield deployment has its advantages.
>>
>>>
>>> The downside to that is juggling iSCSI LUNs: I'll have to migrate VMs
>>> on XenServer off one LUN at a time, remove that LUN from XenServer, add
>>> it to oVirt as new storage, and continue - but if it's what has to be done,
>>> we'll do it.
>>>
>>
>> The migration of VMs has three parts:
>> - VM configuration data (from name to number of CPUs, memory, etc.)
>>
>>
>> That's not too much of an issue for us; we have a pretty standard set of
>> configurations for performance / sizing.
>>
>> - Data - the disks themselves.
>>
>>
>> This is the big one. For most hosts, at least, the data is on a dedicated
>> logical volume; for example, if it's postgresql, it would be LUKS on top of
>> a logical volume for /var/lib/pgsql, etc.
>>
>> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)
>>
>>
>> Everything is currently PVHVM, which uses standard grub2; you could
>> literally dd any one of our VMs to a physical disk and boot it on any
>> x86-64 machine.
>>
>> The first item could be automated. Unfortunately, it was a bit of a
>> challenge to find a common automation platform. For example, we have great
>> Ansible support, which I could not find for XenServer (but see [1], which
>> may be a bit limited). Perhaps if there aren't too many VMs, this could be
>> done manually. If you use Foreman, btw, then it could probably be used to
>> provision both?
>> The second - data movement - could be done in at least two or three ways:
>> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
>> 2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload
>> using the oVirt upload API (example in Python[2]). I think that's an
>> easy-to-implement option and provides the flexibility to copy from pretty
>> much any source to oVirt.
>>
>>
>> A key thing here would be how quickly the oVirt API can ingest the data;
>> our storage LUNs are 100% SSD - each LUN can easily provide at least
>> 1,000 MB/s and around 2M 4k write IOPS and 2-4M 4k read IOPS - so we always
>> find hypervisors' disk virtualisation mechanisms to be the bottleneck. But
>> adding an API to the mix, especially one that is single-threaded (if that
>> does the data stream processing), could be a big performance problem.
>>
>
> Well, it's just for the data copy. We can do ~300 MBps in a single
> upload API call, but you can copy multiple disks via multiple hypervisors
> in parallel. In addition, if you are using 'dd' you might even be able to
> use sg_xcopy (if it's the same storage) - worth looking into.
> In any case, we have concrete plans to improve the performance of the
> upload API.
>
>>
>> 3. There are ways to convert XVA to qcow2 - I saw some references on the
>> Internet, never tried any.
>>
>>
>> This is something I was thinking of potentially doing; I can actually
>> export each VM as an OVF/OVA package - since that's very standard, I'm
>> assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
>>
>
> Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it
> defines how the XML should look and what it contains, but a VMware
> para-virtual NIC is not a para-virtual Xen NIC is not an oVirt
> para-virtual NIC, so the fact it describes a NIC means nothing when it
> comes to cross-platform compatibility.
>
>
While exporting, please ensure you include snapshots. You can learn more
about snapshot tree export support in Xen here:
https://xenserver.org/partners/18-sdk-development/114-import-export-vdi.html
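
For the disk side of the export, something roughly like this per VDI should
work from the pool master - a sketch assuming your XenServer release's
'xe vdi-export' accepts format=raw (the UUID and output path are
placeholders):

import subprocess

# Placeholders - substitute a real VDI UUID and an export target with space.
VDI_UUID = '12345678-0000-0000-0000-000000000000'
OUT_PATH = '/mnt/export/vm1-disk0.raw'

# 'xe vdi-export' writes the VDI contents to a file; check
# 'xe help vdi-export' on your host first, as supported formats vary by
# release.
subprocess.check_call([
    'xe', 'vdi-export',
    'uuid=%s' % VDI_UUID,
    'filename=%s' % OUT_PATH,
    'format=raw',
])

A raw image produced this way can then be uploaded to oVirt as-is.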


>
>>
>> As for the last item, I'm really not sure what changes are needed, if at
>> all. I don't know the disk convention, for example (/dev/sd* for SCSI disk
>> -> virtio-scsi, but are there other device types?)
>>
>>
>> Xen's virtual disks are all /dev/xvd[a-z].
>> Thankfully, we partition everything as LVM, and partitions (other than
>> /boot, I think) are mounted as such.
>>
>
> And there's nothing that needs to address such paths as /dev/xvd*?
> Y.
>
>
>
>>
>>
>> I'd be happy to help with any adjustment needed for the Python script
>> below.
>>
>>
>> Very much appreciated, when I get to the point where I'm happy with the
>> basic architectural design and POC deployment of oVirt - that's when I'll
>> be testing importing VMs / data in various ways and have made note of these
>> scripts.
>>
>>
>> Y.
>>
>> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
>> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
>>
>>
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-09 Thread Yaniv Kaul
On Mon, Jan 8, 2018 at 11:52 PM, Sam McLeod 
wrote:

> Hi Yaniv,
>
> Thanks for your detailed reply, it's very much appreciated.
>
> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul  wrote:
>
> Indeed, greenfield deployment has its advantages.
>
>>
> The downside to that is juggling iSCSI LUNs: I'll have to migrate VMs on
> XenServer off one LUN at a time, remove that LUN from XenServer, add it
> to oVirt as new storage, and continue - but if it's what has to be done,
> we'll do it.
>>
>
> The migration of VMs has three parts:
> - VM configuration data (from name to number of CPUs, memory, etc.)
>
>
> That's not too much of an issue for us; we have a pretty standard set of
> configurations for performance / sizing.
>
> - Data - the disks themselves.
>
>
> This is the big one. For most hosts, at least, the data is on a dedicated
> logical volume; for example, if it's postgresql, it would be LUKS on top of
> a logical volume for /var/lib/pgsql, etc.
>
> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)
>
>
> Everything is currently PVHVM, which uses standard grub2; you could
> literally dd any one of our VMs to a physical disk and boot it on any
> x86-64 machine.
>
> The first item could be automated. Unfortunately, it was a bit of a
> challenge to find a common automation platform. For example, we have great
> Ansible support, which I could not find for XenServer (but see [1], which
> may be a bit limited). Perhaps if there aren't too many VMs, this could be
> done manually. If you use Foreman, btw, then it could probably be used to
> provision both?
> The second - data movement - could be done in at least two or three ways:
> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
> 2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload using
> the oVirt upload API (example in Python[2]). I think that's an
> easy-to-implement option and provides the flexibility to copy from pretty
> much any source to oVirt.
>
>
> A key thing here would be how quickly the oVirt API can ingest the data;
> our storage LUNs are 100% SSD - each LUN can easily provide at least
> 1,000 MB/s and around 2M 4k write IOPS and 2-4M 4k read IOPS - so we always
> find hypervisors' disk virtualisation mechanisms to be the bottleneck. But
> adding an API to the mix, especially one that is single-threaded (if that
> does the data stream processing), could be a big performance problem.
>

Well, it's just for the data copy. We can do ~300 MBps in a single
upload API call, but you can copy multiple disks via multiple hypervisors
in parallel. In addition, if you are using 'dd' you might even be able to
use sg_xcopy (if it's the same storage) - worth looking into.
In any case, we have concrete plans to improve the performance of the
upload API.
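
For what it's worth, fanning the per-disk copies out from a single client is
only a few lines - a hypothetical sketch, assuming an upload_lv() that wraps
the create-transfer-and-PUT flow from the upload_disk.py example:

from concurrent.futures import ThreadPoolExecutor

# Hypothetical source LVs, one per disk to migrate.
SOURCE_LVS = ['/dev/vg_xen/vm1-disk0', '/dev/vg_xen/vm2-disk0']

def upload_lv(lv_path):
    """Read lv_path and stream it through an oVirt image transfer (see [2])."""
    ...  # create the disk, start the transfer, PUT the data, finalize

# Run a handful of disk copies concurrently; result() re-raises any failure.
with ThreadPoolExecutor(max_workers=4) as pool:
    for future in [pool.submit(upload_lv, lv) for lv in SOURCE_LVS]:
        future.result()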

>
> 3. There are ways to convert XVA to qcow2 - I saw some references on the
> Internet, never tried any.
>
>
> This is something I was thinking of potentially doing; I can actually
> export each VM as an OVF/OVA package - since that's very standard, I'm
> assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
>

Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it
defines how the XML should look and what it contains, but a VMware
para-virtual NIC is not a para-virtual Xen NIC is not an oVirt
para-virtual NIC, so the fact it describes a NIC means nothing when it
comes to cross-platform compatibility.


>
> As for the last item, I'm really not sure what changes are needed, if at
> all. I don't know the disk convention, for example (/dev/sd* for SCSI disk
> -> virtio-scsi, but are there other device types?)
>
>
> Xen's virtual disks are all /dev/xvd[a-z].
> Thankfully, we partition everything as LVM, and partitions (other than
> /boot, I think) are mounted as such.
>

And there's nothing that needs to address such paths as /dev/xvd*?
Y.



>
>
> I'd be happy to help with any adjustment needed for the Python script
> below.
>
>
> Very much appreciated, when I get to the point where I'm happy with the
> basic architectural design and POC deployment of oVirt - that's when I'll
> be testing importing VMs / data in various ways and have made note of these
> scripts.
>
>
> Y.
>
> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-08 Thread Sam McLeod
Hi Yaniv,

Thanks for your detailed reply, it's very much appreciated.

> On 5 Jan 2018, at 8:34 pm, Yaniv Kaul  wrote:
> 
> Indeed, greenfield deployment has its advantages.
> 
> The downside to that is juggling iSCSI LUNs: I'll have to migrate VMs on
> XenServer off one LUN at a time, remove that LUN from XenServer, add it to
> oVirt as new storage, and continue - but if it's what has to be done, we'll
> do it.
> 
> The migration of VMs has three parts:
> - VM configuration data (from name to number of CPUs, memory, etc.)

That's not too much of an issue for us; we have a pretty standard set of
configurations for performance / sizing.

> - Data - the disks themselves.

This is the big one. For most hosts, at least, the data is on a dedicated logical
volume; for example, if it's postgresql, it would be LUKS on top of a logical
volume for /var/lib/pgsql, etc.

> - Adjusting VM internal data (paths, boot kernel, grub?, etc.)

Everything is currently PVHVM, which uses standard grub2; you could literally dd
any one of our VMs to a physical disk and boot it on any x86-64 machine.

> The first item could be automated. Unfortunately, it was a bit of a challenge 
> to find a common automation platform. For example, we have great Ansible 
> support, which I could not find for XenServer (but see [1], which may be a
> bit limited). Perhaps if there aren't too many VMs, this could be done
> manually. If you use Foreman, btw, then it could probably be used to provision
> both?
> The second - data movement - could be done in at least two or three ways:
> 1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
> 2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload using
> the oVirt upload API (example in Python[2]). I think that's an
> easy-to-implement option and provides the flexibility to copy from pretty
> much any source to oVirt.

A key thing here would be how quickly the oVirt API can ingest the data; our
storage LUNs are 100% SSD - each LUN can easily provide at least 1,000 MB/s and
around 2M 4k write IOPS and 2-4M 4k read IOPS - so we always find hypervisors'
disk virtualisation mechanisms to be the bottleneck. But adding an API to the
mix, especially one that is single-threaded (if that does the data stream
processing), could be a big performance problem.

> 3. There are ways to convert XVA to qcow2 - I saw some references on the 
> Internet, never tried any.

This is something I was thinking of potentially doing; I can actually export
each VM as an OVF/OVA package - since that's very standard, I'm assuming oVirt
can likely import them and convert to qcow2 or raw/LVM?

> 
> As for the last item, I'm really not sure what changes are needed, if at all. 
> I don't know the disk convention, for example (/dev/sd* for SCSI disk -> 
> virtio-scsi, but are there other device types?)

Xen's virtual disks are all /dev/xvd[a-z].
Thankfully, we partition everything as LVM, and partitions (other than /boot, I
think) are mounted as such.

> 
> I'd be happy to help with any adjustment needed for the Python script below.

Very much appreciated, when I get to the point where I'm happy with the basic 
architectural design and POC deployment of oVirt - that's when I'll be testing 
importing VMs / data in various ways and have made note of these scripts.

> 
> Y.
> 
> [1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html 
> 
> [2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-05 Thread Yaniv Kaul
On Fri, Jan 5, 2018 at 12:19 AM, Sam McLeod 
wrote:

> Thanks for your response Yaniv,
>
>
>>> Context: Investigating migration from XenServer to oVirt (4.2.0)
>>>
>>
>> A very interesting subject - would love to see the outcome!
>>
>
> I'll certainly be writing one if not many blog posts on the process and
> outcome :)
>
> We've been wanting to switch to something more 'modern' for a while, but
> XenServer has had a very low TCO for us; sure, it doesn't perform as well as
> a Xen/KVM setup on top of CentOS/RHEL with updated kernels, tuning etc... but
> it just kept working. Meanwhile we lost some people in my team, so it hasn't
> been the right time to look at moving... until now...
>
> Citrix / XenServer recently screwed over the community (I don't use that
> term lightly) by kneecapping the free / unlicensed version of XenServer:
> https://xenserver.org/blog/entry/xenserver-7-3-changes-to-the-free-edition.html
>
> There's a large number of people very unhappy about this, as many of the
> people that contribute heavily to bug reporting, testing and rapid / modern
> deployment lifecycles were / are using the unlicensed version (like us over
> @infoxchange), so for us - this was the straw that broke the camel's back.
>
> I've been looking into various options such as oVirt, Proxmox, OpenStack
> and a roll-your-own libvirt-style platform based on our CentOS (7 at
> present) SOE, so far oVirt is looking promising.
>
>
>>
>>>
>>> All our iSCSI storage is currently attached to XenServer hosts,
>>> XenServer formats those raw LUNs with LVM and VMs are stored within them.
>>>
>>
>> I suspect we need to copy the data. We might be able to do some tricks,
>> but at the end of the day I think copying the data, LV to LV, makes the
>> most sense.
>> However, I wonder what else is needed - do we need a conversion of the
>> drivers, different kernel, etc.?
>>
>
> All our Xen VMs are PVHVM, so there's no reason we couldn't export them as
> files, then import them to oVirt if we do go down the oVirt path after the
> POC.
> We run kernel-ml across our fleet (almost always running near-latest
> kernel release) and automate all configuration with Puppet.
>
> The issue I have with this is that it will be slow - XenServer's storage
> performance is *terrible* and there'd be lots of manual work involved.
>
> If this were to be the simplest option, I think we'd opt for rebuilding
> VMs from scratch, letting Puppet set up their config etc... then restoring
> data from backups / rsync etc... that way we'd still be performing the
> manual work - but we'd end up with nice clean VMs.
>

Indeed, greenfield deployment has its advantages.

>
> The downside to that is juggling iSCSI LUNs: I'll have to migrate VMs on
> XenServer off one LUN at a time, remove that LUN from XenServer, add it
> to oVirt as new storage, and continue - but if it's what has to be done,
> we'll do it.
>

The migration of VMs has three parts:
- VM configuration data (from name to number of CPUs, memory, etc.)
- Data - the disks themselves.
- Adjusting VM internal data (paths, boot kernel, grub?, etc.)

The first item could be automated. Unfortunately, it was a bit of a
challenge to find a common automation platform. For example, we have great
Ansible support, which I could not find for XenServer (but see [1], which may
be a bit limited). Perhaps if there aren't too many VMs, this could be done
manually. If you use Foreman, btw, then it could probably be used to provision
both?
The second - data movement - could be done in at least two or three ways:
1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload using
the oVirt upload API (example in Python[2]; a rough sketch follows below).
I think that's an easy-to-implement option and provides the flexibility to
copy from pretty much any source to oVirt.
3. There are ways to convert XVA to qcow2 - I saw some references on the
Internet, never tried any.
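
To make option 2 concrete, here is the rough shape of the upload flow,
condensed from the upload_disk.py example[2] - untested as written; the
engine URL, credentials, CA file, LV path, and size are placeholders, the
ImageTransfer call uses the 4.2-era 'image' attribute, and the imports are
in their Python 3 spelling:

import ssl
import time
from http.client import HTTPSConnection
from urllib.parse import urlparse

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

LV_PATH = '/dev/vg_xen/vm1-disk0'  # hypothetical source LV
DISK_SIZE = 50 * 2**30             # hypothetical: LV is exactly 50 GiB

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal', password='...', ca_file='ca.pem')

# Create a raw disk on the target storage domain and wait until it unlocks.
disks_service = connection.system_service().disks_service()
disk = disks_service.add(types.Disk(
    name='vm1-disk0', format=types.DiskFormat.RAW,
    provisioned_size=DISK_SIZE,
    storage_domains=[types.StorageDomain(name='iscsi-data')]))
disk_service = disks_service.disk_service(disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(1)

# Start an image transfer and wait for it to leave the INITIALIZING phase.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(types.ImageTransfer(
    image=types.Image(id=disk.id)))
transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# Stream the LV to the imageio proxy in chunks, much as 'dd' would.
url = urlparse(transfer.proxy_url)
context = ssl.create_default_context(cafile='ca.pem')
proxy = HTTPSConnection(url.hostname, url.port, context=context)
proxy.putrequest('PUT', url.path)
proxy.putheader('Authorization', transfer.signed_ticket)
proxy.putheader('Content-Length', str(DISK_SIZE))
proxy.endheaders()
with open(LV_PATH, 'rb') as src:
    while True:
        chunk = src.read(8 * 2**20)
        if not chunk:
            break
        proxy.send(chunk)
proxy.getresponse().read()

transfer_service.finalize()
connection.close()

Running this on a host that can see the source dm device avoids a staging
copy, and the chunked read keeps memory flat even on multi-hundred-GiB LVs.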

As for the last item, I'm really not sure what changes are needed, if at
all. I don't know the disk convention, for example (/dev/sd* for SCSI disk
-> virtio-scsi, but are there other device types?)
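
One way to answer that per-VM before first boot is to inspect the copied
image offline - a sketch using the libguestfs Python bindings (the image
path is a placeholder):

import guestfs

IMAGE = '/var/tmp/vm1-disk0.raw'  # hypothetical copied disk image

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts(IMAGE, readonly=1)
g.launch()

for root in g.inspect_os():
    # Mount the guest's filesystems read-only, shortest mountpoint first.
    mounts = g.inspect_get_mountpoints(root)
    for mp in sorted(mounts, key=len):
        g.mount_ro(mounts[mp], mp)
    # Flag fstab entries that still use Xen's /dev/xvd* naming; LVM or
    # UUID-based entries should carry over unchanged.
    for line in g.cat('/etc/fstab').splitlines():
        if '/dev/xvd' in line:
            print('needs attention: %s' % line)
    g.umount_all()

g.shutdown()
g.close()

Entries mounted via /dev/mapper/* or UUID= should be untouched; only literal
/dev/xvd* references (and any in the grub config) would need editing.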

I'd be happy to help with any adjustment needed for the Python script below.

Y.

[1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
[2]
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py


>
>
>> What are the export options Xen provides? Perhaps OVF?
>> Is there an API to stream the disks from Xen?
>> Y.
>>
>
> Yes, Xen does have an API, but TBH it's pretty awful to work with - think
> XML and lots of UUIDs...
>
>
>>
>>>
>>>
> --
> Sam McLeod
> https://smcleod.net
> https://twitter.com/s_mcleod
>
> On 4 Jan 2018, at 7:58 pm, Yaniv Kaul  wrote:
>
>
>
> On Thu, Jan 4, 2018 at 4:03 AM, Sam McLeod 
> wrote:
>
>> If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data
>> centre that contains existing data - how does oVirt behave?

Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-05 Thread Sam McLeod
Thanks for your response Yaniv,

> 
> Context: Investigating migration from XenServer to oVirt (4.2.0)
> 
> A very interesting subject - would love to see the outcome!

I'll certainly be writing one if not many blog posts on the process and outcome 
:)

We've been wanting to switch to something more 'modern' for a while, but
XenServer has had a very low TCO for us; sure, it doesn't perform as well as a
Xen/KVM setup on top of CentOS/RHEL with updated kernels, tuning etc... but it
just kept working. Meanwhile we lost some people in my team, so it hasn't been
the right time to look at moving... until now...

Citrix / XenServer recently screwed over the community (I don't use that term 
lightly) by kneecapping the free / unlicensed version of XenServer: 
https://xenserver.org/blog/entry/xenserver-7-3-changes-to-the-free-edition.html 


There's a large number of people very unhappy about this, as many of the people 
that contribute heavily to bug reporting, testing and rapid / modern deployment 
lifecycles were / are using the unlicensed version (like us over @infoxchange), 
so for us - this was the straw that broke the camel's back.

I've been looking into various options such as oVirt, Proxmox, OpenStack and a 
roll-your-own libvirt-style platform based on our CentOS (7 at present) SOE, so 
far oVirt is looking promising.

>  
> 
> All our iSCSI storage is currently attached to XenServer hosts, XenServer 
> formats those raw LUNs with LVM and VMs are stored within them.
> 
> I suspect we need to copy the data. We might be able to do some tricks, but 
> at the end of the day I think copying the data, LV to LV, makes the most 
> sense.
> However, I wonder what else is needed - do we need a conversion of the 
> drivers, different kernel, etc.?

All our Xen VMs are PVHVM, so there's no reason we couldn't export them as
files, then import them to oVirt if we do go down the oVirt path after the POC.
We run kernel-ml across our fleet (almost always running near-latest kernel 
release) and automate all configuration with Puppet.

The issue I have with this is that it will be slow - XenServer's storage 
performance is terrible and there'd be lots of manual work involved.

If this were to be the simplest option, I think we'd opt for rebuilding VMs
from scratch, letting Puppet set up their config etc... then restoring data from 
backups / rsync etc... that way we'd still be performing the manual work - but 
we'd end up with nice clean VMs.

The downside to that is juggling iSCSI LUNs: I'll have to migrate VMs on
XenServer off one LUN at a time, remove that LUN from XenServer, add it to
oVirt as new storage, and continue - but if it's what has to be done, we'll do
it.

> 
> What are the export options Xen provides? Perhaps OVF?
> Is there an API to stream the disks from Xen?
> Y.

Yes, Xen does have an API, but TBH it's pretty awful to work with - think XML
and lots of UUIDs...

>  
> 

--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod

> On 4 Jan 2018, at 7:58 pm, Yaniv Kaul  wrote:
> 
> 
> 
> On Thu, Jan 4, 2018 at 4:03 AM, Sam McLeod  wrote:
> If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data centre 
> that contains existing data - how does oVirt behave?
> 
> For example the LUN might be partitioned as LVM, then contain existing 
> filesystems etc...
>  
> - Would oVirt see that there is existing data on the LUN and simply attach it 
> as any other Linux initiator (client) would, or would it try to wipe the LUN 
> clean and reinitialise it?
> 
> Neither - we will not be importing these as existing data domains, nor wipe 
> them, as they have contents.
>  
> 
> 
> Context: Investigating migration from XenServer to oVirt (4.2.0)
> 
> A very interesting subject - would love to see the outcome!
>  
> 
> All our iSCSI storage is currently attached to XenServer hosts, XenServer 
> formats those raw LUNs with LVM and VMs are stored within them.
> 
> I suspect we need to copy the data. We might be able to do some tricks, but 
> at the end of the day I think copying the data, LV to LV, makes the most 
> sense.
> However, I wonder what else is needed - do we need a conversion of the 
> drivers, different kernel, etc.?
> 
> What are the export options Xen provides? Perhaps OVF?
> Is there an API to stream the disks from Xen?
> Y.
>  
> 
> 
> 
> If the answer to this is already out there and I should have found it by 
> searching, I apologise - please point me to the link and I'll RTFM.
> 
> --
> Sam McLeod
> https://smcleod.net 
> https://twitter.com/s_mcleod 
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 
> 

Re: [ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-04 Thread Yaniv Kaul
On Thu, Jan 4, 2018 at 4:03 AM, Sam McLeod  wrote:

> If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data
> centre that contains existing data - how does oVirt behave?
>
> For example the LUN might be partitioned as LVM, then contain existing
> filesystems etc...
>
> - Would oVirt see that there is existing data on the LUN and simply attach
> it as any other Linux initiator (client) would, or would it try to wipe the
> LUN clean and reinitialise it?
>

Neither - we will not be importing these as existing data domains, nor wipe
them, as they have contents.


>
>
> Context: Investigating migration from XenServer to oVirt (4.2.0)
>

A very interesting subject - would love to see the outcome!


>
> All our iSCSI storage is currently attached to XenServer hosts, XenServer
> formats those raw LUNs with LVM and VMs are stored within them.
>

I suspect we need to copy the data. We might be able to do some tricks, but
at the end of the day I think copying the data, LV to LV, makes the most
sense.
However, I wonder what else is needed - do we need a conversion of the
drivers, different kernel, etc.?

What are the export options Xen provides? Perhaps OVF?
Is there an API to stream the disks from Xen?
Y.


>
>
>
> If the answer to this is already out there and I should have found it by
> searching, I apologise - please point me to the link and I'll RTFM.
>
> --
> Sam McLeod
> https://smcleod.net
> https://twitter.com/s_mcleod
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Behaviour when attaching shared iSCSI storage with existing data

2018-01-04 Thread Sam McLeod
If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data centre 
that contains existing data - how does oVirt behave?

For example the LUN might be partitioned as LVM, then contain existing 
filesystems etc...
 
- Would oVirt see that there is existing data on the LUN and simply attach it 
as any other Linux initiator (client) would, or would it try to wipe the LUN 
clean and reinitialise it?


Context: Investigating migration from XenServer to oVirt (4.2.0)

All our iSCSI storage is currently attached to XenServer hosts, XenServer 
formats those raw LUNs with LVM and VMs are stored within them.



If the answer to this is already out there and I should have found it by 
searching, I apologise - please point me to the link and I'll RTFM.

--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users