Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Dan Yasny
On Jun 14, 2016 5:37 PM, "Fernando Frediani" wrote:
>
> Hi Nir,
>
> I wouldn't say that the performance coming from LVM is significantly
> better than from a filesystem, if the latter is well built. In VMware,
> the performance of a VMDK running on top of VMFS5 shows no significant
> gain over an RDM. I've always preferred to keep machines on a
> filesystem for the ease of management. In some cases, with hundreds of
> them in a single filesystem, I never faced performance issues. The
> bottleneck normally comes down to the storage architecture (storage
> controller, RAID config, etc.).
>
> Multipath is certainly a plus that helps in certain cases.
>

Extended scalability (200-node clusters) and no bottlenecks around SCSI-3
persistent reservations (PR) are another couple of pluses.
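
For anyone curious, the SCSI-3 PR state of a LUN can be queried read-only
with sg_persist from sg3_utils; a minimal sketch, where the device path is
only an example:

    # Read the current PR registrations (read-only query)
    sg_persist --in --read-keys --device=/dev/mapper/mpatha

    # Read the active reservation, if any
    sg_persist --in --read-reservation --device=/dev/mapper/mpatha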

> I guess the answer to my original question is clear: if I want to use
> block storage shared among different hosts, there is no choice in oVirt
> other than LVM.
> In one particular case I have storage shared to all servers via a kind
> of internal SAS backplane. The only alternative would be to dedicate a
> server to own the storage and export it as NFS, but in that case there
> would be some losses in terms of hardware and reliability.

If your SAS storage is exposed to multiple hosts and presents the same WWID
to all clients, you can set it up as FC, as long as multipath can detect it.
Direct-attached storage (DAS) like the Dell MD3000 works great this way.
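
A quick way to confirm this setup is to compare the device WWID on every
host and then check that multipath picks the device up; a minimal sketch,
where the device name is only an example:

    # On each host, print the WWID of the shared SAS device
    /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb

    # If the WWID is identical on all hosts, verify multipath sees it
    multipath -ll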

>
> Thanks
> Fernando
>
>
> On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani <fernando.fredi...@upx.com.br> wrote:
>>>
>>> Hi Nir,
>>> Thanks for the clarification.
>>>
>>> Answering your questions: the intent was to use a POSIX-like filesystem
>>> similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice in
>>> how the block storage is presented to multiple servers. Yes, I had
>>> heard about GFS2 scaling issues in the past, but thought they had been
>>> resolved by now; it seems not.
>>>
>>> I had the impression that qcow2 images have both thin-provisioning and
>>> snapshot capabilities.
>>
>> Yes, using file-based storage you have both snapshots and thin
>> provisioning; this is the most reliable way to get thin provisioning
>> in oVirt.
>>
>> But then you pay the filesystem overhead, whereas with block storage
>> the QEMU image uses the LV directly.
>>
>> With block storage we use multipath, so if you have multiple NICs and
>> networks, you get better reliability and performance.
>>
>>> Regarding LVM, I don't like the idea of having VMs buried inside an
>>> LVM volume, nor the idea of troubleshooting LVM volumes when necessary.
>>> Dealing with qcow2 images for every VM separately makes many tasks
>>> much easier. I would say that people coming from VMware would prefer
>>> to deal with a VMDK rather than an RDM LUN. On the other hand, I have
>>> nothing to say against LVM performance.
>>
>> LVM has its own issues with many LVs on the same VG - we recommend
>> using up to 350 LVs per VG. If you need more, use another VG.
>>
>> The best approach would be to try both and use the better storage for
>> the particular use case.
>>
>> Nir


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,

I wouldn't say that the performance coming from LVM is significantly
better than from a filesystem, if the latter is well built. In VMware,
the performance of a VMDK running on top of VMFS5 shows no significant
gain over an RDM. I've always preferred to keep machines on a filesystem
for the ease of management. In some cases, with hundreds of them in a
single filesystem, I never faced performance issues. The bottleneck
normally comes down to the storage architecture (storage controller,
RAID config, etc.).


Multipath is certainly a plus that helps in certain cases.

I guess the answer to my original question is clear: if I want to use
block storage shared among different hosts, there is no choice in oVirt
other than LVM.
In one particular case I have storage shared to all servers via a kind
of internal SAS backplane. The only alternative would be to dedicate a
server to own the storage and export it as NFS, but in that case there
would be some losses in terms of hardware and reliability.


Thanks
Fernando

On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani wrote:

Hi Nir,
Thanks for the clarification.

Answering your questions: the intent was to use a POSIX-like filesystem
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice in how
the block storage is presented to multiple servers. Yes, I had heard
about GFS2 scaling issues in the past, but thought they had been
resolved by now; it seems not.

I had the impression that qcow2 images have both thin-provisioning and
snapshot capabilities.

Yes, using file-based storage you have both snapshots and thin
provisioning; this is the most reliable way to get thin provisioning in
oVirt.

But then you pay the filesystem overhead, whereas with block storage the
QEMU image uses the LV directly.

With block storage we use multipath, so if you have multiple NICs and
networks, you get better reliability and performance.


Regarding LVM, I don't like the idea of having VMs buried inside an LVM
volume, nor the idea of troubleshooting LVM volumes when necessary.
Dealing with qcow2 images for every VM separately makes many tasks much
easier. I would say that people coming from VMware would prefer to deal
with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
say against LVM performance.

LVM has its own issues with many LVs on the same VG - we recommend using
up to 350 LVs per VG. If you need more, use another VG.

The best approach would be to try both and use the better storage for
the particular use case.

Nir




Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Nir Soffer
On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani wrote:
> Hi Nir,
> Thanks for the clarification.
>
> Answering your questions: the intent was to use a POSIX-like filesystem
> similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice in how
> the block storage is presented to multiple servers. Yes, I had heard
> about GFS2 scaling issues in the past, but thought they had been
> resolved by now; it seems not.
>
> I had the impression that qcow2 images have both thin-provisioning and
> snapshot capabilities.

Yes, using file-based storage you have both snapshots and thin
provisioning; this is the most reliable way to get thin provisioning in
oVirt.
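
As a quick illustration of what qcow2 on file storage gives you, qemu-img
can create a thin image and snapshot it; a minimal sketch with example
file names (oVirt manages its own image paths and snapshot chains, so
this only demonstrates the qcow2 capabilities, not how oVirt invokes
them):

    # Create a thin-provisioned qcow2 disk; 50G is the virtual size,
    # space is allocated only as the guest writes
    qemu-img create -f qcow2 vm-disk.qcow2 50G

    # Take an internal snapshot and inspect the image
    qemu-img snapshot -c before-upgrade vm-disk.qcow2
    qemu-img info vm-disk.qcow2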

But then you pay the filesystem overhead, whereas with block storage the
QEMU image uses the LV directly.

With block storage we use multipath, so if you have multiple NICs and
networks, you get better reliability and performance.
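
For example, with iSCSI you get one path per portal you log in to; a
minimal sketch, assuming two storage portals on separate networks (the
addresses are examples):

    # Discover the target through each portal (two networks, two paths)
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10
    iscsiadm -m discovery -t sendtargets -p 10.0.2.10

    # Log in to the discovered nodes, then confirm that multipath
    # merged the sessions into a single multipath device
    iscsiadm -m node -l
    multipath -ll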

> Regarding LVM, I don't like the idea of having VMs buried inside an LVM
> volume, nor the idea of troubleshooting LVM volumes when necessary.
> Dealing with qcow2 images for every VM separately makes many tasks much
> easier. I would say that people coming from VMware would prefer to deal
> with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
> say against LVM performance.

LVM has its own issues with many LVs on the same VG - we recommend using
up to 350 LVs per VG. If you need more, use another VG.
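
If you want to check how close a VG is to that recommendation, counting
its LVs is enough; a minimal sketch, where the VG name is a placeholder:

    # Count the logical volumes in a VG and compare against ~350
    lvs --noheadings -o lv_name my_vg | wc -l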

The best approach would be to try both and use the better storage for
the particular use case.

Nir


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,
Thanks for the clarification.

Answering your questions: the intent was to use a POSIX-like filesystem
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice in how
the block storage is presented to multiple servers. Yes, I had heard
about GFS2 scaling issues in the past, but thought they had been
resolved by now; it seems not.


I had the impression that qcow2 images have both thin-provisioning and 
snapshot capabilities.


Regarding LVM, I don't like the idea of having VMs buried inside an LVM
volume, nor the idea of troubleshooting LVM volumes when necessary.
Dealing with qcow2 images for every VM separately makes many tasks much
easier. I would say that people coming from VMware would prefer to deal
with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
say against LVM performance.


Best
Fernando


On 14/06/2016 16:35, Nir Soffer wrote:

On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani wrote:

Hi there,

I see that the supported storage types in oVirt are: iSCSI, FCoE, NFS,
Local and Gluster.

We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local, and any POSIX-like
shared filesystem.


Specifically speaking about iSCSI and FCoE, I see they use LVM at the
block storage level to store the virtual machines.

To be more specific, we use LVM to create volumes. Each virtual machine
disk uses one volume, plus an additional volume for each snapshot.


I just wanted to understand why the choice was to have LVM

What would you use instead?


and if that is
the only option at the moment.

This is the only option for block storage if you need snapshots or thin
provisioning.

If a preallocated disk without snapshots is good enough for you, you can
attach a LUN directly to a VM. This will give the best performance.


Was it ever considered to have something like GFS2 or OCFS2, in comparison
with VMFS5, with VMs running in qcow2 or raw files on top of it?

Any POSIX-compatible filesystem can be used, with raw or qcow2 files.

You can use GFS2, but I have heard that it does not scale well.


I don't like LVM and have a strong preference for anything storage-related
that doesn't use it, which is the reason I'm looking for a way to use
block storage without it having to be LVM.

You can use one of the file-based storage options, or Ceph.

What's wrong with LVM?


Nir




Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Nir Soffer
On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani wrote:
> Hi there,
>
> I see that the supported storage types in oVirt are: iSCSI, FCoE, NFS,
> Local and Gluster.

We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local, and any POSIX-like
shared filesystem.

> Specifically speaking about iSCSI and FCoE, I see they use LVM at the
> block storage level to store the virtual machines.

To be more specific, we use LVM to create volumes. Each virtual machine
disk uses one volume, plus an additional volume for each snapshot.
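
You can see this mapping on a host attached to a block storage domain; a
minimal sketch, assuming the VG is named after the storage domain UUID
(the UUID below is only a placeholder):

    # List the volumes created for disks and snapshots in the domain
    lvs -o lv_name,lv_size,attr 5de4a000-a9c4-489c-8eee-10368647c413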

> I just wanted to understand why the choice was to have LVM

What would you use instead?

> and if that is
> the only option at the moment.

This is the only option for block storage if you need snapshots or thin
provisioning.

If a preallocated disk without snapshots is good enough for you, you can
attach a LUN directly to a VM. This will give the best performance.

> Was it ever considered to have something like GFS2 or OCFS2, in
> comparison with VMFS5, with VMs running in qcow2 or raw files on top of
> it?

Any POSIX-compatible filesystem can be used, with raw or qcow2 files.

You can use GFS2, but I have heard that it does not scale well.

> I don't like LVM and have a strong preference for anything
> storage-related that doesn't use it, which is the reason I'm looking for
> a way to use block storage without it having to be LVM.

You can use one of the file-based storage options, or Ceph.

What's wrong with LVM?


Nir