I'm curious to hear what other comments arise, as we'll be evaluating a
production setup shortly.


On Sun, Jun 1, 2014 at 10:11 PM,  <combus...@archlinux.us> wrote:
> I need to scratch gluster off because the setup is based on CentOS 6.5, so
> essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
Gluster would still work with EL6; AFAIK it just won't use libgfapi and
will instead use a standard FUSE mount.
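For example (just a sketch, the hostname and volume name below are made
up), an EL6 host would access the volume through the FUSE client rather
than libgfapi:

    # mount a Gluster volume with the FUSE client (no libgfapi required)
    mount -t glusterfs gluster1.example.com:/datavol /mnt/datavol

oVirt does the equivalent mount itself when you attach the volume as a
GlusterFS/POSIX-compliant-FS storage domain, so the above is only meant to
illustrate the access path.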

>
> Any info regarding the FC storage domain would be appreciated, though.
>
> Thanks
>
> Ivan
>
> On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote:
>> Hi,
>>
>> I have a 4-node cluster, and my storage options right now are an FC-based
>> storage array, one partition per node on a local drive (~200GB each), and
>> an NFS-based NAS device. I want to set up the export and ISO domains on the
>> NAS, and there are no issues or questions regarding those two. I wasn't
>> aware of any other options at the time for utilizing local storage (since
>> this is a shared datacenter), so I exported a directory from each partition
>> via NFS and it works. But I am a little in the dark with the following:
>>
>> 1. Are there any advantages to switching from NFS-based local storage to a
>> Gluster-based domain with a brick on each partition? I guess the gain would
>> only be performance-wise, but maybe I'm wrong. If there are advantages, are
>> there any tips regarding XFS mount options, etc.?
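Regarding the XFS mount options: I can't say what is optimal for your
hardware, but an fstab entry for a Gluster brick along these lines is the
usual starting point (a sketch only; the device and mount point below are
made up):

    # /etc/fstab entry for an XFS-backed Gluster brick (example values)
    /dev/vg_bricks/lv_brick1  /bricks/brick1  xfs  noatime,inode64,nouuid  0 0

noatime and inode64 are the options I see recommended most often; anything
beyond that (stripe-width alignment and the like) depends on the underlying
RAID layout, so I'd treat it as tuning rather than a fixed recipe.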
>>
>> 2. I've created a volume on the FC-based storage and exported it to all of
>> the nodes in the cluster on the storage array itself. I've configured
>> multipathing correctly and added an alias for the WWID of the LUN so I can
>> identify this one and any future volumes more easily. At first I created a
>> partition on it, but since oVirt saw only the whole LUN as a raw device, I
>> erased the partition before adding the LUN as the FC master storage domain.
>> I've imported a few VMs and pointed them at the FC storage domain. This
>> setup works, but:
>>
>> - All of the nodes see a device with the alias for the WWID of the volume,
>> but only the node which is currently the SPM for the cluster can see the
>> logical volumes inside. Also, when I set up high availability for VMs
>> residing on the FC storage and select the option to start on any node in
>> the cluster, they always start on the SPM. Can multiple nodes run different
>> VMs on the same FC storage at the same time? (Logically they should be able
>> to, but I wanted to be sure first.) I am not familiar with the logic oVirt
>> uses to lock a VM's logical volume to prevent corruption.
>>
>> - Fdisk shows that the logical volumes on the FC LUN are misaligned (the
>> partition doesn't end on a cylinder boundary), so I wonder whether this is
>> because I imported VMs whose disks were originally created on local
>> storage, and whether any _new_ VMs with disks on the FC storage would be
>> properly aligned.
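A few inline thoughts on question 2, for whatever they're worth.

On the multipath alias: it sounds like you already have that part right.
For reference, the stanza I've seen used looks roughly like this (the WWID
and alias below are made up):

    # /etc/multipath.conf -- friendly name for the FC LUN (example values)
    multipaths {
        multipath {
            wwid   36005076801810523480000000000001a
            alias  fc_data_domain
        }
    }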
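On the logical volume visibility: as far as I understand it, that is more
or less expected. The LVM metadata lives on the shared LUN, but VDSM only
activates a disk's LV on the host that actually runs the VM, the engine
makes sure an image is only used by one VM at a time, and the SPM (which
holds a sanlock-protected lease) is the only host that changes the LVM
metadata (creating, deleting, extending LVs). So yes, different hosts can
run different VMs from the same FC domain at the same time. You should be
able to list the LVs from any host with something like:

    # list the LVs of the domain's VG (the VG name is the domain UUID)
    lvs -o lv_name,lv_attr,lv_tags <storage_domain_uuid>

The lv_attr field shows whether a given LV is active on that host.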
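And on the alignment warning: fdisk's "does not end on cylinder boundary"
message on EL6 is frequently cosmetic, so I'd re-check in sectors before
assuming the imported disks are really misaligned, e.g.:

    # print partition starts in sectors rather than cylinders; a start that
    # is a multiple of 2048 (1 MiB) is fine despite the cylinder warning
    # (run it against whichever device fdisk was complaining about -- the
    # path below reuses the hypothetical multipath alias from above)
    fdisk -l -u /dev/mapper/fc_data_domain

As far as I can tell the LVs themselves are carved out on whole-extent
boundaries, so any remaining misalignment would come from the partition
table created inside the guest and would follow the disk when you import
it; new, properly partitioned disks on the FC domain shouldn't have that
problem.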
>>
>> This is a new setup with oVirt 3.4 (I exported all the VMs on 3.3 and,
>> after a fresh installation of 3.4, imported them back again). I have room
>> to experiment a little with 2 of the 4 nodes because they are currently not
>> running any VMs, but I have little room for anything that would cause
>> unplanned downtime for the four virtual machines running on the other two
>> nodes of the cluster (currently highly available, with their drives on the
>> FC storage domain). All in all I have 12 VMs running, and I'm asking the
>> list for advice and guidance before I make any changes.
>>
>> Just trying to gather as much info about all of this as possible before
>> acting on it.
>>
>> Thank you in advance,
>>
>> Ivan
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
