Re: [ovirt-users] oVirt 3.4.2 Hosted

2014-06-10 Thread Sandro Bonazzola
On 10/06/2014 16:59, Bob Doolittle wrote:
> Hi,
> 
> I'm taking a preview look at the 3.4.2 Release Notes. I see there is no 
> mention of Hosted.
> 
> Can we assume that means that the process for upgrading Hosted is the same as 
> for traditional deployments? We know that installation is quite
> different...

Thanks for pointing that out; here's a link [1] to the upgrade procedure for 
Hosted Engine.
I'll update the release notes to include it.

[1] http://www.ovirt.org/Hosted_Engine_Howto#Upgrade_Hosted_Engine
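
For convenience, the flow on that page is roughly the following; this is only a
sketch, the exact package names and ordering are assumptions, so please follow [1]:

  # on one of the hosts: freeze HA actions cluster-wide
  hosted-engine --set-maintenance --mode=global
  # inside the engine VM: update the setup packages and re-run setup
  yum update "ovirt-engine-setup*"
  engine-setup
  # on each host: update the hosted-engine / vdsm packages (names assumed)
  yum update ovirt-hosted-engine-setup ovirt-hosted-engine-ha vdsm
  # once everything is back up: leave maintenance
  hosted-engine --set-maintenance --mode=none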

> 
> -Bob
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] FW: Storage Domain Connection inquiry

2014-06-10 Thread Allon Mureinik
Is this an export domain or a data domain? 
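
For what it's worth, you can check which storage connections the engine still
remembers by looking at the engine database (read-only; the table name below is
an assumption based on the engine schema, so adjust it to your version):

  # run on the engine host
  su - postgres -c "psql engine -c 'SELECT id, connection FROM storage_server_connections;'"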

- Original Message -

> > > Hello,
> > >
> > > We have recently started using oVirt over the last year or two; obviously
> > > we are noobs, having only been on board since oVirt 3.3. Anyway, I have
> > > the self-hosted version of oVirt 3.4 running and it's so much better
> > > than our previous environment. I deleted a storage domain from my old
> > > 3.3 environment and tried to import it into the new 3.4, but it says the
> > > connection is already being used (and I also removed the export domain I
> > > had originally set up in oVirt 3.4).
> > >
> > > Is there any way I could get the engine to forget that connection so I
> > > can import the domain?
> > >
> > > My main issue is trying to get my VMs from oVirt 3.3 into 3.4; I haven't
> > > had much success (if you can point me in the right direction).
> > >
> > > Thanks,
> > >
> > > Shawn O'Connor
> > >
> > > Zimax Networks
> > > Chief Technology Officer
> > >
> > > 1 818.643.9951 | Email: socon...@zimax.net
> > > 650 South Grand Avenue, Ste 119
> > > Los Angeles, CA 90017 | USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] FW: Storage Domain Connection inquiry

2014-06-10 Thread shawn o'connor

> > Hello,
> > 
> > We have recently started using oVirt over the last year or two; obviously
> > we are noobs, having only been on board since oVirt 3.3. Anyway, I have
> > the self-hosted version of oVirt 3.4 running and it's so much better
> > than our previous environment. I deleted a storage domain from my old
> > 3.3 environment and tried to import it into the new 3.4, but it says the
> > connection is already being used (and I also removed the export domain I
> > had originally set up in oVirt 3.4).
> > 
> > Is there any way I could get the engine to forget that connection so I
> > can import the domain?
> > 
> > My main issue is trying to get my VMs from oVirt 3.3 into 3.4; I haven't
> > had much success (if you can point me in the right direction).
> > 
> > Thanks,
> > 
> > Shawn O'Connor
> > 
> > Zimax Networks
> > Chief Technology Officer
> > 
> > 1 818.643.9951 | Email: socon...@zimax.net
> > 650 South Grand Avenue, Ste 119
> > Los Angeles, CA 90017 | USA
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Resource Pools in oVirt

2014-06-10 Thread s k
Hello all,
As far as I understand, CPU shares can be set on each VM individually and 
cannot be changed while the VM is powered on.

It would be great if we could create resource pools (similar to what VMware 
does) for CPU shares, so that we could assign priorities to multiple VMs and 
move them between resource pools of different priorities. I know that we can 
configure quotas, but that's not the same as CPU shares.
Is this something planned for a future release? Shall I open an RFE for it?
Regards,
Sokratis
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hacking in Ceph rather than Gluster.

2014-06-10 Thread Samuli Heinonen
Hi Nathan,

We have been running GlusterFS 3.4 with RHEV in production for about six months 
now. We were waiting for libgfapi support to show up for a long time, but finally 
we had to give up waiting and start using FUSE mounts. Everything has been good 
so far and we haven't seen any big issues with GlusterFS. We even had hardware 
problems on one of the storage nodes, but RHEV and GlusterFS survived that 
without problems.

Our setup is rather small: only 4 compute nodes and 2 storage nodes. Of course 
we expect it to grow, but the biggest issue for us is that we are a bit 
uncertain how many VMs we can run on it without affecting performance too much.

We are also looking at the possibility of creating 3-6 node clusters where each 
node acts as both a compute and a storage node. Hopefully we will have a test 
setup running in about a week.

What kind of issues have you had with GlusterFS?

-samuli



Nathan Stratton wrote on 10.6.2014 at 2.02:

> Thanks, I will take a look at it, anyone else currently using Gluster for 
> backend images in production? 
> 
> 
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | 
> www.broadsoft.com
> 
> 
> On Mon, Jun 9, 2014 at 2:55 PM, Itamar Heim  wrote:
> On 06/09/2014 01:28 PM, Nathan Stratton wrote:
> So I understand that the news is still fresh and there may not be much
> going on yet in making Ceph work with ovirt, but I thought I would reach
> out and see if it was possible to hack them together and still use
> librbd rather than NFS.
> 
> I know, why not just use Gluster... the problem is I have tried to use
> Gluster for VM storage for years and I still don't think it is ready.
> Ceph still has work in other areas, but this is one area where I think
> it shines. This is a new lab cluster and I would like to try to use ceph
> over gluster if possible.
> 
> Unless I am missing something, can anyone tell me they are happy with
> Gluster as a backend image store? This will be a small 16 node 10 gig
> cluster of shared compute / storage (yes I know people want to keep them
> separate).
> 
>  ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
> www.broadsoft.com 
> 
> 
> 
> 
> there was a thread about this recently. AFAICT, Ceph support will require 
> adding a specific Ceph storage domain to the engine and VDSM, which is a 
> full-blown feature (I assume you could try to hack it somewhat with a custom 
> hook). We're waiting for the next version planning cycle to see if/how it gets pushed.
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Working but unstable storage domains

2014-06-10 Thread Paul Heinlein


I'm running oVirt Engine 3.2.0-2.fc18 (which I know is out of date) on 
a dedicated physical host; we have 12 hosts split between two clusters 
and nine storage domains, all NFS.


Late last week, a VM that (in the scope of our clusters) consumes a lot 
of resources failed during migration. Since then, the storage domains have, 
from the engine's point of view, been going up and down (though the 
underlying NFS exports are fine). Key symptoms from the oVirt Manager:


 * two of the storage domains are always marked as having type of
   "Data (Master)" when historically only one was;

 * the Manager reports "Storage Pool Manager runs on $host" then
   "Sync Error on Master Domain..." then "Reconstruct Master Domain
   ...completed" then "Data Center is being initialized" over and
   over and over again.

The Sync Error messages indicate "$pool is marked as Master in oVirt 
Engine database but not on the Storage side. Please consult with 
Support on how to fix this issue." Note that $pool changes between the 
various domains that get marked as Data (Master).


Clues, anyone? I'm happy to provide logs (though they're all quite 
large).


--
Paul Heinlein
heinl...@madboa.com
45°38' N, 122°6' W
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.4.2 Hosted

2014-06-10 Thread Bob Doolittle

Hi,

I'm taking a preview look at the 3.4.2 Release Notes. I see there is no 
mention of Hosted.


Can we assume that means that the process for upgrading Hosted is the 
same as for traditional deployments? We know that installation is quite 
different...


-Bob

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [QE][ACTION NEEDED] oVirt 3.4.2 status

2014-06-10 Thread Sandro Bonazzola
On 10/06/2014 15:30, Nathanaël Blanchet wrote:
> Hello,
> 
> Has 3.4.2 been postponed?
> I still can't update ovirt-engine-setup...

We're performing final checks; it will be released within an hour.

> 
> On 04/06/2014 09:51, Sandro Bonazzola wrote:
>> Hi,
>> We're going to start composing oVirt 3.4.2 GA on *2014-06-10 08:00 UTC* from 
>> 3.4.2 branches.
>>
>> The bug tracker [1] shows no blocking bugs for the release
>>
>> There are still 56 bugs [2] targeted to 3.4.2.
>> Excluding node and documentation bugs we still have 27 bugs [3] targeted to 
>> 3.4.2.
>>
>> Maintainers / Assignee:
>> - Please add the bugs to the tracker if you think that 3.4.2 should not be 
>> released without them fixed.
>> - Please update the target to any next release for bugs that won't be in 
>> 3.4.2:
>>   it will ease gathering the blocking bugs for next releases.
>>   Critical bugs will be re-targeted to 3.4.3 after 3.4.2 GA release.
>>   All remaining bugs will be re-targeted to 3.5.0.
>> - Please fill in the release notes; the page has been created here [4]
>> - Please build packages before *2014-06-09 15:00 UTC*.
>>
>> Community:
>> - If you're testing oVirt 3.4.2 RC, please add yourself to the test page [5]
>>
>> [1] http://bugzilla.redhat.com/1095370
>> [2] http://red.ht/1oqLLlr
>> [3] http://red.ht/1nIAZXO
>> [4] http://www.ovirt.org/OVirt_3.4.2_Release_Notes
>> [5] http://www.ovirt.org/Testing/oVirt_3.4.2_Testing
>>
>>
>> Thanks,
>>
>>
>>
> 
> -- 
> Nathanaël Blanchet
> 
> Network supervision
> Operations and maintenance
> Information Systems Department
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr 
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 3.4.2 Release is now available

2014-06-10 Thread Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.4.2 as of Jun 10th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.

oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).

This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.

The existing ovirt-3.4 repository has been updated to deliver this
release without the need to enable any other repository. However, since we
introduced package signing, you need an additional step in order to get
the public keys installed on your system if you're upgrading from an older 
release.
Please refer to the release notes [1] for installation / upgrade instructions.

Please note that mirrors will need a couple of days before being synchronized.
If you want to be sure to use the latest RPMs and don't want to wait for the
mirrors, you can edit /etc/yum.repos.d/ovirt-3.4.repo, commenting out the
mirror line and uncommenting the baseurl line.
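
For example, something along these lines should do it (a sketch; it assumes the
repo file uses the usual mirrorlist/baseurl key names):

  sed -i -e 's/^mirrorlist=/#mirrorlist=/' \
         -e 's/^#baseurl=/baseurl=/' /etc/yum.repos.d/ovirt-3.4.repo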

A new oVirt Node and oVirt Live ISO will be available soon [2].

[1] http://www.ovirt.org/OVirt_3.4.2_Release_Notes
[2] http://resources.ovirt.org/plain/pub/ovirt-3.4/iso/

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [QE][ACTION NEEDED] oVirt 3.4.2 status

2014-06-10 Thread Nathanaël Blanchet

Hello,

Has 3.4.2 been postponed?
I still can't update ovirt-engine-setup...

On 04/06/2014 09:51, Sandro Bonazzola wrote:

Hi,
We're going to start composing oVirt 3.4.2 GA on *2014-06-10 08:00 UTC* from 
3.4.2 branches.

The bug tracker [1] shows no blocking bugs for the release

There are still 56 bugs [2] targeted to 3.4.2.
Excluding node and documentation bugs we still have 27 bugs [3] targeted to 
3.4.2.

Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.4.2 should not be 
released without them fixed.
- Please update the target to any next release for bugs that won't be in 3.4.2:
   it will ease gathering the blocking bugs for next releases.
   Critical bugs will be re-targeted to 3.4.3 after 3.4.2 GA release.
   All remaining bugs will be re-targeted to 3.5.0.
- Please fill in the release notes; the page has been created here [4]
- Please build packages before *2014-06-09 15:00 UTC*.

Community:
- If you're testing oVirt 3.4.2 RC, please add yourself to the test page [5]

[1] http://bugzilla.redhat.com/1095370
[2] http://red.ht/1oqLLlr
[3] http://red.ht/1nIAZXO
[4] http://www.ovirt.org/OVirt_3.4.2_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.4.2_Testing


Thanks,





--
Nathanaël Blanchet

Network supervision
Operations and maintenance
Information Systems Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM HostedEngine is down. Exit message: internal error Failed to acquire lock error -243

2014-06-10 Thread Brad House

Ok, I thought I was doing something wrong yesterday and just
tore down my 3-node cluster with the hosted engine and started
rebuilding.  I was seeing essentially the same thing: a score of
0 on the hosts not running the engine VM, and it wouldn't allow migration of
the hosted engine.  I played with everything related to setting
maintenance and rebooting hosts; nothing brought them up to a
point where I could migrate the hosted engine.

I thought it was related to oVirt messing up when deploying the
other hosts (I told it not to modify the firewall that I had disabled,
but the deploy process forcibly re-enabled the firewall, which Gluster
really didn't like).  Now, after reading this, it appears my assumption
may be false.

Previously a 2-node cluster I had worked fine, but I wanted to
go to 3 nodes so I could enable quorum on Gluster and not risk
split-brain issues.
-Brad


On 6/10/14 1:19 AM, Andrew Lau wrote:

I'm really having a hard time finding out why it's happening..

If I set the cluster to global maintenance for a minute or two, the scores will
reset back to 2400. Set maintenance mode to none, and all will be fine
until a migration occurs. It seems it tries to migrate, fails, and sets
the score to 0 permanently rather than for the ~10 minutes mentioned in
one of the oVirt slides.
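
For reference, the maintenance toggling described above is done with the
hosted-engine CLI, roughly as follows (check --help on your version):

  hosted-engine --set-maintenance --mode=global   # freeze HA scoring cluster-wide
  hosted-engine --vm-status                       # watch the scores recover
  hosted-engine --set-maintenance --mode=none     # back to normal operation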

When I have two hosts, the score is 0 only when a migration occurs
(just on the host which doesn't have the engine up). The score of 0 only
happens when it has tried to migrate after I set the host to local
maintenance. Migrating the VM from the UI has worked quite a few
times, but it has recently started to fail.

When I have three hosts, after ~5 minutes of them all being up the score
will hit 0 on the hosts not running the VMs. It doesn't even have to
attempt to migrate before the score goes to 0. Stopping the ha agent
on one host and "resetting" it with the global maintenance method
brings it back to the two-host scenario above.

I may move on and just go back to a standalone engine, as I'm not having
much luck with this.

On Tue, Jun 10, 2014 at 3:11 PM, combuster  wrote:

Nah, I've explicitly allowed the hosted-engine VM to access the NAS device,
i.e. the NFS share itself, before the deploy procedure even started.
But I'm puzzled at how you can reproduce the bug; all was well on my setup
before I started a manual migration of the engine's VM. Even auto migration
worked before that (I tested it). Does it just happen without any procedure on
the engine itself? Is the score 0 for just one node, or for two of the three?

On 06/10/2014 01:02 AM, Andrew Lau wrote:


nvm, just as I hit send the error has returned.
Ignore this..

On Tue, Jun 10, 2014 at 9:01 AM, Andrew Lau  wrote:


So after adding L3 capabilities to my storage network, I'm no
longer seeing this issue. So the engine needs to be able to
access the storage domain it sits on? But that doesn't show up in the
UI?

Ivan, was this also the case with your setup? The engine couldn't access
the storage domain?

On Mon, Jun 9, 2014 at 9:56 PM, Andrew Lau  wrote:


Interesting, my storage network is L2 only and doesn't run on
ovirtmgmt (which is the only network HostedEngine sees), but I've only
seen this issue when running CTDB in front of my NFS server. I was
previously using localhost, as all my hosts had the NFS server on
them (Gluster).

On Mon, Jun 9, 2014 at 9:15 PM, Artyom Lukianov 
wrote:


I just blocked the connection to storage for testing, and as a result I got
this error: "Failed to acquire lock error -243", so I added it to the
reproduction steps.
If you know other steps to reproduce this error without blocking the
connection to storage, it would be wonderful if you could provide them.
Thanks

- Original Message -
From: "Andrew Lau" 
To: "combuster" 
Cc: "users" 
Sent: Monday, June 9, 2014 3:47:00 AM
Subject: Re: [ovirt-users] VM HostedEngine is down. Exit message:
internal error Failed to acquire lock error -243

I just ran a few extra tests. I had a 2-host hosted-engine setup running
for a day. They both had a score of 2400. I migrated the VM through the
UI multiple times; all worked fine. I then added the third host, and
that's when it all fell to pieces.
The other two hosts have a score of 0 now.

I'm also curious, in the BZ there's a note about:

where engine-vm block connection to storage domain(via iptables -I
INPUT -s sd_ip -j DROP)

What's the purpose of that?

On Sat, Jun 7, 2014 at 4:16 PM, Andrew Lau 
wrote:


Ignore that, the issue came back after 10 minutes.

I've even tried a gluster mount + nfs server on top of that, and the
same issue has come back.

On Fri, Jun 6, 2014 at 6:26 PM, Andrew Lau 
wrote:


Interesting. I put it all into global maintenance, shut it all down
for ~10 minutes, and it regained its sanlock control and doesn't
seem to have that issue coming up in the log.

On Fri, Jun 6, 2014 at 4:21 PM, combuster 
wrote:


It was pure NFS on a NAS device. They all had different IDs (there were no
redeployments of nodes before the problem occurred).

Thanks Jirka.


On 06/06/2014 08:19 AM, Jiri Mosk

Re: [ovirt-users] Recommended setup for a FC based storage domain

2014-06-10 Thread combuster

/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf.

But unfortunately maybe I've jumped to conclusions: last weekend, that 
very same thin-provisioned VM was running a simple export for 3 hrs 
before I killed the process. But I wondered:

1. The process that runs behind the export is qemu-img convert (from raw 
to raw), and running iotop shows that every three or four seconds it 
reads 10-13 MBps and then idles for a few seconds. Run the numbers on 
100 GB (why it is copying the entire 100 GB rather than the 15 GB used on 
the thin volume I still don't get) and you get precisely the 3-4 hrs 
estimated time remaining.
2. When I run the export with the SPM on a node that doesn't have any VMs 
running, the export finishes in approx. 30 min (iotop shows a 40-70 MBps read 
speed constantly).
3. Renicing the I/O priority of the qemu-img process, as well as its CPU 
priority, gave no results; it was still running slow beyond any explanation 
(see the sketch just below this list).
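
Renicing of that sort would look something like this (a sketch; the way of
finding the PID here is just an illustration):

  PID=$(pgrep -f 'qemu-img convert' | head -n1)
  ionice -c 2 -n 0 -p "$PID"    # best-effort class, highest priority
  renice -n -5 -p "$PID"        # raise CPU priority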


Debug logs showed nothing of interest, so I turned off everything more 
verbose than WARNING and the export suddenly accelerated, so I may have 
connected the wrong dots.
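
For completeness, the log-level changes themselves look roughly like this (a
sketch; option names can differ between libvirt/vdsm versions, and both libvirtd
and vdsmd need a restart afterwards):

  # /etc/libvirt/libvirtd.conf  (libvirt levels: 1=debug, 2=info, 3=warning, 4=error)
  log_level = 3

  # /etc/vdsm/logger.conf  (standard Python logging config; raise the levels of
  # the logger_* sections you care about)
  [logger_root]
  level=WARNING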


On 06/10/2014 11:18 AM, Andrew Lau wrote:

Interesting, which files did you modify to lower the log levels?

On Tue, Jun 3, 2014 at 12:38 AM,   wrote:

One word of caution so far: when exporting any VM, the node that acts as SPM
is stressed out to the max. I relieved the stress by a certain margin by
lowering the libvirtd and vdsm log levels to WARNING. That shortened the
export procedure by at least a factor of five. But the vdsm process on the SPM
node still has high CPU usage, so it's best that the SPM node be left with a
decent amount of CPU time to spare. Also, export of VMs with high vdisk capacity
and thin provisioning enabled (let's say 14GB used of 100GB defined) took
around 50min over a 10Gb ethernet interface to a 1Gb export NAS device that
was not stressed out at all by other processes. When I did that export with
debug log levels it took 5hrs :(

So lowering log levels is a must in a production environment. I've deleted the
LUN that I exported on the storage (removed it first from oVirt), and for the
next weekend I am planning to add a new one, export it again on all the nodes
and start a few fresh vm installations. Things I'm going to look for are
partition alignment and running them from different nodes in the cluster at
the same time. I just hope that not all I/O is going to pass through the SPM,
this is the one thing that bothers me the most.

I'll report back on these results next week, but if anyone has experience with
this kind of thing or can point me to some documentation, that would be great.

On Monday, 2. June 2014. 18.51.52 you wrote:

I'm curious to hear what other comments arise, as we're analyzing a
production setup shortly.

On Sun, Jun 1, 2014 at 10:11 PM,   wrote:

I need to scratch gluster off because setup is based on CentOS 6.5, so
essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.

Gluster would still work with EL6, afaik it just won't use libgfapi and
instead use just a standard mount.


Any info regarding FC storage domain would be appreciated though.

Thanks

Ivan

On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote:

Hi,

I have a 4 node cluster setup and my storage options right now are a FC
based storage, one partition per node on a local drive (~200GB each) and
a
NFS based NAS device. I want to setup export and ISO domain on the NAS
and
there are no issues or questions regarding those two. I wasn't aware of
any
other options at the time for utilizing a local storage (since this is a
shared based datacenter) so I exported a directory from each partition
via
NFS and it works. But I am a little in the dark on the following:

1. Are there any advantages for switching from NFS based local storage to
a
Gluster based domain with blocks for each partition. I guess it can be
only
performance wise but maybe I'm wrong. If there are advantages, are there
any tips regarding xfs mount options etc ?

2. I've created a volume on the FC based storage and exported it to all
of
the nodes in the cluster on the storage itself. I've configured
multipathing correctly and added an alias for the wwid of the LUN so I
can
distinct this one and any other future volumes more easily. At first I
created a partition on it but since oVirt saw only the whole LUN as raw
device I erased it before adding it as the FC master storage domain. I've
imported a few VM's and pointed them to the FC storage domain. This setup
works, but:

- All of the nodes see a device with the alias for the wwid of the
volume,
but only the node which is currently the SPM for the cluster can see
logical
volumes inside. Also when I setup the high availability for VM's residing
on the FC storage and select to start on any node on the cluster, they
always start on the SPM. Can multiple nodes run different VM's on the
same
FC storage at the same time (logical thing would be that they can, but I
wanted to be sure first). I am not familiar with the logic oVirt utilizes

Re: [ovirt-users] Recommended setup for a FC based storage domain

2014-06-10 Thread Andrew Lau
Interesting, which files did you modify to lower the log levels?

On Tue, Jun 3, 2014 at 12:38 AM,   wrote:
> One word of caution so far: when exporting any VM, the node that acts as SPM
> is stressed out to the max. I relieved the stress by a certain margin by
> lowering the libvirtd and vdsm log levels to WARNING. That shortened the
> export procedure by at least a factor of five. But the vdsm process on the SPM
> node still has high CPU usage, so it's best that the SPM node be left with a
> decent amount of CPU time to spare. Also, export of VMs with high vdisk capacity
> and thin provisioning enabled (let's say 14GB used of 100GB defined) took
> around 50min over a 10Gb ethernet interface to a 1Gb export NAS device that
> was not stressed out at all by other processes. When I did that export with
> debug log levels it took 5hrs :(
>
> So lowering log levels is a must in a production environment. I've deleted the
> LUN that I exported on the storage (removed it first from oVirt), and for the
> next weekend I am planning to add a new one, export it again on all the nodes
> and start a few fresh vm installations. Things I'm going to look for are
> partition alignment and running them from different nodes in the cluster at
> the same time. I just hope that not all I/O is going to pass through the SPM,
> this is the one thing that bothers me the most.
>
> I'll report back on these results next week, but if anyone has experience with
> this kind of thing or can point me to some documentation, that would be great.
>
> On Monday, 2. June 2014. 18.51.52 you wrote:
>> I'm curious to hear what other comments arise, as we're analyzing a
>> production setup shortly.
>>
>> On Sun, Jun 1, 2014 at 10:11 PM,   wrote:
>> > I need to scratch gluster off because setup is based on CentOS 6.5, so
>> > essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
>>
>> Gluster would still work with EL6, afaik it just won't use libgfapi and
>> instead use just a standard mount.
>>
>> > Any info regarding FC storage domain would be appreciated though.
>> >
>> > Thanks
>> >
>> > Ivan
>> >
>> > On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote:
>> >> Hi,
>> >>
>> >> I have a 4 node cluster setup and my storage options right now are a FC
>> >> based storage, one partition per node on a local drive (~200GB each) and
>> >> a
>> >> NFS based NAS device. I want to setup export and ISO domain on the NAS
>> >> and
>> >> there are no issues or questions regarding those two. I wasn't aware of
>> >> any
>> >> other options at the time for utilizing a local storage (since this is a
>> >> shared based datacenter) so I exported a directory from each partition
>> >> via
>> >> NFS and it works. But I am a little in the dark on the following:
>> >>
>> >> 1. Are there any advantages for switching from NFS based local storage to
>> >> a
>> >> Gluster based domain with blocks for each partition. I guess it can be
>> >> only
>> >> performance wise but maybe I'm wrong. If there are advantages, are there
>> >> any tips regarding xfs mount options etc ?
>> >>
>> >> 2. I've created a volume on the FC based storage and exported it to all
>> >> of
>> >> the nodes in the cluster on the storage itself. I've configured
>> >> multipathing correctly and added an alias for the wwid of the LUN so I
>> >> can
>> >> distinguish this one and any other future volumes more easily. At first I
>> >> created a partition on it but since oVirt saw only the whole LUN as raw
>> >> device I erased it before adding it as the FC master storage domain. I've
>> >> imported a few VM's and pointed them to the FC storage domain. This setup
>> >> works, but:
>> >>
>> >> - All of the nodes see a device with the alias for the wwid of the
>> >> volume,
>> >> but only the node which is currently the SPM for the cluster can see
>> >> logical
>> >> volumes inside. Also when I setup the high availability for VM's residing
>> >> on the FC storage and select to start on any node on the cluster, they
>> >> always start on the SPM. Can multiple nodes run different VM's on the
>> >> same
>> >> FC storage at the same time (logical thing would be that they can, but I
>> >> wanted to be sure first). I am not familiar with the logic oVirt utilizes
>> >> that locks the vm's logical volume to prevent corruption.
>> >>
>> >> - Fdisk shows that logical volumes on the LUN of the FC volume are
>> >> misaligned (the partition doesn't end on a cylinder boundary), so I wonder if
>> >> this is because I imported the VM's with disks that were created on local
>> >> storage before, and whether any _new_ VM's with disks on the FC storage would
>> >> be properly aligned.
>> >>
>> >> This is a new setup with oVirt 3.4 (did an export of all the VM's on 3.3
>> >> and after a fresh installation of the 3.4 imported them back again). I
>> >> have room to experiment a little with 2 of the 4 nodes because currently
>> >> they are free from running any VM's, but I have limited room for
>> >> anything else that would cause an