Re: [ovirt-users] VMs freezing during heals

2015-04-04 Thread Jorick Astrego


On 04/03/2015 10:04 PM, Alastair Neil wrote:
 Any follow up on this?

 Are there known issues using a replica 3 gluster datastore with LVM
 thin-provisioned bricks?

 On 20 March 2015 at 15:22, Alastair Neil ajneil.t...@gmail.com wrote:

 CentOS 6.6

 vdsm-4.16.10-8.gitc937927.el6
 glusterfs-3.6.2-1.el6
 2.6.32-504.8.1.el6.x86_64


 I moved to 3.6 specifically to get the snapshotting feature, hence
 my desire to migrate to thinly provisioned LVM bricks.
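
 The snapshot feature in question is gluster volume snapshots, which
 are what require thin-provisioned bricks underneath; roughly (the
 volume and snapshot names here are just placeholders):

     # take a snapshot of a volume whose bricks sit on thin LVs
     gluster snapshot create snap1 myvol
     # list snapshots for the volume
     gluster snapshot list myvol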



Well, on the glusterfs mailing list there have been discussions:


 3.6.2 is a major release and introduces some new cluster-wide
 features. Additionally, it is not yet stable.

 On 20 March 2015 at 14:57, Darrell Budic bu...@onholyground.com wrote:

 What version of gluster are you running on these?

 I’ve seen high load during heals bounce my hosted engine
 around due to overall system load, but never pause anything
 else. CentOS 7 combo storage/host systems, gluster 3.5.2.
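
 Might be worth watching the heal while it runs, something along these
 lines (the volume name is a placeholder):

     # list files still pending heal on each brick
     gluster volume heal myvol info
     # summary counts of heal activity
     gluster volume heal myvol statistics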


 On Mar 20, 2015, at 9:57 AM, Alastair Neil ajneil.t...@gmail.com wrote:

 Pranith

 I have run a pretty straightforward test.  I created a two-brick
 50 GB replica volume with normal LVM bricks, and installed two
 servers, one CentOS 6.6 and one CentOS 7.0.  I kicked off bonnie++
 on both to generate some file system activity and then made the
 volume replica 3.  I saw no issues on the servers.
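
 For reference, the steps were roughly as follows (server and brick
 paths are placeholders):

     # create and start a 2-brick replica volume on plain LVM bricks
     gluster volume create testvol replica 2 gl1:/bricks/b1 gl2:/bricks/b2
     gluster volume start testvol
     # with bonnie++ running on both clients, grow it to replica 3
     gluster volume add-brick testvol replica 3 gl3:/bricks/b3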

 Not clear if this is a sufficiently rigorous test, though, and the
 volume I have had issues on is a 3 TB volume with about 2 TB used.

 -Alastair


 On 19 March 2015 at 12:30, Alastair Neil ajneil.t...@gmail.com wrote:

 I don't think I have the resources to test it
 meaningfully.  I have about 50 VMs on my primary storage
 domain.  I might be able to set up a small 50 GB volume
 and provision 2 or 3 VMs running test loads, but I'm not
 sure it would be comparable.  I'll give it a try and let
 you know if I see similar behaviour.

 On 19 March 2015 at 11:34, Pranith Kumar Karampuri pkara...@redhat.com wrote:

 Without thinly provisioned LVM.

 Pranith

 On 03/19/2015 08:01 PM, Alastair Neil wrote:
 Do you mean raw partitions as bricks, or simply
 without thin-provisioned LVM?



 On 19 March 2015 at 00:32, Pranith Kumar Karampuri pkara...@redhat.com wrote:

 Could you let me know if you see this problem
 without LVM as well?

 Pranith

 On 03/18/2015 08:25 PM, Alastair Neil wrote:
 I am in the process of replacing the bricks
 with thinly provisioned LVs, yes.
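
 Roughly, each replacement brick is being prepared along these
 lines (the VG, pool, and mount names are placeholders):

     # carve a thin pool and a thin LV for the brick
     lvcreate -L 1T -T vg_bricks/thinpool
     lvcreate -V 500G -T vg_bricks/thinpool -n brick1
     # XFS with the gluster-recommended inode size
     mkfs.xfs -i size=512 /dev/vg_bricks/brick1
     mount /dev/vg_bricks/brick1 /bricks/brick1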



 On 18 March 2015 at 09:35, Pranith Kumar Karampuri pkara...@redhat.com wrote:

 Hi,
   Are you using a thin-LVM backend on which
 the bricks are created?

 Pranith

 On 03/18/2015 02:05 AM, Alastair Neil wrote:
 I have an oVirt cluster with 6 VM hosts and
 4 gluster nodes. There are two virtualisation
 clusters, one with two Nehalem nodes and one
 with four Sandy Bridge nodes. My master storage
 domain is a GlusterFS domain backed by a
 replica 3 gluster volume from 3 of the gluster
 nodes.  The engine is a hosted engine 3.5.1 on
 3 of the Sandy Bridge nodes, with storage
 provided by NFS from a different gluster
 volume.  All the hosts are CentOS 6.6.

 vdsm-4.16.10-8.gitc937927.el6
 glusterfs-3.6.2-1.el6
 2.6.32-504.8.1.el6.x86_64


 Problems happen when I try to add a new brick
 or replace a brick: eventually the self-heal
 will kill the VMs. In the VMs' logs I see
 kernel hung task messages.

 Mar 12 23:05:16 static1 kernel: INFO:
 task nginx:1736 blocked for more than
 120 seconds.
 Mar 12 23:05:16 static1 kernel:

[ovirt-users] about testing scenario

2015-04-04 Thread Leandro Roggerone
Hello everyone, my name is Leandro.
I have been reading about virtualization features and their benefits, so
I am thinking about deploying a virtualized IP core environment.
The main services I will need to run are DNS, DHCP, RADIUS, and OpenVPN.
Since I have never installed oVirt, I would like to deploy a
testing/learning scenario using two i5 laptops with 6 GB of RAM each.
My idea is to run the oVirt engine on one machine and at least 3 virtual
CentOS hosts on the other while I wait for the real servers.
I have no plans to deploy any network storage.

Some questions come to my mind:
For the engine:
Is there any recommended ISO/distro with the oVirt package, or should I
use a machine with Fedora/CentOS already installed?
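
From what I have read, installing the engine on an existing CentOS
machine would go roughly like this (assuming the 3.5 release repo is the
right one for the current version):

    # add the oVirt release repository, then install and configure
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
    yum install ovirt-engine
    engine-setup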

For the node:
Is there any recommended ISO/distro?
Where should I keep the ISO file of the virtualized OS (e.g. CentOS /
RouterOS)?

Is it possible to deploy a virtualized environment without network
storage? I would like to run everything locally.
My services require very fast I/O from the hard disk; my concern is
that, since I only have a 1 Gb network interface, the processes could
experience delays or timeouts waiting for data from the network.
That is why I would like to keep the storage local.

I appreciate your help.
Regards,
Leandro.




[ovirt-users] self hosted storage engine disaster recovery procedure

2015-04-04 Thread Ron V.

Hello,

I am getting familiar with oVirt, using the self-hosted engine on an
iSCSI SAN, and so far everything seems quite impressive.


I am trying to document a disaster recovery mechanism using a full OS
backup as well as an engine backup, as described at
http://www.ovirt.org/Ovirt-engine-backup
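
For reference, the engine backup side follows the documented tool,
roughly like this (file names are just placeholders):

    # on the engine: back up the engine configuration and database
    engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
    # after re-installing the OS and engine packages, restore from it
    engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log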


However, I am unclear how to prepare the existing iSCSI LUN to receive
the restored data.  I am able to destroy the engine while the VMs
continue to run, but I am unclear how I can free the host lock on that
LUN, format it, and re-install on it.  Is there a howto/document I can
refer to in order to do this?


I am looking for information on how to re-install a functional engine
from a backup in the case of data corruption or any other disaster
affecting the iSCSI LUN set aside for the engine, and a means to
re-install on that LUN should the need arise, without having to power
down the VMs or, worse, power everything down and re-install everything
from scratch.  How can I boot a CentOS install disk with the previous
engine's LUN as an install target, and then boot that installed OS to
restore the engine backup data?


Thanks in advance,

Ron