On 04/03/2015 10:04 PM, Alastair Neil wrote:
Any follow-up on this?
Are there known issues using a replica 3 gluster datastore with LVM
thin-provisioned bricks?
On 20 March 2015 at 15:22, Alastair Neil ajneil.t...@gmail.com wrote:
CentOS 6.6
vdsm-4.16.10-8.gitc937927.el6
glusterfs-3.6.2-1.el6
2.6.32-504.8.1.el6.x86_64
I moved to 3.6 specifically to get the snapshotting feature, hence
my desire to migrate to thinly provisioned LVM bricks.
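For context, gluster volume snapshots need every brick of the volume to sit
on a thinly provisioned LV, so the migration would look roughly like this --
a sketch only, with assumed VG/pool/brick names and placeholder sizes:

  lvcreate -L 500G --thinpool thinpool vg_bricks        # create a thin pool
  lvcreate -V 3T --thin -n brick1 vg_bricks/thinpool    # thin LV for one brick
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1
  mount /dev/vg_bricks/brick1 /bricks/brick1
  # once every brick is on a thin LV, volume snapshots become possible:
  gluster snapshot create snap1 vmstore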
Well, on the glusterfs mailing list there have been discussions:
3.6.2 is a major release and introduces some new cluster-wide
features. Additionally, it is not stable yet.
On 20 March 2015 at 14:57, Darrell Budic bu...@onholyground.com wrote:
What version of gluster are you running on these?
I’ve seen high load during heals bounce my hosted engine
around due to overall system load, but never pause anything
else. CentOS 7 combo storage/host systems, gluster 3.5.2.
On Mar 20, 2015, at 9:57 AM, Alastair Neil ajneil.t...@gmail.com wrote:
Pranith
I have run a pretty straightforward test. I created a two-brick
50 GB replica volume with normal LVM bricks, and
installed two servers, one CentOS 6.6 and one CentOS 7.0. I
kicked off bonnie++ on both to generate some file-system
activity and then made the volume replica 3. I saw no issues
on the servers.
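Roughly, the test looked like this (a sketch; the hostnames and brick
paths here are assumptions):

  gluster volume create testvol replica 2 gl1:/bricks/test/b1 gl2:/bricks/test/b1
  gluster volume start testvol
  # install the two guests on the volume, start bonnie++ in each, then grow it:
  gluster volume add-brick testvol replica 3 gl3:/bricks/test/b1
  gluster volume heal testvol info    # watch the resulting self-heal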
I am not clear if this is a sufficiently rigorous test; the
volume I have had issues on is a 3 TB volume with about 2 TB used.
-Alastair
On 19 March 2015 at 12:30, Alastair Neil ajneil.t...@gmail.com wrote:
I don't think I have the resources to test it
meaningfully. I have about 50 VMs on my primary storage
domain. I might be able to set up a small 50 GB volume
and provision 2 or 3 VMs running test loads, but I'm not
sure it would be comparable. I'll give it a try and let
you know if I see similar behaviour.
On 19 March 2015 at 11:34, Pranith Kumar Karampuri pkara...@redhat.com wrote:
Without thinly provisioned LVM.
Pranith
On 03/19/2015 08:01 PM, Alastair Neil wrote:
Do you mean raw partitions as bricks, or simply
without thin-provisioned LVM?
On 19 March 2015 at 00:32, Pranith Kumar Karampuri pkara...@redhat.com wrote:
Could you let me know if you see this problem
without lvm as well?
Pranith
On 03/18/2015 08:25 PM, Alastair Neil wrote:
I am in the process of replacing the bricks
with thinly provisioned LVs, yes.
On 18 March 2015 at 09:35, Pranith Kumar Karampuri pkara...@redhat.com wrote:
Hi,
Are you using a thin-LVM-based backend
on which the bricks are created?
Pranith
On 03/18/2015 02:05 AM, Alastair Neil wrote:
I have an oVirt cluster with 6 VM hosts and
4 gluster nodes. There are two
virtualisation clusters, one with two
Nehalem nodes and one with four
Sandy Bridge nodes. My master storage
domain is a GlusterFS domain backed by a replica
3 gluster volume from 3 of the gluster
nodes. The engine is a hosted engine
3.5.1 on 3 of the Sandy Bridge nodes, with
storage provided by NFS from a different
gluster volume. All the hosts are CentOS
6.6.
vdsm-4.16.10-8.gitc937927.el6
glusterfs-3.6.2-1.el6
2.6.32-504.8.1.el6.x86_64
Problems happen when I try to add a new
brick or replace a brick: eventually the
self-heal will kill the VMs. In the VMs'
logs I see kernel hung-task messages.
Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than 120 seconds.
Mar 12 23:05:16 static1 kernel:
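For reference, the brick operations that precede these hangs are roughly
the following (a sketch only; the volume name and brick paths are
assumptions):

  # adding a brick to raise the replica count
  gluster volume add-brick vmstore replica 3 gl4:/bricks/vmstore/b1
  # or replacing an existing brick
  gluster volume replace-brick vmstore gl1:/bricks/vmstore/b1 gl4:/bricks/vmstore/b1 commit force
  # either operation kicks off a full self-heal; its progress can be watched with:
  gluster volume heal vmstore info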