Hi,
In the past couple of weeks, we have sent the following fixes addressing VM
corruption during rebalance:
https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051
These fixes are included in the latest 3.10.2 release.
Satheesaran, Red Hat
On 05/16/2017 11:13 PM, mabi wrote:
Today I even saw up to 400k context switches for around 30 minutes on
my two-node replica... Does anyone else see context switches this high
on their GlusterFS nodes?
I am wondering what is "normal" and if I should be worried...
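(For anyone wanting to put numbers on this, a rough sketch of how the figure
can be watched on a node; the 1-second interval and 5 samples are arbitrary.)

  vmstat 1 5    # the "cs" column is context switches per second
  sar -w 1 5    # cswch/s from sysstat, the same figure sampled over time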
Original Message
Subject: 120k context switches
On 04/13/17 23:50, Pranith Kumar Karampuri wrote:
On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N wrote:
Hi Pat,
I'm assuming you are using gluster native (fuse mount). If it
helps, you could try mounting it via gluster
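(A minimal sketch of a gluster native FUSE mount, for reference; the server
and volume names are placeholders, not ones from this thread.)

  mount -t glusterfs server1:/myvol /mnt/glusterfs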
On 05/10/17 14:18, Pat Haley wrote:
Hi Pranith,
Since we are mounting the partitions as the bricks, I tried the dd
test writing to
/.glusterfs/. The results
without oflag=sync were 1.6 Gb/s (faster than gluster but not as fast
as I was expecting given the 1.2 Gb/s to the no-gluster area w/
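(A sketch of the kind of dd test being discussed; the brick path, block size
and count here are placeholders, not the values used above.)

  dd if=/dev/zero of=/path/to/brick/testfile bs=1M count=1024 conv=fdatasync
  dd if=/dev/zero of=/path/to/brick/testfile bs=1M count=1024 oflag=sync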
Hi Pranith,
Sorry for the delay. I never received your reply (but I did receive
Ben Turner's follow-up to your reply). So we tried to create a gluster
volume under /home using different variations of
gluster volume create test-volume mseas-data2:/home/gbrick_test_1
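(For reference, one complete form that command could take; the second brick
path and the start step are illustrative, not taken from the thread.)

  gluster volume create test-volume \
      mseas-data2:/home/gbrick_test_1 mseas-data2:/home/gbrick_test_2
  gluster volume start test-volume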
Hi,
I have three servers in the linked list topology [1], GlusterFS 3.8.10,
CentOS 7. Each server has two bricks, both on the same XFS filesystem.
The XFS is constructed over the whole MD RAID device:
md5 : active raid5 sdj1[6] sdh1[8] sde1[2] sdg1[9] sdd1[1] sdi1[5]
sdf1[3] sdc1[0]
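(A small sketch of how such a layout can be confirmed; the mount point is a
placeholder, not the one actually used here.)

  cat /proc/mdstat     # shows the md5 raid5 array quoted above
  xfs_info /bricks     # geometry of the XFS built over the md device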
Hi, all!
I erased the VG that held the snapshot LVs related to the gluster volumes,
and then I tried to restore the volume:
1. vgcreate vg_cluster /dev/sdb
2. lvcreate --size=10G --type=thin-pool -n tp_cluster vg_cluster
3. lvcreate -V 5G --thinpool vg_cluster/tp_cluster -n test_vol vg_cluster
4. gluster v stop
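(For reference only: after step 3 the recreated LV would typically still need
a filesystem and a mount point before the brick can be used again; the device
path and mount point below are assumptions, not from the original mail.)

  mkfs.xfs /dev/vg_cluster/test_vol
  mkdir -p /bricks/test_vol
  mount /dev/vg_cluster/test_vol /bricks/test_vol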
Hi all,
I've hit a strange problem with geo-replication.
On gluster 3.10.1, I have set up geo replication between my replicated /
distributed instance and a remote replicated / distributed instance. The
master and slave instances are connected via VPN. Initially the
geo-replication setup was
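(For context, a sketch of the usual command sequence for this kind of setup;
the volume names and the slave host are placeholders.)

  gluster volume geo-replication mastervol slavehost::slavevol create push-pem
  gluster volume geo-replication mastervol slavehost::slavevol start
  gluster volume geo-replication mastervol slavehost::slavevol status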
Hi All.
I have a 9-node dockerized GlusterFS cluster and I am seeing the following
situation:
1) the docker daemon on the 8th node fails and, as a result, glusterd on that
node leaves the cluster
2) as a result, on the 1st node I see a message about the 8th node being unavailable:
[2017-05-15 12:48:22.142865] I
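(A sketch of how the remaining peers can be checked from any surviving node;
nothing here is specific to this cluster.)

  gluster peer status
  gluster pool list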