Hello,
As you might have read in my previous post on the mailing list, I have added an
arbiter node to my GlusterFS 3.8.11 replica 2 volume. After some healing issues,
which could be fixed with Ravi's help, I have now noticed that my quotas are
all gone.
When I run the following command:
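(Purely as an illustration, and only a guess at what was being run - quota
state is usually checked with something like the following; <VOLNAME> is a
placeholder:)

$ gluster volume quota <VOLNAME> list          # configured limits and current usage
$ gluster volume info <VOLNAME> | grep quota   # whether features.quota is still "on"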
Hi, community
I have a large distributed-replicated GlusterFS volume that contains a
few hundred VM images. The servers are connected by a 20 Gb/s link.
When I start operations like healing or removing, storage
performance becomes too low for a few days and the server load looks like
this:
13:06:32
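(Not a definitive fix, but a sketch of the AFR/self-heal knobs that are often
turned down to keep healing from starving client I/O; <VOLNAME> is a
placeholder and the option names should be verified against your release with
'gluster volume set help':)

$ gluster volume set <VOLNAME> cluster.shd-max-threads 1              # keep per-brick heal parallelism minimal
$ gluster volume set <VOLNAME> cluster.background-self-heal-count 1   # fewer background heals triggered from clients
$ gluster volume set <VOLNAME> cluster.data-self-heal-algorithm diff  # transfer only changed blocks, not whole files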
But I had not killed anything, unless the system did it for some
reason, and silently, but I don't think so.
It seems that one brick in particular is ill about all of this.
I have to restart it, but mostly this does not help and I
actually reboot the system; then for a short while it is
ok, only soon
Could you share the output of 'gluster volume info'?
On Tue, Aug 1, 2017 at 5:03 PM, lejeczek wrote:
> .. is this default/desired behaviour?
>
> And is this configurable/controllable behaviour?
> I'm thinking - it would be nice not to have whole vol go read-only(three
>
This means the shd client is not able to establish the connection with the
brick on port 49155. Now this could happen if glusterd has ended up
providing a stale port back, which is not what the brick is listening on. If you
had killed any brick process using the SIGKILL signal instead of SIGTERM, this
is
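(A hedged way to check for the stale-port situation described above - <VOLNAME>
is a placeholder:)

$ gluster volume status <VOLNAME>      # port glusterd advertises for each brick
$ ss -tlnp | grep glusterfsd           # ports the brick processes actually listen on
# if the two disagree, restarting the brick normally re-registers the port:
$ gluster volume start <VOLNAME> force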
Are you referring to the "Other names" in the peer status output? If so, then a
peerinfo entry having other names populated means the host might have
multiple network interfaces, or the reverse address resolution is picking up this
name. But why are you worried about this part?
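(If you want to see where the extra name comes from, a rough check - the IP is
a placeholder, and /var/lib/glusterd is the usual but not guaranteed location:)

$ getent hosts 10.0.0.1                    # what reverse/hosts resolution returns for the peer's address
$ grep hostname /var/lib/glusterd/peers/*  # names glusterd has recorded for each peer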
On Tue, 1 Aug 2017 at 23:24, peljasz
Adding Mohit, who has been experimenting with cgroups and has found a way to
restrict glustershd's CPU usage using them. Mohit, maybe you want to
share the steps we need to follow to apply cgroups only to glustershd.
Thanks.
Ravi
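(Until Mohit posts the actual steps, here is only a rough sketch of the general
idea, assuming cgroups v1 with the cpu controller mounted at /sys/fs/cgroup/cpu;
the values are examples, not recommendations:)

# create a cgroup capped at roughly 25% of one CPU
$ mkdir /sys/fs/cgroup/cpu/glustershd
$ echo 100000 > /sys/fs/cgroup/cpu/glustershd/cpu.cfs_period_us
$ echo 25000 > /sys/fs/cgroup/cpu/glustershd/cpu.cfs_quota_us
# move only the self-heal daemon's pid into it
$ echo $(pgrep -f glustershd) > /sys/fs/cgroup/cpu/glustershd/tasks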
On 08/01/2017 03:46 PM, Alexey Zakurin wrote:
Hi, community
I
.. is this default/desired behaviour?
And is this configurable/controllable behaviour?
I'm thinking - it would be nice not to have the whole vol go
read-only (three peers in the cluster), but at the same time have
gluster alert/highlight the problem to a user/admin.
ver. 3.10.3
thanks.
L.
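(The read-only behaviour is driven by the quorum settings; a hedged sketch of
how to inspect and adjust them - values are examples only, and weakening quorum
trades availability against split-brain risk:)

$ gluster volume get <VOLNAME> all | grep quorum          # current client/server quorum settings
$ gluster volume set <VOLNAME> cluster.quorum-type auto   # client-side quorum: none/fixed/auto
$ gluster volume set all cluster.server-quorum-ratio 51%  # server-side quorum, cluster-wide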
how critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
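(When the full 'detail' query keeps timing out, a hedged way to narrow it down
is to query one brick or one lighter aspect at a time - host1:/bricks/b1 is a
placeholder:)

$ gluster volume status $_vol host1:/bricks/b1 detail   # one brick instead of the whole volume
$ gluster volume status $_vol clients                   # lighter query than 'detail'
# the glusterd log under /var/log/glusterfs/ usually shows which brick the request is stuck on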
hi
how to get rid of entries in "Other names"?
thanks
L.
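(One approach that has been used, as a hedged sketch rather than an official
procedure: the extra names live in the peer files under /var/lib/glusterd/peers/
and are edited with glusterd stopped on every node:)

$ systemctl stop glusterd                     # on every peer
$ cat /var/lib/glusterd/peers/<PEER-UUID>     # lines look like uuid=, state=, hostname1=, hostname2=, ...
# delete the unwanted hostnameN= line(s) on every peer, then:
$ systemctl start glusterd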
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users