Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-05-07 Thread TomK

On 4/11/2018 11:54 AM, Alex K wrote:

Hey Guys,

Returning to this topic, after disabling the quorum:

cluster.quorum-type: none
cluster.server-quorum-type: none
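
For reference, a minimal sketch of how these options are typically applied 
with the gluster CLI, assuming the volume name gv01 used throughout this 
thread:

  gluster volume set gv01 cluster.quorum-type none
  gluster volume set gv01 cluster.server-quorum-type none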

I've run into a number of gluster errors (see below).

I'm using gluster as the backend for my NFS storage.  I have gluster 
running on two nodes, nfs01 and nfs02, and the volume is mounted on /n on 
each host.  The path /n is in turn shared out by NFS Ganesha.  It's a 
two-node setup with quorum disabled, as noted below:


[root@nfs02 ganesha]# mount|grep gv01
nfs02:/gv01 on /n type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


[root@nfs01 glusterfs]# mount|grep gv01
nfs01:/gv01 on /n type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
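
For reference, a minimal sketch of the equivalent FUSE mount command and a 
persistent fstab entry; the fstab options shown are an assumption rather 
than taken from this setup:

  mount -t glusterfs nfs01:/gv01 /n
  # /etc/fstab equivalent (assumed options):
  # nfs01:/gv01  /n  glusterfs  defaults,_netdev  0 0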


Gluster always reports as working whenever I run the two commands 
below:


[root@nfs01 glusterfs]# gluster volume info

Volume Name: gv01
Type: Replicate
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nfs01:/bricks/0/gv01
Brick2: nfs02:/bricks/0/gv01
Options Reconfigured:
cluster.server-quorum-type: none
cluster.quorum-type: none
server.event-threads: 8
client.event-threads: 8
performance.readdir-ahead: on
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
[root@nfs01 glusterfs]# gluster status
unrecognized word: status (position 0)
[root@nfs01 glusterfs]# gluster volume status
Status of volume: gv01
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick nfs01:/bricks/0/gv01  49152 0  Y 1422
Brick nfs02:/bricks/0/gv01  49152 0  Y 1422
Self-heal Daemon on localhost   N/A   N/A   Y   1248
Self-heal Daemon on nfs02.nix.my.dom   N/A   N/A   Y   1251

Task Status of Volume gv01
--
There are no active volume tasks

[root@nfs01 glusterfs]#

[root@nfs01 glusterfs]# rpm -aq|grep -Ei gluster
glusterfs-3.13.2-2.el7.x86_64
glusterfs-devel-3.13.2-2.el7.x86_64
glusterfs-fuse-3.13.2-2.el7.x86_64
glusterfs-api-devel-3.13.2-2.el7.x86_64
centos-release-gluster313-1.0-1.el7.centos.noarch
python2-gluster-3.13.2-2.el7.x86_64
glusterfs-client-xlators-3.13.2-2.el7.x86_64
glusterfs-server-3.13.2-2.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64
glusterfs-cli-3.13.2-2.el7.x86_64
centos-release-gluster312-1.0-1.el7.centos.noarch
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-libs-3.13.2-2.el7.x86_64
glusterfs-extra-xlators-3.13.2-2.el7.x86_64
glusterfs-api-3.13.2-2.el7.x86_64
[root@nfs01 glusterfs]#

The short of it is that everything works, and mounts on the guests work, 
as long as I don't try to write to the NFS share from my clients.  If I 
try to write to the share, everything comes apart like this:


-sh-4.2$ pwd
/n/my.dom/tom
-sh-4.2$ ls -altri
total 6258
11715278280495367299 -rw---. 1 t...@my.dom t...@my.dom     231 Feb 17 20:15 .bashrc
10937819299152577443 -rw---. 1 t...@my.dom t...@my.dom     193 Feb 17 20:15 .bash_profile
10823746994379198104 -rw---. 1 t...@my.dom t...@my.dom      18 Feb 17 20:15 .bash_logout
10718721668898812166 drwxr-xr-x. 3 root        root        4096 Mar  5 02:46 ..
12008425472191154054 drwx--. 2 t...@my.dom t...@my.dom    4096 Mar 18 03:07 .ssh
13763048923429182948 -rw-rw-r--. 1 t...@my.dom t...@my.dom 6359568 Mar 25 22:38 opennebula-cores.tar.gz
11674701370106210511 -rw-rw-r--. 1 t...@my.dom t...@my.dom       4 Apr  9 23:25 meh.txt
 9326637590629964475 -rw-r--r--. 1 t...@my.dom t...@my.dom   24970 May  1 01:30 nfs-trace-working.dat.gz
 9337343577229627320 -rw---. 1 t...@my.dom t...@my.dom    3734 May  1 23:38 .bash_history
11438151930727967183 drwx--. 3 t...@my.dom t...@my.dom    4096 May  1 23:58 .
 9865389421596220499 -rw-r--r--. 1 t...@my.dom t...@my.dom    4096 May  1 23:58 .meh.txt.swp

-sh-4.2$ touch test.txt
-sh-4.2$ vi test.txt
-sh-4.2$ ls -altri
ls: cannot open directory .: Permission denied
-sh-4.2$ ls -altri
ls: cannot open directory .: Permission denied
-sh-4.2$ ls -altri

This is followed by a slew of other errors in apps using the gluster 
volume.  These errors include:


02/05/2018 23:10:52 : epoch 5aea7bd5 : nfs02.nix.my.dom : 
ganesha.nfsd-5891[svc_12] nfs_rpc_process_request :DISP :INFO :Could not 
authenticate request... rejecting with AUTH_STAT=RPCSEC_GSS_CREDPROBLEM



==> ganesha-gfapi.log <==
[2018-05-03 04:32:18.009245] I [MSGID: 114021] [client.c:2369:notify] 
0-gv01-client-0: current graph is no longer active, destroying rpc_client
[2018-05-03 04:32:18.009338] I [MSGID: 114021] [client.c:2369:noti
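
For reference, the excerpts above come from the usual log locations; a 
sketch of following them live (the ganesha log directory is an assumption, 
while the n.log name follows from the /n mount point):

  tail -f /var/log/ganesha/ganesha-gfapi.log /var/log/glusterfs/n.log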

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-11 Thread Alex K
On Wed, Apr 11, 2018 at 4:35 AM, TomK  wrote:

> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing.  Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables quorum so as to avoid the issue. Glad that this helped.
Bear in mind though that it is easier to face split-brain issues when
quorum is disabled, which is why at least 3 nodes are recommended. Just to
note that I also have a 2 node cluster which has been running without
issues for a long time.
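
For completeness, a rough sketch of how such a 2 node volume is typically
extended with a third (arbiter) node later; nfs03 and its brick path are
hypothetical here:

  gluster peer probe nfs03
  gluster volume add-brick gv01 replica 3 arbiter 1 nfs03:/bricks/0/gv01
  gluster volume set gv01 cluster.quorum-type auto
  gluster volume set gv01 cluster.server-quorum-type server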


> Thank you for that.
>
> Cheers,
> Tom
>
> Hi,
>>
>> You need at least 3 nodes to have quorum enabled. In a 2 node setup you
>> need to disable quorum so as to still be able to use the volume when one
>> of the nodes goes down.
>>
>> On Mon, Apr 9, 2018, 09:02 TomK <tomk...@mdevsys.com> wrote:
>>
>> Hey All,
>>
>> In a two node glusterfs setup, with one node down, I can't use the
>> second node to mount the volume.  I understand this is expected
>> behaviour?  Any way to allow the secondary node to function and then
>> replicate what changed to the first (primary) when it's back online?
>> Or should I just go for a third node to allow for this?
>>
>> Also, how safe is it to set the following to none?
>>
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>>
>>
>> [root@nfs01 /]# gluster volume start gv01
>> volume start: gv01: failed: Quorum not met. Volume operation not
>> allowed.
>> [root@nfs01 /]#
>>
>>
>> [root@nfs01 /]# gluster volume status
>> Status of volume: gv01
>> Gluster process TCP Port  RDMA Port  Online  Pid
>> --
>> Brick nfs01:/bricks/0/gv01  N/A   N/A   N   N/A
>> Self-heal Daemon on localhost   N/A   N/A   Y   25561
>>
>> Task Status of Volume gv01
>> 
>> --
>> There are no active volume tasks
>>
>> [root@nfs01 /]#
>>
>>
>> [root@nfs01 /]# gluster volume info
>>
>> Volume Name: gv01
>> Type: Replicate
>> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: nfs01:/bricks/0/gv01
>> Brick2: nfs02:/bricks/0/gv01
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> nfs.trusted-sync: on
>> performance.cache-size: 1GB
>> performance.io-thread-count: 16
>> performance.write-behind-window-size: 8MB
>> performance.readdir-ahead: on
>> client.event-threads: 8
>> server.event-threads: 8
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> [root@nfs01 /]#
>>
>>
>>
>>
>> ==> n.log <==
>> [2018-04-09 05:08:13.704156] I [MSGID: 100030]
>> [glusterfsd.c:2556:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
>> 3.13.2 (args: /usr/sbin/glusterfs --process-name fuse
>> --volfile-server=nfs01 --volfile-id=/gv01 /n)
>> [2018-04-09 05:08:13.711255] W [MSGID: 101002]
>> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family'
>> is
>> deprecated, preferred is 'transport.address-family', continuing with
>> correction
>> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
>> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not
>> available"
>> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 1
>> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 2
>> [2018-04-09 05:08:13.738114] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 3
>> [2018-04-09 05:08:13.738203] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 4
>> [2018-04-09 05:08:13.738324] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 5
>> [2018-04-09 05:08:13.738330] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 6
>> [2018-04-09 05:08:13.738655] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with index 7
>> [2018-04-09 05:08:13.738742] I [MSGID: 101190]
>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
>> thread
>> with in

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-10 Thread TomK

On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,

With two nodes, the setup works but both sides go down when one node is 
missing.  Still I set the below two params to none and that solved my issue:


cluster.quorum-type: none
cluster.server-quorum-type: none
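
For reference, a quick way to confirm these values are in effect is the
volume-get query; a minimal sketch against the gv01 volume used in this
thread:

  gluster volume get gv01 cluster.quorum-type
  gluster volume get gv01 cluster.server-quorum-type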

Thank you for that.

Cheers,
Tom


Hi,

You need at least 3 nodes to have quorum enabled. In a 2 node setup you 
need to disable quorum so as to still be able to use the volume when one 
of the nodes goes down.


On Mon, Apr 9, 2018, 09:02 TomK wrote:


Hey All,

In a two node glusterfs setup, with one node down, I can't use the second
node to mount the volume.  I understand this is expected behaviour?
Any way to allow the secondary node to function and then replicate what
changed to the first (primary) when it's back online?  Or should I just
go for a third node to allow for this?

Also, how safe is it to set the following to none?

cluster.quorum-type: auto
cluster.server-quorum-type: server


[root@nfs01 /]# gluster volume start gv01
volume start: gv01: failed: Quorum not met. Volume operation not
allowed.
[root@nfs01 /]#


[root@nfs01 /]# gluster volume status
Status of volume: gv01
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick nfs01:/bricks/0/gv01                  N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       25561

Task Status of Volume gv01

--
There are no active volume tasks

[root@nfs01 /]#


[root@nfs01 /]# gluster volume info

Volume Name: gv01
Type: Replicate
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nfs01:/bricks/0/gv01
Brick2: nfs02:/bricks/0/gv01
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
nfs.trusted-sync: on
performance.cache-size: 1GB
performance.io-thread-count: 16
performance.write-behind-window-size: 8MB
performance.readdir-ahead: on
client.event-threads: 8
server.event-threads: 8
cluster.quorum-type: auto
cluster.server-quorum-type: server
[root@nfs01 /]#




==> n.log <==
[2018-04-09 05:08:13.704156] I [MSGID: 100030] [glusterfsd.c:2556:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.13.2 (args: /usr/sbin/glusterfs --process-name fuse
--volfile-server=nfs01 --volfile-id=/gv01 /n)
[2018-04-09 05:08:13.711255] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
deprecated, preferred is 'transport.address-family', continuing with
correction
[2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not
available"
[2018-04-09 05:08:13.729025] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-04-09 05:08:13.737757] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2018-04-09 05:08:13.738114] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 3
[2018-04-09 05:08:13.738203] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 4
[2018-04-09 05:08:13.738324] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 5
[2018-04-09 05:08:13.738330] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 6
[2018-04-09 05:08:13.738655] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 7
[2018-04-09 05:08:13.738742] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 8
[2018-04-09 05:08:13.739460] W [MSGID: 101174]
[graph.c:363:_log_if_unknown_option] 0-gv01-readdir-ahead: option
'parallel-readdir' is not recognized
[2018-04-09 05:08:13.739787] I [MSGID: 114020] [client.c:2360:notify]
0-gv01-client-0: parent translators are ready, attempting connect on
transport
[2018-04-09 05:08:13.747040] W [socket.c:3216:socket_connect]
0-gv01-client-0: Error disabling sockopt IPV6_V6ONLY: "Protocol not
available"
[2018-04-09 05:08:13.747372] I [MSGID: 114020] [client.c:2360:notify]
0-gv01-client-1: parent translators are ready, attempting connect on
transport
[2018-04-09 05:08:13.747883] E [MSGID: 114058]
[client-hands

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-08 Thread Alex K
Hi,

You need at least 3 nodes to have quorum enabled. In a 2 node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.

On Mon, Apr 9, 2018, 09:02 TomK  wrote:

> Hey All,
>
> In a two node glusterfs setup, with one node down, I can't use the second
> node to mount the volume.  I understand this is expected behaviour?
> Any way to allow the secondary node to function and then replicate what
> changed to the first (primary) when it's back online?  Or should I just
> go for a third node to allow for this?
>
> Also, how safe is it to set the following to none?
>
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
>
>
> [root@nfs01 /]# gluster volume start gv01
> volume start: gv01: failed: Quorum not met. Volume operation not allowed.
> [root@nfs01 /]#
>
>
> [root@nfs01 /]# gluster volume status
> Status of volume: gv01
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick nfs01:/bricks/0/gv01  N/A   N/A   N   N/A
> Self-heal Daemon on localhost   N/A   N/A   Y   25561
>
> Task Status of Volume gv01
>
> --
> There are no active volume tasks
>
> [root@nfs01 /]#
>
>
> [root@nfs01 /]# gluster volume info
>
> Volume Name: gv01
> Type: Replicate
> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: nfs01:/bricks/0/gv01
> Brick2: nfs02:/bricks/0/gv01
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> nfs.trusted-sync: on
> performance.cache-size: 1GB
> performance.io-thread-count: 16
> performance.write-behind-window-size: 8MB
> performance.readdir-ahead: on
> client.event-threads: 8
> server.event-threads: 8
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> [root@nfs01 /]#
>
>
>
>
> ==> n.log <==
> [2018-04-09 05:08:13.704156] I [MSGID: 100030] [glusterfsd.c:2556:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
> 3.13.2 (args: /usr/sbin/glusterfs --process-name fuse
> --volfile-server=nfs01 --volfile-id=/gv01 /n)
> [2018-04-09 05:08:13.711255] W [MSGID: 101002]
> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
> deprecated, preferred is 'transport.address-family', continuing with
> correction
> [2018-04-09 05:08:13.728297] W [socket.c:3216:socket_connect]
> 0-glusterfs: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
> [2018-04-09 05:08:13.729025] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-04-09 05:08:13.737757] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2018-04-09 05:08:13.738114] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 3
> [2018-04-09 05:08:13.738203] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 4
> [2018-04-09 05:08:13.738324] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 5
> [2018-04-09 05:08:13.738330] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 6
> [2018-04-09 05:08:13.738655] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 7
> [2018-04-09 05:08:13.738742] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 8
> [2018-04-09 05:08:13.739460] W [MSGID: 101174]
> [graph.c:363:_log_if_unknown_option] 0-gv01-readdir-ahead: option
> 'parallel-readdir' is not recognized
> [2018-04-09 05:08:13.739787] I [MSGID: 114020] [client.c:2360:notify]
> 0-gv01-client-0: parent translators are ready, attempting connect on
> transport
> [2018-04-09 05:08:13.747040] W [socket.c:3216:socket_connect]
> 0-gv01-client-0: Error disabling sockopt IPV6_V6ONLY: "Protocol not
> available"
> [2018-04-09 05:08:13.747372] I [MSGID: 114020] [client.c:2360:notify]
> 0-gv01-client-1: parent translators are ready, attempting connect on
> transport
> [2018-04-09 05:08:13.747883] E [MSGID: 114058]
> [client-handshake.c:1571:client_query_portmap_cbk] 0-gv01-client-0:
> failed to get the port number for remote subvolume. Please run 'gluster
> volume status' on server to see if brick process is running.
> [2018-04-09 05:08:13.748026] I [MSGID: 114018]
> [client.c:2285:client_rpc_notify] 0-gv01-client-0: disconnected from
> gv01-client-0. Client process will keep trying to connect to glusterd
> until brick's port is available
> [2018-04-09 05:08:13.748070] W [MSGID: 108001]
> [afr-common.c:5391:afr_notify] 0-gv01-replicate-0: Clie