Hi Ravi,

Please see the log attached. The output of "gluster volume status" is as
follows. Should there be something listening on gfs3? I'm not sure whether
the TCP Port and Pid showing as N/A is a symptom or a cause. Thank you.

# gluster volume status
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7624
Brick gfs3:/nodirectwritedata/gluster/gvol0 N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       19853
Self-heal Daemon on gfs1                    N/A       N/A        Y       28600
Self-heal Daemon on gfs2                    N/A       N/A        Y       17614

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
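
As a quick sanity check, the offline brick can be picked out of that status
output mechanically. A minimal sketch (my own helper, not a Gluster tool,
assuming the fixed column order Brick / TCP Port / RDMA Port / Online / Pid
shown above):

```python
# Flag bricks whose "gluster volume status" line reports no TCP port
# or Online = N, based on the whitespace-separated layout above.
def offline_bricks(status_text):
    offline = []
    for line in status_text.splitlines():
        if not line.startswith("Brick "):
            continue
        fields = line.split()
        # Brick <host:path> <tcp-port> <rdma-port> <online> <pid>
        host_path, tcp_port, online = fields[1], fields[2], fields[4]
        if tcp_port == "N/A" or online == "N":
            offline.append(host_path)
    return offline

sample = """\
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7624
Brick gfs3:/nodirectwritedata/gluster/gvol0 N/A       N/A        N       N/A
"""
print(offline_bricks(sample))  # ['gfs3:/nodirectwritedata/gluster/gvol0']
```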


On Wed, 22 May 2019 at 18:06, Ravishankar N <[email protected]> wrote:

> If you are trying this again, please run `gluster volume set $volname
> client-log-level DEBUG` before attempting the add-brick, and attach the
> gvol0-add-brick-mount.log here. Afterwards, you can change the
> client-log-level back to INFO.
>
> -Ravi
> On 22/05/19 11:32 AM, Ravishankar N wrote:
>
>
> On 22/05/19 11:23 AM, David Cunningham wrote:
>
> Hi Ravi,
>
> I'd already done exactly that before, where step 3 was a simple 'rm -rf
> /nodirectwritedata/gluster/gvol0'. Do you have another suggestion for what
> the cleanup or reformat should be?
>
> `rm -rf /nodirectwritedata/gluster/gvol0` does look okay to me, David.
> Basically, '/nodirectwritedata/gluster/gvol0' must be empty and must not
> have any extended attributes set on it. Why fuse_first_lookup() is failing
> is a bit of a mystery to me at this point. :-(
> Regards,
> Ravi
>
>
> Thank you.
>
>
> On Wed, 22 May 2019 at 13:56, Ravishankar N <[email protected]>
> wrote:
>
>> Hmm, so the volume info seems to indicate that the add-brick was
>> successful but the gfid xattr is missing on the new brick (as are the
>> actual files, barring the .glusterfs folder, according to your previous
>> mail).
>>
>> Do you want to try removing and adding it again?
>>
>> 1. `gluster volume remove-brick gvol0 replica 2
>> gfs3:/nodirectwritedata/gluster/gvol0 force` from gfs1
>>
>> 2. Check that gluster volume info is now back to a 1x2 volume on all
>> nodes and `gluster peer status` is connected on all nodes.
>>
>> 3. Cleanup or reformat '/nodirectwritedata/gluster/gvol0' on gfs3.
>>
>> 4. `gluster volume add-brick gvol0 replica 3 arbiter 1
>> gfs3:/nodirectwritedata/gluster/gvol0` from gfs1.
>>
>> 5. Check that the files are getting healed on to the new brick.
>> Thanks,
>> Ravi
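
Regarding step 3, the "empty and no extended attributes" requirement can be
checked programmatically before re-running the add-brick. A sketch
(`brick_is_clean` is a hypothetical helper of mine, Linux-only; note that
trusted.* xattrs are only visible when running as root):

```python
import os

def brick_is_clean(path):
    """True if `path` is an empty directory with no extended attributes,
    i.e. plausibly safe to reuse as a fresh brick directory."""
    if os.listdir(path):       # leftover files, including .glusterfs
        return False
    if os.listxattr(path):     # leftover trusted.* xattrs (needs root to see)
        return False
    return True
```

For example, `brick_is_clean('/nodirectwritedata/gluster/gvol0')` on gfs3
should return True after the cleanup in step 3.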
>> On 22/05/19 6:50 AM, David Cunningham wrote:
>>
>> Hi Ravi,
>>
>> Certainly. On the existing two nodes:
>>
>> gfs1 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0
>> getfattr: Removing leading '/' from absolute path names
>> # file: nodirectwritedata/gluster/gvol0
>> trusted.afr.dirty=0x000000000000000000000000
>> trusted.afr.gvol0-client-2=0x000000000000000000000000
>> trusted.gfid=0x00000000000000000000000000000001
>> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>> trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6
>>
>> gfs2 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0
>> getfattr: Removing leading '/' from absolute path names
>> # file: nodirectwritedata/gluster/gvol0
>> trusted.afr.dirty=0x000000000000000000000000
>> trusted.afr.gvol0-client-0=0x000000000000000000000000
>> trusted.afr.gvol0-client-2=0x000000000000000000000000
>> trusted.gfid=0x00000000000000000000000000000001
>> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>> trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6
>>
>> On the new node:
>>
>> gfs3 # getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0
>> getfattr: Removing leading '/' from absolute path names
>> # file: nodirectwritedata/gluster/gvol0
>> trusted.afr.dirty=0x000000000000000000000001
>> trusted.glusterfs.volume-id=0xfb5af69e1c3e41648b23c1d7bec9b1b6
>>
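>>
For reference, those 12-byte trusted.afr.* values unpack into three
big-endian 32-bit counters (pending data, metadata, and entry operations),
so gfs3's dirty value ending in 01 would read as one pending entry
operation. A decoding sketch, assuming that standard AFR layout:

```python
import struct

def decode_afr(hex_value):
    """Split a trusted.afr.* value like 0x000000000000000000000001 into
    its (data, metadata, entry) pending-operation counters."""
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    data, metadata, entry = struct.unpack(">III", raw)
    return {"data": data, "metadata": metadata, "entry": entry}

print(decode_afr("0x000000000000000000000001"))
# {'data': 0, 'metadata': 0, 'entry': 1}
```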
>> Output of "gluster volume info" is the same on all 3 nodes and is:
>>
>> # gluster volume info
>>
>> Volume Name: gvol0
>> Type: Replicate
>> Volume ID: fb5af69e-1c3e-4164-8b23-c1d7bec9b1b6
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfs1:/nodirectwritedata/gluster/gvol0
>> Brick2: gfs2:/nodirectwritedata/gluster/gvol0
>> Brick3: gfs3:/nodirectwritedata/gluster/gvol0 (arbiter)
>> Options Reconfigured:
>> performance.client-io-threads: off
>> nfs.disable: on
>> transport.address-family: inet
>>
>>
>> On Wed, 22 May 2019 at 12:43, Ravishankar N <[email protected]>
>> wrote:
>>
>>> Hi David,
>>> Could you provide the `getfattr -d -m. -e hex
>>> /nodirectwritedata/gluster/gvol0` output of all bricks and the output of
>>> `gluster volume info`?
>>>
>>> Thanks,
>>> Ravi
>>> On 22/05/19 4:57 AM, David Cunningham wrote:
>>>
>>> Hi Sanju,
>>>
>>> Here's what glusterd.log says on the new arbiter server when trying to
>>> add the node:
>>>
>>> [2019-05-22 00:15:05.963059] I [run.c:242:runner_log]
>>> (-->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0x3b2cd)
>>> [0x7fe4ca9102cd]
>>> -->/usr/lib64/glusterfs/5.6/xlator/mgmt/glusterd.so(+0xe6b85)
>>> [0x7fe4ca9bbb85] -->/lib64/libglusterfs.so.0(runner_log+0x115)
>>> [0x7fe4d5ecc955] ) 0-management: Ran script:
>>> /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
>>> --volname=gvol0 --version=1 --volume-op=add-brick
>>> --gd-workdir=/var/lib/glusterd
>>> [2019-05-22 00:15:05.963177] I [MSGID: 106578]
>>> [glusterd-brick-ops.c:1355:glusterd_op_perform_add_bricks] 0-management:
>>> replica-count is set 3
>>> [2019-05-22 00:15:05.963228] I [MSGID: 106578]
>>> [glusterd-brick-ops.c:1360:glusterd_op_perform_add_bricks] 0-management:
>>> arbiter-count is set 1
>>> [2019-05-22 00:15:05.963257] I [MSGID: 106578]
>>> [glusterd-brick-ops.c:1364:glusterd_op_perform_add_bricks] 0-management:
>>> type is set 0, need to change it
>>> [2019-05-22 00:15:17.015268] E [MSGID: 106053]
>>> [glusterd-utils.c:13942:glusterd_handle_replicate_brick_ops] 0-management:
>>> Failed to set extended attribute trusted.add-brick : Transport endpoint is
>>> not connected [Transport endpoint is not connected]
>>> [2019-05-22 00:15:17.036479] E [MSGID: 106073]
>>> [glusterd-brick-ops.c:2595:glusterd_op_add_brick] 0-glusterd: Unable to add
>>> bricks
>>> [2019-05-22 00:15:17.036595] E [MSGID: 106122]
>>> [glusterd-mgmt.c:299:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit
>>> failed.
>>> [2019-05-22 00:15:17.036710] E [MSGID: 106122]
>>> [glusterd-mgmt-handler.c:594:glusterd_handle_commit_fn] 0-management:
>>> commit failed on operation Add brick
>>>
>>> As before, gvol0-add-brick-mount.log said:
>>>
>>> [2019-05-22 00:15:17.005695] I [fuse-bridge.c:4267:fuse_init]
>>> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
>>> 7.22
>>> [2019-05-22 00:15:17.005749] I [fuse-bridge.c:4878:fuse_graph_sync]
>>> 0-fuse: switched to graph 0
>>> [2019-05-22 00:15:17.010101] E [fuse-bridge.c:4336:fuse_first_lookup]
>>> 0-fuse: first lookup on root failed (Transport endpoint is not connected)
>>> [2019-05-22 00:15:17.014217] W [fuse-bridge.c:897:fuse_attr_cbk]
>>> 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
>>> [2019-05-22 00:15:17.015097] W
>>> [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse:
>>> 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint
>>> is not connected)
>>> [2019-05-22 00:15:17.015158] W [fuse-bridge.c:3294:fuse_setxattr_resume]
>>> 0-glusterfs-fuse: 3: SETXATTR 00000000-0000-0000-0000-000000000001/1
>>> (trusted.add-brick) resolution failed
>>> [2019-05-22 00:15:17.035636] I [fuse-bridge.c:5144:fuse_thread_proc]
>>> 0-fuse: initating unmount of /tmp/mntYGNbj9
>>> [2019-05-22 00:15:17.035854] W [glusterfsd.c:1500:cleanup_and_exit]
>>> (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f7745ccedd5]
>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55c81b63de75]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55c81b63dceb] ) 0-:
>>> received signum (15), shutting down
>>> [2019-05-22 00:15:17.035942] I [fuse-bridge.c:5914:fini] 0-fuse:
>>> Unmounting '/tmp/mntYGNbj9'.
>>> [2019-05-22 00:15:17.035966] I [fuse-bridge.c:5919:fini] 0-fuse: Closing
>>> fuse connection to '/tmp/mntYGNbj9'.
>>>
>>> Here are the processes running on the new arbiter server:
>>> # ps -ef | grep gluster
>>> root      3466     1  0 20:13 ?        00:00:00 /usr/sbin/glusterfs -s
>>> localhost --volfile-id gluster/glustershd -p
>>> /var/run/gluster/glustershd/glustershd.pid -l
>>> /var/log/glusterfs/glustershd.log -S
>>> /var/run/gluster/24c12b09f93eec8e.socket --xlator-option
>>> *replicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name
>>> glustershd
>>> root      6832     1  0 May16 ?        00:02:10 /usr/sbin/glusterd -p
>>> /var/run/glusterd.pid --log-level INFO
>>> root     17841     1  0 May16 ?        00:00:58 /usr/sbin/glusterfs
>>> --process-name fuse --volfile-server=gfs1 --volfile-id=/gvol0 /mnt/glusterfs
>>>
>>> Here are the files created on the new arbiter server:
>>> # find /nodirectwritedata/gluster/gvol0 | xargs ls -ald
>>> drwxr-xr-x 3 root root 4096 May 21 20:15 /nodirectwritedata/gluster/gvol0
>>> drw------- 2 root root 4096 May 21 20:15
>>> /nodirectwritedata/gluster/gvol0/.glusterfs
>>>
>>> Thank you for your help!
>>>
>>>
>>> On Tue, 21 May 2019 at 00:10, Sanju Rakonde <[email protected]> wrote:
>>>
>>>> David,
>>>>
>>>> Can you please attach glusterd.log? As the error message says the
>>>> commit failed on the arbiter node, we might be able to find some issue
>>>> on that node.
>>>>
>>>> On Mon, May 20, 2019 at 10:10 AM Nithya Balachandran <
>>>> [email protected]> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, 17 May 2019 at 06:01, David Cunningham <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> We're adding an arbiter node to an existing volume and having an
>>>>>> issue. Can anyone help? The root cause error appears to be
>>>>>> "00000000-0000-0000-0000-000000000001: failed to resolve (Transport
>>>>>> endpoint is not connected)", as below.
>>>>>>
>>>>>> We are running glusterfs 5.6.1. Thanks in advance for any assistance!
>>>>>>
>>>>>> On existing node gfs1, trying to add new arbiter node gfs3:
>>>>>>
>>>>>> # gluster volume add-brick gvol0 replica 3 arbiter 1
>>>>>> gfs3:/nodirectwritedata/gluster/gvol0
>>>>>> volume add-brick: failed: Commit failed on gfs3. Please check log
>>>>>> file for details.
>>>>>>
>>>>>
>>>>> This looks like a glusterd issue. Please check the glusterd logs for
>>>>> more info.
>>>>> Adding the glusterd dev to this thread. Sanju, can you take a look?
>>>>>
>>>>> Regards,
>>>>> Nithya
>>>>>
>>>>>>
>>>>>> On new node gfs3 in gvol0-add-brick-mount.log:
>>>>>>
>>>>>> [2019-05-17 01:20:22.689721] I [fuse-bridge.c:4267:fuse_init]
>>>>>> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 
>>>>>> kernel
>>>>>> 7.22
>>>>>> [2019-05-17 01:20:22.689778] I [fuse-bridge.c:4878:fuse_graph_sync]
>>>>>> 0-fuse: switched to graph 0
>>>>>> [2019-05-17 01:20:22.694897] E [fuse-bridge.c:4336:fuse_first_lookup]
>>>>>> 0-fuse: first lookup on root failed (Transport endpoint is not connected)
>>>>>> [2019-05-17 01:20:22.699770] W
>>>>>> [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse:
>>>>>> 00000000-0000-0000-0000-000000000001: failed to resolve (Transport 
>>>>>> endpoint
>>>>>> is not connected)
>>>>>> [2019-05-17 01:20:22.699834] W
>>>>>> [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 2: SETXATTR
>>>>>> 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution 
>>>>>> failed
>>>>>> [2019-05-17 01:20:22.715656] I [fuse-bridge.c:5144:fuse_thread_proc]
>>>>>> 0-fuse: initating unmount of /tmp/mntQAtu3f
>>>>>> [2019-05-17 01:20:22.715865] W [glusterfsd.c:1500:cleanup_and_exit]
>>>>>> (-->/lib64/libpthread.so.0(+0x7dd5) [0x7fb223bf6dd5]
>>>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x560886581e75]
>>>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x560886581ceb] ) 0-:
>>>>>> received signum (15), shutting down
>>>>>> [2019-05-17 01:20:22.715926] I [fuse-bridge.c:5914:fini] 0-fuse:
>>>>>> Unmounting '/tmp/mntQAtu3f'.
>>>>>> [2019-05-17 01:20:22.715953] I [fuse-bridge.c:5919:fini] 0-fuse:
>>>>>> Closing fuse connection to '/tmp/mntQAtu3f'.
>>>>>>
>>>>>> Processes running on new node gfs3:
>>>>>>
>>>>>> # ps -ef | grep gluster
>>>>>> root      6832     1  0 20:17 ?        00:00:00 /usr/sbin/glusterd -p
>>>>>> /var/run/glusterd.pid --log-level INFO
>>>>>> root     15799     1  0 20:17 ?        00:00:00 /usr/sbin/glusterfs
>>>>>> -s localhost --volfile-id gluster/glustershd -p
>>>>>> /var/run/gluster/glustershd/glustershd.pid -l
>>>>>> /var/log/glusterfs/glustershd.log -S
>>>>>> /var/run/gluster/24c12b09f93eec8e.socket --xlator-option
>>>>>> *replicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name
>>>>>> glustershd
>>>>>> root     16856 16735  0 21:21 pts/0    00:00:00 grep --color=auto
>>>>>> gluster
>>>>>>
>>>>>> --
>>>>>> David Cunningham, Voisonics Limited
>>>>>> http://voisonics.com/
>>>>>> USA: +1 213 221 1092
>>>>>> New Zealand: +64 (0)28 2558 3782
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> [email protected]
>>>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>
>>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Sanju
>>>>
>>>
>>>
>>> --
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>>
>>>
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>>
>>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>
>

-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
[2019-05-22 23:03:23.344284] I [MSGID: 100030] [glusterfsd.c:2725:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.6 (args: /usr/sbin/glusterfs --volfile /tmp/gvol0.tcp-fuse.vol --client-pid -6 -l /var/log/glusterfs/gvol0-add-brick-mount.log /tmp/mntBBQr8A)
[2019-05-22 23:03:23.351566] D [MSGID: 0] [quick-read.c:1183:check_cache_size_ok] 0-gvol0-quick-read: Max cache size is 67334021120
[2019-05-22 23:03:23.351743] D [MSGID: 0] [io-cache.c:1595:check_cache_size_ok] 0-gvol0-io-cache: Max cache size is 67334021120
[2019-05-22 23:03:23.351809] D [MSGID: 0] [options.c:1227:xlator_option_init_size_uint64] 0-gvol0-readdir-ahead: option rda-request-size using set value 131072
[2019-05-22 23:03:23.351844] D [MSGID: 0] [options.c:1227:xlator_option_init_size_uint64] 0-gvol0-readdir-ahead: option rda-cache-limit using set value 10MB
[2019-05-22 23:03:23.351870] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-readdir-ahead: option parallel-readdir using set value off
[2019-05-22 23:03:23.352098] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-dht: option lock-migration using set value off
[2019-05-22 23:03:23.352143] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-dht: option force-migration using set value off
[2019-05-22 23:03:23.352251] D [MSGID: 0] [dht-shared.c:356:dht_init_regex] 0-gvol0-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2019-05-22 23:03:23.352376] D [MSGID: 0] [options.c:1224:xlator_option_init_uint32] 0-gvol0-replicate-0: option arbiter-count using set value 1
[2019-05-22 23:03:23.352810] D [MSGID: 0] [options.c:1221:xlator_option_init_str] 0-gvol0-replicate-0: option afr-pending-xattr using set value gvol0-client-0,gvol0-client-1,gvol0-client-2
[2019-05-22 23:03:23.354194] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-gvol0-client-2: option ping-timeout using set value 42
[2019-05-22 23:03:23.354231] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-gvol0-client-2: option remote-subvolume using set value /nodirectwritedata/gluster/gvol0
[2019-05-22 23:03:23.354263] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-client-2: option send-gids using set value true
[2019-05-22 23:03:23.354325] D [rpc-clnt.c:1002:rpc_clnt_connection_init] 0-gvol0-client-2: defaulting frame-timeout to 30mins
[2019-05-22 23:03:23.354352] D [rpc-clnt.c:1010:rpc_clnt_connection_init] 0-gvol0-client-2: setting ping-timeout to 42
[2019-05-22 23:03:23.354387] D [rpc-transport.c:269:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/5.6/rpc-transport/socket.so
[2019-05-22 23:03:23.361071] D [socket.c:4466:socket_init] 0-gvol0-client-2: Configued transport.tcp-user-timeout=0
[2019-05-22 23:03:23.361108] D [socket.c:4484:socket_init] 0-gvol0-client-2: Reconfigued transport.keepalivecnt=9
[2019-05-22 23:03:23.361129] D [socket.c:4169:ssl_setup_connection_params] 0-gvol0-client-2: SSL support on the I/O path is NOT enabled
[2019-05-22 23:03:23.361144] D [socket.c:4172:ssl_setup_connection_params] 0-gvol0-client-2: SSL support for glusterd is NOT enabled
[2019-05-22 23:03:23.361178] D [rpc-clnt.c:1579:rpcclnt_cbk_program_register] 0-gvol0-client-2: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2019-05-22 23:03:23.361200] D [MSGID: 0] [client.c:2547:client_init_rpc] 0-gvol0-client-2: client init successful
[2019-05-22 23:03:23.361271] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-gvol0-client-1: option ping-timeout using set value 42
[2019-05-22 23:03:23.361303] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-gvol0-client-1: option remote-subvolume using set value /nodirectwritedata/gluster/gvol0
[2019-05-22 23:03:23.361336] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-client-1: option send-gids using set value true
[2019-05-22 23:03:23.361357] D [rpc-clnt.c:1002:rpc_clnt_connection_init] 0-gvol0-client-1: defaulting frame-timeout to 30mins
[2019-05-22 23:03:23.361371] D [rpc-clnt.c:1010:rpc_clnt_connection_init] 0-gvol0-client-1: setting ping-timeout to 42
[2019-05-22 23:03:23.361390] D [rpc-transport.c:269:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/5.6/rpc-transport/socket.so
[2019-05-22 23:03:23.361564] D [socket.c:4466:socket_init] 0-gvol0-client-1: Configued transport.tcp-user-timeout=0
[2019-05-22 23:03:23.361591] D [socket.c:4484:socket_init] 0-gvol0-client-1: Reconfigued transport.keepalivecnt=9
[2019-05-22 23:03:23.361605] D [socket.c:4169:ssl_setup_connection_params] 0-gvol0-client-1: SSL support on the I/O path is NOT enabled
[2019-05-22 23:03:23.361617] D [socket.c:4172:ssl_setup_connection_params] 0-gvol0-client-1: SSL support for glusterd is NOT enabled
[2019-05-22 23:03:23.361630] D [rpc-clnt.c:1579:rpcclnt_cbk_program_register] 0-gvol0-client-1: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2019-05-22 23:03:23.361649] D [MSGID: 0] [client.c:2547:client_init_rpc] 0-gvol0-client-1: client init successful
[2019-05-22 23:03:23.361699] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-gvol0-client-0: option ping-timeout using set value 42
[2019-05-22 23:03:23.361734] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-gvol0-client-0: option remote-subvolume using set value /nodirectwritedata/gluster/gvol0
[2019-05-22 23:03:23.361762] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-gvol0-client-0: option send-gids using set value true
[2019-05-22 23:03:23.361784] D [rpc-clnt.c:1002:rpc_clnt_connection_init] 0-gvol0-client-0: defaulting frame-timeout to 30mins
[2019-05-22 23:03:23.361797] D [rpc-clnt.c:1010:rpc_clnt_connection_init] 0-gvol0-client-0: setting ping-timeout to 42
[2019-05-22 23:03:23.361829] D [rpc-transport.c:269:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/5.6/rpc-transport/socket.so
[2019-05-22 23:03:23.361984] D [socket.c:4466:socket_init] 0-gvol0-client-0: Configued transport.tcp-user-timeout=0
[2019-05-22 23:03:23.362004] D [socket.c:4484:socket_init] 0-gvol0-client-0: Reconfigued transport.keepalivecnt=9
[2019-05-22 23:03:23.362018] D [socket.c:4169:ssl_setup_connection_params] 0-gvol0-client-0: SSL support on the I/O path is NOT enabled
[2019-05-22 23:03:23.362031] D [socket.c:4172:ssl_setup_connection_params] 0-gvol0-client-0: SSL support for glusterd is NOT enabled
[2019-05-22 23:03:23.362046] D [rpc-clnt.c:1579:rpcclnt_cbk_program_register] 0-gvol0-client-0: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2019-05-22 23:03:23.362060] D [MSGID: 0] [client.c:2547:client_init_rpc] 0-gvol0-client-0: client init successful
[2019-05-22 23:03:23.362168] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-2: option 'transport.address-family' is not recognized
[2019-05-22 23:03:23.362194] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-2: option 'transport.tcp-user-timeout' is not recognized
[2019-05-22 23:03:23.362218] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-2: option 'transport.socket.keepalive-time' is not recognized
[2019-05-22 23:03:23.362239] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-2: option 'transport.socket.keepalive-interval' is not recognized
[2019-05-22 23:03:23.362260] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-2: option 'transport.socket.keepalive-count' is not recognized
[2019-05-22 23:03:23.362293] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-1: option 'transport.address-family' is not recognized
[2019-05-22 23:03:23.362317] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-1: option 'transport.tcp-user-timeout' is not recognized
[2019-05-22 23:03:23.362338] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-1: option 'transport.socket.keepalive-time' is not recognized
[2019-05-22 23:03:23.362358] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-1: option 'transport.socket.keepalive-interval' is not recognized
[2019-05-22 23:03:23.362380] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-1: option 'transport.socket.keepalive-count' is not recognized
[2019-05-22 23:03:23.362427] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-0: option 'transport.address-family' is not recognized
[2019-05-22 23:03:23.362447] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-0: option 'transport.tcp-user-timeout' is not recognized
[2019-05-22 23:03:23.362468] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-0: option 'transport.socket.keepalive-time' is not recognized
[2019-05-22 23:03:23.362502] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-0: option 'transport.socket.keepalive-interval' is not recognized
[2019-05-22 23:03:23.362530] D [MSGID: 101174] [graph.c:397:_log_if_unknown_option] 0-gvol0-client-0: option 'transport.socket.keepalive-count' is not recognized
[2019-05-22 23:03:23.362561] D [fuse-bridge.c:5332:notify] 0-fuse: got event 12 on graph 0
[2019-05-22 23:03:23.362615] D [MSGID: 0] [afr-common.c:5012:__afr_launch_notify_timer] 0-gvol0-replicate-0: Initiating child-down timer
[2019-05-22 23:03:23.362652] I [MSGID: 114020] [client.c:2358:notify] 0-gvol0-client-0: parent translators are ready, attempting connect on transport
[2019-05-22 23:03:23.367816] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-69.42.167.137 (port-24007) for hostname: gfs1 and port: 24007
[2019-05-22 23:03:23.367850] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-0: disabling SSL for portmapper connection
[2019-05-22 23:03:23.368114] I [MSGID: 114020] [client.c:2358:notify] 0-gvol0-client-1: parent translators are ready, attempting connect on transport
[2019-05-22 23:03:23.372391] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-69.42.172.137 (port-24007) for hostname: gfs2 and port: 24007
[2019-05-22 23:03:23.372421] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-1: disabling SSL for portmapper connection
[2019-05-22 23:03:23.372535] I [MSGID: 114020] [client.c:2358:notify] 0-gvol0-client-2: parent translators are ready, attempting connect on transport
[2019-05-22 23:03:23.376795] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-192.157.88.220 (port-24007) for hostname: gfs3 and port: 24007
[2019-05-22 23:03:23.376825] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-2: disabling SSL for portmapper connection
[2019-05-22 23:03:23.378012] I [MSGID: 101190] [event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-05-22 23:03:23.378090] I [MSGID: 101190] [event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2019-05-22 23:03:23.378143] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-0: got RPC_CLNT_CONNECT
[2019-05-22 23:03:23.378154] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-1: got RPC_CLNT_CONNECT
[2019-05-22 23:03:23.378862] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 69.42.172.137:24007: ping timer event already removed
[2019-05-22 23:03:23.378881] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 69.42.167.137:24007: ping timer event already removed
[2019-05-22 23:03:23.378971] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_CONNECT
[2019-05-22 23:03:23.379191] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 192.157.88.220:24007: ping timer event already removed
[2019-05-22 23:03:23.379300] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-2: detected portmapper on server
[2019-05-22 23:03:23.379306] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-2: Ping latency is 0ms
[2019-05-22 23:03:23.379445] E [MSGID: 114058] [client-handshake.c:1449:client_query_portmap_cbk] 0-gvol0-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2019-05-22 23:03:23.379512] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:13) (non-SSL)
[2019-05-22 23:03:23.379544] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:23.379562] I [MSGID: 114018] [client.c:2288:client_rpc_notify] 0-gvol0-client-2: disconnected from gvol0-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2019-05-22 23:03:23.379586] E [MSGID: 108006] [afr-common.c:5314:__afr_handle_child_down_event] 0-gvol0-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2019-05-22 23:03:23.379594] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-0: detected portmapper on server
[2019-05-22 23:03:23.380198] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-0: Ping latency is 1ms
[2019-05-22 23:03:23.380367] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-1: detected portmapper on server
[2019-05-22 23:03:23.380892] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gvol0-client-0: changing port to 49152 (from 0)
[2019-05-22 23:03:23.380927] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-1: Ping latency is 2ms
[2019-05-22 23:03:23.380951] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:11) (non-SSL)
[2019-05-22 23:03:23.380974] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-0: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:23.380992] D [MSGID: 0] [client.c:2316:client_rpc_notify] 0-gvol0-client-0: disconnected (skipped notify)
[2019-05-22 23:03:23.382318] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gvol0-client-1: changing port to 49152 (from 0)
[2019-05-22 23:03:23.382377] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL)
[2019-05-22 23:03:23.382398] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-1: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:23.382417] D [MSGID: 0] [client.c:2316:client_rpc_notify] 0-gvol0-client-1: disconnected (skipped notify)
[2019-05-22 23:03:23.384934] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-69.42.167.137 (port-24007) for hostname: gfs1 and port: 24007
[2019-05-22 23:03:23.386356] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-0: got RPC_CLNT_CONNECT
[2019-05-22 23:03:23.386603] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 69.42.167.137:49152: ping timer event already removed
[2019-05-22 23:03:23.387777] W [dict.c:1002:str_to_data] (-->/usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x403f7) [0x7f54b478d3f7] -->/lib64/libglusterfs.so.0(dict_set_str+0x16) [0x7f54bfe15b76] -->/lib64/libglusterfs.so.0(str_to_data+0x71) [0x7f54bfe12491] ) 0-dict: value is NULL [Invalid argument]
[2019-05-22 23:03:23.387811] I [MSGID: 114006] [client-handshake.c:1237:client_setvolume] 0-gvol0-client-0: failed to set process-name in handshake msg
[2019-05-22 23:03:23.387909] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-0: Ping latency is 1ms
[2019-05-22 23:03:23.388792] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-69.42.172.137 (port-24007) for hostname: gfs2 and port: 24007
[2019-05-22 23:03:23.389716] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-gvol0-client-0: Connected to gvol0-client-0, attached to remote volume '/nodirectwritedata/gluster/gvol0'.
[2019-05-22 23:03:23.389754] D [MSGID: 0] [client-handshake.c:945:client_post_handshake] 0-gvol0-client-0: No fds to open - notifying all parents child up
[2019-05-22 23:03:23.389780] D [MSGID: 0] [afr-common.c:5157:afr_get_halo_latency] 0-gvol0-replicate-0: Using halo latency 5
[2019-05-22 23:03:23.389811] I [MSGID: 108005] [afr-common.c:5237:__afr_handle_child_up_event] 0-gvol0-replicate-0: Subvolume 'gvol0-client-0' came back up; going online.
[2019-05-22 23:03:27.352904] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-192.157.88.220 (port-24007) for hostname: gfs3 and port: 24007
[2019-05-22 23:03:27.352959] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-2: disabling SSL for portmapper connection
[2019-05-22 23:03:27.353228] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_CONNECT
[2019-05-22 23:03:27.353608] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 192.157.88.220:24007: ping timer event already removed
[2019-05-22 23:03:27.353741] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-2: detected portmapper on server
[2019-05-22 23:03:27.353853] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-2: Ping latency is 0ms
[2019-05-22 23:03:27.353922] D [MSGID: 0] [client-handshake.c:1455:client_query_portmap_cbk] 0-gvol0-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2019-05-22 23:03:27.353997] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:11) (non-SSL)
[2019-05-22 23:03:27.354042] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:27.354073] D [MSGID: 0] [client.c:2296:client_rpc_notify] 0-gvol0-client-2: disconnected from gvol0-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2019-05-22 23:03:30.359183] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-192.157.88.220 (port-24007) for hostname: gfs3 and port: 24007
[2019-05-22 23:03:30.359242] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-2: disabling SSL for portmapper connection
[2019-05-22 23:03:30.359481] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_CONNECT
[2019-05-22 23:03:30.359846] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 192.157.88.220:24007: ping timer event already removed
[2019-05-22 23:03:30.359982] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-2: detected portmapper on server
[2019-05-22 23:03:30.360096] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-2: Ping latency is 0ms
[2019-05-22 23:03:30.360170] D [MSGID: 0] [client-handshake.c:1455:client_query_portmap_cbk] 0-gvol0-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2019-05-22 23:03:30.360248] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:11) (non-SSL)
[2019-05-22 23:03:30.360298] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:30.360332] D [MSGID: 0] [client.c:2296:client_rpc_notify] 0-gvol0-client-2: disconnected from gvol0-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2019-05-22 23:03:33.365443] D [MSGID: 0] [common-utils.c:536:gf_resolve_ip6] 0-resolver: returning ip-192.157.88.220 (port-24007) for hostname: gfs3 and port: 24007
[2019-05-22 23:03:33.365507] D [socket.c:3223:socket_fix_ssl_opts] 0-gvol0-client-2: disabling SSL for portmapper connection
[2019-05-22 23:03:33.365780] D [MSGID: 0] [client.c:2264:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_CONNECT
[2019-05-22 23:03:33.366149] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f54bfe1efbb] (--> /lib64/libgfrpc.so.0(+0x125bb)[0x7f54bfbed5bb] (--> /lib64/libgfrpc.so.0(+0x12d81)[0x7f54bfbedd81] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4bb)[0x7f54bfbea5fb] (--> /usr/lib64/glusterfs/5.6/xlator/protocol/client.so(+0x13f92)[0x7f54b4760f92] ))))) 0-: 192.157.88.220:24007: ping timer event already removed
[2019-05-22 23:03:33.366275] D [MSGID: 0] [client-handshake.c:1392:server_has_portmap] 0-gvol0-client-2: detected portmapper on server
[2019-05-22 23:03:33.366332] D [rpc-clnt-ping.c:204:rpc_clnt_ping_cbk] 0-gvol0-client-2: Ping latency is 0ms
[2019-05-22 23:03:33.366537] D [MSGID: 0] [client-handshake.c:1455:client_query_portmap_cbk] 0-gvol0-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2019-05-22 23:03:33.366625] D [socket.c:2929:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:11) (non-SSL)
[2019-05-22 23:03:33.366658] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-gvol0-client-2: got RPC_CLNT_DISCONNECT
[2019-05-22 23:03:33.366687] D [MSGID: 0] [client.c:2296:client_rpc_notify] 0-gvol0-client-2: disconnected from gvol0-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2019-05-22 23:03:34.365978] D [fuse-bridge.c:5332:notify] 0-fuse: got event 5 on graph 0
[2019-05-22 23:03:34.367664] D [MSGID: 0] [dht-diskusage.c:94:dht_du_info_cbk] 0-gvol0-dht: subvolume 'gvol0-replicate-0': avail_percent is: 82.00 and avail_space is: 1554380546048 and avail_inodes is: 98.00
[2019-05-22 23:03:34.367800] D [fuse-bridge.c:4919:fuse_get_mount_status] 0-fuse: mount status is 0
[2019-05-22 23:03:34.368012] D [fuse-bridge.c:4206:fuse_init] 0-glusterfs-fuse: Detected support for FUSE_AUTO_INVAL_DATA. Enabling fopen_keep_cache automatically.
[2019-05-22 23:03:34.368073] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2019-05-22 23:03:34.368102] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse: switched to graph 0
[2019-05-22 23:03:34.368323] D [MSGID: 0] [dht-common.c:3454:dht_do_fresh_lookup] 0-gvol0-dht: Calling fresh lookup for / on gvol0-replicate-0
[2019-05-22 23:03:34.370560] D [MSGID: 0] [afr-common.c:3188:afr_discover_do] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.370624] D [MSGID: 0] [dht-common.c:3013:dht_lookup_cbk] 0-gvol0-dht: fresh_lookup returned for / with op_ret -1 [Transport endpoint is not connected]
[2019-05-22 23:03:34.372619] D [MSGID: 0] [afr-common.c:3188:afr_discover_do] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372693] D [MSGID: 0] [dht-common.c:1303:dht_lookup_dir_cbk] 0-gvol0-dht: lookup of / on gvol0-replicate-0 returned error [Transport endpoint is not connected]
[2019-05-22 23:03:34.372737] D [MSGID: 0] [dht-common.c:1473:dht_lookup_dir_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-dht returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372783] D [MSGID: 0] [write-behind.c:2433:wb_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-write-behind returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372826] D [MSGID: 0] [io-cache.c:263:ioc_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-io-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372870] D [MSGID: 0] [quick-read.c:627:qr_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-quick-read returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372908] D [MSGID: 0] [md-cache.c:1213:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-md-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.372954] D [MSGID: 0] [io-stats.c:2214:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.373070] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2019-05-22 23:03:34.373985] D [fuse-helpers.c:649:fuse_ignore_xattr_set] 0-glusterfs-fuse: allowing setxattr: key [trusted.add-brick],  client pid [-6]
[2019-05-22 23:03:34.374251] D [MSGID: 0] [dht-common.c:3454:dht_do_fresh_lookup] 0-gvol0-dht: Calling fresh lookup for / on gvol0-replicate-0
[2019-05-22 23:03:34.376220] D [MSGID: 0] [afr-common.c:3188:afr_discover_do] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.376281] D [MSGID: 0] [dht-common.c:3013:dht_lookup_cbk] 0-gvol0-dht: fresh_lookup returned for / with op_ret -1 [Transport endpoint is not connected]
[2019-05-22 23:03:34.378241] D [MSGID: 0] [afr-common.c:3188:afr_discover_do] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-replicate-0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378296] D [MSGID: 0] [dht-common.c:1303:dht_lookup_dir_cbk] 0-gvol0-dht: lookup of / on gvol0-replicate-0 returned error [Transport endpoint is not connected]
[2019-05-22 23:03:34.378330] D [MSGID: 0] [dht-common.c:1473:dht_lookup_dir_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-dht returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378360] D [MSGID: 0] [write-behind.c:2433:wb_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-write-behind returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378389] D [MSGID: 0] [io-cache.c:263:ioc_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-io-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378421] D [MSGID: 0] [quick-read.c:627:qr_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-quick-read returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378448] D [MSGID: 0] [md-cache.c:1213:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0-md-cache returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378474] D [MSGID: 0] [io-stats.c:2214:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f5498001048, gvol0 returned -1 error: Transport endpoint is not connected [Transport endpoint is not connected]
[2019-05-22 23:03:34.378526] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
[2019-05-22 23:03:34.378565] W [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 2: SETXATTR 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution failed
[2019-05-22 23:03:34.394682] D [fuse-bridge.c:5038:fuse_thread_proc] 0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
[2019-05-22 23:03:34.394747] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /tmp/mntBBQr8A
[2019-05-22 23:03:34.394830] D [logging.c:1805:gf_log_flush_extra_msgs] 0-logging-infra: Log buffer size reduced. About to flush 5 extra log messages
[2019-05-22 23:03:34.394900] D [logging.c:1808:gf_log_flush_extra_msgs] 0-logging-infra: Just flushed 5 extra log messages
[2019-05-22 23:03:34.395005] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f54bec80dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55be99fe4e75] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55be99fe4ceb] ) 0-: received signum (15), shutting down
[2019-05-22 23:03:34.395042] D [mgmt-pmap.c:86:rpc_clnt_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given
[2019-05-22 23:03:34.395065] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/tmp/mntBBQr8A'.
[2019-05-22 23:03:34.395085] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/tmp/mntBBQr8A'.
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users