I don't see a "Connected to gvol0-client-1" message in the log. Perhaps it is a firewall issue like last time? Even in the earlier add-brick log from the other email thread, the connection to the 2nd brick was never established.
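
One way to rule that out is to look at the firewall rules on gfs2 (and gfs3) directly. A rough sketch, assuming these hosts run firewalld; with plain iptables, "iptables -L -n -v" gives the equivalent view:

[root@gfs2 ~]# firewall-cmd --list-all       # 24007/tcp and the brick ports (49152 onwards) should be allowed
[root@gfs2 ~]# firewall-cmd --list-services  # "glusterfs" appears here if the predefined service definition is in use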

-Ravi

On 29/05/19 2:26 PM, David Cunningham wrote:
Hi Ravi and Joe,

The command "gluster volume status gvol0" shows all 3 nodes as being online, even on gfs3 as below. I've attached the glfsheal-gvol0.log, in which I can't see anything like a connection error. Would you have any further suggestions? Thank you.

[root@gfs3 glusterfs]# gluster volume status gvol0
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7625
Brick gfs3:/nodirectwritedata/gluster/gvol0 49152     0          Y       7307
Self-heal Daemon on localhost               N/A       N/A        Y       7316
Self-heal Daemon on gfs1                    N/A       N/A        Y       40591
Self-heal Daemon on gfs2                    N/A       N/A        Y       7634

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
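
Since the volume status looks clean, one more data point that might help is running the same heal query from gfs1 and gfs2 as well; as far as I understand, glfsheal connects to all three bricks from whichever node runs it, so this would show whether only gfs3 has trouble reaching the gfs2 brick:

[root@gfs1 ~]# gluster volume heal gvol0 info
[root@gfs2 ~]# gluster volume heal gvol0 info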


On Wed, 29 May 2019 at 16:26, Ravishankar N <[email protected]> wrote:


    On 29/05/19 6:21 AM, David Cunningham wrote:
    Hello all,

    We are seeing a strange issue where a new node, gfs3, shows another
    node, gfs2, as not connected in the "gluster volume heal" info output:

    [root@gfs3 bricks]# gluster volume heal gvol0 info
    Brick gfs1:/nodirectwritedata/gluster/gvol0
    Status: Connected
    Number of entries: 0

    Brick gfs2:/nodirectwritedata/gluster/gvol0
    Status: Transport endpoint is not connected
    Number of entries: -

    Brick gfs3:/nodirectwritedata/gluster/gvol0
    Status: Connected
    Number of entries: 0
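
    One quick check on gfs2 itself is whether the brick process is actually listening on its advertised port (a sketch, assuming iproute2's "ss" is available; "netstat -tlnp" shows the same thing):

    [root@gfs2 ~]# ss -ltnp | grep glusterfsd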


    However, the same node does show as connected in "gluster peer
    status". Does anyone know why this would be?

    [root@gfs3 bricks]# gluster peer status
    Number of Peers: 2

    Hostname: gfs2
    Uuid: 91863102-23a8-43e1-b3d3-f0a1bd57f350
    State: Peer in Cluster (Connected)

    Hostname: gfs1
    Uuid: 32c99e7d-71f2-421c-86fc-b87c0f68ad1b
    State: Peer in Cluster (Connected)
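
    Note that the two commands don't test the same path: "gluster peer status" only reflects glusterd-to-glusterd connectivity on port 24007, while "heal info" needs the glfsheal process on gfs3 to reach each brick port directly (49152 here), so the two can disagree if only the brick port is blocked. A rough connectivity check from gfs3 (assuming a netcat that supports -z; "telnet gfs2 49152" works just as well):

    [root@gfs3 ~]# nc -zv gfs2 24007
    [root@gfs3 ~]# nc -zv gfs2 49152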


    In nodirectwritedata-gluster-gvol0.log on gfs3 we see the following
    logged with regard to gfs2:

    You need to check glfsheal-$volname.log on the node where you ran
    the command for any connection-related errors.

    -Ravi
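
    A grep along these lines on gfs3 should show whether the connection to gvol0-client-1 (the gfs2 brick) ever came up (a sketch, assuming the log lives in the default /var/log/glusterfs/ directory):

    [root@gfs3 ~]# grep -iE 'connected|disconnect|failed' /var/log/glusterfs/glfsheal-gvol0.log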


    [2019-05-29 00:17:50.646360] I [MSGID: 115029] [server-handshake.c:537:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:30d74196-fece-4380-adc0-338760188b81-GRAPH_ID:0-PID:7718-HOST:gfs2.xxx.com-PC_NAME:gvol0-client-2-RECON_NO:-0 (version: 5.6)
    [2019-05-29 00:17:50.761120] I [MSGID: 115036] [server.c:469:server_rpc_notify] 0-gvol0-server: disconnecting connection from CTX_ID:30d74196-fece-4380-adc0-338760188b81-GRAPH_ID:0-PID:7718-HOST:gfs2.xxx.com-PC_NAME:gvol0-client-2-RECON_NO:-0
    [2019-05-29 00:17:50.761352] I [MSGID: 101055] [client_t.c:435:gf_client_unref] 0-gvol0-server: Shutting down connection CTX_ID:30d74196-fece-4380-adc0-338760188b81-GRAPH_ID:0-PID:7718-HOST:gfs2.xxx.com-PC_NAME:gvol0-client-2-RECON_NO:-0
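
    Since this brick log only shows gfs2's client connecting and then dropping straight away, the client side on gfs2 is the other half of the picture; its self-heal daemon log (or its own glfsheal log, if the heal command was run there) may show whether the drop was an error or just a normal short-lived heal query finishing. A sketch, assuming the default log location:

    [root@gfs2 ~]# grep -iE 'disconnect|error|failed' /var/log/glusterfs/glustershd.log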

    Thanks in advance for any assistance.

    --
    David Cunningham, Voisonics Limited
    http://voisonics.com/
    USA: +1 213 221 1092
    New Zealand: +64 (0)28 2558 3782




--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
