Re: [Gluster-users] ​Can't mount particular brick even though the brick port is reachable, error message "Transport endpoint is not connected"

2022-03-28 Thread Olaf Buitelaar
Hi Peter,

I think Strahil means running the command hosted-engine --set-maintenance
--mode=local; this is also possible from the oVirt UI, via the ribbon on
the Hosts section:
[image: image.png]
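
For completeness, a minimal sketch of the CLI route (run on the host in
question; the same tool takes you back out of maintenance with --mode=none):

# check the current hosted-engine / maintenance state
hosted-engine --vm-status

# put this host into local maintenance before working on its storage
hosted-engine --set-maintenance --mode=local

# ... perform the repair ...

# take the host out of maintenance again afterwards
hosted-engine --set-maintenance --mode=none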

From the logs it seems gluster has difficulty finding the shards, e.g.:
.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1279
(be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.1279)
Do these files exist within your bricks' directories?
Did you try to repair the filesystem using xfs_repair?
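
If it helps, a minimal sketch of both checks on storage2; the shard path is
taken from the log above, while the mount point and the backing device
/dev/md6 are only assumptions, adjust them to your actual layout:

# does the shard actually exist on the brick?
ls -l /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.1279

# xfs_repair needs the filesystem unmounted; stop the brick/volume first
umount /data/glusterfs/hdd/brick3      # assumed mount point
xfs_repair -n /dev/md6                 # assumed backing device; -n = read-only dry run

# only run the real repair (without -n) if the dry run reports problems
# xfs_repair /dev/md6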

Best regards,

Olaf

On Mon, 28 Mar 2022 at 10:01, Peter Schmidt <
peterschmidt18...@yandex.com> wrote:

>
> Hi Olaf,
>
> I tried running "gluster volume start hdd force" but sadly it did not
> change anything.
>
> The RAID rebuild has finished now and everything seems to be fine:
> md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4]
> sdaa1[8] sdy1[6]
>   68364119040 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9]
> [U]
>   bitmap: 0/73 pages [0KB], 65536KB chunk
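
(For double-checking the array after the rebuild, a small sketch; /dev/md6 is
taken from the mdstat output above:)

# overall software-RAID state
cat /proc/mdstat

# per-array detail: state, failed devices, rebuild/resync progress
mdadm --detail /dev/md6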
>
> Best regards
> Peter
>
> 25.03.2022, 12:36, "Olaf Buitelaar" :
>
> Hi Peter,
>
> I see your RAID array is rebuilding; could it be that your XFS needs a
> repair, using xfs_repair?
> Did you try running gluster volume start hdd force?
>
> Kind regards,
>
> Olaf
>
>
> On Thu, 24 Mar 2022 at 15:54, Peter Schmidt <
> peterschmidt18...@yandex.com> wrote:
>
> Hello everyone,
>
> I'm running an oVirt cluster on top of a distributed-replicate gluster
> volume and one of the bricks cannot be mounted anymore from my oVirt hosts.
> This morning I also noticed a stack trace and a spike in TCP connections on
> one of the three gluster nodes (storage2), which I have attached at the end
> of this mail. Only this particular brick on storage2 seems to be causing
> trouble:
> *Brick storage2:/data/glusterfs/hdd/brick3/brick*
> *Status: Transport endpoint is not connected*
>
> I don't know what's causing this or how to resolve this issue. I would
> appreciate it if someone could take a look at my logs and point me in the
> right direction. If any additional logs are required, please let me know.
> Thank you in advance!
>
> Operating system on all hosts: CentOS 7.9.2009
> oVirt version: 4.3.10.4-1
> Gluster versions:
> - storage1: 6.10-1
> - storage2: 6.7-1
> - storage3: 6.7-1
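
(For reference, the versions listed above can be confirmed per node, along
with the operating version the cluster as a whole has settled on; a sketch,
run on each storage node:)

# installed gluster release on this node
gluster --version

# cluster-wide operating version currently in effect
gluster volume get all cluster.op-version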
>
> 
> # brick is not connected/mounted on the oVirt hosts
>
> *[xlator.protocol.client.hdd-client-7.priv]*
> *fd.0.remote_fd = -1*
> *-- = --*
> *granted-posix-lock[0] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type =
> F_RDLCK, fl_start = 100, fl_end = 100, user_flock: l_type = F_RDLCK,
> l_start = 100, l_len = 1*
> *granted-posix-lock[1] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type =
> F_RDLCK, fl_start = 101, fl_end = 101, user_flock: l_type = F_RDLCK,
> l_start = 101, l_len = 1*
> *-- = --*
> *connected = 0*
> *total_bytes_read = 11383136800*
> *ping_timeout = 10*
> *total_bytes_written = 16699851552*
> *ping_msgs_sent = 1*
> *msgs_sent = 2*
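
(If the statedump above needs to be regenerated, a sketch of how such
client-side dumps are usually produced on the oVirt host; dump files normally
land under /var/run/gluster:)

# find the glusterfs client process that serves the hdd mount
pgrep -af 'glusterfs.*hdd'

# SIGUSR1 asks that process to write a fresh statedump
kill -USR1 <PID>        # <PID> from the pgrep output above

# brick-side statedumps for the same volume, from any storage node
gluster volume statedump hdd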
>
> 
> # mount log from one of the oVirt hosts
> # the IP 172.22.102.142 corresponds to my gluster node "storage2"
> # the port 49154 corresponds to the brick
> storage2:/data/glusterfs/hdd/brick3/brick
>
> *[2022-03-24 10:59:28.138178] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk]
> 0-hdd-client-7: socket disconnected*
> *[2022-03-24 10:59:38.142698] I [rpc-clnt.c:2028:rpc_clnt_reconfig]
> 0-hdd-client-7: changing port to 49154 (from 0)*
> *The message "I [MSGID: 114018] [client.c:2331:client_rpc_notify]
> 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep
> trying to connect to glusterd until brick's port is available" repeated 4
> times between [2022-03-24 10:58:04.114741] and [2022-03-24 10:59:28.137380]*
> *The message "W [MSGID: 114032]
> [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received
> RPC status error [Transport endpoint is not connected]" repeated 4 times
> between [2022-03-24 10:58:04.115169] and [2022-03-24 10:59:28.138052]*
> *[2022-03-24 10:59:49.143217] C
> [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-hdd-client-7: server
> 172.22.102.142:49154  has not responded in
> the last 10 seconds, disconnecting.*
> *[2022-03-24 10:59:49.143838] I [MSGID: 114018]
> [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from
> hdd-client-7. Client process will keep trying to connect to glusterd until
> brick's port is available*
> *[2022-03-24 10:59:49.144540] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (-->
> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (-->
> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (-->
> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ) 0-hdd-client-7: forced
> unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2022-03-24
> 10:59:38.145208 (xid=0x861)*
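
(The 10-second window in the critical message above matches ping_timeout = 10
from the statedump; a sketch for checking and, if desired, raising the volume
option that drives it, network.ping-timeout, whose stock gluster default is
42 seconds:)

# current ping timeout for the volume
gluster volume get hdd network.ping-timeout

# example: raise it back to the gluster default
gluster volume set hdd network.ping-timeout 42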

Re: [Gluster-users] ​Can't mount particular brick even though the brick port is reachable, error message "Transport endpoint is not connected"

2022-03-28 Thread Peter Schmidt
Hi Olaf,

I tried running "gluster volume start hdd force" but sadly it did not change
anything.

The RAID rebuild has finished now and everything seems to be fine:
md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4] sdaa1[8] sdy1[6]
  68364119040 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9] [U]
  bitmap: 0/73 pages [0KB], 65536KB chunk

Best regards
Peter

25.03.2022, 12:36, "Olaf Buitelaar" :

Hi Peter,

I see your RAID array is rebuilding; could it be that your XFS needs a
repair, using xfs_repair?
Did you try running gluster volume start hdd force?

Kind regards,

Olaf

On Thu, 24 Mar 2022 at 15:54, Peter Schmidt wrote:

Hello everyone,

I'm running an oVirt cluster on top of a distributed-replicate gluster volume
and one of the bricks cannot be mounted anymore from my oVirt hosts. This
morning I also noticed a stack trace and a spike in TCP connections on one of
the three gluster nodes (storage2), which I have attached at the end of this
mail. Only this particular brick on storage2 seems to be causing trouble:
Brick storage2:/data/glusterfs/hdd/brick3/brick
Status: Transport endpoint is not connected

I don't know what's causing this or how to resolve this issue. I would
appreciate it if someone could take a look at my logs and point me in the
right direction. If any additional logs are required, please let me know.
Thank you in advance!

Operating system on all hosts: CentOS 7.9.2009
oVirt version: 4.3.10.4-1
Gluster versions:
- storage1: 6.10-1
- storage2: 6.7-1
- storage3: 6.7-1

# brick is not connected/mounted on the oVirt hosts

[xlator.protocol.client.hdd-client-7.priv]
fd.0.remote_fd = -1
-- = --
granted-posix-lock[0] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type = F_RDLCK, fl_start = 100, fl_end = 100, user_flock: l_type = F_RDLCK, l_start = 100, l_len = 1
granted-posix-lock[1] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type = F_RDLCK, fl_start = 101, fl_end = 101, user_flock: l_type = F_RDLCK, l_start = 101, l_len = 1
-- = --
connected = 0
total_bytes_read = 11383136800
ping_timeout = 10
total_bytes_written = 16699851552
ping_msgs_sent = 1
msgs_sent = 2

# mount log from one of the oVirt hosts
# the IP 172.22.102.142 corresponds to my gluster node "storage2"
# the port 49154 corresponds to the brick storage2:/data/glusterfs/hdd/brick3/brick

[2022-03-24 10:59:28.138178] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-hdd-client-7: socket disconnected
[2022-03-24 10:59:38.142698] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-hdd-client-7: changing port to 49154 (from 0)
The message "I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep trying to connect to glusterd until brick's port is available" repeated 4 times between [2022-03-24 10:58:04.114741] and [2022-03-24 10:59:28.137380]
The message "W [MSGID: 114032] [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received RPC status error [Transport endpoint is not connected]" repeated 4 times between [2022-03-24 10:58:04.115169] and [2022-03-24 10:59:28.138052]
[2022-03-24 10:59:49.143217] C [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-hdd-client-7: server 172.22.102.142:49154 has not responded in the last 10 seconds, disconnecting.
[2022-03-24 10:59:49.143838] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep trying to connect to glusterd until brick's port is available
[2022-03-24 10:59:49.144540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2022-03-24 10:59:38.145208 (xid=0x861)
[2022-03-24 10:59:49.144557] W [MSGID: 114032] [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received RPC status error [Transport endpoint is not connected]
[2022-03-24 10:59:49.144653] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ) 0-hdd-client-7: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2022-03-24 10:59:38.145218 (xid=0x862)
[2022-03-24 10:59:49.144665] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-hdd-client-7: socket disconnected

# netcat/telnet to the brick's port of storage2 are working

[root@storage1 ~]# netcat -z -v 172.22.102.142 49154
Connection to 
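
(Since the port answers plain TCP but the gluster RPC handshake still fails,
it may be worth confirming on storage2 that the brick process glusterd
advertises is the one actually listening on 49154; a small sketch, using only
names that already appear in the thread:)

# which pid/port does glusterd report for the brick?
gluster volume status hdd storage2:/data/glusterfs/hdd/brick3/brick

# on storage2: what is really listening on that port, and is it the brick's glusterfsd?
ss -tlnp | grep 49154
ps -ef | grep 'glusterfsd.*brick3' | grep -v grep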

Re: [Gluster-users] ​Can't mount particular brick even though the brick port is reachable, error message "Transport endpoint is not connected"

2022-03-28 Thread Peter Schmidt
Hello Strahil,

I tried restarting the glusterd.service on storage2 but it had no effect.

What do you mean exactly with "set the node in maintenance"? Only the
"ovirthostX" are available as compute hosts in oVirt. Or is that some other
option in oVirt that I don't know about?

The gluster volume itself is configured as a storage domain in oVirt with
these options:
Storage Type: GlusterFS
Path: storage1:/hdd
VFS Type: glusterfs

I am planning to upgrade the gluster version soon, but I would like to fix
this issue first. Thanks for your support in any case.

I have attached the brick log of brick3 on storage2 below. Today it's only
showing this:
[2022-03-27 06:14:31.791596] E [rpc-clnt.c:183:call_bail] 0-glusterfs: bailing out frame type(GlusterFS Handshake), op(GETSPEC(2)), xid = 0x1e, unique = 0, sent = 2022-03-27 05:44:25.879160, timeout = 1800 for 172.22.102.142:24007

In the last couple of days it has thrown these errors:

[2022-03-24 04:09:15.933837] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9 [No data available]
[2022-03-24 04:09:15.934007] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233775258: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.9), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:09:42.885005] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.127 [No data available]
[2022-03-24 04:09:42.885066] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233783993: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.127 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.127), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:09:49.757098] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.160 [No data available]
[2022-03-24 04:09:49.757150] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233789725: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.160 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.160), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:09:50.914836] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.172 [No data available]
[2022-03-24 04:09:50.914885] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233790786: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.172 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.172), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:10:13.015609] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.239 [No data available]
[2022-03-24 04:10:13.015737] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233795641: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.239 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.239), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:10:13.067565] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.240 [No data available]
[2022-03-24 04:10:13.067670] E [MSGID: 115050] [server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-hdd-server: 233796273: LOOKUP /.shard/e5f699e2-de11-41be-bd24-e29876928f0f.240 (be318638-e8a0-4c6d-977d-7a937aa84806/e5f699e2-de11-41be-bd24-e29876928f0f.240), client: CTX_ID:5c068f01-30cf-44cf-a7e5-a0836312517b-GRAPH_ID:0-PID:19802-HOST:ovirthost7-PC_NAME:hdd-client-7-RECON_NO:-0, error-xlator: hdd-posix [No data available]
[2022-03-24 04:10:21.584760] E [MSGID: 113002] [posix-entry-ops.c:323:posix_lookup] 0-hdd-posix: buf->ia_gfid is null for /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.267 [No data avail
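
(The "buf->ia_gfid is null" lookups usually point at shard files on the brick
that are missing their trusted.gfid xattr, or that the filesystem cannot read
cleanly; a sketch for inspecting one of the shards named in the log directly
on storage2, purely as a diagnostic:)

# does the shard exist on the brick, and which gluster xattrs does it carry?
stat /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9
getfattr -d -m . -e hex \
    /data/glusterfs/hdd/brick3/brick/.shard/e5f699e2-de11-41be-bd24-e29876928f0f.9
# a healthy shard shows a trusted.gfid attribute; its absence would match the errors above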

Re: [Gluster-users] ​Can't mount particular brick even though the brick port is reachable, error message "Transport endpoint is not connected"

2022-03-25 Thread Olaf Buitelaar
Hi Peter,

I see your RAID array is rebuilding; could it be that your XFS needs a
repair, using xfs_repair?
Did you try running gluster volume start hdd force?

Kind regards,

Olaf


On Thu, 24 Mar 2022 at 15:54, Peter Schmidt <
peterschmidt18...@yandex.com> wrote:

> Hello everyone,
>
> I'm running an oVirt cluster on top of a distributed-replicate gluster
> volume and one of the bricks cannot be mounted anymore from my oVirt hosts.
> This morning I also noticed a stack trace and a spike in TCP connections on
> one of the three gluster nodes (storage2), which I have attached at the end
> of this mail. Only this particular brick on storage2 seems to be causing
> trouble:
> *Brick storage2:/data/glusterfs/hdd/brick3/brick*
> *Status: Transport endpoint is not connected*
>
> I don't know what's causing this or how to resolve this issue. I would
> appreciate it if someone could take a look at my logs and point me in the
> right direction. If any additional logs are required, please let me know.
> Thank you in advance!
>
> Operating system on all hosts: CentOS 7.9.2009
> oVirt version: 4.3.10.4-1
> Gluster versions:
> - storage1: 6.10-1
> - storage2: 6.7-1
> - storage3: 6.7-1
>
> 
> # brick is not connected/mounted on the oVirt hosts
>
> *[xlator.protocol.client.hdd-client-7.priv]*
> *fd.0.remote_fd = -1*
> *-- = --*
> *granted-posix-lock[0] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type =
> F_RDLCK, fl_start = 100, fl_end = 100, user_flock: l_type = F_RDLCK,
> l_start = 100, l_len = 1*
> *granted-posix-lock[1] = owner = 9d673ffe323e25cd, cmd = F_SETLK fl_type =
> F_RDLCK, fl_start = 101, fl_end = 101, user_flock: l_type = F_RDLCK,
> l_start = 101, l_len = 1*
> *-- = --*
> *connected = 0*
> *total_bytes_read = 11383136800*
> *ping_timeout = 10*
> *total_bytes_written = 16699851552*
> *ping_msgs_sent = 1*
> *msgs_sent = 2*
>
> 
> # mount log from one of the oVirt hosts
> # the IP 172.22.102.142 corresponds to my gluster node "storage2"
> # the port 49154 corresponds to the brick
> storage2:/data/glusterfs/hdd/brick3/brick
>
> *[2022-03-24 10:59:28.138178] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk]
> 0-hdd-client-7: socket disconnected*
> *[2022-03-24 10:59:38.142698] I [rpc-clnt.c:2028:rpc_clnt_reconfig]
> 0-hdd-client-7: changing port to 49154 (from 0)*
> *The message "I [MSGID: 114018] [client.c:2331:client_rpc_notify]
> 0-hdd-client-7: disconnected from hdd-client-7. Client process will keep
> trying to connect to glusterd until brick's port is available" repeated 4
> times between [2022-03-24 10:58:04.114741] and [2022-03-24 10:59:28.137380]*
> *The message "W [MSGID: 114032]
> [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received
> RPC status error [Transport endpoint is not connected]" repeated 4 times
> between [2022-03-24 10:58:04.115169] and [2022-03-24 10:59:28.138052]*
> *[2022-03-24 10:59:49.143217] C
> [rpc-clnt-ping.c:155:rpc_clnt_ping_timer_expired] 0-hdd-client-7: server
> 172.22.102.142:49154  has not responded in the
> last 10 seconds, disconnecting.*
> *[2022-03-24 10:59:49.143838] I [MSGID: 114018]
> [client.c:2331:client_rpc_notify] 0-hdd-client-7: disconnected from
> hdd-client-7. Client process will keep trying to connect to glusterd until
> brick's port is available*
> *[2022-03-24 10:59:49.144540] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (-->
> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (-->
> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (-->
> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ) 0-hdd-client-7: forced
> unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2022-03-24
> 10:59:38.145208 (xid=0x861)*
> *[2022-03-24 10:59:49.144557] W [MSGID: 114032]
> [client-handshake.c:1546:client_dump_version_cbk] 0-hdd-client-7: received
> RPC status error [Transport endpoint is not connected]*
> *[2022-03-24 10:59:49.144653] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f6724643adb] (-->
> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f67243ea7e4] (-->
> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f67243ea8fe] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f67243eb987] (-->
> /lib64/libgfrpc.so.0(+0xf518)[0x7f67243ec518] ) 0-hdd-client-7: forced
> unwinding frame type(GF-DUMP) op(NULL(2)) called at 2022-03-24
> 10:59:38.145218 (xid=0x862)*
> *[2022-03-24 10:59:49.144665] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk]
> 0-hdd-client-7: socket disconnected*
>
> 
> # netcat/telnet to the brick's port of storage2 are working
>
> *[root@storage1  ~]#  netcat -z -v 172.22.102.142 49154*
> *Connection to 172.22.102.142 49154 port [tcp/*] succeeded!*
>
> *[root@storage3  ~]# netcat -z -v 172.22.102.142 49154*
> *Connection to 172.22.102