This means the shd (self-heal daemon) client is not able to establish a connection with the brick on port 49155. This can happen if glusterd ends up handing back a stale port which is not the one the brick is actually listening on. If you killed any brick process with SIGKILL instead of SIGTERM, this is expected: in the former case glusterd never receives the portmap signout, so the old portmap entry is never wiped off.
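To confirm the stale port before restarting, you can compare the port glusterd advertises with the port the brick process is actually bound to. A rough sketch (substitute your volume name for $_vol; on older systems ss may need to be replaced with netstat):

$ gluster volume status $_vol | grep -A1 0GLUSTER-CYTO-DATA    # port glusterd advertises for the brick
$ ss -ltnp | grep glusterfsd                                   # ports the brick processes actually listen on

If the advertised port (e.g. 49155) does not appear in the ss output, glusterd is handing out a stale portmap entry.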
Please restart the glusterd service. This should fix the problem.

On Tue, 1 Aug 2017 at 23:03, peljasz <[email protected]> wrote:
> how critical is the above?
> I get plenty of these on all three peers.
>
> hi guys
>
> I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
> behavior.
> I see: $ gluster vol status $_vol detail; takes a long time and
> mostly times out.
> I do:
> $ gluster vol heal $_vol info
> and I see:
> Brick
> 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
> Status: Transport endpoint is not connected
> Number of entries: -
>
> Brick
> 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
> Status: Connected
> Number of entries: 0
>
> Brick
> 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
> Status: Transport endpoint is not connected
> Number of entries: -
>
> I begin to worry that 3.10 @ centos7.3 might not have been a
> good idea.
> many thanks.
> L.
--
- Atin (atinm)
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
