Hi all!

I have set up a replicated/distributed Gluster cluster, 2 x (2 + 1).

CentOS 7 with Gluster version 3.12.6 on the servers.

All machines have two network interfaces and are connected to two different networks:

10.10.0.0/16 (hostnames in /etc/hosts, Gluster version 3.12.6)

192.168.67.0/24 (with LDAP, Gluster version 3.13.1)

The Gluster cluster was created on the 10.10.0.0/16 net (gluster peer probe ...and so on).

All nodes are reachable on both networks and have the same hostnames on both.
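
For reference, the peering and volume creation went roughly along these lines; the hostnames below are placeholders and the brick paths are only approximate, so treat it as a sketch rather than the exact commands (the third brick in each set is the arbiter):

    gluster peer probe gds02
    gluster peer probe gds03
    ...
    gluster volume create urd-gds-volume replica 3 arbiter 1 \
        gds01:/urd-gds/gluster  gds02:/urd-gds/gluster  gds03:/urd-gds/gluster \
        gds04:/urd-gds/gluster2 gds05:/urd-gds/gluster2 gds06:/urd-gds/gluster2
    gluster volume start urd-gds-volume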


Now to my problem: the Gluster volume is mounted on multiple clients on the 192.168.67.0/24 net (roughly as shown below), and a process was running on one of those clients, reading and writing files.

At the same time I mounted the volume on a client on the 10.10.0.0/16 net and started to create and edit files on it. Around the same time the process on the 192-net stopped, without any specific errors.

I started other processes on the 192-net and continued to make changes on the 10-net, and got the same behavior: the processes on the 192-net stopped.
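
For completeness, the clients mount the volume over FUSE in the usual way, roughly like this (server name and mount point are placeholders):

    mount -t glusterfs gds01:/urd-gds-volume /mnt/urd-gds

or with the corresponding fstab entry:

    gds01:/urd-gds-volume  /mnt/urd-gds  glusterfs  defaults,_netdev  0 0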


Are there any known problems with this type of setup?

How do I proceed to figure out a solution, as I need access from both networks?
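
For reference, this is roughly how I check what the clients resolve the node names to on each net and what glusterd reports (hostname is a placeholder):

    # on a 192-net client and on a 10-net client
    getent hosts gds01

    # on one of the servers
    gluster peer status
    gluster volume status urd-gds-volume
    gluster volume info urd-gds-volume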


The following error shows up a couple of times on the servers (systemd -> glusterd):

[2018-04-09 11:46:46.254071] C [mem-pool.c:613:mem_pools_init_early] 
0-mem-pool: incorrect order of mem-pool initialization (init_done=3)


Client logs:

Client on 192-net:

[2018-04-09 11:35:31.402979] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-1: 
Connected to urd-gds-volume-client-1, attached to remote volume 
'/urd-gds/gluster'.
[2018-04-09 11:35:31.403019] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-1: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 11:35:31.403051] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: 
Connected to urd-gds-volume-snapd-client, attached to remote volume 
'snapd-urd-gds-volume'.
[2018-04-09 11:35:31.403091] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-snapd-client: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 11:35:31.403271] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-3: 
Server lk version = 1
[2018-04-09 11:35:31.403325] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-4: 
Server lk version = 1
[2018-04-09 11:35:31.403349] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-0: 
Server lk version = 1
[2018-04-09 11:35:31.403367] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-2: 
Server lk version = 1
[2018-04-09 11:35:31.403616] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-1: 
Server lk version = 1
[2018-04-09 11:35:31.403751] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
5-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-04-09 11:35:31.404174] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 
5-urd-gds-volume-snapd-client: Server lk version = 1
[2018-04-09 11:35:31.405030] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 5-urd-gds-volume-client-5: 
Connected to urd-gds-volume-client-5, attached to remote volume 
'/urd-gds/gluster2'.
[2018-04-09 11:35:31.405069] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 5-urd-gds-volume-client-5: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 11:35:31.405585] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 5-urd-gds-volume-client-5: 
Server lk version = 1
[2018-04-09 11:42:29.622006] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: 
switched to graph 5
[2018-04-09 11:42:29.627533] I [MSGID: 109005] 
[dht-selfheal.c:2458:dht_selfheal_directory] 5-urd-gds-volume-dht: Directory 
selfheal failed: Unable to form layout for directory /
[2018-04-09 11:42:29.627935] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-0: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628013] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-1: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628047] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-2: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628069] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-0: disconnected from 
urd-gds-volume-client-0. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.628077] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-3: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628184] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from 
urd-gds-volume-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.628191] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from 
urd-gds-volume-client-2. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.628272] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 
2-urd-gds-volume-replicate-0: Client-quorum is not met
[2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-4: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-client-5: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.628382] I [MSGID: 114021] [client.c:2369:notify] 
2-urd-gds-volume-snapd-client: current graph is no longer active, destroying 
rpc_client
[2018-04-09 11:42:29.632749] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from 
urd-gds-volume-client-3. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.632804] E [MSGID: 108006] 
[afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: 
All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 11:42:29.637247] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from 
urd-gds-volume-client-4. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.637294] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 
2-urd-gds-volume-replicate-1: Client-quorum is not met
[2018-04-09 11:42:29.637330] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from 
urd-gds-volume-client-5. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:42:29.641674] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected 
from urd-gds-volume-snapd-client. Client process will keep trying to connect to 
glusterd until brick's port is available
[2018-04-09 11:42:29.641701] E [MSGID: 108006] 
[afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: 
All subvolumes are down. Going offline until atleast one of them comes back up.


Other client on 192-net:

[2018-04-09 14:13:57.816783] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-1: 
Server lk version = 1
[2018-04-09 14:13:57.817092] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-urd-gds-volume-client-3: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-04-09 14:13:57.817208] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 
0-urd-gds-volume-client-4: changing port to 49152 (from 0)
[2018-04-09 14:13:57.817388] W [socket.c:3216:socket_connect] 
0-urd-gds-volume-client-2: Error disabling sockopt IPV6_V6ONLY: "Protocol not 
available"
[2018-04-09 14:13:57.817623] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 
0-urd-gds-volume-client-5: changing port to 49153 (from 0)
[2018-04-09 14:13:57.817658] I [rpc-clnt.c:1994:rpc_clnt_reconfig] 
0-urd-gds-volume-snapd-client: changing port to 49153 (from 0)
[2018-04-09 14:13:57.822047] W [socket.c:3216:socket_connect] 
0-urd-gds-volume-client-4: Error disabling sockopt IPV6_V6ONLY: "Protocol not 
available"
[2018-04-09 14:13:57.823419] W [socket.c:3216:socket_connect] 
0-urd-gds-volume-client-5: Error disabling sockopt IPV6_V6ONLY: "Protocol not 
available"
[2018-04-09 14:13:57.823613] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-3: 
Connected to urd-gds-volume-client-3, attached to remote volume 
'/urd-gds/gluster'.
[2018-04-09 14:13:57.823634] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-3: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 14:13:57.823684] I [MSGID: 108005] 
[afr-common.c:5066:__afr_handle_child_up_event] 0-urd-gds-volume-replicate-1: 
Subvolume 'urd-gds-volume-client-3' came back up; going online.
[2018-04-09 14:13:57.825689] W [socket.c:3216:socket_connect] 
0-urd-gds-volume-snapd-client: Error disabling sockopt IPV6_V6ONLY: "Protocol 
not available"
[2018-04-09 14:13:57.825845] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-3: 
Server lk version = 1
[2018-04-09 14:13:57.825873] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-urd-gds-volume-client-2: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-04-09 14:13:57.826270] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-urd-gds-volume-client-4: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-04-09 14:13:57.826414] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-urd-gds-volume-client-5: Using Program GlusterFS 3.3, Num (1298437), Version 
(330)
[2018-04-09 14:13:57.826562] I [MSGID: 114057] 
[client-handshake.c:1484:select_server_supported_programs] 
0-urd-gds-volume-snapd-client: Using Program GlusterFS 3.3, Num (1298437), 
Version (330)
[2018-04-09 14:13:57.827226] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-2: 
Connected to urd-gds-volume-client-2, attached to remote volume 
'/urd-gds/gluster1'.
[2018-04-09 14:13:57.827245] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-2: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 14:13:57.827594] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-4: 
Connected to urd-gds-volume-client-4, attached to remote volume 
'/urd-gds/gluster'.
[2018-04-09 14:13:57.827630] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-4: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 14:13:57.827750] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-client-5: 
Connected to urd-gds-volume-client-5, attached to remote volume 
'/urd-gds/gluster2'.
[2018-04-09 14:13:57.827775] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-client-5: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 14:13:57.827782] I [MSGID: 114046] 
[client-handshake.c:1231:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: 
Connected to urd-gds-volume-snapd-client, attached to remote volume 
'snapd-urd-gds-volume'.
[2018-04-09 14:13:57.827802] I [MSGID: 114047] 
[client-handshake.c:1242:client_setvolume_cbk] 0-urd-gds-volume-snapd-client: 
Server and Client lk-version numbers are not same, reopening the fds
[2018-04-09 14:13:57.829136] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-2: 
Server lk version = 1
[2018-04-09 14:13:57.829173] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-5: 
Server lk version = 1
[2018-04-09 14:13:57.829180] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 0-urd-gds-volume-client-4: 
Server lk version = 1
[2018-04-09 14:13:57.829210] I [MSGID: 114035] 
[client-handshake.c:202:client_set_lk_version_cbk] 
0-urd-gds-volume-snapd-client: Server lk version = 1
[2018-04-09 14:13:57.829295] I [fuse-bridge.c:4205:fuse_init] 0-glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-04-09 14:13:57.829320] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: 
switched to graph 0
[2018-04-09 14:13:57.833539] I [MSGID: 109005] 
[dht-selfheal.c:2458:dht_selfheal_directory] 0-urd-gds-volume-dht: Directory 
selfheal failed: Unable to form layout for directory /


Client on 10-net:

[2018-04-09 11:35:31.113283] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-1: disconnected from 
urd-gds-volume-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:35:31.113289] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 
2-urd-gds-volume-replicate-0: Client-quorum is not met
[2018-04-09 11:35:31.113289] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from 
urd-gds-volume-client-2. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:35:31.113351] E [MSGID: 108006] 
[afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: 
All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 11:35:31.113367] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-3: disconnected from 
urd-gds-volume-client-3. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:35:31.113492] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from 
urd-gds-volume-client-4. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:35:31.113500] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 
2-urd-gds-volume-replicate-1: Client-quorum is not met
[2018-04-09 11:35:31.113511] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-5: disconnected from 
urd-gds-volume-client-5. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 11:35:31.113554] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 2-urd-gds-volume-snapd-client: disconnected 
from urd-gds-volume-snapd-client. Client process will keep trying to connect to 
glusterd until brick's port is available
[2018-04-09 11:35:31.113567] E [MSGID: 108006] 
[afr-common.c:5006:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-1: 
All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 12:05:35.111892] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: 
switched to graph 5
[2018-04-09 12:05:35.116187] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-0: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116214] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-1: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116223] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-0: disconnected from 
urd-gds-volume-client-0. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116227] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-2: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116252] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-1: disconnected from 
urd-gds-volume-client-1. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116257] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-3: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116258] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-2: disconnected from 
urd-gds-volume-client-2. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116273] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-4: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116273] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 
0-urd-gds-volume-replicate-0: Client-quorum is not met
[2018-04-09 12:05:35.116288] I [MSGID: 114021] [client.c:2369:notify] 
0-urd-gds-volume-client-5: current graph is no longer active, destroying 
rpc_client
[2018-04-09 12:05:35.116393] E [MSGID: 108006] 
[afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-0: 
All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 12:05:35.116397] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-3: disconnected from 
urd-gds-volume-client-3. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116574] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-4: disconnected from 
urd-gds-volume-client-4. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116575] I [MSGID: 114018] 
[client.c:2285:client_rpc_notify] 0-urd-gds-volume-client-5: disconnected from 
urd-gds-volume-client-5. Client process will keep trying to connect to glusterd 
until brick's port is available
[2018-04-09 12:05:35.116592] W [MSGID: 108001] [afr-common.c:5233:afr_notify] 
0-urd-gds-volume-replicate-1: Client-quorum is not met
[2018-04-09 12:05:35.116646] E [MSGID: 108006] 
[afr-common.c:5006:__afr_handle_child_down_event] 0-urd-gds-volume-replicate-1: 
All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 12:13:18.767382] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 
5-urd-gds-volume-dht: renaming /interbull/backup/scripts/backup/gsnapshotctl.sh 
(hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) => /interbull/backup/scripts/backup/gsnapshotctl.sh~ 
(hash=urd-gds-volume-replicate-1/cache=<nul>)
[2018-04-09 13:34:54.031860] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 
5-urd-gds-volume-dht: renaming 
/interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh 
(hash=urd-gds-volume-replicate-0/cache=urd-gds-volume-replicate-0) => 
/interbull/backup/scripts/backup/bkp_gluster_to_ribston.sh~ 
(hash=urd-gds-volume-replicate-1/cache=urd-gds-volume-replicate-0)




Many thanks in advance!!


Best regards

Marcus


--
**************************************************
* Marcus Pedersén                                *
* System administrator                           *
**************************************************
* Interbull Centre                               *
* ================                               *
* Department of Animal Breeding & Genetics - SLU *
* Box 7023, SE-750 07                            *
* Uppsala, Sweden                                *
**************************************************
* Visiting address:                              *
* Room 55614, Ulls väg 26, Ultuna                *
* Uppsala                                        *
* Sweden                                         *
*                                                *
* Tel: +46-(0)18-67 1962                         *
*                                                *
**************************************************
*     ISO 9001 Bureau Veritas No SE004561-1      *
**************************************************
