Hi Ravi,
back to our client-cannot-reconnect-to-gluster-brick problem ...
> From: Ravishankar N [[email protected]]
> Sent: Monday, 29 May 2017 06:34
> To: Markus Stockhausen; [email protected]
> Subject: Re: [Gluster-users] gluster heal entry reappears
>
> On 05/28/2017 10:31 PM, Markus Stockhausen wrote:
> > Hi,
> >
> > I'm fairly new to gluster and quite happy with it. We are using it in an
> > OVirt environment that stores its VM images on gluster. The setup is as
> > follows, and clients mount the volume with the gluster native FUSE
> > protocol.
> >
> > 3 storage nodes: Centos 7, Gluster 3.8.12 (managed by me), 2 bricks each
> > 5 virtualization nodes: Centos 7, Gluster 3.8.12 (managed by OVirt engine)
> >
> > After today's reboot of one of the storage nodes the recovery did not
> > finish successfully. The state of one brick remained at:
> >
> > [root@cfiler301 dom_md]# gluster volume heal gluster1 info
> > ...
> > Brick cfilers201:/var/data/brick1/brick
> > /b1de7818-020b-4f47-938f-f3ebb51836a3/dom_md/ids
> > Status: Connected
> > Number of entries: 1
> > ...
> >
> > The above file is used by sanlock running on the OVirt nodes to handle VM
> > image locking. Issuing a manual heal with "gluster volume heal gluster1"
> > fixed the problem, but the unsynced entry reappeared a few seconds later.
> >
> > My question: Should this situation be recovered automatically and if yes
> > what might be the culprit?
>
> Our QE folks have observed this while testing too, but in all cases there
> was an intermittent disconnect from the fuse mount to the bricks, leading
> to the 'ids' file needing heal (and being healed on reconnect) again and
> again.
> Perhaps you should check if and why the mount is getting disconnected from
> the bricks.
>
> HTH,
> Ravi
Just finished maintenance on one of the cluster brick nodes and the error
reappeared. Status is as follows:
- 3 gluster brick nodes: working flawlessly, logs are silent after the resync
- 3 gluster fuse client nodes: working flawlessly, logs are silent after the
  resync
- 1 gluster fuse client node: spitting errors every 2 minutes since the
  maintenance (quick cross-check below)
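A quick way to cross-check which clients each brick still sees connected,
assuming the volume name, would be something like:

# gluster volume status gluster1 clients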
Client log beginning at the maintenance time:
[2017-06-17 09:08:26.032957] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gluster1-client-1: server
100.64.252.1:49154 has not responded in the last 42 seconds, disconnecting.
[2017-06-17 09:08:26.033302] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gluster1-client-2: server
100.64.252.1:49155 has not responded in the last 42 seconds, disconnecting.
[2017-06-17 09:08:26.033751] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4]
)))))))))) 0-gluster1-client-1: forced unwinding frame type(GlusterFS 3.3)
op(LOOKUP(27)) called at 2017-06-17 09:07:43.578036 (xid=0x10e233)
[2017-06-17 09:08:26.033753] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4]
)))))))))) 0-gluster1-client-2: forced unwinding frame type(GlusterFS 3.3)
op(FINODELK(30)) called at 2017-06-17 09:07:59.914614 (xid=0xa4141)
[2017-06-17 09:08:26.033788] E [MSGID: 114031]
[client-rpc-fops.c:1596:client3_3_finodelk_cbk] 0-gluster1-client-2: remote
operation failed [Transport endpoint is not connected]
[2017-06-17 09:08:26.033799] W [MSGID: 114031]
[client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gluster1-client-1: remote
operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport
endpoint is not connected]
[2017-06-17 09:08:26.034008] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-2: forced unwinding frame type(GlusterFS 3.3) op(LOOKUP(27))
called at 2017-06-17 09:07:43.578069 (xid=0xa413e)
[2017-06-17 09:08:26.034029] W [MSGID: 114031]
[client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gluster1-client-2: remote
operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport
endpoint is not connected]
[2017-06-17 09:08:26.034078] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at
2017-06-17 09:07:44.028106 (xid=0x10e234)
[2017-06-17 09:08:26.034150] W [rpc-clnt-ping.c:203:rpc_clnt_ping_cbk]
0-gluster1-client-1: socket disconnected
[2017-06-17 09:08:26.034345] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-1: forced unwinding frame type(GlusterFS 3.3) op(LOOKUP(27))
called at 2017-06-17 09:07:48.933278 (xid=0x10e235)
[2017-06-17 09:08:26.034459] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at
2017-06-17 09:07:44.028115 (xid=0xa413f)
[2017-06-17 09:08:26.034478] W [rpc-clnt-ping.c:203:rpc_clnt_ping_cbk]
0-gluster1-client-2: socket disconnected
[2017-06-17 09:08:26.034535] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-1: forced unwinding frame type(GlusterFS 3.3) op(LOOKUP(27))
called at 2017-06-17 09:08:01.714032 (xid=0x10e236)
[2017-06-17 09:08:26.034614] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-2: forced unwinding frame type(GlusterFS 3.3) op(LOOKUP(27))
called at 2017-06-17 09:07:48.933304 (xid=0xa4140)
[2017-06-17 09:08:26.034820] E [rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] (-->
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x94)[0x7f8fa7167a64] )))))
0-gluster1-client-2: forced unwinding frame type(GlusterFS 3.3) op(LOOKUP(27))
called at 2017-06-17 09:08:01.714058 (xid=0xa4142)
[2017-06-17 09:08:26.034923] I [MSGID: 114018]
[client.c:2280:client_rpc_notify] 0-gluster1-client-2: disconnected from
gluster1-client-2. Client process will keep trying to connect to glusterd until
brick's port is available
[2017-06-17 09:08:26.037127] I [socket.c:3401:socket_submit_request]
0-gluster1-client-1: not connected (priv->connected = 0)
[2017-06-17 09:08:26.037145] W [rpc-clnt.c:1657:rpc_clnt_submit]
0-gluster1-client-1: failed to submit rpc-request (XID: 0x10e237 Program:
GlusterFS 3.3, ProgVers: 330, Proc: 27) to rpc-transport (gluster1-client-1)
[2017-06-17 09:08:26.037172] W [MSGID: 114031]
[client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gluster1-client-1: remote
operation failed. Path: /b1de7818-020b-4f47-938f-f3ebb51836a3
(663eae40-1818-43bb-b479-dec81057b5e1) [Transport endpoint is not connected]
[2017-06-17 09:08:26.037195] I [MSGID: 114018]
[client.c:2280:client_rpc_notify] 0-gluster1-client-1: disconnected from
gluster1-client-1. Client process will keep trying to connect to glusterd until
brick's port is available
[2017-06-17 09:08:26.038702] W [MSGID: 114031]
[client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gluster1-client-1: remote
operation failed. Path: (null) (00000000-0000-0000-0000-000000000000)
[Transport endpoint is not connected]
[2017-06-17 09:08:26.121603] W [MSGID: 114031]
[client-rpc-fops.c:2524:client3_3_lk_cbk] 0-gluster1-client-2: remote operation
failed [Transport endpoint is not connected]
[2017-06-17 09:08:29.040058] E [socket.c:2309:socket_connect_finish]
0-gluster1-client-1: connection to 100.64.252.1:24007 failed (No route to host)
[2017-06-17 09:08:38.142061] E [socket.c:2309:socket_connect_finish]
0-gluster1-client-2: connection to 100.64.252.1:24007 failed (No route to host)
The message "W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk]
0-gluster1-client-1: remote operation failed. Path: /
(00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]"
repeated 2 times between [2017-06-17 09:08:26.033799] and [2017-06-17
09:08:26.034553]
The message "W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk]
0-gluster1-client-2: remote operation failed. Path: /
(00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]"
repeated 2 times between [2017-06-17 09:08:26.034029] and [2017-06-17
09:08:26.034850]
[2017-06-17 09:08:26.035008] E [MSGID: 114031]
[client-rpc-fops.c:1596:client3_3_finodelk_cbk] 0-gluster1-client-2: remote
operation failed [Transport endpoint is not connected]
The message "W [MSGID: 114031] [client-rpc-fops.c:2524:client3_3_lk_cbk]
0-gluster1-client-2: remote operation failed [Transport endpoint is not
connected]" repeated 3 times between [2017-06-17 09:08:26.121603] and
[2017-06-17 09:09:01.336388]
[2017-06-17 09:10:01.462178] W [MSGID: 114031]
[client-rpc-fops.c:2524:client3_3_lk_cbk] 0-gluster1-client-2: remote operation
failed [Transport endpoint is not connected]
The message "W [MSGID: 114031] [client-rpc-fops.c:2524:client3_3_lk_cbk]
0-gluster1-client-2: remote operation failed [Transport endpoint is not
connected]" repeated 3 times between [2017-06-17 09:10:01.462178] and
[2017-06-17 09:11:01.668207]
[2017-06-17 09:12:01.792341] W [MSGID: 114031]
[client-rpc-fops.c:2524:client3_3_lk_cbk] 0-gluster1-client-2: remote operation
failed [Transport endpoint is not connected]
The message "W [MSGID: 114031] [client-rpc-fops.c:2524:client3_3_lk_cbk]
0-gluster1-client-2: remote operation failed [Transport endpoint is not
connected]" repeated 3 times between [2017-06-17 09:12:01.792341] and
[2017-06-17 09:13:02.002464]
...
And so on
...
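Side note: the 42 seconds in the ping-timer messages above corresponds to the
default network.ping-timeout. To rule out a non-default setting, it can be
checked with something like:

# gluster volume get gluster1 network.ping-timeout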
Digging deeper into the problem with an strace of the gluster process on the
client, I found the following sequence (100.64.252.1 being the brick node
that was in maintenance):
4543 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 5
4543 setsockopt(5, SOL_TCP, TCP_NODELAY, [1], 4) = 0
4543 setsockopt(5, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
4543 setsockopt(5, SOL_TCP, TCP_KEEPIDLE, [20], 4) = 0
4543 setsockopt(5, SOL_TCP, TCP_KEEPINTVL, [2], 4) = 0
4543 setsockopt(5, SOL_TCP, TCP_USER_TIMEOUT, [0], 4) = 0
4543 open("/proc/sys/net/ipv4/ip_local_reserved_ports", O_RDONLY) = 7
4543 read(7, "\n", 4096) = 1
4543 close(7) = 0
4543 bind(5, {sa_family=AF_INET, sin_port=htons(49151),
sin_addr=inet_addr("100.64.251.6")}, 16) = 0
4543 fcntl(5, F_GETFL) = 0x2 (flags O_RDWR)
4543 fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0
4543 connect(5, {sa_family=AF_INET, sin_port=htons(24007),
sin_addr=inet_addr("100.64.252.1")}, 16) = -1 EINPROGRESS (Operation now in
progress)
4543 fcntl(5, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
4543 fcntl(5, F_SETFL, O_RDWR|O_NONBLOCK) = 0
4543 epoll_ctl(3, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLPRI|EPOLLOUT|EPOLLONESHOT,
{u32=2, u64=4952097292290}}) = 0
...
4543 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 7
4543 setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
4543 setsockopt(7, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
4543 setsockopt(7, SOL_TCP, TCP_KEEPIDLE, [20], 4) = 0
4543 setsockopt(7, SOL_TCP, TCP_KEEPINTVL, [2], 4) = 0
4543 setsockopt(7, SOL_TCP, TCP_USER_TIMEOUT, [0], 4) = 0
4543 open("/proc/sys/net/ipv4/ip_local_reserved_ports", O_RDONLY) = 12
4543 read(12, "\n", 4096) = 1
4543 close(12) = 0
4543 bind(7, {sa_family=AF_INET, sin_port=htons(49151),
sin_addr=inet_addr("100.64.251.6")}, 16) = -1 EADDRINUSE (Address already in
use)
4543 bind(7, {sa_family=AF_INET, sin_port=htons(49150),
sin_addr=inet_addr("100.64.251.6")}, 16) = -1 EADDRINUSE (Address already in
use)
4543 bind(7, {sa_family=AF_INET, sin_port=htons(49149),
sin_addr=inet_addr("100.64.251.6")}, 16) = 0
4543 fcntl(7, F_GETFL) = 0x2 (flags O_RDWR)
4543 fcntl(7, F_SETFL, O_RDWR|O_NONBLOCK) = 0
4543 connect(7, {sa_family=AF_INET, sin_port=htons(24007),
sin_addr=inet_addr("100.64.252.1")}, 16) = -1 EINPROGRESS (Operation now in
progress)
4543 fcntl(7, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
4543 fcntl(7, F_SETFL, O_RDWR|O_NONBLOCK) = 0
4543 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLPRI|EPOLLOUT|EPOLLONESHOT,
{u32=4, u64=4810363371524}}) = 0
...
4550 <... epoll_wait resumed> {{EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=2,
u64=4952097292290}}}, 1, -1) = 1
4548 <... epoll_wait resumed> {{EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=4,
u64=4810363371524}}}, 1, -1) = 1
4550 getsockopt(5, SOL_SOCKET, SO_ERROR, <unfinished ...>
4548 getsockopt(7, SOL_SOCKET, SO_ERROR, <unfinished ...>
4550 <... getsockopt resumed> [110], [4]) = 0
4548 <... getsockopt resumed> [110], [4]) = 0
4550 shutdown(5, SHUT_RDWR <unfinished ...>
4548 shutdown(7, SHUT_RDWR <unfinished ...>
4550 <... shutdown resumed> ) = -1 ENOTCONN (Transport endpoint is
not connected)
4548 <... shutdown resumed> ) = -1 ENOTCONN (Transport endpoint is
not connected)
...
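For reference, the SO_ERROR value 110 that getsockopt() returns above is
ETIMEDOUT ("Connection timed out"). A quick way to confirm the errno mapping,
assuming python is installed on the node:

# python -c 'import errno, os; print(errno.errorcode[110], os.strerror(110))'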
Direct ping + ssh between server and client (and vice versa) works fine.
Looking at a tcpdump on the brick side I see the TCP handshake attempts from
the client. It looks strange because the handshake is repeated after 1, 2, 4
seconds and so on:
18:55:51.176832 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
3453818375, win 28364, options [mss 4052,sackOK,TS val 362061385 ecr
0,nop,wscale 7], length 0
18:55:51.176884 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
1929363096, ack 3453818376, win 28280, options [mss 4052,sackOK,TS val 13791464
ecr 362061385,nop,wscale 7], length 0
18:55:51.179391 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
2219902567, win 28364, options [mss 4052,sackOK,TS val 362061388 ecr
0,nop,wscale 7], length 0
18:55:51.179433 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
2413229577, ack 2219902568, win 28280, options [mss 4052,sackOK,TS val 13791466
ecr 362061388,nop,wscale 7], length 0
18:55:52.176704 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
1929363096, ack 3453818376, win 28280, options [mss 4052,sackOK,TS val 13792464
ecr 362061385,nop,wscale 7], length 0
18:55:52.178695 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
2413229577, ack 2219902568, win 28280, options [mss 4052,sackOK,TS val 13792466
ecr 362061388,nop,wscale 7], length 0
18:55:52.178951 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
3453818375, win 28364, options [mss 4052,sackOK,TS val 362062388 ecr
0,nop,wscale 7], length 0
18:55:52.178971 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
1929363096, ack 3453818376, win 28280, options [mss 4052,sackOK,TS val 13792466
ecr 362061385,nop,wscale 7], length 0
18:55:52.180951 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
2219902567, win 28364, options [mss 4052,sackOK,TS val 362062390 ecr
0,nop,wscale 7], length 0
18:55:52.180979 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
2413229577, ack 2219902568, win 28280, options [mss 4052,sackOK,TS val 13792468
ecr 362061388,nop,wscale 7], length 0
18:55:54.178704 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
1929363096, ack 3453818376, win 28280, options [mss 4052,sackOK,TS val 13794466
ecr 362061385,nop,wscale 7], length 0
18:55:54.180704 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
2413229577, ack 2219902568, win 28280, options [mss 4052,sackOK,TS val 13794468
ecr 362061388,nop,wscale 7], length 0
18:55:54.182952 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
3453818375, win 28364, options [mss 4052,sackOK,TS val 362064392 ecr
0,nop,wscale 7], length 0
18:55:54.182982 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
1929363096, ack 3453818376, win 28280, options [mss 4052,sackOK,TS val 13794470
ecr 362061385,nop,wscale 7], length 0
18:55:54.182991 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
2219902567, win 28364, options [mss 4052,sackOK,TS val 362064392 ecr
0,nop,wscale 7], length 0
18:55:54.183004 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
2413229577, ack 2219902568, win 28280, options [mss 4052,sackOK,TS val 13794470
ecr 362061388,nop,wscale 7], length 0
Same on the client side (tcpdump taken at a later time):
19:02:18.241460 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
911752495, win 28364, options [mss 4052,sackOK,TS val 362448449 ecr
0,nop,wscale 7], length 0
19:02:18.241642 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
3682274790, ack 911752496, win 28280, options [mss 4052,sackOK,TS val 14178528
ecr 362448449,nop,wscale 7], length 0
19:02:18.243955 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
3972802780, win 28364, options [mss 4052,sackOK,TS val 362448452 ecr
0,nop,wscale 7], length 0
19:02:18.244132 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
4166140203, ack 3972802781, win 28280, options [mss 4052,sackOK,TS val 14178530
ecr 362448452,nop,wscale 7], length 0
19:02:19.241228 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
3682274790, ack 911752496, win 28280, options [mss 4052,sackOK,TS val 14179528
ecr 362448449,nop,wscale 7], length 0
19:02:19.243220 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
4166140203, ack 3972802781, win 28280, options [mss 4052,sackOK,TS val 14179530
ecr 362448452,nop,wscale 7], length 0
19:02:19.244000 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
911752495, win 28364, options [mss 4052,sackOK,TS val 362449452 ecr
0,nop,wscale 7], length 0
19:02:19.244121 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
3682274790, ack 911752496, win 28280, options [mss 4052,sackOK,TS val 14179530
ecr 362448449,nop,wscale 7], length 0
19:02:19.245995 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
3972802780, win 28364, options [mss 4052,sackOK,TS val 362449454 ecr
0,nop,wscale 7], length 0
19:02:19.246097 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
4166140203, ack 3972802781, win 28280, options [mss 4052,sackOK,TS val 14179532
ecr 362448452,nop,wscale 7], length 0
19:02:21.243229 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
3682274790, ack 911752496, win 28280, options [mss 4052,sackOK,TS val 14181530
ecr 362448449,nop,wscale 7], length 0
19:02:21.245222 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
4166140203, ack 3972802781, win 28280, options [mss 4052,sackOK,TS val 14181532
ecr 362448452,nop,wscale 7], length 0
19:02:21.247997 IP cadmins101.49151 > cfilers201.24007: Flags [S], seq
911752495, win 28364, options [mss 4052,sackOK,TS val 362451456 ecr
0,nop,wscale 7], length 0
19:02:21.248007 IP cadmins101.49149 > cfilers201.24007: Flags [S], seq
3972802780, win 28364, options [mss 4052,sackOK,TS val 362451456 ecr
0,nop,wscale 7], length 0
19:02:21.248130 IP cfilers201.24007 > cadmins101.49151: Flags [S.], seq
3682274790, ack 911752496, win 28280, options [mss 4052,sackOK,TS val 14181534
ecr 362448449,nop,wscale 7], length 0
19:02:21.248173 IP cfilers201.24007 > cadmins101.49149: Flags [S.], seq
4166140203, ack 3972802781, win 28280, options [mss 4052,sackOK,TS val 14181534
ecr 362448452,nop,wscale 7], length 0
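So on both sides the client's SYN and the brick's SYN-ACK keep being
retransmitted with the usual exponential backoff, but the handshake apparently
never completes even though the SYN-ACK reaches the client. To confirm that
the client sockets really stay stuck in SYN-SENT, one could check on the
client with something like (ss filter syntax, assuming iproute2 is available):

# ss -tn state syn-sent '( dport = :24007 )'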
Out of ideas now. Maybe you see something.
Markus