Hey list,
I have the following relay configured on a two-node setup. Each node acts as MASTER
for one IP and BACKUP for the other, with the roles reversed on the second node.
tcp protocol tcp_proto {
    tcp { nodelay, sack, socket buffer 65536, backlog 128 }
}

relay rabbitmq {
    listen on $VIP1 port 5672
    listen on $VIP2 port 5672
    protocol tcp_proto
    # session timeout 10800
    forward to <rabbitmqpool> port 5672 mode roundrobin check tcp
    forward to <rabbitmqfallback> port 5672 mode roundrobin check tcp
}
The default session timeout is 600 s (10 min).
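(If raising that limit turns out to be the fix, my assumption from relayd.conf(5) is that
uncommenting the session timeout line in the relay block, e.g.

    session timeout 10800

would raise the inactivity timeout to 3 h, but please correct me if that is not the right knob.)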
In my test I have a client requesting data from a machine (web1) sitting behind
this relay (node1). web1 has its default gateway pointed at node1, i.e. it is NATed.
node1, obviously, has EXT and INT interfaces, with VIP1 and VIP2 being the external ones.
What I see in tcpdump, running on both the EXT and INT interfaces, is a TCP stream
between the client and web1 via node1 (PUSH/ACK segments). At some point (600 s)
node1 cuts the connection (a FIN is seen in tcpdump).
The question is: is this expected behavior?
Since the states in PF are kept updated by the continuous TCP stream, the session
shouldn't be cut, right?
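For what it's worth, the relay's and PF's view of the session can be cross-checked
while the transfer runs with something like (generic relayctl/pfctl invocations,
nothing node-specific):

    relayctl show sessions
    pfctl -ss | grep 5672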
Any clues?
node1 runs OpenBSD 6.0-stable.
Br
mxb