All,
I'm trying to understand an odd issue I'm observing with stunnel which
is proxying 4 separate AJP connections between my web server and my
Tomcat server.
Note that there doesn't seem to be any interruption in service or
anything. It's just bothering my brain that I'm getting these log
messages and I don't think this should be happening.
Every few seconds, I'm seeing this in my stunnel log file on the Tomcat
side, i.e. the "server" end of the stunnel:
Apr 16 10:54:33 host stunnel: LOG3[107806]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:54:48 host stunnel: LOG3[107820]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:55:02 host stunnel: LOG3[107818]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:55:14 host stunnel: LOG3[107847]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:55:39 host stunnel: LOG3[107855]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:56:10 host stunnel: LOG3[107898]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
Apr 16 10:56:12 host stunnel: LOG3[107905]: transfer: s_poll_wait:
TIMEOUTclose exceeded: closing
I'm also seeing similar issues on the stunnel CLIENT side as well.
I'm trying to understand why these connections are being closed.
I'm using Tomcat 8.5.65, httpd 2.4.46 with mod_jk 1.2.41, and stunnel
4.56 (client) and 5.50 (server).
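For context, the chain is httpd/mod_jk -> stunnel client -> stunnel
server -> Tomcat, so each mod_jk worker points at the local stunnel
client. Roughly like this (worker name, host, and port here are just
placeholders, not my real config):

```
# Hypothetical mod_jk worker aimed at the local stunnel client
# endpoint; stunnel then carries AJP over TLS to the Tomcat box.
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=127.0.0.1
worker.tomcat1.port=8010
```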
Working backward from Tomcat, I'm using the AJP NIO connector with most
defaults, which should have 10k max connections and an infinite
connectionTimeout.
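For reference, a connector along these lines, with those defaults
written out explicitly (port and address are placeholders;
secretRequired="false" is needed on 8.5.51+ when no AJP secret is
configured):

```xml
<!-- Hypothetical AJP NIO connector; values shown are the defaults
     described above, not a copy of my real server.xml. -->
<Connector protocol="AJP/1.3"
           port="8011"
           address="127.0.0.1"
           maxConnections="10000"
           connectionTimeout="-1"
           secretRequired="false" />
```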
My stunnel configurations are using default timeouts for various things,
which include:
TIMEOUTbusy = 300 seconds
TIMEOUTclose = 60 seconds
TIMEOUTconnect = 10 seconds
TIMEOUTidle = 43200 seconds
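In stunnel.conf terms, that is equivalent to spelling the defaults out
explicitly (the accept/connect values below are placeholders):

```
; Hypothetical service section; the TIMEOUT* lines restate stunnel's
; built-in defaults rather than override them.
[ajp]
accept = 127.0.0.1:8010
connect = tomcat-host:8011
TIMEOUTbusy = 300
TIMEOUTclose = 60
TIMEOUTconnect = 10
TIMEOUTidle = 43200
```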
I'm not seeing any problems with idle timeouts (which would have been my
first thought about timeouts), only these TIMEOUTclose timeouts.
TIMEOUTclose is how long stunnel waits, once it has decided to close a
connection, for the peer to close its end as well (TLS close_notify /
TCP FIN); when those 60 seconds expire, stunnel forcibly closes the
socket.
On both the web server and the application server, I can see some
percentage of stunnel connections in the TIME_WAIT state.
TIME_WAIT means that *this side* of the connection initiated the close
and the close handshake completed; the kernel then holds the socket for
a while (2*MSL) to absorb any stray segments.
I'm not seeing any CLOSE_WAIT (remote side closed, local application
hasn't) on either the web or the application server, which seems
strange to me.
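To see how the states break down, a quick tally over `ss` output works.
The snippet below runs the counting logic against canned sample lines
(made-up addresses) so it's reproducible; on a live box you'd replace
the sample with something like `ss -tan 'sport = :8009'`:

```shell
# Stand-in for live `ss -tan` output filtered on the AJP port;
# the addresses below are fabricated for illustration.
sample='ESTAB      0 0 10.0.0.1:8009 10.0.0.2:41000
TIME-WAIT  0 0 10.0.0.1:8009 10.0.0.2:41001
TIME-WAIT  0 0 10.0.0.1:8009 10.0.0.2:41002'

# Tally connections by TCP state (first column of ss output).
summary=$(printf '%s\n' "$sample" |
  awk '{count[$1]++} END {for (s in count) print s, count[s]}' |
  sort)
echo "$summary"
```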
So I have two questions for anyone who can help me figure out what's
happening, here:
1. Why is stunnel choosing to close connections? I shouldn't be hitting
the "busy" or "connect" timeouts, and the idle timeout is 12 hours,
which is a long time.
2. When the connections are closed, why do they not close cleanly,
leaving stunnel to forcibly close them after the 60-second
TIMEOUTclose? I don't believe I have any firewall rules between these
servers that would be dropping these connections due to e.g. an idle
timeout.
Thanks,
-chris
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org