On 23.08.2017 09:43, Grigor Aleksanyan wrote:
Hi André,

Thanks for your detailed response, I really appreciate your help. I see
what you mean and have a couple more questions on this topic.

I was looking into using the Spring framework's eventing facilities
to register a *TcpConnectionClose* event listener in my web application,
using the *org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent*
and *ApplicationListener* classes.

I was hoping to get *TcpConnectionClose* events when a remote client socket
disconnects, receive them in my listener class, and use the connection ids
to do proper cleanup. However, I wasn't able to receive
*TcpConnectionCloseEvent*s from Tomcat's HTTP connector port: I am not the
one who creates the server listening on that port (so I couldn't register a
listener for it), and the client-side socket is not created by me in Java,
so I cannot call close on it and receive close events.

However, I was able to receive *TcpConnectionClose* events when I created
my own, separate server, registered a listener for TcpConnectionClose events
on it, created a client socket, connected it to my server's port (again in
Java, not C++), and *explicitly* called close on that socket. Do you think
that this eventing approach has no potential to work for Tomcat's HTTP
connector server, and that I should not look into it further?
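For reference, the cleanup idea I have in mind looks roughly like the
following. Since Spring itself isn't shown here, this is a plain-Java sketch
that mimics the ApplicationListener/ApplicationEventPublisher pattern with
hand-rolled stand-in types (CloseEvent, Publisher, and the connection ids
are all hypothetical names I made up, not Spring API):

```java
import java.util.*;
import java.util.function.Consumer;

public class CloseEventDemo {
    // Stand-in for TcpConnectionCloseEvent: carries the id of the closed connection.
    static final class CloseEvent {
        final String connectionId;
        CloseEvent(String connectionId) { this.connectionId = connectionId; }
    }

    // Minimal publisher mimicking Spring's listener wiring.
    static final class Publisher {
        private final List<Consumer<CloseEvent>> listeners = new ArrayList<>();
        void addListener(Consumer<CloseEvent> l) { listeners.add(l); }
        void publish(CloseEvent e) { listeners.forEach(l -> l.accept(e)); }
    }

    static Set<String> demo() {
        // Per-connection resources keyed by connection id.
        Set<String> resources = new HashSet<>(Arrays.asList("conn-1", "conn-2"));
        Publisher publisher = new Publisher();
        // The listener does the cleanup for the connection that closed.
        publisher.addListener(e -> resources.remove(e.connectionId));
        publisher.publish(new CloseEvent("conn-1")); // remote side went away
        return resources; // only conn-2's resources remain
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```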

Also, one more consideration I was thinking about:

In the case of a plain TCP (not HTTP) client/server protocol, when the
server calls the *recv* function and the client side has disconnected,
*recv* returns 0 (orderly shutdown) or an error code (abortive close) on
the server.
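For contrast with the C-level *recv* behaviour, here is a minimal plain-Java
(not Tomcat or Spring) sketch of what an orderly client close looks like on
the server side: read() returns -1 at end-of-stream, the Java counterpart of
*recv* returning 0, and no exception is thrown. The class and method names
are mine, invented for the sketch:

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DisconnectDemo {
    // Accepts one connection whose client closes immediately, then reports
    // what read() returns on the server side.
    static int readAfterClientClose() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port)) {
                    // Connect, then close immediately (via try-with-resources).
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            try (Socket accepted = server.accept()) {
                InputStream in = accepted.getInputStream();
                // Blocks until the client closes, then returns -1 (end of
                // stream) -- no exception is thrown for an orderly close.
                int r = in.read();
                client.join();
                return r;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read returned: " + readAfterClientClose());
    }
}
```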

*I wonder why, in the case of Tomcat, calling read on the request's
InputStream doesn't throw when the client is disconnected?*

My initial assumption was the following. Since I use *HTTP 1.1*, which
supports persistent connections and pipelining of requests (the client
need not wait to receive the response for one request before sending
another request on the same connection), Tomcat should keep calling *recv* on
the handle to receive new requests from the same remote client's socket
(until it sends a Connection: Close header). However, my tests show that
*doFilter* is not called a second time until the *doFilter* call for the
first request (from the same connection) has finished.
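The in-order, one-at-a-time handling I observed can be sketched with plain
sockets and a toy line-based protocol (no real HTTP, and no Tomcat
internals; all names are mine): the client sends two "requests" back-to-back
on one connection, and the single-threaded server answers them strictly in
arrival order, the second only after the first response has been written.

```java
import java.io.*;
import java.net.*;
import java.util.*;

public class PipeliningDemo {
    // Two pipelined requests on one connection; responses come back in order.
    static List<String> pipelinedExchange() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream(), "US-ASCII"));
                     Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII")) {
                    String req;
                    // Requests are handled serially: the second one is not
                    // touched until the first response has been flushed.
                    while ((req = in.readLine()) != null) {
                        out.write("echo:" + req + "\n");
                        out.flush();
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            serverThread.start();
            try (Socket client = new Socket("127.0.0.1", port)) {
                Writer out = new OutputStreamWriter(client.getOutputStream(), "US-ASCII");
                out.write("one\ntwo\n"); // both requests sent before reading any response
                out.flush();
                client.shutdownOutput();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream(), "US-ASCII"));
                List<String> responses = new ArrayList<>();
                String line;
                while ((line = in.readLine()) != null) responses.add(line);
                serverThread.join();
                return responses;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pipelinedExchange());
    }
}
```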

*Is this done by design, to make sure that the server sends responses (on a
given connection) in the same order that it received the corresponding
requests?*
And what happens when I call *HttpServletRequest.getInputStream().read(...)* in
the doFilter function after the remote client that submitted the ongoing
request has disconnected? My feeling is that Tomcat accepts the connection
from the client (receives a socket handle to read requests from), reads the
first request from the accepted connection, and propagates the request to the
doFilter function (without passing the actual server-side handle). So the
*HttpServletRequest* is completely separated from the underlying socket,
and calling read on the request's input stream doesn't actually call *recv* on
the TCP socket handle. Am I right? If so, this could explain why calling read
doesn't throw.

But if pipelining is supported, then from what I understand the server should
keep calling *recv* on the accepted handle (somewhere in the low-level code),
so a client disconnect should return SOCKET_ERROR, and this should somehow be
visible in Tomcat (let's first assume there are no middleware proxies)?
What do you think, am I missing something?

Although (I repeat) this is not really my area of expertise, I think that I should warn you of a couple of aspects:

- A TCP socket is a bi-directional "thing", with each side having its "sending half" and its "receiving half". And I believe that it is perfectly protocol-compliant for a client to close its "sending half", while still keeping its "receiving half" open and reading from it. So, as far as I understand, if the server detects that the client's "sending half" is now closed (in other words, a *recv* triggers an "end-of-file" error), it does not mean that the client has gone away. It just means that the client has nothing more to send. But the client may still be reading on its "receiving half", until it gets the full response to a previous request.

- The "HTTP keepalive" and "pipelining" functionality is meant to avoid the overhead of establishing, and then later tearing down, a TCP connection between client and server for each request. That was introduced at a time when such procedures were comparatively time- and resource-consuming (several packets being exchanged back and forth), and lines were comparatively slow. So, for example, retrieving an HTML page containing 10 image pointers resulted in 11 successive connections being created and torn down. "Keepalive" instead allows all 11 requests, and 11 responses, to be sent one after the other on the same unique TCP/HTTP connection. That saves 10 TCP setup/teardown sequences. But that does not mean that the client does not expect the 11 responses to come back in the same order as the requests that it sent (otherwise, how would it know where to insert the images into the page? In the responses, the server is not sending back a copy of the original requests, nor any pointer to them).
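The half-close point above can be demonstrated with plain Java sockets (a
toy sketch, not Tomcat; the names are mine): the client calls
Socket.shutdownOutput() to close only its sending half, the server sees
end-of-stream on read, yet can still write a response that the client then
receives on its still-open receiving half.

```java
import java.io.*;
import java.net.*;

public class HalfCloseDemo {
    // Returns the response the client receives after half-closing its sending side.
    static String halfCloseExchange() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept()) {
                    // The server drains the request until EOF (the client's
                    // sending half was closed)...
                    InputStream in = s.getInputStream();
                    while (in.read() != -1) { /* consume */ }
                    // ...but the connection is only half closed: the server
                    // can still send its response.
                    s.getOutputStream().write("response".getBytes("US-ASCII"));
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            serverThread.start();
            try (Socket client = new Socket("127.0.0.1", port)) {
                client.getOutputStream().write("request".getBytes("US-ASCII"));
                client.shutdownOutput(); // close only the sending half
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(client.getInputStream(), "US-ASCII"));
                String response = r.readLine(); // receiving half still works
                serverThread.join();
                return response;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(halfCloseExchange());
    }
}
```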

I think that what you are describing above is more like websockets.
