Hello everyone,

I am facing a problem with the following setup:

- apache-2.0.59-prefork as httpd,
- mod_caucho from Resin 2.1.17, and
- Linux Debian Etch (2.6.18-4-amd64)

The httpd uses the mod_deflate module to compress the output and, as far
as I can see, delivers the output to the client in chunks.
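For reference, this sort of compression is typically enabled with an httpd.conf fragment like the one below (an illustrative sketch, not necessarily identical to the config on the affected server):

```apache
# Load mod_deflate and compress textual responses
# (illustrative fragment, not a copy of the actual config).
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml
```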

This is the scenario:

Resin passes the output for a huge page (e.g. 5 MB) back to mod_caucho.
mod_caucho passes it on to httpd.
httpd delivers the compressed chunks to the client.

So far so good.

But now the client decides "I've got everything I need" and pushes the
stop button.
So not all of those 5 MB have been received by the client.

a) The socket client <-> httpd on the httpd server goes into CLOSE_WAIT
(the remote end has shut down; the kernel is waiting for httpd to close
its end of the socket).

b) The socket mod_caucho <-> Resin on the httpd server stays ESTABLISHED
and has data in its TCP input queue
(it can no longer pass the content on to httpd?).

c) The socket Resin <-> mod_caucho on the Resin server stays ESTABLISHED
and has data in its TCP output queue
(it cannot pass the content to mod_caucho on the httpd server).

For some reason, as long as the socket from a) stays in CLOSE_WAIT,
the connection from b) cannot be reused for new requests. It just hangs.
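To illustrate the CLOSE_WAIT mechanics outside of Apache/Resin entirely, here is a minimal plain-Python sketch (my own illustration, not mod_caucho code): the client's close() arrives as a FIN, the server-side read returns zero bytes, and the server's socket sits in CLOSE_WAIT until the server itself calls close().

```python
import socket

# Server listens on an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# Client connects and immediately "pushes the stop button".
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()
client.close()        # sends FIN; conn is now in CLOSE_WAIT

# The server side CAN detect this: recv() returns b"" at EOF.
data = conn.recv(1024)
print(data == b"")    # True: the peer has closed

# Only this close() moves the socket out of CLOSE_WAIT
# (through LAST_ACK and then gone).
conn.close()
server.close()
```

So the kernel does tell the application about the disconnect; CLOSE_WAIT only persists if the application never acts on that EOF and never closes its end.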

So if more clients do the same as above, sooner or later all connections
to Resin are busy.

Does anyone know this kind of problem and have some ideas to share?

Why does httpd stay in CLOSE_WAIT for so long? Why doesn't it close the
socket and send its FIN?

Can mod_caucho recognize that the socket of its httpd process has been
closed by the remote client and release all resources, so that it can go
on with the next request from another httpd process?

Or is mod_caucho bound to one httpd process for as long as that httpd
process "lives"?

Thanks for any ideas, tips, or hints.

Olaf Krische
Sent from the Resin mailing list archive at Nabble.com.
