Hi, all.

My company has been experimenting with GeoServer (2.0.2 and 2.1) for several 
months.  We use Ruby's Net::HTTP to publish image mosaics to GeoServer via 
its REST API.  Both the publishing code and GeoServer run on 64-bit Red Hat 5 
Linux (CentOS 5) with JDK 6u26.  A cron job runs the publishing script once a 
minute, and we publish at least 10-20 mosaics per minute, 24/7.
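
For context, each publish boils down to a single REST call.  A simplified 
sketch of what the script does (the workspace name, store name, mosaic path, 
and credentials below are placeholders, not our real values):

require 'net/http'
require 'uri'

# Register (or refresh) an image mosaic directory through GeoServer's REST API.
# Workspace/store names, the mosaic path, and credentials are placeholders.
base = 'http://localhost:8080/geoserver/rest'
uri  = URI.parse(base + '/workspaces/ws/coveragestores/my_mosaic/external.imagemosaic')

Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Put.new(uri.request_uri)
  request.basic_auth('admin', 'geoserver')
  request['Content-Type'] = 'text/plain'
  request.body = 'file:///data/mosaics/my_mosaic'
  response = http.request(request)
  puts "#{response.code} #{response.message}"
end

(The block form of Net::HTTP.start closes the client connection when the block 
returns; in any case the sockets shown by netstat below are owned by the 
server-side java process.)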

We have had resource problems ever since we started running GeoServer, and 
they get worse the more mosaics we publish.  We finally narrowed the problem 
down to file descriptors/TCP sockets.  We are also seeing memory growth 
(a leak?) and growth in the number of processes/threads that closely track the 
growth in open TCP sockets.  We use Ganglia to monitor these metrics.  To 
temporarily alleviate the problem we raised the file descriptor limit from the 
default 1024 to somewhere in the five-figure range, but then the memory limit 
was hit before the file descriptor limit was.  We have tried running GeoServer 
both in the Jetty container it ships with and in a Tomcat 6 container, and we 
get the same resource problems in either case.

Besides monitoring the metrics graphically in Ganglia, we also check the TCP 
sockets occasionally with "netstat -anp --inet".  Here is a snippet of the 
output:

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:53088               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:33920               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:38912               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:49408               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:44800               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:48288               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:39040               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:59776               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:39936               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:45632               0.0.0.0:*                   LISTEN      4404/java
tcp        0      0 0.0.0.0:58816               0.0.0.0:*                   LISTEN      4404/java

The number of these listening sockets keeps growing the longer we publish 
image mosaics to GeoServer.  We have had to restart GeoServer every few hours 
just to clear out the sockets (and reclaim the memory).
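
For anyone who wants to reproduce the count without netstat, a minimal Ruby 
sketch along these lines reads the descriptor table straight from /proc 
(PID 4404 as in the output above; it needs to run as the GeoServer user or 
root):

# Count open file descriptors and sockets for the GeoServer JVM.
# PID 4404 as in the netstat output above; adjust for your system.
pid = 4404
fds = Dir.glob("/proc/#{pid}/fd/*")
sockets = fds.select do |fd|
  begin
    File.readlink(fd) =~ /\Asocket:/
  rescue Errno::ENOENT
    false  # descriptor was closed between listing and readlink
  end
end
puts "#{Time.now}  open fds: #{fds.size}  sockets: #{sockets.size}"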

I have searched the GEOS and JETTY bug trackers and found sporadic reports of 
socket leaks, but from their resolutions it is still unclear to us whether a 
socket leak actually exists and, if so, whether it is in GeoServer or in 
Jetty.

My questions are:
1. Is either GeoServer or Jetty leaking sockets/memory/threads?
2. If yes, is it being worked on?
3. If it is not a leak, what is the proper way to have GeoServer close these sockets?
4. Is there a way to force GeoServer to close the sockets no matter what?

Thank you in advance for any help or insight that you may offer.

