Hi Eric,

Thank you for your quick and thorough response.
I had already suspected that the OS was not well configured, so I will
incorporate your proposals into my /etc/system settings.
Regarding Tomcat 4.1.27: I will install this version, even though I think
that my problem is more related to the (mis)configuration of the OS.

Thanks,
Thomas


> -----Original Message-----
> From: Eric J. Pinnell [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 1, 2003 16:03
> To: Tomcat Users List
> Cc: Haug Thomas
> Subject: Re: Socket Problem with tomcat on Solaris 8
> 
> Hi,
> 
> Other people have had this problem.  You might want to try Tomcat 4.1.27,
> as it has a number of Coyote fixes.
> 
> As for Solaris settings, here are some that I use (excuse the cut and
> paste):
> 
> Install these patch IDs:
> 103582-12 or better - SYN flood & listen queue management fix
> 103597-04 or better - TCP patch
> 104212-13 or better - HME half/full duplex negotiation patch
> 103093 - required for Netscape 3.x
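> 
> (A quick way to check whether one of these patches is already installed:
> showrev -p lists every installed patch, so each ID from the list above
> can be grepped for.)
> 
>   # e.g. check for the SYN flood / listen queue fix:
>   showrev -p | grep 103582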
> 
> In /etc/system:
> set tcp:tcp_conn_hash_size=8192
> set ncsize=30000
> set ufs_inode=30000
> 
> set rlim_fd_max=4096
> Changes the maximum allowed number of open files per process
> 
> set rlim_fd_cur=1024
> Changes the default number of open files per process
> 
> set autoup=240
> Maximum age, in seconds, of a dirty memory page before it is written to
> disk
> 
> set tune_t_fsflushr=120
> Number of seconds between wake-ups of the fsflush daemon
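> 
> (These /etc/system settings take effect only after a reboot. As a minimal
> sanity check for the file-descriptor limits afterwards, from ksh or bash:)
> 
>   # Soft per-process limit on open files (should report 1024):
>   ulimit -n
>   # Hard per-process limit on open files (should report 4096):
>   ulimit -Hn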
> 
> NDD Settings
> 
> tcp_close_wait_interval to 70000 - milliseconds to wait before reclaiming
> the socket resource
> 
> tcp_fin_wait_2_flush_interval to 25000 - milliseconds to wait before
> closing socket resources that have missed a FIN packet
> 
> ip_path_mtu_discovery to 0 - turns off MTU discovery - must be retuned
> for IPv6
> 
> tcp_conn_req_max_q to 512 - maximum queue size for holding partially
> started connections
> 
> tcp_conn_req_max_q0 to 1024 - number of connections to hold waiting
> before the server issues an "unable to connect to server" message
> 
> tcp_xmit_hiwat to 65535
> 
> tcp_recv_hiwat to 65535 - increases the size of the send and receive
> buffers
> 
> tcp_cwnd_max to 65535 - increases the congestion window size used with
> congestion avoidance and slow start - prevents byte overflows in the
> TCP stack
> 
> tcp_keepalive_interval to 90000 - milliseconds of idle time on keepalive
> connections
> 
> tcp_ip_abort_interval to 90000 - milliseconds during which retransmissions
> for connections in the ESTABLISHED state will be retried. Cleans up hung
> connections on web servers.
> 
> tcp_ip_abort_cinterval to 60000 - milliseconds during which retransmissions
> for connections started but not yet established will continue. Protects
> against powerful SYN flood attacks as well as dropped proxy connections.
> 
> tcp_rexmit_interval_initial to 3000 - milliseconds before a retransmit is
> sent - needs to be lowered due to Internet latency
> 
> tcp_rexmit_interval_min to 3000 - see above
> 
> tcp_rexmit_interval_max to 5000 - see above
> 
> tcp_conn_grace_period to 500
> 
> ip_ignore_redirect to 1 - ignores IP-level redirects
> 
> tcp_slow_start_initial to 2 - Microsoft & BSD TCP/IP implementations do
> not follow RFC 2001 for TCP slow start. When communicating with Solaris
> this causes a 1-2 second delay in web page delivery. This setting fixes
> that.
> 
> tcp_deferred_ack_interval to 300 - milliseconds before sending a delayed
> ACK - allows the ACK and the response to be combined into one send for
> many HTTP requests
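> 
> (For reference, a minimal sketch of how these could be applied with ndd.
> The values are the ones listed above; ndd changes take effect immediately
> but do not survive a reboot, so they usually go into an init script. Note
> that the ip_* parameters live under /dev/ip and the tcp_* parameters
> under /dev/tcp.)
> 
>   # Set TCP-level tunables via the /dev/tcp driver:
>   ndd -set /dev/tcp tcp_conn_req_max_q 512
>   ndd -set /dev/tcp tcp_conn_req_max_q0 1024
>   ndd -set /dev/tcp tcp_xmit_hiwat 65535
>   ndd -set /dev/tcp tcp_recv_hiwat 65535
>   # IP-level tunables use the /dev/ip driver:
>   ndd -set /dev/ip ip_path_mtu_discovery 0
>   ndd -set /dev/ip ip_ignore_redirect 1
>   # Read a value back to verify it took effect:
>   ndd /dev/tcp tcp_conn_req_max_q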
> 
> 
> -e
> 
> 
> On Mon, 1 Sep 2003, Haug Thomas wrote:
> 
> > Hi everybody,
> >
> > I have a problem with tomcat 4.1.24 running on Solaris 8 (with JDK
> > 1.4.1_02).
> > A client of mine is sending requests to the Tomcat instance in a loop
> > (but sequentially). After a while the client receives a
> > java.net.ConnectException with the error message
> > "Connection timed out: connect". On the server side I do not find any
> > exception trace in the log files of the Tomcat instance. The only
> > remarkable thing I detected is that there are thousands of sockets in
> > the TIME_WAIT state: netstat -a | grep "8080" results in
> > myserver.8080 myclient.1679  7480      0 24820      0 TIME_WAIT
> > myserver.8080 myclient.1680  7480      0 24820      0 TIME_WAIT
> > myserver.8080 myclient.1680  7480      0 24820      0 TIME_WAIT
> > ... [many more of these lines]
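> >
> > (For reference, a quick way to count these sockets, assuming the same
> > netstat output format as above:)
> >
> >   netstat -an | grep 8080 | grep -c TIME_WAIT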
> > After a while these sockets get collected by the OS. Then the client
> > runs again without getting a connection exception (at least for a
> > while; then the 'process' repeats).
> >
> > Have any of you experienced a similar behaviour?
> > Does anybody have a solution to this problem? Do I need to configure
> > Tomcat in a special way, or is it a problem of the OS that I have to
> > fix by configuring the OS accordingly? At the moment Tomcat is using
> > the Coyote connector in the following way:
> >    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
> >               port="8080"
> >               minProcessors="5" maxProcessors="75" enableLookups="false"
> >               redirectPort="8443" acceptCount="100" debug="0"
> >               connectionTimeout="20000" useURIValidationHack="false"
> >               disableUploadTimeout="true"/>
> > As far as I remember, the 'connectionLinger' time is disabled if this
> > attribute is not set on the connector. Is this correct, and does this
> > configuration affect the problem I am experiencing?
> >
> > Thank you very much,
> > Thomas
> >
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
