Hi Alex,

> Below are the settings I'm going to use. Please tell me if I'm wrong
> about any. It's worth mentioning that all servers run in dedicated
> Tomcats, and the HTTP connector is configured with maxThreads="1000",
> so supposedly 1000 concurrent requests can be handled. Server hosts are
> beefy multi-processor systems with at least 2 GB of memory, and Tomcat
> is given 512m. Network connections are usually 100 Mbps.
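For reference, a connector like the one you describe would look roughly like this in Tomcat's server.xml. This is only a sketch: maxThreads is taken from your mail, while the port, acceptCount, and connectionTimeout values are placeholders you'd adjust for your setup.

```xml
<!-- Sketch of an HTTP connector with 1000 worker threads.
     Only maxThreads comes from your description; the other
     attribute values here are illustrative defaults. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="1000"
           acceptCount="100"
           connectionTimeout="20000" />
```

Note that acceptCount only controls how many connections may queue in the OS backlog once all 1000 threads are busy; it does not reduce the thread count itself.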
I'm not a Tomcat expert, but 1000 service threads sounds like an extremely
high number, even if the JVM has more than 1 GB of memory. If the service
is lightweight, you may get away with it. Have you _tested_ those servers
under load? I would expect them to run garbage collection desperately when
more than a few hundred requests have to be served simultaneously. If they
have to access a backend like a database, which limits its own connections
to a few dozen, you'd also be wasting resources, because most requests
would just queue up waiting for a backend connection.

You can enable verbose GC in the server JVM to get performance data. If
you see the GC run for 1 second every 5 seconds, that means the JVM is
spending 20% of its time on garbage collection. As a rule of thumb, 5% is
good and 10% should be the upper bound. If you can service 1000
connections within those limits, you're fine.

For backend access (if any), the pool settings for datasources and such
have to be reviewed, on each of the backend servers. If any one of them
runs into overload, it can grab all connections and slow down the whole
system. And remember: there is _no_ way of telling whether a server can
sustain load _except_ by running a load test on it. I've seen a single
misplaced synchronized statement in application code slow a system with
4 multiprocessor servers and plenty of memory to a grinding halt.

> MaxHostConnections = 1000
> MaxTotalConnections = 1000
> CloseIdleConnectionsPeriod = 1 minute
> IdleConnectionTimeout = 3 minutes
> DeleteClosedConnectionsPeriod = 10 minutes

The timeouts seem reasonable. For the connection limits, see above.

> I decided to occasionally delete closed connections, just to be on the
> safe side. I ran the system overnight without any incoming connections,
> and the pool stayed at the max size it reached. Netstat did show that
> there were no open sockets, but it looks like HttpConnections never got
> deleted.
> I'll test it a bit more, maybe I'm missing something.

I didn't ask which version you're using. There have been some fixes to
idle connection handling in 3.1, like HTTPCLIENT-597 [1]. If you're on
3.0, calling deleteClosedConnections is a good idea. HttpClient 3.1 RC1
is going to be released in a few weeks; upgrading to that version would
be even better.

> Clients can track only one site at a time by design. These are rich,
> thick clients, and have to display a great deal of information.

I got your original mail wrong on this point. I thought that the single
instance of HttpClient in your CentralServer would be able to connect to
a single backend server only. Thanks for clearing that up.

cheers,
  Roland

[1] https://issues.apache.org/jira/browse/HTTPCLIENT-597

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
