Hi Ren,
Here's my wild guess...
You haven't adjusted the stack size for your threads, so when you get
thousands of them alive at one time, the per-thread stack allocations push
the box into disk swap hell. (With the 64-bit HotSpot default of roughly
1MB of stack per thread, 3-4K threads reserve several gigabytes just for
stacks.) Eventually the JVM gets enough time to reclaim the connection
threads, and you return to normal.
So I'd check the status of your system with top when you find yourself
in the situation below.
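If you want a corroborating number from inside the JVM as well, something
like the sketch below could be started as a daemon thread at application
startup. The class name and logging interval are just illustrative; the
standard ThreadMXBean does the real work:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadCountLogger implements Runnable {
        public void run() {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            while (true) {
                // Live and peak thread counts for this JVM; thousands of live
                // threads while top shows heavy swapping would back up the
                // stack-space theory.
                System.out.println("live=" + threads.getThreadCount()
                        + " peak=" + threads.getPeakThreadCount());
                try {
                    Thread.sleep(60000L);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

Kicking it off is just new Thread(new ThreadCountLogger(), "thread-count-logger")
with setDaemon(true) before start(); watching that alongside top (or jconsole)
tells you whether the thread count and the swapping line up.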
If you are in swap hell, then you'd want to do the following:
1. Before running your Java code, from the same shell environment
execute:
ulimit -s 512
This caps the native stack space allocated per thread at 512K (assuming
that's enough for your code).
2. Use the -Xss512k JVM option to limit the Java stack allocated per
thread as well. (If you spawn your own worker threads, you can also
request a smaller stack in code; see the sketch below.)
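Here's a minimal sketch of that in-code option, assuming your threads come
from an Executor you control. The factory name is made up, and the JVM is
free to treat the requested stack size as a hint or ignore it entirely:

    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SmallStackThreadFactory implements ThreadFactory {
        private static final long STACK_SIZE_BYTES = 512L * 1024L; // match -Xss512k
        private final AtomicInteger count = new AtomicInteger();

        public Thread newThread(Runnable r) {
            // The four-argument Thread constructor takes a requested stack size;
            // the VM may round it up or ignore it, so treat it as best-effort.
            Thread t = new Thread(null, r, "worker-" + count.incrementAndGet(),
                    STACK_SIZE_BYTES);
            t.setDaemon(true);
            return t;
        }
    }

Plugging that into something like Executors.newFixedThreadPool(50, new
SmallStackThreadFactory()) also puts a hard bound on how many stacks exist
at once.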
-- Ken
On Jun 3, 2010, at 6:07pm, Renaud Waldura wrote:
I'm trying to track down a performance/resource-consumption problem
which seems to involve HttpClient. After running for a few days with
low to medium traffic (and low concurrency), my application stops
responding. A thread dump reveals many thousands of threads (between
3K and 4K) waiting on the condition shown below. The VM eventually
recovers and the thread count goes way down (to around 100 or so,
which is what I'd expect for the kind of traffic we see).
"search" prio=10 tid=0x0000002ae20f6000 nid=0x38d4 waiting on
condition [0x0000002b35504000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000002aa9aa4dc0> (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
at java.util.concurrent.locks.AbstractQueuedSynchronizer
$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
at
org
.apache.http.impl.conn.tsccm.WaitingThread.await(WaitingThread.java:
159)
at
org
.apache
.http
.impl
.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:339)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute
$1.getPoolEntry(ConnPoolByRoute.java:238)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager
$1.getConnection(ThreadSafeClientConnManager.java:175)
at
org
.apache
.http
.impl
.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:
324)
at
org
.apache
.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:
555)
at
org
.apache
.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:
487)
at
org
.apache
.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:
465)
... (truncated)
The connection parameters are like so:

    ConnManagerParams.setMaxTotalConnections(params, 400);
    ConnManagerParams.setMaxConnectionsPerRoute(params, new ConnPerRouteBean(400));
    HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
    ConnManagerParams.setTimeout(params, 10);
    sharedConnectionManager = new ThreadSafeClientConnManager(params, schemeRegistry);
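For reference, the full construction looks roughly like the sketch below;
the scheme registry and the DefaultHttpClient wiring here are illustrative
rather than copied verbatim from my code:

    import org.apache.http.HttpVersion;
    import org.apache.http.client.HttpClient;
    import org.apache.http.conn.params.ConnManagerParams;
    import org.apache.http.conn.params.ConnPerRouteBean;
    import org.apache.http.conn.scheme.PlainSocketFactory;
    import org.apache.http.conn.scheme.Scheme;
    import org.apache.http.conn.scheme.SchemeRegistry;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
    import org.apache.http.params.BasicHttpParams;
    import org.apache.http.params.HttpParams;
    import org.apache.http.params.HttpProtocolParams;

    public class ClientFactory {
        public static HttpClient newClient() {
            HttpParams params = new BasicHttpParams();
            ConnManagerParams.setMaxTotalConnections(params, 400);
            ConnManagerParams.setMaxConnectionsPerRoute(params, new ConnPerRouteBean(400));
            // Timeout for obtaining a connection from the pool
            // (interpreted as milliseconds by the 4.0 API).
            ConnManagerParams.setTimeout(params, 10);
            HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);

            SchemeRegistry schemeRegistry = new SchemeRegistry();
            schemeRegistry.register(
                    new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));

            ThreadSafeClientConnManager cm =
                    new ThreadSafeClientConnManager(params, schemeRegistry);
            return new DefaultHttpClient(cm, params);
        }
    }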
I've turned on logging, and I've seen the following counters go
pretty high:
    org.apache.http.impl.conn.tsccm.ConnPoolByRoute  Total issued connections: 300
    org.apache.http.impl.conn.tsccm.ConnPoolByRoute  Total allocated connection: 300 out of 400
    org.apache.http.impl.conn.tsccm.ConnPoolByRoute  Total connections kept alive: 46
My application uses multiple instances of HttpClient, about a dozen
or so, each configured as above.
Question: what is "total issued connections", and how does it
compare to "total allocated connections"?
Environment:
HttpClient 4.0-beta1
HttpCore 4.0-beta2
Java HotSpot(TM) 64-Bit Server VM (16.3-b01 mixed mode):
Linux 2.6.9-78.ELsmp
I know this isn't much to go on, and I apologize, but I'm hoping it
rings a bell with someone who has an idea why this is happening. It's
also kind of a reality check: does any of this smell funny?
Thanks!
--Ren
--------------------------------------------
Ken Krugler
+1 530-210-6378
http://bixolabs.com
e l a s t i c w e b m i n i n g