Jared Jacobs wrote:
Thanks for the response, Oleg.

Have you increased the max limit on connections per host, which is set to 2
by default?


Yes, I did increase the limit. Here's how I initialized the HttpClient:

  // STALE_CONNECTION_CHECK is statically imported from CoreConnectionPNames;
  // MAX_TOTAL_CONNECTIONS and MAX_CONNECTIONS_PER_ROUTE from ConnManagerPNames.
  private static HttpClient newMultiThreadedHttpClient() {
    return new DefaultHttpClient(
        new ThreadSafeClientConnManager(
            new BasicHttpParams()
                .setParameter(STALE_CONNECTION_CHECK, false)
                .setParameter(MAX_TOTAL_CONNECTIONS, 10)
                .setParameter(MAX_CONNECTIONS_PER_ROUTE, new ConnPerRoute() {
                  // allow up to 10 connections to every route
                  public int getMaxForRoute(HttpRoute route) {
                    return 10;
                  }
                }),
            createSchemeRegistry()),
        null);
  }


What is the point of having a limit of 10 connections while using 50 worker threads?

Regardless of what the max connections per host limit is set to, though, at
least the first request should not block at all.

Why?

Notice that the *minimum* elapsed times of the N=50 requests done in each of
my trial runs are all very high when using pooled connections. This means
that even the first request is consistently slow.


All these numbers are meaningless given such a small number of requests. You should be executing 10,000 HTTP requests in order to get any meaningful performance data.

I'd be happy to send you my benchmark source code. It's a single file, 100
lines.
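
For reference, a benchmark of that general shape (not the actual file, just a
sketch) could look like the following; the URL, request count, and thread
count are placeholders, and it reuses the newMultiThreadedHttpClient()
factory shown above:

  import java.io.IOException;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;
  import java.util.concurrent.atomic.AtomicLong;

  import org.apache.http.HttpResponse;
  import org.apache.http.client.HttpClient;
  import org.apache.http.client.methods.HttpGet;
  import org.apache.http.util.EntityUtils;

  public class PooledClientBenchmark {
    private static final int REQUESTS = 10000;  // placeholder: total requests to issue
    private static final int THREADS = 50;      // placeholder: concurrent worker threads

    public static void main(String[] args) throws Exception {
      // Assumes the newMultiThreadedHttpClient() factory shown earlier is in scope.
      final HttpClient client = newMultiThreadedHttpClient();
      final AtomicLong totalNanos = new AtomicLong();
      ExecutorService pool = Executors.newFixedThreadPool(THREADS);
      for (int i = 0; i < REQUESTS; i++) {
        pool.submit(new Runnable() {
          public void run() {
            try {
              long start = System.nanoTime();
              HttpResponse response = client.execute(new HttpGet("http://example.com/"));
              // Read the body so the connection is released back to the pool.
              EntityUtils.toString(response.getEntity());
              totalNanos.addAndGet(System.nanoTime() - start);
            } catch (IOException e) {
              e.printStackTrace();
            }
          }
        });
      }
      pool.shutdown();
      pool.awaitTermination(10, TimeUnit.MINUTES);
      System.out.println("average ms per request: "
          + totalNanos.get() / (double) REQUESTS / 1.0e6);
    }
  }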


Send the log of the session with connection pooling.

For now, we're content using a disposable HttpClient per request. I hope to
have time to profile and investigate the connection pooling issue further
soon.
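
By "disposable" I mean roughly the sketch below (the URL is a placeholder):
one client is created per request, used once, and shut down afterwards.

  HttpClient client = new DefaultHttpClient();
  try {
    HttpResponse response = client.execute(new HttpGet("http://example.com/"));
    String body = EntityUtils.toString(response.getEntity());
    // ... use the body ...
  } finally {
    // Release all resources held by this client.
    client.getConnectionManager().shutdown();
  }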

My main reason for posting to this list was the hope that someone would be
able to contradict me, ideally with measurements of their own.


I suspect your measurements are flawed, mainly due to the unrepresentative number of requests they are based on.

Oleg

Regards,
Jared


On Thu, Nov 5, 2009 at 1:18 PM, Oleg Kalnichevski <[email protected]> wrote:

Jared Jacobs wrote:

Hi there. I'm new to the group. Just upgraded from 3.1 to 4.0 for a
high-traffic production server cluster and noticed a drop in performance.
Requests are consistently taking ~40% longer. Disabling *
http.connection.stalecheck* had little impact.

While investigating the issue, I noticed that switching from a shared
HttpClient with a ThreadSafeClientConnManager to a new simple HttpClient
per
request cuts down minimum and average request times dramatically (over
80%).

It seems the overhead of pooling and reusing connections dwarfs the overhead
of establishing new HTTP connections. Is this just me? Has anyone else seen
this?

Jared


Jared,

Have you increased the max limit on connections per host, which is set to 2
by default? Most likely your 50 worker threads spend most of their time
blocked waiting for one of those two connections to become available.
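
For example (just a sketch, with arbitrary numbers), in 4.0 the limits can be
raised through ConnManagerParams before the connection manager is created:

  // Example values only: raise the total and per-route connection limits.
  HttpParams params = new BasicHttpParams();
  ConnManagerParams.setMaxTotalConnections(params, 100);
  ConnManagerParams.setMaxConnectionsPerRoute(params, new ConnPerRouteBean(50));

  SchemeRegistry registry = new SchemeRegistry();
  registry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));

  ClientConnectionManager cm = new ThreadSafeClientConnManager(params, registry);
  HttpClient client = new DefaultHttpClient(cm, params);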

You can see exactly what is happening with the connection pool by using the
following logging configuration:

-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog
-Dorg.apache.commons.logging.simplelog.showdatetime=true
-Dorg.apache.commons.logging.simplelog.log.org.apache.http.impl.conn=DEBUG

Hope this helps

Oleg
