Hi there. I'm new to the group. Just upgraded from 3.1 to 4.0 for a
high-traffic production server cluster and noticed a drop in performance.
Requests are consistently taking ~40% longer. Disabling
*http.connection.stalecheck* had little impact.

While investigating the issue, I noticed that switching from a shared
HttpClient with a ThreadSafeClientConnManager to a new simple HttpClient per
request cuts down minimum and average request times dramatically (over 80%).

It seems the overhead for pooling and reusing connections dwarfs the
overhead of establishing HTTP connections. Is this just me? Anyone else seen
this?
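For anyone who wants to reproduce the comparison, here's roughly how I wire up
the two variants. This is a sketch against the stock 4.0 API with the scheme
registry trimmed to plain HTTP; the factory class and method names are just for
illustration:

```java
import org.apache.http.client.HttpClient;
import org.apache.http.conn.scheme.PlainSocketFactory;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeRegistry;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.params.BasicHttpParams;
import org.apache.http.params.HttpParams;

public class ClientFactory {
    // Variant 1: one shared, pooled client reused across all worker threads.
    static HttpClient newPooledClient() {
        HttpParams params = new BasicHttpParams();
        SchemeRegistry registry = new SchemeRegistry();
        registry.register(
            new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
        return new DefaultHttpClient(
            new ThreadSafeClientConnManager(params, registry), params);
    }

    // Variant 2: a fresh simple client per request, discarded afterwards.
    // In 4.0 the no-arg DefaultHttpClient uses SingleClientConnManager.
    static HttpClient newSimpleClient() {
        return new DefaultHttpClient();
    }
}
```

The pooled variant is the one that's slow for me; the per-request variant is
the one that's fast.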

Jared

P.S.

Here's some of my raw benchmarking data. These numbers are for simple GETs
to http://www.google.com. The results are nearly identical for our
production situation (talking to a specific low-latency, non-Google web
service).

My benchmark just makes 50 requests to the same URL either serially or in
parallel. The timed code block is simply this:
   client.execute(newHttpRequest()).getEntity().consumeContent();
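Fleshed out, the serial version of the harness looks roughly like this (a
sketch; the class name, the `HttpGet` request construction, and the stats
helper are mine, not from the production code):

```java
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;

public class Bench {
    // Issues n serial GETs to url and records each wall-clock time in ms.
    static long[] run(HttpClient client, String url, int n) throws Exception {
        long[] times = new long[n];
        for (int i = 0; i < n; i++) {
            long start = System.currentTimeMillis();
            client.execute(new HttpGet(url)).getEntity().consumeContent();
            times[i] = System.currentTimeMillis() - start;
        }
        return times;
    }

    // Returns { avg, min, max } for a series of timings.
    static double[] stats(long[] times) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0;
        for (long t : times) {
            sum += t;
            if (t < min) min = t;
            if (t > max) max = t;
        }
        return new double[] { (double) sum / times.length, min, max };
    }

    public static void main(String[] args) throws Exception {
        long[] times = run(new DefaultHttpClient(), "http://www.google.com", 50);
        double[] s = stats(times);
        System.out.printf("N=%d, avg=%.1fms, min=%.0f, max=%.0f%n",
                times.length, s[0], s[1], s[2]);
    }
}
```

The parallel version just hands the same shared client to a pool of threads
each running the loop body.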

HttpClient 4.0

*ThreadSafeClientConnManager*
N=50, avg=305.8ms, min=218, max=444
N=50, avg=323.5ms, min=221, max=564
N=50, avg=519.9ms, min=223, max=1102
N=50, avg=410.2ms, min=197, max=693
N=50, avg=313.0ms, min=204, max=449

*SingleClientConnManager*
N=50, avg=36.1ms, min=20, max=474
N=50, avg=39.0ms, min=27, max=395
N=50, avg=37.9ms, min=28, max=368

HttpClient 3.1 (for comparison)

*MultiThreadedHttpConnectionManager*

N=50, avg=221.7ms, min=122, max=350
N=50, avg=215.3ms, min=133, max=303
N=50, avg=205.1ms, min=132, max=284
N=50, avg=170.6ms, min=105, max=250
N=50, avg=276.3ms, min=102, max=525

*SimpleHttpConnectionManager*

N=50, avg=37.9ms, min=29, max=173
N=50, avg=29.8ms, min=19, max=198
N=50, avg=26.1ms, min=18, max=143
N=50, avg=27.7ms, min=18, max=147
N=50, avg=29.7ms, min=20, max=189
