Hans Bergsten wrote:

>>>             Without pooling  With pooling  Reuse w/o overhead
>>>-------------------------------------------------------------
>>>5 threads
>>>   Avg.:              330 ms        349 ms                 N/A
>>>   Rate:            15.2/sec      13.6/sec                 N/A
>>>
>>>20 threads
>>>   Avg.:            1,752 ms      1,446 ms            1,265 ms
>>>   Rate:            12.1/sec      13.6/sec            14.7/sec
>>>
>>>To me, this indicates that if you can avoid _all_ reuse overhead,
>>>there's some performance to be gained from reuse, but not much. With the
>> 
>> 
>> From 1.2s to 1.7s there is about a 35% difference. I would call this
>> quite significant. Even between 1.4 and 1.7 you have 20%. Try
>> increasing the thread count to 100 and you'll see this go up.
>> 
>> The difference ( 0.5s ) is probably 2-3 times Apache's response time
>> for a static page, and most users will feel it.
> 
> I agree that in percentage terms the difference is somewhat significant,
> but don't make too much of the real value. My test server is not
> representative of the type of hardware you would use for a site with
> this type of load. On hardware suitable for the task, the difference in

And the test page is not representative of the type of pages that will
run on a real site.  I know that.

All we can measure with reasonable accuracy is the overhead of the
container/jsp implementation - at least in relative terms.
Take as the reference the time ( or RPS ) for Apache to serve the same
output as a static page, or the time a servlet takes to generate the
same output. Run your tests at 5, 20, 100 RPS ( and "ab" may be a
better driver ). Compare the results - most likely a production server
will see similar ratios.
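A driver along these lines is enough for that kind of ratio comparison ( a toy sketch, not "ab"; the Runnable stands in for fetching the page, and all names here are mine ):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Toy load driver: run `totalRequests` executions of `request` across
// `threads` workers and report average latency plus overall rate -
// roughly the two numbers in the table above.
public class MiniDriver {
    public static double[] run(int threads, int totalRequests, Runnable request) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong totalNanos = new AtomicLong();
        long wallStart = System.nanoTime();
        for (int i = 0; i < totalRequests; i++) {
            pool.execute(() -> {
                long t0 = System.nanoTime();
                request.run();
                totalNanos.addAndGet(System.nanoTime() - t0);
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        double wallSec = (System.nanoTime() - wallStart) / 1e9;
        double avgMs = totalNanos.get() / 1e6 / totalRequests;
        double rate = totalRequests / wallSec;
        return new double[] { avgMs, rate };  // { avg latency ms, req/sec }
    }
}
```

Run it once against the static page and once against the JSP, at each concurrency level, and compare the two ratios rather than the absolute numbers.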

I'll try to find some time ( next week - I hope ) and run the same 
tests with the "no sync" pool.


> the real values will likely be a lot smaller, and IMHO, insignificant.
> But please, let's not start a long debate about what's significant or
> not (that depends on too many factors). All I'm trying to show with
> these simple tests is that for pooling to really make a difference at
> all, you need to avoid all overhead, which may be very hard, and that
> the overhead with current pooling seems to eat all potential gain.

Well - it shows pretty clearly that the _current_ pool implementation
is broken. Even if we don't take sync into account, the pool has a
5-object limit - what else could you expect ??
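For reference, the "no sync" idea can be sketched as a per-thread free list - no lock and no fixed cap, since each thread only ever touches its own list ( class and method names here are mine, not Jasper's ):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical unsynchronized pool: each thread reuses only the
// instances it released itself, so no locking is needed and there is
// no arbitrary size limit.
public class PerThreadPool<T> {
    private final Supplier<T> factory;
    private final ThreadLocal<Deque<T>> free =
            ThreadLocal.withInitial(ArrayDeque::new);

    public PerThreadPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Reuse a previously released instance if one exists, else create one.
    public T get() {
        T t = free.get().poll();
        return (t != null) ? t : factory.get();
    }

    // Return the instance for reuse by the same thread.
    public void release(T t) {
        free.get().push(t);
    }
}
```

The trade-off is per-thread memory ( each worker keeps its own handlers ), which is usually acceptable for a fixed-size request thread pool.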


> I ran 10,000 requests for each test case after a manual warm up (just a
> few requests to give the JIT a chance to kick in). If I rerun the tests
> to capture GC data (as Glen was asking for), I can run a longer warm-up
> as well. I didn't record the max values, but IIRC they were around 100
> sec in all cases.

The 1.4 JIT takes some time to kick in; if you run batches of 1,000 requests
you'll see the time keep improving. I would do at least 5,000 requests to
warm up the JIT.
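For example, a warm-up that reports per-batch times, so you can see when the JIT has settled before starting the measured run ( illustrative code, not from the actual test setup ):

```java
// Warm-up sketch: run the workload in batches and record each batch's
// elapsed time. Once consecutive batch times stop improving, the JIT
// has mostly settled and the real measurement can begin.
public class WarmUp {
    public static long[] batchTimes(Runnable work, int batches, int batchSize) {
        long[] times = new long[batches];
        for (int b = 0; b < batches; b++) {
            long t0 = System.nanoTime();
            for (int i = 0; i < batchSize; i++) {
                work.run();
            }
            times[b] = System.nanoTime() - t0;
        }
        return times;  // early batches are typically slower than later ones
    }
}
```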

>> This is a very good start, thanks for bringing this up.
> 
> I hope it at least gives us a better idea about what types of gains
> we can realistically expect from tag handler reuse.

Most of the improvements in coyote ( or in 3.3 over 3.2 ) are due to
object reuse. It is possible that tag handlers are different and that
the other overheads will obscure any benefit ( at least under low load ),
but I can bet that under heavy load recycling will be very significant, if
done correctly.

Costin



