Peter Lin wrote:

> 
>  
> I haven't read all the posts in this discussion, but here are some facts
> from personal observation.
> For pages with only a few tags, i.e., fewer than 30, tag pooling doesn't
> help. On the other hand, if your page has 100+ tags, it improves
> performance. Some of the pages I benchmarked had about 135 tags. In those
> situations, I saw a 20-50% improvement. I would argue that sites that
> don't have a lot of load should simply turn off tag pooling. Sites that
> use tags extensively and get 1 million page views a day will gain
> significantly from tag pooling.


Is this based on the current tag pool implementation in jasper2?
Because it is pretty clear that the tag pool has a few problems.

I would say the nature of the tags will also have a big impact. If your
tag is very simple, you'll probably get some "small" benefit under load
(20-30%?). If the tag uses internal data structures, buffers, etc., it's
very likely you'll see more, since creating each tag instance also creates
the additional hashtables, StringBuffers, and so on.

I would bet that with complex tags specifically written to take advantage
of the recycling you would see at least 2x better performance (with a
good sync-free and large enough tag pool). If your tag uses any buffers
or complex/expensive data structures that can be recycled, you'll save a
lot.
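
Just to be concrete, here is a sketch of what I mean (RecyclableFormatTag
is a made-up example, not an existing Jasper or standard taglib class):
the internal buffer is allocated once per handler instance and only reset
on each use, so a pooled/recycled instance creates no per-request garbage.
The same goes for hashtables or any other expensive structure the tag
keeps around.

  import java.io.IOException;

  import javax.servlet.jsp.JspException;
  import javax.servlet.jsp.tagext.TagSupport;

  public class RecyclableFormatTag extends TagSupport {

      // Lives as long as the handler instance; with pooling that means
      // many requests.
      private final StringBuffer buf = new StringBuffer(256);

      private String name;                 // per-request attribute

      public void setName(String name) {
          this.name = name;
      }

      public int doStartTag() throws JspException {
          buf.setLength(0);                // reset, don't reallocate
          buf.append("Hello, ").append(name).append("!");
          try {
              pageContext.getOut().print(buf.toString());
          } catch (IOException e) {
              throw new JspException("RecyclableFormatTag: " + e.getMessage());
          }
          return SKIP_BODY;
      }

      // Called when the container finally discards the handler; only the
      // per-request attribute state has to go.
      public void release() {
          name = null;
          super.release();
      }
  }

Without pooling, every use of this tag pays for a new instance plus the
buffer; with pooling, the only per-request work is resetting it.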

I don't think the number of tags in a page is too important - even if you
have 1 complex tag, with 100 concurrent users you should see a difference.

In an ideal world, all "core" tags would be recyclable and garbage-free -
that may allow them to run at a speed comparable to a hard-coded page.
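
And since I keep mentioning a "sync-free" pool, here is a rough sketch of
the idea - not the jasper2 TagHandlerPool; PerThreadTagPool and its sizes
are made up: each thread keeps its own stack of handlers, so get/reuse
never synchronizes and there is no hard 5-object cap.

  import java.util.ArrayList;

  import javax.servlet.jsp.tagext.Tag;

  public class PerThreadTagPool {

      private static final int CAPACITY = 32;  // "large enough", not a hard 5-object cap

      private final Class tagClass;

      // Each request-processing thread gets its own stack of reusable handlers.
      private final ThreadLocal handlers = new ThreadLocal() {
          protected Object initialValue() {
              return new ArrayList(CAPACITY);
          }
      };

      public PerThreadTagPool(Class tagClass) {
          this.tagClass = tagClass;
      }

      public Tag get() throws Exception {
          ArrayList stack = (ArrayList) handlers.get();
          if (!stack.isEmpty()) {
              return (Tag) stack.remove(stack.size() - 1);  // reuse, no sync needed
          }
          return (Tag) tagClass.newInstance();              // pool miss: create a new handler
      }

      public void reuse(Tag tag) {
          ArrayList stack = (ArrayList) handlers.get();
          if (stack.size() < CAPACITY) {
              stack.add(tag);       // keep it for the next page on this thread
          } else {
              tag.release();        // pool full - let it be collected
          }
      }
  }

The cost is one set of handlers per thread, but for recyclable,
garbage-free tags that is exactly the trade-off we want.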


Costin


>  
> peter lin
>  
>  Costin Manolache <[EMAIL PROTECTED]> wrote:
>  Hans Bergsten wrote:
> 
>>>>              Without pooling   With pooling   Reuse w/o overhead
>>>> ----------------------------------------------------------------
>>>> 5 threads
>>>>   Avg.:      330 ms            349 ms         N/A
>>>>   Rate:      15.2/sec          13.6/sec       N/A
>>>>
>>>> 20 threads
>>>>   Avg.:      1,752 ms          1,446 ms       1,265 ms
>>>>   Rate:      12.1/sec          13.6/sec       14.7/sec
>>>>
>>>> To me, this indicates that if you can avoid _all_ reuse overhead,
>>>> there's some performance to be gained from reuse but not much. With the
>>> 
>>> 
>>> From 1.2s to 1.7s there is about a 35% difference. I would call this
>>> quite significant. Even between 1.4s and 1.7s you have 20%. Try
>>> increasing the thread count to 100 and you'll see this go up.
>>> 
>>> The difference (0.5s) is probably 2-3 times the response time of
>>> Apache for a static page. And most users will feel it.
>> 
>> I agree that in percentage, the difference is somewhat significant,
>> but don't make too much out of the real value. My test server is not
>> representative of the type of hardware you would use for a site with
>> this type of load. On hardware suitable for the task, the difference in
> 
> And the test page is not representative of the type of pages that will
> run on a real site. I know that.
> 
> All we can measure with relative accuracy is the overhead of the
> container/jsp implementation - at least in relative terms.
> Take as the reference the time ( or RPS ) for Apache to serve the same
> output as a static page. Or the time a servlet will take to generate
> the same output. Run your tests with 5, 20, 100 RPS ( and "ab" may be
> a better driver ). Compare the results - and most likely a production
> server will see similar ratios.
> 
> I'll try to find some time ( next week - I hope ) and run the same
> tests with the "no sync" pool.
> 
> 
>> the real values will likely be a lot smaller, and IMHO, insignificant.
>> But please, let's not start a long debate about what's significant or
>> not (that depends on too many factors). All I'm trying to show with
>> these simple tests is that for pooling to really make a difference at
>> all, you need to avoid all overhead, which may be very hard, and that
>> the overhead with current pooling seems to eat all potential gain.
> 
> Well - it shows pretty clearly that the _current_ implementation
> of the tag pool is broken. Even if we don't take sync into account, the
> pool has a 5-object limit - what else could you expect?
> 
> 
> I ran 10,000 requests for each test case after a manual warm-up (just a
>> few requests to give the JIT a chance to kick in). If I rerun the tests
>> to capture GC data (as Glen was asking for), I can run a longer warm-up
>> as well. I didn't record the max values, but IIRC they were around 100
>> sec in all cases.
> 
> The 1.4 JIT takes some time to kick in; if you run batches of 1,000
> requests you'll see the times keep improving. I would do at least 5,000
> requests to warm up the JIT.
> 
>>> This is a very good start, thanks for bringing this up.
>> 
>> I hope it at least gives us a better idea about what types of gains
>> we can realistically expect from tag handler reuse.
> 
> Most of the improvements in coyote ( or in 3.3 over 3.2 ) are due to
> object reuse. It is possible that tag handlers are different and
> the other overheads will obscure any benefit ( at least under low load ),
> but I can bet that under heavy load recycling will be very significant, if
> done correctly.
> 
> Costin
> 


