Hi Sandy,
thanks for the reply.

Moving to the new version, the code has basically switched from fine-grained locks to a "big lock" that funnels almost all calls through the same critical region. This assumes that the time spent in the factory API is negligible, which may be the case for certain kinds of applications, but it is not the general case.
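To make the pattern concrete, this is roughly what I mean by a "big lock" (just a sketch, not the actual pool implementation; BigLockPool and idleObjects are placeholder names, while PoolableObjectFactory is the usual commons-pool factory interface):

    import java.util.LinkedList;
    import org.apache.commons.pool.PoolableObjectFactory;

    // Rough sketch of the "big lock" pattern: every borrow goes through one
    // monitor, and the factory calls happen while that monitor is held.
    class BigLockPool {
        private final LinkedList<Object> idleObjects = new LinkedList<Object>(); // placeholder idle queue
        private final PoolableObjectFactory factory;                             // user-supplied factory

        BigLockPool(PoolableObjectFactory factory) {
            this.factory = factory;
        }

        public Object borrowObject() throws Exception {
            synchronized (this) {                      // the single "big lock"
                Object obj = idleObjects.poll();
                if (obj == null) {
                    obj = factory.makeObject();        // slow creation runs under the lock
                }
                if (!factory.validateObject(obj)) {    // slow validation runs under the lock
                    factory.destroyObject(obj);
                    obj = factory.makeObject();
                }
                return obj;                            // every other borrower waited the whole time
            }
        }
    }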

For example, when you are managing socket resources, the creation, validation and destruction times are significant compared with the time the application actually spends using the pooled resource. In these cases the big lock jeopardizes performance, because you can spend as much time waiting for the validation or creation process inside the factory, which runs under the big lock itself, as you spend in your application code.

Even if we leave out the validation time, the creation time of the objects is still time spent in the critical region. This implies that when you have to re-populate the entries for any reason, you end up accessing the pool sequentially, i.e., only one thread at a time is actually able to make progress through the application code.
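For comparison, this is a sketch of what I mean by not synchronizing around the factory: only the pool bookkeeping stays in the critical region, while the slow factory calls run outside it, so borrowers can create and validate in parallel. It is hypothetical code (FactoryOutsideLockPool is just an illustrative name) and it glosses over maxActive accounting and the thread-safety issues you mention below:

    import java.util.LinkedList;
    import org.apache.commons.pool.PoolableObjectFactory;

    // Sketch of the alternative: keep the lock only around the bookkeeping and
    // let the expensive factory calls run with no lock held.
    class FactoryOutsideLockPool {
        private final LinkedList<Object> idleObjects = new LinkedList<Object>();
        private final PoolableObjectFactory factory;
        private int numActive = 0;

        FactoryOutsideLockPool(PoolableObjectFactory factory) {
            this.factory = factory;
        }

        public Object borrowObject() throws Exception {
            Object obj;
            synchronized (this) {              // short critical region: bookkeeping only
                obj = idleObjects.poll();
                numActive++;
            }
            if (obj == null) {
                obj = factory.makeObject();    // slow call, but no lock held
            }
            if (!factory.validateObject(obj)) {
                factory.destroyObject(obj);
                obj = factory.makeObject();    // recreation also happens in parallel
            }
            return obj;
        }
    }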

Please take this real-world case as an example:

A high-load server has to serve 50 parallel requests *assuring valid objects* (avg validate time 1000 ms) with a slow makeObject (avg time 2500 ms). The "out of the pool" average request serving time of 2000 ms is not important here, because it happens outside the lock. In the worst condition, where we have no valid objects and must recreate all of them (for example when the backend is restarted), the wait times for a resource look like this:
           1st req:  wait time: 2500 ms + 1000 ms
           2nd req:  wait time: 1st req wait time + 2500 ms + 1000 ms
           n-th req: wait time: (n-1)-th req wait time + 2500 ms + 1000 ms = n * 3500 ms

* Req avg wait time *synchronizing around the factory*:
  (SUM for i from 1 to N of i * 3500 ms) / N = (3500 ms * (50 * (50 + 1) / 2)) / 50 = *89.25 seconds*
* Req avg wait time *NOT synchronizing around the factory*:
  2500 ms + 1000 ms = *3.5 seconds*

   If we consider the optimal case, taking only the validate time into account:

* Req avg wait time *synchronizing around the factory*:
  (SUM for i from 1 to N of i * 1000 ms) / N = (1000 ms * (50 * (50 + 1) / 2)) / 50 = *25.5 seconds*
* Req avg wait time *NOT synchronizing around the factory*:
  1000 ms = *1 second*

I hope I didn't make calculation mistakes :-)
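Just to double-check, here is a small snippet that reproduces the two averages, assuming the simplified model above in which the n-th request waits n * tFactory ms when the factory calls are serialized (the class name WaitTimeCheck is of course just for illustration):

    // Quick check of the arithmetic above under the simplified serialization model.
    public class WaitTimeCheck {
        public static void main(String[] args) {
            int n = 50;
            long[] factoryTimes = {2500 + 1000, 1000};        // worst case and optimal case, in ms
            for (long tFactory : factoryTimes) {
                long sum = 0;
                for (int i = 1; i <= n; i++) {
                    sum += i * tFactory;                      // i-th request waits i * tFactory
                }
                System.out.println("factory time " + tFactory + " ms"
                        + " -> serialized avg " + (sum / (double) n) / 1000.0 + " s"
                        + ", parallel avg " + tFactory / 1000.0 + " s");
            }
        }
    }

It prints 89.25 s vs 3.5 s for the worst case and 25.5 s vs 1 s for the optimal one.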

In my example above I am also not considering other real-world factors, for example the occurrence of unusually long validate/make calls ... which would degrade performance even further.

I am not talking about micro-benchmarks, but about the observed behaviour of real-world production systems that pool socket resources.

Thanks again and cheers,
Mat

Sandy McArthur wrote:
On 12/12/06, Mat <[EMAIL PROTECTED]> wrote:
if the borrowObject blocks on the factory
methods there will be a huge performance issue.

Yes, but without it there are thread-safety issues. I agree, it's a
performance issue, but I'm not sure how huge it is except under
micro-benchmarks.

