On 5/25/05, Vicenc Beltran Querol <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I'm absolutely disconcerted. In your previous answer you agreed that the
> AB test is not good for comparing two different architectural
> approaches, and yet you still want to compare the performance of the hybrid
> architecture using it. But when I look for APR results on the net, I
> find in message 70876
> (http://www.mail-archive.com/tomcat-dev@jakarta.apache.org/msg70876.html)
> of this list that you're using JMeter and think-times in other experiments.
> Have you looked at any of the results I've posted for realistic benchmarks?
> Why are you so obsessed with the AB results at concurrency level 20?
> Sorry, but I don't see the point of it...
> 
> 
> Using non-realistic benchmarks and narrowly targeted performance tricks only
> wins a few milliseconds in the server's response time.
> That's not a real benefit for the clients. When the server is
> overloaded (when performance improvements really matter), these
> gains are negligible... In my opinion, following these development
> criteria is counterproductive and makes the server worse in the real
> world (where users put it into production). Surely, you disagree...
> 
> 

I'm not sure I agree with that statement. The reason for using Apache
ab for small files under 2K is that JMeter is unable to max out the
server with tiny files. You can see the original numbers I produced
here: http://people.apache.org/~woolfel/tc_results.html.
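
To make the small-file scenario concrete, here is a rough sketch of what
that kind of test boils down to. It is not ab itself, just a minimal Java
load driver I'm improvising for illustration; the class name, the target
URL and the request counts are all made up.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Minimal load driver: CONCURRENCY worker threads repeatedly fetch one
     * small static file and report aggregate throughput, roughly what an
     * "ab -c 20" run against a sub-2K file measures.
     */
    public class SmallFileLoadTest {
        private static final int CONCURRENCY = 20;            // the concurrency level discussed in the thread
        private static final int REQUESTS_PER_THREAD = 5000;  // arbitrary for the sketch
        private static final String TARGET = "http://localhost:8080/test/small.html"; // hypothetical URL

        public static void main(String[] args) throws Exception {
            final AtomicLong bytes = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(CONCURRENCY);
            long start = System.currentTimeMillis();
            for (int t = 0; t < CONCURRENCY; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        byte[] buf = new byte[4096];
                        for (int i = 0; i < REQUESTS_PER_THREAD; i++) {
                            try {
                                HttpURLConnection conn =
                                    (HttpURLConnection) new URL(TARGET).openConnection();
                                InputStream in = conn.getInputStream();
                                int n;
                                while ((n = in.read(buf)) != -1) {
                                    bytes.addAndGet(n);   // drain the body so the request completes
                                }
                                in.close();
                            } catch (Exception e) {
                                // a real test would count or log failed requests
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long elapsed = System.currentTimeMillis() - start;
            long total = (long) CONCURRENCY * REQUESTS_PER_THREAD;
            System.out.println(total + " requests in " + elapsed + " ms ("
                    + (total * 1000L / Math.max(elapsed, 1)) + " req/s, "
                    + bytes.get() + " bytes)");
        }
    }

ab does the same thing with far less overhead per request, which is
exactly why JMeter has trouble saturating the server on tiny files.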

Since the bulk of my work over the last 4 years has been with large
applications handling millions of pageviews a day, I can safely say
that most large deployments rarely exceed 50 concurrent requests
for extended periods of time. That's just my experience on real
applications, but we generally buffer the entire page and then send it
in one shot (see the sketch after this list). This is done for several reasons:

1. WAN latency - as you already stated
2. It improves the accuracy of performance logging. We log the page
   generation to make sure we know exactly how much time is spent on the
   query, the page markup and transferring the data.
3. It allows us to track network bottlenecks more accurately
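
Here is a rough sketch of that buffer-then-send-in-one-shot pattern as a
servlet filter. This is not code from any of our applications or from
Tomcat; the class name FullPageBufferFilter and the log format are
invented, and it assumes the Servlet 2.x API that was current at the time.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    /**
     * Buffers the whole response in memory, then sends it in one shot with a
     * Content-Length header. Page-generation time and transfer time are logged
     * separately, so query/markup cost is not blurred with WAN latency.
     */
    public class FullPageBufferFilter implements Filter {

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse resp = (HttpServletResponse) res;
            final ByteArrayOutputStream buffer = new ByteArrayOutputStream(16 * 1024);
            final PrintWriter[] writerHolder = new PrintWriter[1];

            // Wrapper that redirects all servlet output into the in-memory buffer.
            HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(resp) {
                public ServletOutputStream getOutputStream() {
                    return new ServletOutputStream() {
                        public void write(int b) { buffer.write(b); }
                    };
                }
                public PrintWriter getWriter() throws IOException {
                    if (writerHolder[0] == null) {
                        writerHolder[0] = new PrintWriter(
                                new OutputStreamWriter(buffer, getCharacterEncoding()));
                    }
                    return writerHolder[0];
                }
            };

            long t0 = System.currentTimeMillis();
            chain.doFilter(req, wrapper);          // page generation: query + markup
            if (writerHolder[0] != null) {
                writerHolder[0].flush();
            }
            long generated = System.currentTimeMillis();

            byte[] page = buffer.toByteArray();
            resp.setContentLength(page.length);
            resp.getOutputStream().write(page);    // one-shot transfer to the client
            resp.flushBuffer();
            long sent = System.currentTimeMillis();

            // A real application would use its logging framework here.
            System.out.println("generation=" + (generated - t0) + "ms"
                    + " transfer=" + (sent - generated) + "ms"
                    + " bytes=" + page.length);
        }
    }

You could also just enlarge the container's own response buffer with
ServletResponse.setBufferSize, but doing it in a filter keeps the split
between generation time and transfer time explicit in the logs.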

In my experience, the argument for tomcat needing to support 1000 concurrent
connections for an extended period of time isn't valid. There's
typically a large cluster of servers load balanced behind a load
balancing router. For me, throughput is far more important, because
most of the images and files range from 5-15K in size. For small sites
trying to deal with the /. effect, it's not worth it. I say that
because the network will die long before tomcat will. Any site with
serious performance requirements will host at a tier 1 provider and
have a cluster of servers. Small personal sites are on shared hosting
and often don't have enough bandwidth.

my biased .02 cents.

peter lin
