Vicenc Beltran Querol wrote:
It has been a pleasure to post this information, and to receive constructive
and technically-reasoned answers like yours. Deciding which parameters
define the performance of a server is a great and never-ending discussion topic.
Anyway, feel free to send me any
Hi,
The results of the AB benchmark configured with 20 concurrent clients are
posted below,
If somebody is interested in more configurations (from 20 to 1 concurrent
clients)
they are available at http://www.bsc.es/edragon/pdf/TestAb.tgz
BTW, a comparison is also available between
Am I reading the results correctly?
tomcat 5.5.9 - 16,331.81/sec
hybrid - 7,085.54/sec
that means the hybrid connector is 2x slower. If those results are
accurate, I would say the APR connector is a much better choice.
peter lin
On 5/25/05, Vicenc Beltran Querol [EMAIL PROTECTED] wrote:
Hi,
Peter Lin wrote:
Am I reading the results correctly?
tomcat 5.5.9 - 16,331.81/sec
hybrid - 7,085.54/sec
that means the hybrid connector is 2x slower. If those results are
accurate, I would say the APR connector is a much better choice.
It's more complex than that.
The APR connector has a
Hi,
The APR connector has a trick to optimize pipelining (where a client
would do many requests on a single connection, but with a small delay
between requests - typically, it would happen when getting lots of
images from a website).
What's the trick? Are you trying to do blocking read
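The keep-alive "trick" under discussion can be sketched in plain Java NIO (the class and names below are illustrative, not Tomcat's actual APR code): instead of a worker thread blocking in read() during the client's small inter-request delay, the idle connection is parked in a Selector, and a worker is dispatched only when the next request's bytes arrive. A Pipe stands in for the kept-alive client connection.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class KeepAlivePollerSketch {

    // Returns {readiness before the next request, readiness after it}.
    static int[] demo() throws Exception {
        Selector poller = Selector.open();
        // A Pipe stands in for a kept-alive client connection.
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(poller, SelectionKey.OP_READ);

        // No request pending: nothing is ready, and crucially no worker
        // thread is blocked in read() waiting for one.
        int before = poller.selectNow();

        // The client sends its next request on the same connection...
        pipe.sink().write(ByteBuffer.wrap("GET /img.gif HTTP/1.1\r\n".getBytes()));

        // ...and only now would the poller hand the channel to a worker.
        int after = poller.selectNow();
        poller.close();
        return new int[] { before, after };
    }

    public static void main(String[] args) throws Exception {
        int[] r = demo();
        System.out.println("before=" + r[0] + " after=" + r[1]);
    }
}
```

The point of the optimization is visible in the two selectNow() calls: between requests the connection costs no thread at all.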
Vicenc Beltran Querol wrote:
It's great to read your opinions... ;)
Let's cut down on the broken record effect then: -1 for your code,
it's not a clean implementation ;) (I end up with a smiley, since you
did as well)
Rémy
On 5/25/05, Vicenc Beltran Querol [EMAIL PROTECTED] wrote:
Hi,
I'm absolutely disconcerted. In your previous answer you agreed that the
AB test is not good for comparing two different architectural
approaches. And you still want to compare the performance of the hybrid
architecture using it.
Peter Lin wrote:
I'm not sure I agree with that statement. The reason for using Apache
AB for small files under 2K is that JMeter is unable to max out the
server with tiny files. You can see the original numbers I produced
here: http://people.apache.org/~woolfel/tc_results.html.
Since the bulk of
I took a look at the AB and Rubis numbers. Honestly, I don't
understand the Rubis graphs. From the AB results, it looks like the
connect, processing and wait times are lower for the hybrid. That's a
good achievement, and congrats to you on that.
I'm not convinced of the benefit of the hybrid
Remy Maucherat wrote:
In my mind, the argument for tomcat supporting 1000 concurrent
connections for an extended period of time isn't valid from my
experience.
- all the other APR features which are really useful and not provided by
the core Java platform
Actually I just read a perfect
Mladen Turk wrote:
Actually I just read a perfect use case scenario request for
the new APR connector on [EMAIL PROTECTED]
With only a couple of threads, all 1000 connections could be handled
without having 1000 threads.
Actually, it seems a lot more a case of using the servlet API in a way
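Mladen's point about watching many mostly-idle connections with only a couple of threads can be illustrated with a toy snippet (illustrative names, not a Tomcat class; Pipes stand in for client connections, and the count is kept modest only to stay under default file-descriptor limits):

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class ManyConnectionsSketch {

    // Register `connections` idle channels with a single Selector.
    // One thread calling select() can then watch all of them; no
    // per-connection thread is ever created.
    public static int register(int connections) throws Exception {
        Selector selector = Selector.open();
        for (int i = 0; i < connections; i++) {
            Pipe pipe = Pipe.open(); // stand-in for one client connection
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);
        }
        return selector.keys().size(); // all watched, zero extra threads
    }

    public static void main(String[] args) throws Exception {
        System.out.println(register(200) + " connections, 1 selector thread");
    }
}
```

With a thread-per-connection model the same test would need 200 threads just to sit in blocking reads.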
Hi Peter,
I took a look at the AB and Rubis numbers. Honestly, I don't
understand the Rubis graphs.
You can find an explanation about the httperf numbers on the man page
of Httperf, or looking at
http://www.hpl.hp.com/personal/David_Mosberger/httperf.html.
Rubis is the dynamic application
Hi,
By the way, this is my last post about this topic. I've perfectly
understood Remy's messages (in the list and in my personal address),
so I will not waste your time anymore.
It was far from a waste of time. Please don't hesitate to contribute again
in performance tuning or other areas.
Hi,
I've repeated the tests on the hybrid architecture using the AB.
You can find them attached to this mail. I've run the AB with several
concurrency levels, ranging from 20 to 1. You can see all the
results in a plot.
Running a test with ab (ab -k -c 20 -n 2
Vicenc Beltran Querol wrote:
Hi,
I've repeated the tests on the hybrid architecture using the AB.
You can find them attached to this mail. I've run the AB with several
concurrency levels, ranging from 20 to 1. You can see all the
results in a plot.
-c 20 -k is basically the only thing I
Remy Maucherat wrote:
I've repeated the tests on the hybrid architecture using the AB.
You can find them attached to this mail. I've run the AB with several
concurrency levels, ranging from 20 to 1. You can see all the
results in a plot.
Here are the results.
Rémy
Remy Maucherat wrote:
Remy Maucherat wrote:
I've repeated the tests on the hybrid architecture using the AB.
You can find them attached to this mail. I've run the AB with several
concurrency levels, ranging from 20 to 1. You can see all the
results in a plot.
Here are the results.
On Fri, May 20, 2005 at 12:05:51PM +0200, Mladen Turk wrote:
Vicenç Beltran wrote:
Hi,
attached you'll find a patch that changes the coyote multithreading
model to a hybrid threading model (NIO+Mulithread). It's fully
compatible with the existing Catalina code and is SSL enabled.
diff
Vicenc Beltran Querol wrote:
I've rebuilt the patch following your indications (I hope). You can
find it at http://www.bsc.es/edragon/pdf/tomcat-5.5.9-NIO-patch (it is now
bigger, so it can't be attached).
The benchmarking results I've obtained for a static content workload can be downloaded
from
Vicenç Beltran wrote:
Hi,
attached you'll find a patch that changes the coyote multithreading
model to a hybrid threading model (NIO+Mulithread). It's fully
compatible with the existing Catalina code and is SSL enabled.
diff -uprN
Vicenç Beltran wrote:
Hi,
attached you'll find a patch that changes the coyote multithreading
model to a hybrid threading model (NIO+Mulithread). It's fully
compatible with the existing Catalina code and is SSL enabled.
The Hybrid model breaks the limitation of one thread per connection,
thus
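A minimal sketch of what such a hybrid NIO+multithread model can look like, assuming only the general idea described in the patch (one selector thread plus a small worker pool) rather than the patch's actual classes:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HybridEndpointSketch {

    // One selector thread multiplexes all connections; a small worker
    // pool does per-request processing, so threads are no longer 1:1
    // with connections.
    public static String roundTrip(String request) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        ExecutorService workers = Executors.newFixedThreadPool(2);

        Thread selectorThread = new Thread(() -> {
            try {
                while (selector.isOpen()) {
                    selector.select(200);
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel ch = server.accept();
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            // Stop watching the channel while a worker owns it.
                            key.cancel();
                            SocketChannel ch = (SocketChannel) key.channel();
                            workers.submit(() -> handle(ch));
                        }
                    }
                }
            } catch (Exception ignored) { }
        });
        selectorThread.start();

        // A plain blocking client, just for the demo.
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()));
        client.write(ByteBuffer.wrap(request.getBytes(StandardCharsets.UTF_8)));
        ByteBuffer reply = ByteBuffer.allocate(256);
        client.read(reply);
        client.close();
        selector.close();
        workers.shutdown();
        reply.flip();
        return StandardCharsets.UTF_8.decode(reply).toString();
    }

    // Worker: read the "request" and echo it back upper-cased.
    private static void handle(SocketChannel ch) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(256);
            ch.read(buf);
            buf.flip();
            String req = StandardCharsets.UTF_8.decode(buf).toString();
            ch.write(ByteBuffer.wrap(req.toUpperCase().getBytes(StandardCharsets.UTF_8)));
            ch.close();
        } catch (Exception ignored) { }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping"));
    }
}
```

A real connector would of course keep the connection alive and re-register it with the selector after each request instead of closing it; this sketch only shows the thread/selector split.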
Mladen Turk wrote:
Vicenç Beltran wrote:
Can't you simply make two new files,
Http11NioProcessor and Http11NioProtocol?
It definitely needs to be a (clean, this means no multiple /* */ in
patch submissions ;) ) separate implementation. Actually it will also
need a separate NioEndpoint (I would
Hi guys,
I'm not trying to be a Tomcat developer. I'm working on my PhD on web
performance and just decided to share with you the experimental code I've
developed after studying the performance obtained ;).
Anyway, it's OK. I'll work on the new patch and resubmit it.
Thanks for the comments,
Vicenc Beltran Querol wrote:
Hi guys,
I'm not trying to be a Tomcat developer. I'm working on my PhD on web
performance and just decided to share with you the experimental code I've
developed after studying the performance obtained ;).
I've done some serious testings with HTTP server and NIO.
Mladen Turk wrote:
Vicenc Beltran Querol wrote:
Hi guys,
I'm not trying to be a Tomcat developer. I'm working on my PhD on web
performance and just decided to share with you the experimental code
I've developed after studying the performance obtained ;).
I've done some serious testing with
Jeanfrancois Arcand wrote:
I disagree ;-) I would like to see your implementation, because from
what I'm seeing/measuring, it is completely the inverse. I would be
interested to see how you implemented your NIO connector. The problem
with HTTP is not NIO, but the strategy to use for
- Original Message -
From: Jeanfrancois Arcand [EMAIL PROTECTED]
To: Tomcat Developers List tomcat-dev@jakarta.apache.org
Sent: Friday, May 20, 2005 6:56 AM
Subject: Re: Hybrid (NIO+Multithread, SSL enabled) architecture for Coyote
Mladen Turk wrote:
Vicenc Beltran Querol wrote:
Hi
I'm not a committer, but I think evidence proves that native sockets +
JNI is the way to go. To my knowledge, WebLogic, WebSphere and Resin
all use native sockets. Having a pure Java approach sounds nice and
all, but in the edge cases where high connection concurrency is needed,
I'd much rather go
Jeanfrancois Arcand wrote:
I've done some serious testing with HTTP servers and NIO.
The results were always bad for NIO.
Blocking I/O is at minimum 25% faster than NIO.
Faster in what? Throughput and/or scalability?
I disagree ;-) I would like to see your implementation, because from
what I'm
Remy Maucherat wrote:
Jeanfrancois Arcand wrote:
I disagree ;-) I would like to see your implementation, because from
what I'm seeing/measuring, it is completely the inverse. I would be
interested to see how you implemented your NIO connector. The
problem with HTTP is not NIO, but the
Mladen Turk wrote:
Jeanfrancois Arcand wrote:
I've done some serious testing with HTTP servers and NIO.
The results were always bad for NIO.
Blocking I/O is at minimum 25% faster than NIO.
Faster in what? Throughput and/or scalability?
I disagree ;-) I would like to see your implementation, because
Jeanfrancois Arcand wrote:
Well, the strategy you use is important. If you can predict the size of
the stream (by, let's say, discovering the content-length), you can make
the uploading task as fast as with blocking I/O (OK, maybe a little slower,
since you parse the header, and the channel may not read
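The content-length strategy Jeanfrancois describes can be sketched like this (illustrative code, not the actual connector): once Content-Length is known from the parsed header, keep reading the non-blocking channel until exactly that many body bytes have accumulated, instead of treating a short read as a problem. A Pipe stands in for the client connection.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;

public class ContentLengthReadSketch {

    // Accumulate exactly contentLength bytes from a non-blocking channel.
    // A real connector would go back to the selector when read() returns 0
    // instead of spinning in place.
    public static byte[] readBody(ReadableByteChannel ch, int contentLength)
            throws Exception {
        ByteBuffer body = ByteBuffer.allocate(contentLength);
        while (body.hasRemaining()) {
            int n = ch.read(body);
            if (n < 0) throw new java.io.EOFException("client closed early");
            if (n == 0) Thread.onSpinWait(); // placeholder for re-polling
        }
        return body.array();
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open(); // stand-in for the client connection
        pipe.source().configureBlocking(false);
        byte[] payload = "hello body".getBytes();
        pipe.sink().write(ByteBuffer.wrap(payload));
        byte[] body = readBody(pipe.source(), payload.length);
        System.out.println(new String(body));
    }
}
```

Knowing the exact byte count up front is what lets a non-blocking read loop match blocking I/O: there is never a question of whether more body data is still coming.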