In the real world what network connections will exist between you and
your real clients?
How many ports do you have on your server?
How fast are they?
How big are the pages being requested, including images, CSS files, etc.?
If you have 500 concurrent users, will you have 500 times the traffic on
the wire, or will you have some intermediate caching? You talk of a DB,
so it is unlikely that caching can be used to much effect.
If you have (say) 100 kbytes total per transfer x 500 = 50 Mbytes for 500
people each making one request. Unless you can guarantee a gigabit connection
to them all, the thing that's going to take the most time is the time to
transfer that data over the wire.
50 megabytes (I spell it out to be clear what I am talking about) is 400
megabits. If you have a 100 megabit connection to your clients, then about
4 s of delay (latency) will have to be shared between your clients.
Some will get less latency, some will get more.
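That back-of-envelope arithmetic can be sketched as a tiny program. This is only a hedged illustration; the 100-kbyte page size and 100 Mbit link are the figures assumed above, not measurements:

```java
public class WireTime {
    // Total time for N clients' responses to drain through one shared link.
    static double sharedDelaySeconds(int users, double kbytesPerResponse,
                                     double linkMbitPerSecond) {
        // kbytes -> megabits: multiply by 8 bits/byte, divide by 1000 kbits/Mbit
        double totalMegabits = users * kbytesPerResponse * 8.0 / 1000.0;
        return totalMegabits / linkMbitPerSecond;
    }

    public static void main(String[] args) {
        // 500 users x 100 kbytes each over a shared 100 Mbit/s link
        System.out.println(sharedDelaySeconds(500, 100, 100) + " s"); // 4.0 s
    }
}
```

Upgrading the shared link to a gigabit drops the same figure to 0.4 s, which is why the wire, not the container, tends to dominate.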
In the real world very few end users have even a 100 Mbit point-to-point
connection to the server. If you factor in the 'down the wire' time for
someone with a dodgy cable connection, then a response time of 250 ms
would be good for a given individual.
I have spent a lot of time recently tuning and examining the performance
of Tomcat, and I came to these conclusions:
1. On a modern machine it takes very little time indeed for tomcat to
process an incoming request.
2. As the number of requests goes up, the probability of I/O issues
outside the 'control' of Tomcat becomes significant.
3. In the real world, with real applications, it's things like database
access which dominate performance.
4. Users' network connections are probably a major performance issue
unless you have extremely good connections between the server and the
clients.
You suggested earlier that you might have to use an alternative. Out of
interest what alternatives are you considering?
wicket0123 wrote:
It matters to us because we are talking about a response time of under 10 ms
for 500 concurrent users. Our internal application metrics are in the
nanoseconds. If the container adds a lot of overhead, we may want to
switch to other containers.
For scalability testing, our testing is done in a closed network. All
machines are on the same subnet, so we eliminate the network as the bottleneck
here. If anything, it will be either the client machine, the server machine, the
app code, or the DB. No firewall or anything in between for the initial test.
Basically, I used JMeter to run 500 concurrent users against our app. The
response time shown by JMeter is the total response time.
total response time = network round trip + time spent on server
network round trip = time spent sending request + time spent receiving
response
time spent on server = time spent running container code + time spent
running app code + time spent talking to DB (including DB round trips)
Let's take an example using my results.
JMeter reports that for 500 concurrent users making requests to our
application, the average response time was 1 second. That already breaks our
SLA, which is 15 milliseconds.
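The decomposition above reduces to simple subtraction once each piece is measured. A hedged sketch follows; every figure except the 1-second JMeter total is an invented placeholder for the poster's own measurements:

```java
public class ResponseBreakdown {
    // Whatever remains after subtracting the measurable pieces is
    // container code plus queueing/scheduling time.
    static double containerMs(double totalMs, double networkMs,
                              double appMs, double dbMs) {
        return totalMs - networkMs - appMs - dbMs;
    }

    public static void main(String[] args) {
        double totalMs   = 1000.0; // JMeter-reported average (from the example above)
        double networkMs = 2.0;    // assumed LAN round trip (e.g. from ping)
        double appMs     = 0.5;    // assumed app-internal metric
        double dbMs      = 3.0;    // assumed DB time, including its round trip
        System.out.println("container + queueing ~= "
                + containerMs(totalMs, networkMs, appMs, dbMs) + " ms"); // 994.5 ms
    }
}
```

With numbers like these, nearly all of the missing second would be request queueing under load rather than per-request container code, which is worth checking before blaming Tomcat.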
My questions that need to be answered:
1) Out of that 1 second, how much was due to the network?
2) Out of that 1 second, how much was spent running our application code?
I've got this one; we have internal metrics.
3) Out of that 1 second, how much was spent running Tomcat code?
This may be a bit off track from the initial question, but those are the things
I'm trying to answer. I've been looking at tools like ManageEngine, which
show you an average response time which they claim is time spent in Tomcat.
However, they only show down to the millisecond level. And if they're able
to show that, it means that they're calling some Tomcat APIs. I want to
know which APIs, so that I can write my own program to query them.
Christopher Schultz-2 wrote:
Wicket,
wicket0123 wrote:
| Hi Charles,
| Thanks for the reply. JMeter doesn't help me here because the response
| time includes network time. The reason I'm looking into the Tomcat API
| is because I want a way to query Tomcat for the numbers. So, the metrics
| I am after are:
|
| 1) How much time was spent in Tomcat? No network.
You can't get this information without a real profiler, which will, of
course, interfere with performance.
| 2) How much time was spent in the servlet?
The best you can do here is to write yourself a Filter (or Valve, I
suppose) and simply take timestamps. As with all instrumentation, taking
samples takes time. Rest assured that reading the system clock is /very/
fast. ;)
I suppose if you know the total response time and the servlet time, you
could simply subtract the servlet time to see how much Tomcat "overhead"
is in there. Does it really matter?
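A minimal standalone sketch of that timestamp-taking idea follows. The servlet API is stubbed out with a plain interface so the snippet runs by itself; in a real Filter the timed call would be chain.doFilter(request, response), and the class and interface names here are made up:

```java
public class ServletTimer {
    interface Handler { void handle() throws Exception; }

    // Bracket a call with System.nanoTime() samples, as a Filter (or Valve)
    // wrapping the rest of the chain would.
    static long timeMicros(Handler h) throws Exception {
        long start = System.nanoTime();
        h.handle(); // in a real Filter: chain.doFilter(request, response)
        return (System.nanoTime() - start) / 1_000;
    }

    public static void main(String[] args) throws Exception {
        long micros = timeMicros(() -> Thread.sleep(20)); // stand-in for servlet work
        System.out.println("handler took ~" + micros + " us");
    }
}
```

In a real Filter you would log the elapsed time (or accumulate it in a JMX-visible counter) rather than print it, and map the Filter ahead of the servlet in web.xml.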
| 3) What is the overall average response time for a request when there
| are X number of users active?
To me, this all comes down to /useful/ metrics. For instance: who cares
what the response time is on the server for a single request? Nobody,
that's who. This last metric is the only useful one you've requested,
and I would argue that you ought to do it over a network (even if it's a
local one).
All users will be remote. Why artificially lower your response times
when everyone will have /at least/ the overhead of going over a local
network segment?
Unfortunately, your choices are:
- instrument your server and get inaccurate, but fine-grained data
- instrument your client and get accurate, but somewhat coarser data
Just my two and a half cents (US dollar really sucks these days),
-chris
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------