Good morning,

I have some beginner questions about how exactly to read measurements for a simple FastCGI website (or perhaps I don't understand how to use loops), and I could not find an exact answer in the documentation.
My test plan is simple: a thread group with 20 threads, no ramp-up time, loop count 15,
then
Interleave controller (ignore sub-controller blocks is selected)
HTTP request 1
HTTP request 2
HTTP request 3
HTTP request 4
View Results in Table
View Results in Tree
Aggregate Report


The 4 HTTP requests go to FastCGI pages with very similar execution times, and each page has the same simple Response Assertion.

At the end of the run, View Results in Table gives me an average of 1884 ms over the 300 samples.
Aggregate Report gives me a throughput between 2.2/sec and 2.8/sec for each of the 4 HTTP requests, and a total rate of 10.1/sec.


Now, if 10.1 requests in total are processed per second, and I have 20 threads, where does the average sample time of 1884 ms come from? I would expect something like
99 ms (since throughput is 10.1 requests per second), or
1980 ms, which is 99 ms times 20 (the number of threads).
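For what it's worth, here is the arithmetic behind my second guess, sketched in Python. I am assuming the relationship concurrency = throughput x average response time (Little's Law), which would make the second interpretation the right one:

```python
# Sanity check of the numbers above, assuming Little's Law:
# concurrency = throughput * average response time,
# so average response time = threads / throughput.
threads = 20
throughput_per_sec = 10.1  # total rate from the Aggregate Report

avg_response_ms = threads / throughput_per_sec * 1000
print(round(avg_response_ms))  # -> 1980, close to the measured 1884 ms
```

The 1980 ms estimate is reasonably close to the 1884 ms average I actually measured, but I would like to confirm that this is the intended way to read the two reports.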


Also, occasionally in View Results in Table I see a sample time of 0 ms, which looks odd.

I am running JMeter 2.0.1 on German Windows 2000 (with the latest service packs) and JVM 1.4.2_04. The server is a Linux box with Apache and FastCGI.

Apologies if this is already covered in a FAQ or in the docs; I could not find it.

thanks and best regards,
Ivan Rancati
QA engineer - SharpMind.de



