Hi,

Here is a description of my test:

I have two thread groups which are identical and run in parallel.  Each one
looks like this:

- Thread group - 15 threads, ramp 0, loop 300
-- Loop Controller - loop 5 times
---- request index.html

So that structure is repeated twice, with both groups running in parallel. If
there is a much better way to compose a test to put maximum load on a
webserver, let me know.
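As a sanity check, the 45000 samples reported in the summaries below follow directly from that layout; a quick back-of-the-envelope calculation:

```python
# Total samples generated by the test plan described above:
# 2 thread groups x 15 threads x 300 thread-group loops x 5 loop-controller iterations
thread_groups = 2
threads = 15
group_loops = 300
controller_loops = 5

total_requests = thread_groups * threads * group_loops * controller_loops
print(total_requests)  # 45000
```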

When I run the test from the command line, I get the worst results:
Generate Summary Results = 45000 in  86.3s =  521.5/s Avg:    49 Min:     1
Max: 48876 Err:     0 (0.00%)
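In case it helps reproduce this, the command-line runs are launched the usual way (the .jmx and .jtl file names here are just placeholders):

```shell
# Non-GUI JMeter run: -n = no GUI, -t = test plan file, -l = sample log file
jmeter -n -t loadtest.jmx -l results.jtl
```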

Note that the maximum response time is extremely high (nearly 49 seconds),
which can't be a good sign.

The first time I run through the GUI I get:
Generate Summary Results = 45000 in  82.8s =  543.8/s Avg:    27 Min:     1
Max:  9358 Err:     0 (0.00%)

The throughput for the first GUI run is on par with the non-GUI run, but it
does tend to have a much lower max response time.

The second GUI run (clear results, and hit run again):

Generate Summary Results = 45000 in  48.5s =  928.1/s Avg:    16 Min:     1
Max:  2819 Err:     0 (0.00%)

Here my throughput almost doubles and my max response time drops again.

Repeating the test after that gives a small further improvement, up to about
1000 req/s after 3 or 4 runs.

I've just tried doubling the length of the test (changed each thread group
from 300 to 600 loops), but I get similar results.  In this case, the test
takes about 160 seconds to complete in non-GUI (aka slow) mode.  I also
noted that in 5 consecutive tests, the request with the maximum response
time (about 30-40 seconds!) always occurred between 40 and 45 seconds into
the test.

Thanks for any suggestions or ideas.
-Tim



On 5/7/06, sebb <[EMAIL PROTECTED]> wrote:

It might be useful to look at the min/max response times in the two cases.

The cheapest way to do this is to add a Summariser Post-Processor -
this will generate summaries to the log every 3 minutes. The summary
includes the min/max and average times.
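For a test that only runs ~80 seconds, a 3-minute interval may be too coarse; the Summariser interval and output targets can be tuned in jmeter.properties (property names as in the JMeter docs; defaults may differ by version, and the values below are just examples):

```properties
# jmeter.properties -- Summariser settings (example values)
summariser.name=summary
# interval between summary lines, in seconds
summariser.interval=30
# write summaries to the jmeter log file
summariser.log=true
# also write summaries to stdout
summariser.out=true
```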

As to why you are seeing the difference - could there be some kind of
caching occurring?

How long do you run the test for?

How many threads in each Thread group? How many loops?

S.
On 06/05/06, Tim Wood <[EMAIL PROTECTED]> wrote:
> Hi,
> I am getting inconsistent results when I run a test multiple times
> without restarting jmeter between trials.  I'm trying to see the maximum
> throughput I can get from a web server, so my test layout is simply two
> thread groups which repeatedly access a page on the server.
>
> The first time I run this test I get about 550 req/sec throughput.  If I
> repeat the test without exiting jmeter (i.e. I just clear the results and
> hit run again) I get a rate of almost 1000 req/sec.  From then on, I will
> consistently get about 1000 req/sec.  This is "consistently inconsistent"
> in that if I exit jmeter and restart it, I will get the same pattern: one
> very poor run followed by an almost 2x increase in performance.  I also
> find that if I run the test using the non-GUI mode of jmeter I will
> consistently get the low rate.
>
> Does anyone have any ideas of why this might be occurring?  I'm only
> barely maxing out the server I am benchmarking when I get 1000 req/sec,
> so it does not seem like a limitation of the server in any way.  My
> client and server are both Linux 2.4 GHz with 1 GB RAM, using Java 1.5
> and JMeter 2.1.1.
>
> Any ideas?  Thanks!
>
> -Tim

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

