> Only if the server can cope with the load. If the server is slow in
> responding, then the JMeter threads will be slowed down also.
>
> This is how request-response protocols work - the thread cannot
> proceed with another request until the previous response has arrived.

I see, so a new request is made as soon as the previous response is
received, unless the request belongs to a newly created thread, in
which case there is no previous response to wait for. So in the
example below (if the server is busy), each thread will wait for a
response before sending its next request, but a new thread will be
created (and make a request) every second, for 100 seconds, regardless
of how busy the server is:

Test Plan (Run each Thread group separately)
   Thread Group (users 100, ramp 100, count 20)
        HTTP Request HTTPClient (SSL request - KeepAlive enabled)
   Uniform Random Timer (Offset 100, dev 10)
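To check my understanding, here is a rough Python sketch (not JMeter
itself, just the same scheduling idea) of that behaviour: each worker
thread blocks on its "response" before sending again, while the
ramp-up loop keeps starting new threads on its own clock. The numbers
are toy values, not the ones from the test plan above.

```python
import threading
import time

def worker(results, response_time, loop_count):
    # A JMeter-style thread: it cannot send the next request until
    # the previous response has arrived (request-response protocol).
    for _ in range(loop_count):
        time.sleep(response_time)  # stand-in for a slow server response
        results.append(time.monotonic())

def ramp_up(users, ramp_seconds, loop_count, response_time):
    # Start `users` threads spread evenly over `ramp_seconds`.
    # Thread creation is driven by the ramp-up clock alone, so new
    # threads keep arriving on schedule however slow the server is.
    interval = ramp_seconds / users
    results, threads, start_times = [], [], []
    for _ in range(users):
        start_times.append(time.monotonic())
        t = threading.Thread(target=worker,
                             args=(results, response_time, loop_count))
        t.start()
        threads.append(t)
        time.sleep(interval)
    for t in threads:
        t.join()
    return start_times, results
```

With a response time longer than the ramp interval, the thread start
times still land one interval apart; only the per-thread request rate
slows down.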

> Also, if you want to maintain a constant load, then you should
> consider using the Constant Throughput timer. This adjusts the waits
> according to the current rate. But of course it won't be able to
> maintain a throughput greater than that supported by the server or
> the JMeter host.

I've been trying this, but I'm not sure I'm taking the right approach.
I want to simulate a constant throughput with new users arriving and
leaving throughout the experiment:

Test Plan (Duration 100)
   Thread Group (users 100, ramp 100, count Forever)
        HTTP Request HTTPClient (SSL request - KeepAlive enabled)
   Constant Throughput Timer (120 req/m)

So the above test plan means this: after 1 second there will be one
user doing 120 req/min, and after 100 seconds there will be 100 users
sharing that responsibility, each doing a fraction of the 120 req/min.
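If I have the arithmetic right (assuming the timer is in its "all
active threads" mode, where the target is shared across threads), the
per-thread delay works out like this small sketch:

```python
def per_thread_interval(active_threads, target_per_minute):
    # Delay (seconds) each thread should wait between samples so the
    # combined rate of all active threads equals target_per_minute.
    # Mirrors the "all active threads" sharing described above.
    return active_threads * 60.0 / target_per_minute

# 1 user after the first second: it carries the whole 120 req/min.
print(per_thread_interval(1, 120))    # 0.5 s between requests
# 100 users after full ramp-up: each does 1/100th of the load.
print(per_thread_interval(100, 120))  # 50.0 s between requests
```

Either way the aggregate stays at 2 requests per second, i.e. 120
req/min, as long as the server can actually respond that fast.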
