On 06/24/2011 01:37 AM, sebb wrote:
>> It means, keep up the Y requests / time unit... and you cannot do that with
>> JMeter or any other load test harness with a similar threading model. This
>> threading model is easy to write and easy to understand, but self-throttling.
>> Event-based models are harder to understand and more difficult to implement,
>> but open by design and less likely to self-throttle.
> What I would do here is increase the number of threads.
> This will allow more requests to be generated, at least until the
> number of threads / memory use starts to be an issue.
> At which point, add another JMeter client to share the load.
>
> Or am I overlooking something?
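A back-of-the-envelope sketch of why adding threads stops helping: in a closed-loop (synchronous) model, Little's law ties concurrency N, throughput X, and response time R together as N = X * R. Once the server hits its plateau Xmax, extra threads only inflate R. The numbers below (100 ms base service time, 500 responses/sec cap) are illustrative assumptions, not measurements:

```java
// Sketch: closed-loop load generation against a server with a throughput cap.
// Little's law: N = X * R, so R = N / X once X is pinned at the server's plateau.
public class ClosedLoopSketch {

    // Throughput the thread pool actually achieves: what the threads could
    // drive if unconstrained, capped by the server's plateau.
    static double throughput(int threads, double minResponseSec, double serverCapPerSec) {
        double unconstrained = threads / minResponseSec;
        return Math.min(unconstrained, serverCapPerSec);
    }

    // Response time each thread observes, via Little's law rearranged: R = N / X.
    static double perceivedResponseTime(int threads, double minResponseSec, double serverCapPerSec) {
        return threads / throughput(threads, minResponseSec, serverCapPerSec);
    }

    public static void main(String[] args) {
        double rMin = 0.1;    // 100 ms service time under light load (assumed)
        double cap  = 500.0;  // server plateau: 500 responses/sec (assumed)
        for (int threads : new int[] {10, 50, 100, 200}) {
            System.out.printf("threads=%3d  throughput=%5.0f/s  response=%4.0f ms%n",
                    threads,
                    throughput(threads, rMin, cap),
                    perceivedResponseTime(threads, rMin, cap) * 1000);
        }
    }
}
```

With these assumed numbers, going from 50 to 200 threads leaves throughput pinned at 500/s while perceived response time climbs from 100 ms to 400 ms: exactly the "threads patiently waiting for the server" effect described below. An open-loop (event-based) generator avoids this by issuing requests on a schedule regardless of when responses return.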
I believe so. Many web applications reach a throughput plateau and cannot
generate more than X responses per time unit. If you double (or triple, ...)
the number of threads, you simply have JMeter instances with twice (or three
times, ...) as many threads patiently waiting for the server. The load as
perceived by the system under test remains roughly the same.

(Please take my word for it. Like Kirk, I have suffered from this on several
occasions. Nevertheless, JMeter has proven time and again to be the most
versatile and powerful load generator in the arsenal I have tried.)

If the system under test is a simple web application, I have used timeouts to
let my threads send more requests than the server will readily respond to. Of
course, the response data collected by such a Thread Group is specious, so I
run another, smaller Thread Group that acts patiently and collects performance
and correctness data. All of this is a crude workaround, of course. If Kirk
has a model in mind that would make such devices unnecessary, any progress
would be greatly appreciated.

Regards,
Felix

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

