Hi,

In the past I've used SLAMD to stress test various LDAP servers to determine the maximum throughput they could handle for various operations or sets of operations. However, reading the JMeter docs, I'm not clear on how one would replicate this type of testing.

Generally, in SLAMD, I could configure things to behave in the following way:

1) Set up your distributed clients (as can be done with JMeter)

2) Configure the overall job so that each client starts with 1 thread executing your task, with a warmup period of W, a measured duration of X, and a cooldown of Y seconds afterward. Then increment the thread count on each client and repeat. You keep doing this until there has been no performance improvement for at least Z iterations. Once that maximum is determined, re-run the best iteration for a configured amount of time. (A rough sketch of how I imagine approximating this with JMeter is below.)
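For context, here's roughly the shape of the thing I'm trying to reproduce. This is just a sketch of how I imagine approximating the SLAMD ramp by wrapping JMeter's non-GUI CLI in a driver script; ldap_test.jmx, the "threads"/"duration" property names, and the samples-per-second calculation are all my own placeholders, not anything JMeter provides out of the box (and for the distributed clients I assume I'd add -R host1,host2,... to the jmeter invocation):

#!/usr/bin/env python3
# Rough sketch of a SLAMD-style ramp driver wrapped around JMeter's
# non-GUI CLI. Everything here is a placeholder of mine, not a JMeter
# feature: ldap_test.jmx is assumed to be a test plan whose Thread Group
# reads its thread count and duration from the JMeter properties
# "threads" and "duration" (e.g. via ${__P(threads,1)}), and throughput
# is just samples per second counted out of a CSV JTL.
import subprocess
import time

DURATION = 1200        # seconds per iteration (X)
COOLDOWN = 30          # delay between iterations (Y)
NO_GAIN_LIMIT = 2      # stop after this many non-improving iterations (Z)

def run_iteration(threads: int) -> float:
    jtl = f"results_{threads:03d}.jtl"
    subprocess.run(
        ["jmeter", "-n", "-t", "ldap_test.jmx", "-l", jtl,
         f"-Jthreads={threads}", f"-Jduration={DURATION}"],
        check=True,
    )
    # Crude throughput: total samples divided by the iteration length.
    with open(jtl) as f:
        samples = sum(1 for _ in f) - 1   # minus the CSV header line
    return samples / DURATION

best, best_threads, no_gain, threads = 0.0, 0, 0, 1
while no_gain < NO_GAIN_LIMIT:
    tput = run_iteration(threads)
    print(f"{threads} threads/client: {tput:.1f} ops/sec")
    if tput > best:
        best, best_threads, no_gain = tput, threads, 0
    else:
        no_gain += 1
    threads += 1
    time.sleep(COOLDOWN)

print(f"Peak was {best:.1f} ops/sec at {best_threads} threads per client")

If JMeter (or a plugin) already has a more native way to do this kind of stepping, that's really what I'm after.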

For a concrete example of the output, here's the generated SLAMD report from a stress test I ran several years ago:

<https://mishikal.files.wordpress.com/2013/05/mdb_slamd_data_report.pdf>

In this case, I had 16 distributed clients. Each iteration ran for 1200 seconds, with a 30 second delay between iterations. The clients started with 1 thread each and no specified maximum (since the maximum is controlled by the improvement counter). There was a 60 second warmup before statistics were gathered and the iteration timer started, and a 60 second cooldown at the end of each iteration. Statistics were collected every 10 seconds.
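As far as I can tell, JMeter's summariser doesn't separate out warmup/cooldown windows the way SLAMD does (though I believe the reporting interval itself can be changed with the summariser.interval property). So my current thought, again just a sketch that assumes the default CSV JTL format with timeStamp in epoch milliseconds, is to trim those windows out of the results file after the fact:

import csv

WARMUP = 60     # seconds ignored at the start of each iteration
COOLDOWN = 60   # seconds ignored at the end of each iteration

def steady_state_throughput(jtl_path: str) -> float:
    # Samples per second over the window between warmup and cooldown,
    # assuming the default CSV JTL format where "timeStamp" is the
    # sample start time in epoch milliseconds.
    with open(jtl_path, newline="") as f:
        stamps = [int(row["timeStamp"]) / 1000.0 for row in csv.DictReader(f)]
    start = min(stamps) + WARMUP
    end = max(stamps) - COOLDOWN
    if end <= start:
        raise ValueError("iteration too short to have a steady-state window")
    window = [t for t in stamps if start <= t <= end]
    return len(window) / (end - start)

print(steady_state_throughput("results_001.jtl"))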


Does anyone have pointers and/or documentation that would allow me to set up a similar sort of stress test with JMeter, since SLAMD was abandoned several years ago?

Thanks,
Quanah

--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>


