JMeter supports a lot of this functionality already, and it supports plugging in custom protocols. You should check it out.
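To make the parameterization idea concrete: a recorded request becomes a template, and per-user values are substituted from an input file on each iteration. The sketch below is illustrative plain Java, not JMeter's actual plugin API; the LDAP-ish template and the inlined rows (which JMeter would read from a CSV data file) are made up for the example.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of JMeter-style parameterization: substitute
// ${name} placeholders in a recorded request template with values
// taken from per-user data rows.
public class ParamTemplate {

    // Replace each ${key} placeholder in the template with its row value.
    static String render(String template, Map<String, String> row) {
        String out = template;
        for (Map.Entry<String, String> e : row.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        // In JMeter these rows would come from a CSV data file; they
        // are inlined here to keep the sketch self-contained.
        String template = "BIND dn=cn=${user},ou=people pw=${pass}";
        List<Map<String, String>> rows = List.of(
                Map.of("user", "alice", "pass", "secret1"),
                Map.of("user", "bob",   "pass", "secret2"));
        for (Map<String, String> row : rows) {
            System.out.println(render(template, row));
        }
    }
}
```

Each simulated user gets its own row, so one machine can drive many distinct sessions from a single recorded request.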
JMeter supports distributed (multi-machine) testing using one machine as the controller and the other machines as load generators. JMeter also has an HTTP proxy for recording HTTP traffic, which is then used as the basis for creating tests. You could create a custom proxy for each protocol you wish to test in order to record the appropriate data. Then, much as JMeter does now, you could allow editing of that data to customize the tests.

For example, JMeter lets you capture an HTTP request using the proxy. Later you can edit that request to plug in all sorts of custom parameters. The parameter values can be read from input files during the test run. This lets you simulate various users and user sessions from a single machine, and you can still use multiple machines in a distributed scenario to create higher load.

The JMeter UI (last time I used it) left a bit to be desired, but at least there is one and it is usable. You can plug in your own UI components for your protocol too.

Rob

--- Alex Karasulu <[EMAIL PROTECTED]> wrote:

> Hi folks,
>
>
> Problems (Itch)
> ===============
>
> As some of you know, I've been trying to benchmark and stress test
> ApacheDS and other LDAP servers using simple tests. I've found this
> very hard to do. Some servers are a black box. It's also very hard
> to measure throughput with load and concurrency, since you have to
> hit the server with many clients, often from different machines,
> then gather statistics. Plus it's just a nightmare to set up tests
> and trigger them from many machines at the same time.
>
> LDAP is the first of many protocols that will need to be benchmarked
> and stress tested for ApacheDS. Others like Kerberos, Changepw, DNS,
> and DHCP will need benchmarks as well.
>
>
> Pie In the Sky Solution
> =======================
>
> It would rock to have an imaginary tool, for the following imaginary
> work flow to stress test a MINA protocol server.
>
>
> Step I: Setup toolkit daemons on machines
> -----------------------------------------
>
> (a) install the stat gathering daemon on a machine used to control
> the client drivers and gather throughput statistics from them; call
> this the central command and control daemon (CCCD)
> (b) install the server vitals monitoring daemon on the machine
> hosting the MINA protocol server; call this the vitals daemon or VD.
> Configure it to push vitals at some frequency to the CCCD
> (c) install the client driver daemon (CDD) on each client machine
> used to generate load
>
>
> Step II: Capture req/resp sequence
> -----------------------------------------
>
> I fire up my MINA server and turn on Ethereal to capture the traffic
> between my server and a client. Then I generate an Ethereal or pcap
> dump of the client/server conversation. This request sequence is
> what I want replayed over and over again by several clients.
>
>
> Step III: Create a test configuration
> -----------------------------------------
>
> Here we create a new test configuration by specifying the server and
> port to be tested. When this is done the VD service is discovered to
> control its operational parameters for capturing vitals from the
> command console on the CCCD.
>
> Each client driver machine running a CDD is specified. It would be
> nice if the CDDs could be discovered and selected for participation
> in the test configuration using a list of check boxes. Once
> discovered and included in the test configuration, the CCCD can
> control, and gather stats from, these client daemons to make them
> pound on the MINA server.
>
> Next, set up the request sequence to hit the server with, using the
> dump captured in step II. I feed this conversation dump to the CCCD,
> which has a nice web UI (embedded Jetty). On the upload page I have
> to give it the source (IP/port) of the client.
> The CCCD uses that to determine the bytes of the requests of the
> client and responses from the server: the replay PDUs.
>
> I simply tell the CCCD how many times, or for how long (time wise),
> to keep running the sequence of PDUs against the MINA server. I can
> also tell the CCCD to skew clients by time. Also I should be able to
> control the way in which the request sequence is pumped in, i.e. a
> new connection each time?
>
> Now my test configuration is complete with tests ready to run.
>
>
> Step IV: Schedule/Run a test configuration
> ------------------------------------------
>
> The CCCD can schedule runs of a test configuration. It can also
> trigger these runs manually and record the results for future
> progression analysis.
>
> So I pick a test configuration and ask to manually fire off a new
> test. When I do this the CCCD asks me to provide some notes about
> this test for future reference, as well as the version/build number
> of the MINA server being tested. Again, this is for future
> comparisons.
>
> After providing this information the test starts. The first thing
> that happens is the client drivers are primed with the requests that
> need to be played over and over again against the server. Then time
> synchronization is performed and a checkpoint is noted before
> starting the recording of throughput. Then the MINA server starts
> getting pounded.
>
> The CDDs push throughput rate information to the CCCD, which
> collects it for the test run and sums up totals. Once the test
> completes, the data can be visualized with nice graphing tools built
> into the console of the CCCD.
>
> Would others like to have such a tool? I think we can write this
> tool using MINA. SLAMD does this to a degree, but it's missing
> several features needed for the smooth flow above and is a bit hard
> to set up and configure.
>
> Thoughts? Comments?
>
> Alex
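The CDD replay loop Alex describes above can be sketched end to end in plain Java: pump a recorded request at the server over and over and count the responses. The "recorded PDU" here is just a byte string and the target is a throwaway echo server so the sketch is self-contained; a real driver would pull the PDUs out of the pcap dump and the class/method names are invented for illustration.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a client driver (CDD) replaying one captured request
// repeatedly against a server and counting complete responses.
public class ReplayDriver {

    // Minimal echo server standing in for the MINA server under test.
    static void startEchoServer(ServerSocket ss) {
        Thread t = new Thread(() -> {
            try (Socket s = ss.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (Exception ignored) {
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // Replay one recorded request `iterations` times over a single
    // connection; return the number of complete responses received.
    static int replay(String host, int port, byte[] pdu, int iterations)
            throws Exception {
        int responses = 0;
        try (Socket s = new Socket(host, port)) {
            InputStream in = s.getInputStream();
            OutputStream out = s.getOutputStream();
            byte[] buf = new byte[pdu.length];
            for (int i = 0; i < iterations; i++) {
                out.write(pdu);
                out.flush();
                int read = 0;   // read until the full echoed PDU is back
                while (read < buf.length) {
                    int n = in.read(buf, read, buf.length - read);
                    if (n == -1) return responses;
                    read += n;
                }
                responses++;
            }
        }
        return responses;
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = new ServerSocket(0)) {
            startEchoServer(ss);
            byte[] pdu = "SEARCH base=ou=people scope=sub\n".getBytes();
            long start = System.nanoTime();
            int ok = replay("127.0.0.1", ss.getLocalPort(), pdu, 100);
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.println(ok + " responses in " + secs + "s");
        }
    }
}
```

The "new connection each time?" knob from the proposal would just move the Socket construction inside the loop; that one design choice changes whether you are measuring per-request or per-connection cost.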

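The CCCD's stat-summing side is similarly small at its core: each CDD periodically pushes a request count for a measurement window, and the CCCD sums the counts into an aggregate rate. The field names and push format below are assumptions from the proposal, not an existing protocol.

```java
import java.util.Map;

// Sketch of the CCCD aggregating per-CDD throughput counters pushed
// over a common measurement window into a single requests/sec figure.
public class StatSummer {

    // requestsByClient: completed requests per CDD over the same window
    static double aggregateRate(Map<String, Long> requestsByClient,
                                double windowSeconds) {
        long total = 0;
        for (long r : requestsByClient.values()) {
            total += r;
        }
        return total / windowSeconds;
    }

    public static void main(String[] args) {
        Map<String, Long> pushed = Map.of(
                "cdd-1", 4200L,   // hypothetical per-CDD counters
                "cdd-2", 3800L,
                "cdd-3", 4000L);
        // 12000 requests over a 60 s window -> 200.0 req/s
        System.out.println(aggregateRate(pushed, 60.0) + " req/s");
    }
}
```

Recording these aggregates per test run, keyed by the version/build number entered at launch, is what makes the progression analysis Alex wants possible.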