It's always nice to see other people thinking in the same general direction as I am.

I think Jordi is on the right track about having a separate analysis component. I would like to keep the Visualizers out of the Test Plan -- leave the Test Plan with the single job of describing the test. Make some basic statistics available while the test runs -- how many samples passed/failed, an estimate of throughput and response time, and perhaps other data that can be calculated or estimated cheaply at runtime (including with remote engines). Have each engine track more detailed data that can be aggregated at the end of the run, so that deeper analysis can be done on it afterwards.
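
To make the idea concrete, here is a rough sketch of the kind of cheap per-engine bookkeeping I have in mind. The class and method names are made up for illustration -- this isn't an existing JMeter API:

// Hypothetical sketch: cheap running statistics an engine could update
// per sample and report periodically during the test.
public class RunningStats {
    private long samples;
    private long failures;
    private long totalElapsedMillis;
    private final long startMillis = System.currentTimeMillis();

    public synchronized void record(boolean success, long elapsedMillis) {
        samples++;
        if (!success) {
            failures++;
        }
        totalElapsedMillis += elapsedMillis;
    }

    public synchronized double throughputPerSecond() {
        long runMillis = Math.max(1, System.currentTimeMillis() - startMillis);
        return samples * 1000.0 / runMillis;
    }

    public synchronized double meanResponseMillis() {
        return samples == 0 ? 0.0 : (double) totalElapsedMillis / samples;
    }

    public synchronized long getFailures() {
        return failures;
    }
}

Everything here is a couple of additions and a division, so it stays cheap even at high sample rates; the expensive per-sample detail would live elsewhere, to be aggregated after the run.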

Obviously some cases have to be treated specially -- some data is expensive to collect, so you wouldn't want to store it unless the user specifically requested it.

Another extension I would like to see is a pluggable module system for providing extra data that can be correlated with the data JMeter collects. One such module could report CPU utilization on the remote server system. Another could gather performance statistics from a Tomcat server. Or a WebSphere server. Or whatever else somebody felt was useful enough to write a module for. JMeter wouldn't need to know the details of what is being stored... we just have to develop some kind of generic way to store it.
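
Something like this, maybe -- a sketch of what such a module interface could look like. MonitorModule and its methods are hypothetical names, not anything that exists today:

// Illustrative sketch of a pluggable monitor interface. A CPU module,
// a Tomcat module, etc. would each implement it, and JMeter would only
// ever see opaque name/value pairs keyed by a timestamp.
import java.util.Map;

public interface MonitorModule {
    /** Human-readable source of the data, e.g. "cpu" or "tomcat". */
    String getName();

    /** Called periodically by the engine; returns whatever metrics
        this module knows how to collect, as generic name/value pairs. */
    Map<String, Number> sample(long timestampMillis) throws Exception;
}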


Regarding single-threaded operation: I think single-threaded would probably not be a good idea. But since most threads are sleeping most of the time, perhaps we can come up with some sort of thread pool, so that a large number of JMeter "threads" (perhaps better called "users" in this case) could be handled by a smaller number of JVM threads. It could be a bit tricky to ensure that we have the right number of JVM threads to handle the JMeter users, and that samples are executed when they are supposed to be. But it seems like there's potential there.
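
Here is one rough way it might work, sketched with java.util.concurrent-style scheduling -- an assumption for illustration, not a proposal for the actual implementation. Each "user" is a task that re-queues itself after its think time, so a sleeping user holds no JVM thread:

// Sketch of many simulated users multiplexed onto a small pool.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class UserScheduler {
    private final ScheduledExecutorService pool;

    public UserScheduler(int jvmThreads) {
        pool = Executors.newScheduledThreadPool(jvmThreads);
    }

    /** Start one simulated user: run a sample, wait thinkTimeMillis,
        repeat -- without tying up a thread during the wait. */
    public void startUser(final Runnable sample, final long thinkTimeMillis) {
        pool.schedule(new Runnable() {
            public void run() {
                sample.run();                        // execute the sample
                pool.schedule(this, thinkTimeMillis, // re-queue this "user"
                              TimeUnit.MILLISECONDS);
            }
        }, 0, TimeUnit.MILLISECONDS);
    }
}

The tricky part I mentioned shows up right here: if too many users wake up at once, samples queue behind each other and run late, so we would have to size the pool carefully and watch for scheduling lag.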


Some performance and accuracy tests would also be great. I'm thinking about how to do those. An important piece would be unused hardware, available long-term for this purpose only (or nearly so)... I think I can provide this.



I've used various techniques to ensure the accuracy of my numbers -- primarily running an extra test client with a very low load and comparing its numbers to those of the high-load clients. I think the best way to handle this is through documentation explaining these techniques and other ways of analyzing the data. Another help might be a visualizer that draws each sample as a line showing its beginning and end times, making it easy to see overlapping samples, and thus spot potential timing conflicts.
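
As a toy illustration of the overlap idea (Sample here is just a made-up holder, not a JMeter class): sort samples by start time, and any sample that begins before the previous one ended means two samples overlapped.

import java.util.Arrays;
import java.util.Comparator;

public class OverlapCheck {
    public static class Sample {
        final long start, end; // millis
        Sample(long start, long end) { this.start = start; this.end = end; }
    }

    /** Reports whether any two samples overlap in time. */
    public static boolean hasOverlap(Sample[] samples) {
        Sample[] sorted = samples.clone();
        Arrays.sort(sorted, new Comparator<Sample>() {
            public int compare(Sample a, Sample b) {
                return Long.compare(a.start, b.start);
            }
        });
        for (int i = 1; i < sorted.length; i++) {
            if (sorted[i].start < sorted[i - 1].end) {
                return true; // began before the previous sample ended
            }
        }
        return false;
    }
}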


When I read Jordi's message, I thought he was referring to having a system dedicated to performance regression tests, so that we can see the effects of changes to JMeter on its own performance. For example, if we start messing with a thread pool, we would need to be certain that we weren't affecting the results (at least not negatively -- though even if we made an improvement, it would be good to document that).
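
For instance, a regression guard could be as simple as comparing measured throughput against a stored baseline with some tolerance -- the numbers and names below are purely illustrative:

// Hypothetical check for a performance regression suite.
public class RegressionCheck {
    /** Returns false if current throughput has dropped more than
        tolerance (e.g. 0.05 for 5%) below the recorded baseline. */
    public static boolean withinTolerance(double baseline,
                                          double current,
                                          double tolerance) {
        return current >= baseline * (1.0 - tolerance);
    }

    public static void main(String[] args) {
        double baseline = 250.0; // samples/sec from a recorded baseline run
        double current = 243.0;  // samples/sec from this run
        System.out.println(withinTolerance(baseline, current, 0.05)
                ? "OK" : "REGRESSION");
    }
}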


Seems like we've got some high hopes for JMeter 2.0...even in just a short discussion -- I'm looking forward to getting started on it.


Jeremy
http://xirr.com/~jeremy_a


