I've been using JMeter as a user quite a bit over the past few weeks, and I've learned some things about it. One is that it's very tedious to use, so a lot of my thoughts have to do with creating more powerful tools for manipulating test scripts. I'd like to introduce the idea of alternate ways to view a test plan, à la Eclipse, so that different aspects of test plan editing can be brought to the forefront.
It's true that test editing is tedious, but I don't really see different "aspects" in such a heavy way as Eclipse -- maybe visualization options?
Control vs. non-control elements: you had commented in the past about control elements (controllers & samplers) vs. non-control elements (where order essentially doesn't matter). It would be great to have an option to show/hide those non-control elements when viewing the tree, and also to see them in a separate panel showing all those applying to the current control element, with 'inherited' ones greyed out. Most importantly, this would give new (and not-so-new) users a clearer view of which non-control elements apply to which control elements.
Tree editing: Eclipse trees have a nice way of indicating whether to insert before, insert after, or add as child which would be very handy -- our current way is a pain. I don't know if that's doable in Swing, though.
Bulk editing: a find/replace feature is the most obvious. Another nice one would be the ability to select multiple test elements of the same type and have the editor in the right panel show white fields for values that are equal in all of them -- you could edit these straight away -- and fields with different values in grey, possibly non-editable.
Protocol pre-selection: by having options on which protocols we want to use in the test we could avoid cluttering the menus with samplers & config elements not applicable to those protocols.
Screen real-estate usage: reducing font size, getting rid of useless spacing, etc., so that more space is left for panels such as the HTTP request parameters.
Another usability issue: it would be really nice to have certain test elements provide a "dynamically-generated" default name (used in case you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer: 10.0±5.0 sec.", "/home/index.jsp", ...
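The idea above could be as simple as a few per-element formatting helpers. A minimal sketch (class and method names are illustrative, not JMeter's actual API):

```java
import java.util.Locale;

// Hypothetical helpers deriving a default display name from a test
// element's own parameters, for use when the Name field is left blank.
class DefaultNames {

    // e.g. a constant timer with a 1500 ms delay -> "Timer: 1.5 sec."
    public static String timerName(long delayMillis) {
        return String.format(Locale.US, "Timer: %.1f sec.", delayMillis / 1000.0);
    }

    // e.g. a random timer with delay and deviation -> "Timer: 10.0 ± 5.0 sec."
    public static String randomTimerName(long delayMillis, long deviationMillis) {
        return String.format(Locale.US, "Timer: %.1f \u00B1 %.1f sec.",
                delayMillis / 1000.0, deviationMillis / 1000.0);
    }

    // e.g. an HTTP sampler defaults to its request path: "/home/index.jsp"
    public static String httpSamplerName(String path) {
        return (path == null || path.isEmpty()) ? "HTTP Request" : path;
    }
}
```

Each element type would contribute its own rule, so the tree stays readable even when nobody bothers naming anything.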
Remote testing needs to be revamped: it's pointless to have 10 remote machines all trying to stuff responses down the I/O throat of a single controlling machine. Better to have the remote machines keep the responses until the end, rather than risk the accuracy of throughput measurements. Perhaps a simpler format can be created for remote testing, whereby during the test only success/failure plus response time is sent to the controlling machine, and everything else waits until the end of the test.
I agree, but note that this means a significant rewrite of all listeners, so that they can handle this two-phase input and still show meaningful results.
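The two-phase wire format during the run could be tiny. A sketch of what each remote engine might stream mid-test, with full results held back until the end (names and layout are illustrative, not an existing JMeter class):

```java
import java.io.Serializable;

// Hypothetical compact record for phase one of remote testing:
// during the run, only this is sent to the controlling machine;
// the full SampleResults (response data, headers, assertions, ...)
// stay on the remote node and are shipped in bulk afterwards.
class CompactSample implements Serializable {
    public final String label;        // sampler label
    public final boolean success;     // pass/fail
    public final long elapsedMillis;  // response time

    public CompactSample(String label, boolean success, long elapsedMillis) {
        this.label = label;
        this.success = success;
        this.elapsedMillis = elapsedMillis;
    }

    // A small, fixed-shape wire form keeps per-sample network cost
    // negligible compared to shipping whole responses mid-run.
    public String toWireString() {
        return label + '\t' + (success ? 1 : 0) + '\t' + elapsedMillis;
    }
}
```

Listeners would then need two code paths: live updates from these compact records, and a merge step when the full results arrive.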
I want test results categorized by test run, not just as a list of SampleResults. A set of sample results has a metadata set that describes the test run, and JMeter should be able to use such metadata to combine test run results and to display statistics comparing two test runs (e.g., graphing # of users vs. throughput).
How about leaving listeners for real-time test result visualization & test result gathering/saving, and having a separate application (or module) for more complex data analysis? Maybe there's something already out there we can use straight away?
Result files need to be abstract datasources, with an interface that visualizers talk to without knowing whether the backing data is an XML file, a CSV file, a database, etc. Right now, JMeter knows how to write CSV files but can't read them!
Note this would make sense if we had the separate analysis application I was talking about.
A defined interface will help us modularize this code whereas currently it's mixed up with the code for reading and writing test plan files.
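The interface being argued for here could be quite small. A sketch, with all names illustrative rather than existing JMeter types:

```java
import java.util.Iterator;

// Hypothetical abstract result datasource: visualizers iterate over
// results without knowing whether the backing store is XML, CSV, or
// a database. CSV/XML/JDBC readers would each implement this, fixing
// the "writes CSV but can't read it" asymmetry.
interface ResultDataSource {

    /** Stream results one at a time, so large files need not fit in memory. */
    Iterator<SampleRecord> results();

    /** Release the underlying file handle or connection. */
    void close();

    // The minimal fields a visualizer needs; real results carry more.
    class SampleRecord {
        public final long timestamp;
        public final long elapsed;
        public final boolean success;

        public SampleRecord(long timestamp, long elapsed, boolean success) {
            this.timestamp = timestamp;
            this.elapsed = elapsed;
            this.success = success;
        }
    }
}
```

The iterator shape matters: it lets a visualizer (or the hypothetical separate analysis tool) consume a multi-gigabyte result file without the DOM-style memory blowup mentioned elsewhere in this thread.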
Visualizers should be able to output useful file types for distribution of results to non-jmeter users. HTML and PNG files, for instance. Some way of exporting the data to a format that can be easily posted.
Again, a separate analysis tool could take care of this.
I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't think this is feasible.
Definitely not doable for the Java samplers. Extremely difficult for JDBC, and difficult and probably not worth it for the rest (just my view -- it seems to match yours, though).
Instead, I would focus on accuracy by raising the priority of threads during actual sampling. This would not improve total performance in terms of max throughput, but it would improve measurement accuracy at mid and high loads.
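The priority-raising idea above might look something like this (a hypothetical helper, not JMeter code; note that thread priorities are only a hint and may be a no-op on some JVM/OS combinations):

```java
// Sketch: raise the sampling thread's priority around the timed
// section only, so scheduler jitter at mid/high load distorts the
// measurement less. This doesn't raise throughput; it steadies timing.
class PrioritizedTimer {
    public static long timeAtMaxPriority(Runnable sample) {
        Thread t = Thread.currentThread();
        int oldPriority = t.getPriority();
        t.setPriority(Thread.MAX_PRIORITY);   // hint only; may be ignored by the OS
        try {
            long start = System.currentTimeMillis();
            sample.run();
            return System.currentTimeMillis() - start;
        } finally {
            t.setPriority(oldPriority);       // always restore, even on exceptions
        }
    }
}
```

Restoring the old priority in a finally block matters: a sampler that throws mid-run must not leave the whole thread pinned at MAX_PRIORITY.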
Some performance and accuracy tests would also be great. I'm thinking about how to do those. An important requirement would be unused hardware available long-term for this purpose only (or almost)... I think I can provide this.
It's possible if you can get access to the very sockets that do the communicating, but how would you get that for JDBC drivers? Even for HTTP, we'd have to write our own HTTP client from which we could gain access to the socket being used and control its I/O (or take the commons client and modify it to do so). To put it all in a single-threaded model, we'd have to take control of the I/O: force the samplers to hand their sockets to some central code that would take the socket and the bytes the sampler wants to send, and hand back the return bytes plus timing info. It'd be nice, but I don't think it's feasible for most protocols.
JMeter needs to collect more data. Size of responses should be explicitly collected to help throughput calculations of the form bytes/second. Timing data should include a latency measurement in addition to the whole response time.
Totally agree. The complete split would be:
1- DNS resolution time
2- Connection set-up time (SYN to SYN ACK)
3- Request transmission time (SYN ACK to ACK of last request packet)
4- Latency (ACK of last request packet to 1st response data packet)
5- Response reception time
I'm not sure JMeter is the tool to separate 1,2,3 (this is more of an infrastructure-level thing rather than application-level), but 1+2+3+4 separate from 5 is a must. Top commercial tools separate them all.
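For the application-level split, a plain socket already separates three of the buckets: connect time (roughly items 1+2), time from request sent to first response byte (roughly items 3+4 combined), and the rest of the reception (item 5). A rough sketch using java.net.Socket (a hypothetical helper, not JMeter code):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: what timing can be split without packet capture.
// connectMillis ~ DNS + TCP handshake (items 1+2);
// latencyMillis ~ request transmission + server think time (items 3+4);
// totalMillis - latencyMillis ~ response reception (item 5).
class TimedRequest {
    public long connectMillis;   // socket connect complete
    public long latencyMillis;   // request sent -> first response byte
    public long totalMillis;     // request sent -> response fully read

    public void run(String host, int port, byte[] request) throws Exception {
        long t0 = System.currentTimeMillis();
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 10_000);
            connectMillis = System.currentTimeMillis() - t0;

            OutputStream out = s.getOutputStream();
            out.write(request);
            out.flush();
            long sent = System.currentTimeMillis();

            InputStream in = s.getInputStream();
            int first = in.read();                       // blocks until first byte
            latencyMillis = System.currentTimeMillis() - sent;

            byte[] buf = new byte[8192];
            while (first != -1 && in.read(buf) != -1) { /* drain the response */ }
            totalMillis = System.currentTimeMillis() - sent;
        }
    }
}
```

Splitting items 2, 3, and 4 from each other really does need SYN/ACK visibility, i.e. packet-level capture, which supports the point that this belongs to infrastructure tooling rather than JMeter.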
More accurate simulation of browser behaviour in terms of # of concurrent connections, keep-alives, etc. would also be great. Even in terms of available bandwidth: simulating modem/ISDN/ADSL users. Again, this may not be JMeter's job -- application-level testing is more important, IMO.
The problem is same as above: this requires access to the internals of the client code. How to do this for JDBC? Maybe changing socket factories? But it's a must, so we need to think about it.
Multiple SampleResults need to be dealt with better. Instead of an API that looks like:

    Sampler { SampleResult sample(); }

we need one that's more callback-based:

    Sampler { void sample(SendResultsHereService callback); }

so that samplers can send multiple results to the collector service. This would make samplers more flexible for when scripting in Python is allowed, letting the ad-hoc scripter push out sample results at any time during their script.

I feel pushing out multiple separate samples belongs more to controller land than to sampler land...
Given this, post-processors like assertions need a way to know which result to apply themselves to. We already have this problem: redirected samples confuse these components. We need a way to either mark a particular response as "the main one" or define a response set, all of which would be tested by the applicable post-processors.
Isn't the current "sample-tree" structure correct for this? Wouldn't it be enough to have post-processors, listeners, etc. know about such "structured" sample results?
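To make the callback proposal above concrete, here is a minimal sketch (all names illustrative; "SendResultsHereService" from the text is shortened, and the result type is stripped down to two fields):

```java
// Hypothetical callback-style sampler API: the sampler pushes results
// (possibly several, e.g. each hop of a redirect chain) to a collector
// instead of returning exactly one SampleResult.
interface ResultCollectorService {          // the "SendResultsHereService"
    void collect(String label, boolean success);
}

interface CallbackSampler {
    void sample(ResultCollectorService callback);
}

// A redirect-following sampler would push one result per hop, which is
// exactly where the "which result is the main one?" question arises.
class RedirectingSampler implements CallbackSampler {
    public void sample(ResultCollectorService callback) {
        callback.collect("GET /old", true);   // the 302 hop
        callback.collect("GET /new", true);   // the final page
    }
}
```

Whether the collector sees a flat stream (as here) or the existing nested sample-tree is precisely the open question in this exchange; a tree-shaped callback would let post-processors target "the main" node directly.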
I'd also like to replace the Avalon Configuration stuff with something that can load files in a more stream-like, piecemeal fashion, instead of creating a DOM and then handing it over to JMeter. The current approach goes too long without any feedback for the user, and it uses a ton of memory.
Maybe javax.beans.XMLEncoder/Decoder can help? (Never used it, just adding it to the long list).
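Another option for the stream-like loading described above is plain SAX, which delivers elements one at a time as they are parsed. A sketch (the handler logic and element names are illustrative, not JMeter's real file format):

```java
import java.io.InputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Hypothetical streaming test-plan loader: each element is handed to
// JMeter as soon as it is parsed, so a progress bar can tick along and
// no full DOM is ever held in memory.
class StreamingPlanLoader extends DefaultHandler {
    private int elementCount = 0;

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        // Build and attach the corresponding test element here, and
        // report progress -- no waiting for the whole document.
        elementCount++;
    }

    public int load(InputStream in) throws Exception {
        SAXParserFactory.newInstance().newSAXParser().parse(in, this);
        return elementCount;
    }
}
```

XMLEncoder/Decoder would solve the Avalon dependency but still reads the whole stream before handing anything back; SAX (or a pull parser) is what gives the piecemeal feedback.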
Sun's HTTP client should be replaced. The HTTP client is the cornerstone of JMeter, so we ought to have one that is highly flexible to our needs, provides the most accurate timing it can, the best performance possible, the lowest resource usage possible, and the most transparency to JMeter's controlling code. I think the commons HTTP Client is probably a good place to start: being open source, we can craft it to our needs.
Totally agree that it needs to be replaced and that the HTTP Client is our best bet.
Well, that's a start :-)
-- Salut,
Jordi.
