> Not exactly.  Yes, the ideas below would allow config of what data gets
> saved in each sample result, but what I'm talking about is the ability to
> organize which requests a listener "hears from".  It's not quite the same
> thing.
Yes, I see the difference. But there is a problem with the current implementation. The log file is bound to the visualizer, and this leads to two strange things:

1) Two visualizers in the same controller log exactly the same data. Each has an edit box for entering a file name, and it is not obvious what it is for, or in which visualizer you have to enter the filename, etc. (as in today's posts).

2) What if you save data in one visualizer in a particular controller and then load it into another visualizer in a totally different context? You will see the visualized data, but it will not be valid in the viewed context (e.g. different paths, impossible to reach from the visualizer you are looking at it in). In fact, you can load it into visualizers in another jtx file, into a completely foreign context...

Point 1) can be solved by binding the log information to the controller rather than the visualizer. Point 2) tells me that visualizers should not be bound into the test tree (!) at all. In other words, maybe we should control saving data from controllers (which reasonably group requests) and view it in the workbench or some such place. Then visualizers would just be tools to visualize data, whatever you save from whatever place. The visualizers could share a common filtering/aggregating GUI... I know this creates new problems...

One more reflection: I think there is a subtle difference between viewing samples 'on-line' (while they are recorded) and 'off-line' (analyzing test results). I know it looks like I'm splitting hairs, but there is something about the design purity of the current solution which doesn't let me sleep at night :-)

> Ok, now I'm with you.  I agree - but I'd like to enhance our save format to
> allow multiple test runs to exist in one file and/or allow listeners to
> load data from multiple sources and combine them in a reasonable way.  I
> don't think this would be too hard - there just needs to be some
> information about the test run included (time of start, time of end, for
> example).
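To make the quoted idea concrete: a minimal sketch, in Java, of what per-run metadata (start/end time) in a shared result file could let a listener do. All names here (TestRunMeta, runOf) are purely illustrative, not existing JMeter APIs:

```java
import java.util.*;

// Hypothetical run-metadata record that could precede each test run's
// samples in a shared result file, so listeners can tell runs apart.
public class TestRunMeta {
    final String testId;     // identity could also be derived from the times
    final long startMillis;
    final long endMillis;

    TestRunMeta(String testId, long startMillis, long endMillis) {
        this.testId = testId;
        this.startMillis = startMillis;
        this.endMillis = endMillis;
    }

    // A listener could assign each sample to the run whose time window
    // contains the sample's timestamp.
    static String runOf(List<TestRunMeta> runs, long sampleMillis) {
        for (TestRunMeta r : runs) {
            if (sampleMillis >= r.startMillis && sampleMillis <= r.endMillis) {
                return r.testId;
            }
        }
        return "unknown";
    }

    public static void main(String[] args) {
        List<TestRunMeta> runs = Arrays.asList(
            new TestRunMeta("run-1", 1000L, 2000L),
            new TestRunMeta("run-2", 3000L, 4000L));
        System.out.println(runOf(runs, 1500L)); // prints run-1
        System.out.println(runOf(runs, 5000L)); // prints unknown
    }
}
```

The same grouping works whether the metadata lives in a file header or a database row, which is the point made above about files and databases being equivalent from the listener's point of view.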
> And then listeners could use that information to appropriately
> combine data from multiple test runs.

So we both agree on the need for some metadata in the test results! You have just described saving several tests to one file, with the tests distinguished by start/stop date. (My solution is saving multiple tests to one database with a test_id and test_name, but the rule is the same.)

> Sounds good so long as we understand there's no real difference between a
> database and a file system - from the listener's point of view.  In other
> words, all this should be possible whether you're using a database or just
> files.  Granted, files may be slower and less efficient, but it should
> still work either way.
>
> Also, this goes back to what I said earlier about saving information about
> each test run so that listeners can appropriately combine results from
> multiple test runs.  That's what you are describing here, if I understand
> rightly.

Well, combining results from multiple tests is one thing, but the more common need is simply a db equivalent of doing this:

tweak-test-parameters
launch test
save-log-to-file-1
clear-results
tweak-test-parameters-again
launch test
save-log-to-file-2
clear-results
...

and again and again. (At least, that is how I use JMeter :) When logging to a db you cannot create a new database for every single test. Test_id's (or knowing the start/stop times) solve that problem.

Hmm, I think I'll think about it again... :-)

best regards and good night !
Michal
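As a postscript, the save/clear/save-again cycle above can be sketched as a single shared store keyed by test_id. This is a minimal illustrative sketch, assuming an in-memory map stands in for the database; SharedResultStore and its methods are invented names, not anything in JMeter:

```java
import java.util.*;

// Hypothetical "one database, many tests" store: instead of
// save-log-to-file-1 / clear-results / save-log-to-file-2, every sample
// carries a test_id, and analysis filters the shared store per run.
public class SharedResultStore {
    // test_id -> elapsed times (ms) of that run's samples
    private final Map<String, List<Long>> samples = new HashMap<>();

    void record(String testId, long elapsedMillis) {
        samples.computeIfAbsent(testId, k -> new ArrayList<>())
               .add(elapsedMillis);
    }

    // The off-line analysis step: pull one run out of the shared store.
    List<Long> resultsFor(String testId) {
        return samples.getOrDefault(testId, Collections.emptyList());
    }

    public static void main(String[] args) {
        SharedResultStore store = new SharedResultStore();
        // tweak parameters, launch, record -- no clear-results needed
        store.record("baseline", 120L);
        store.record("baseline", 140L);
        // tweak parameters again, launch again
        store.record("tuned", 90L);
        System.out.println(store.resultsFor("baseline")); // [120, 140]
        System.out.println(store.resultsFor("tuned"));    // [90]
    }
}
```

Nothing is ever cleared: each run's results stay addressable by its test_id, which is exactly what the per-file workflow was approximating.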
