On 24 Jan 2003 at 0:56, Michal Kostrzewa wrote:

> >
> > Not exactly.  Yes, the ideas below would allow config of what data gets
> > saved in each sample result, but what I'm talking about is the ability to
> > organize which requests a listener "hears from".  It's not quite the same
> > thing.
> 
> Yes, I see the difference. But there is a problem with the current implementation: 
> the log file is bound to the visualizer, which leads to two strange things:
> 
>  1) Two visualizers under the same controller log exactly the same data; each has 
> edit boxes to enter file names, and it's not obvious what they are for, in which 
> visualizer you have to enter the filename, etc. (as in today's posts)
> 
>  2) What if you save data with one visualizer in a particular controller and load 
> it into another visualizer in a totally different context? You'll get the 
> visualized data, but it won't be valid in the viewed context (e.g. different paths, 
> impossible to reach from the visualizer you look at it in). In fact, you can load it 
> into visualizers in another jtx file, into a foreign context...
> 
> Point 1) can be solved by binding the log information to the controller rather 
> than the visualizer.
> Point 2) tells me that visualizers should not be bound into the test tree (!) 
> at all. In other words - maybe we should control saving data from 
> controllers (which reasonably group requests) and view it in the workbench or 
> some such place? Then visualizers would be just tools to visualize data - whatever 
> you save, from wherever. The visualizers could have a common 
> filtering/aggregating GUI... I know it creates new problems...
> One more reflection - I think there is a subtle difference between viewing 
> samples 'on-line' (as they are recorded) and 'off-line' (analyzing test 
> results).

I agree entirely with both your points, and I think they boil down to the same 
problem: the test tree is confusing sequential items with hierarchical items.  It 
would be nice to "plug in" a listener into both a) a datasource - your point #2, and 
b) a controller - your point #1.  This is related to what I wrote on the wiki pages at 
http://nagoya.apache.org/wiki/apachewiki.cgi?ImprovingJMetersGUI.  

The more I think about it, the more the "test tree" is turning into a "web" in my 
head.  There's a sequence of requests, but the rest is hierarchical, and it's only 
hierarchical because it's a tree.  It could easily be more of a "plug in" metaphor 
where the user draws lines from controllers to listeners, from listeners to 
datasources, etc.  This would be a hard GUI to build, but done well it would make 
much more sense, I think.  I'm just thinking aloud here - I'm in agreement with you 
that these problems are real.
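To make the "draw lines" idea concrete, here is a minimal sketch of what that wiring could look like. All class and method names here are hypothetical illustrations, not actual JMeter classes: a listener is plugged into a controller (choosing which requests it hears) and into a datasource (where its samples go), rather than living as a node inside the test tree.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "plug in" metaphor: controllers, listeners, and
// datasources are connected by explicit links instead of tree position.
public class PlugInSketch {

    // Where samples are persisted - could be a file or a database table.
    interface DataSource {
        void store(String sample);
        List<String> load();
    }

    static class ListDataSource implements DataSource {
        private final List<String> samples = new ArrayList<>();
        public void store(String sample) { samples.add(sample); }
        public List<String> load() { return new ArrayList<>(samples); }
    }

    // A listener hears samples and hands them to its datasource; viewing
    // reads back from the datasource, so any listener can view any store.
    static class Listener {
        private final DataSource source;
        Listener(DataSource source) { this.source = source; }
        void sampleOccurred(String sample) { source.store(sample); }
        List<String> view() { return source.load(); }
    }

    // A controller fires samples only to the listeners plugged into it -
    // the user "draws a line" from controller to listener.
    static class Controller {
        private final List<Listener> listeners = new ArrayList<>();
        void plugIn(Listener l) { listeners.add(l); }
        void fireSample(String sample) {
            for (Listener l : listeners) l.sampleOccurred(sample);
        }
    }

    public static void main(String[] args) {
        DataSource shared = new ListDataSource();
        Listener graph = new Listener(shared);
        Controller login = new Controller();
        login.plugIn(graph);                       // controller -> listener
        login.fireSample("GET /login 200 130ms");  // listener -> datasource
        System.out.println(graph.view().size());
    }
}
```

The point of the sketch is that the listener-to-datasource link is independent of the controller-to-listener link, which is exactly what lets data be viewed outside the context it was recorded in.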

> 
> I know it looks like I'm splitting hairs, but there is something about the design 
> purity of the current solution which doesn't let me sleep at night :-) 

I'm the same way.  When I first started thinking about your datasource ideas long 
ago, it kind of scared me off.  I went round and round over the best way to 
conceptualize it.

> 
> 
> >
> > Ok, now I'm with you.  I agree - but I'd like to enhance our save format to
> > allow multiple test runs to exist in one file and/or allow listeners to
> > load data from multiple sources and combine them in a reasonable way.  I
> > don't think this would be too hard - there just needs to be some
> > information about the test run included (time of start, time of end, for
> > example). And then listeners could use that information to appropriately
> > combine data from multiple test runs.
> 
> So we both agree about the need for some metadata in test results! You just 
> talked about saving several tests to one file, with tests distinguished by 
> start/stop date. (My solution is saving multiple tests to one database with a 
> test_id and test_name, but the principle is the same.)

Yes, start/stop time is necessary in order to calculate throughputs correctly 
between two tests.  A test run name is a good thing too.
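To spell out why the start/stop times are necessary: after merging results from two saved runs, throughput has to be recomputed as total samples over total active time, not as a sum or average of the per-run rates, and that is only possible if each run's boundaries were saved. A small sketch (all names hypothetical):

```java
// Sketch of per-run metadata and why it matters for combining results.
public class RunMetadataSketch {

    static class TestRun {
        final String name;                 // test run name, as discussed
        final long startMillis, endMillis; // run boundaries from the metadata
        final int sampleCount;
        TestRun(String name, long start, long end, int samples) {
            this.name = name;
            this.startMillis = start;
            this.endMillis = end;
            this.sampleCount = samples;
        }
        double throughputPerSec() {
            return sampleCount * 1000.0 / (endMillis - startMillis);
        }
    }

    // Correct combination: total samples over total active time. Without
    // the saved start/stop times this cannot be recomputed after a merge.
    static double combinedThroughput(TestRun a, TestRun b) {
        long activeMillis = (a.endMillis - a.startMillis)
                          + (b.endMillis - b.startMillis);
        return (a.sampleCount + b.sampleCount) * 1000.0 / activeMillis;
    }

    public static void main(String[] args) {
        TestRun r1 = new TestRun("baseline", 0, 10_000, 500);      // 50/s
        TestRun r2 = new TestRun("tuned", 60_000, 70_000, 700);    // 70/s
        // 1200 samples over 20 s of active test time = 60 samples/s
        System.out.println(combinedThroughput(r1, r2));
    }
}
```

Note that the idle hour between the runs is excluded; only the recorded run durations count, which is exactly the information the start/stop metadata provides.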

> 
> 
> >
> > Sounds good so long as we understand there's no real difference between a
> > database and a file system - from the listener's point of view.  In other
> > words, all this should be possible whether you're using a database or just
> > files.  Granted, files may be slower and less efficient, but it should
> > still work either way.
> >
> > Also, this goes back to what I said earlier about saving information about
> > each test run so that listeners can appropriately combine results from
> > multiple test runs.  That's what you are describing here, if I understand
> > rightly.
> 
> Well, combining results from multiple tests is one thing, but more common is 
> simply finding a db way of doing this:
> tweak-test-parameters    launch test    save-log-to-file-1    clear-results   
> tweak-test-parameters-again   launch test    save-log-to-file-2    
> clear-results  ... and again and again.
> (At least that's how I use JMeter :) Whereas when logging to a db you cannot 
> create a database for every single test. Test_id's (or knowing the start/stop 
> time) solve that problem.

Yes, the db datasource automatically creates new space for the next test (I'm 
assuming).  There's no reason a file couldn't do the same.
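The single-store-with-test_id idea can be sketched as follows - every sample is tagged with the id of the run it belongs to, so one store replaces the save-to-file-1, save-to-file-2 cycle and `clear-results` is never needed. All names are illustrative, and a file-backed datasource could use the same tagging scheme by writing the test_id into each record:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of one result store holding many test runs, keyed by test_id.
public class ResultStoreSketch {

    private final Map<Integer, List<String>> samplesByTestId = new LinkedHashMap<>();
    private int nextTestId = 0;

    // "Creates new space for the next test": each launch gets a fresh id.
    int beginTest() {
        int id = ++nextTestId;
        samplesByTestId.put(id, new ArrayList<>());
        return id;
    }

    void record(int testId, String sample) {
        samplesByTestId.get(testId).add(sample);
    }

    // Load one run without touching the others - no clear-results step.
    List<String> resultsFor(int testId) {
        return new ArrayList<>(samplesByTestId.get(testId));
    }

    public static void main(String[] args) {
        ResultStoreSketch store = new ResultStoreSketch();
        int run1 = store.beginTest();
        store.record(run1, "GET / 200 120ms");
        int run2 = store.beginTest();   // tweak parameters, launch again
        store.record(run2, "GET / 200 95ms");
        System.out.println(store.resultsFor(run1).size());
    }
}
```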

-Mike

> 
> hmmm, I think I think about it again... :-)
> best regards and good night !
> Michal
> 
> 
> --
> To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
> For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
> 



--
Michael Stover
[EMAIL PROTECTED]
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777
