>
> Except the current implementation allows a user to set up multiple
> listeners that listen to specific requests.  The listeners work
> hierarchically too - if you add a listener to a specific controller, it
> will only log samples from requests under that controller.  If you went to
> a global file, you'd lose that capability - which I think is useful.

That would be nicely solved by making the sampler/controller configurable.


> I don't really understand why the logged files should be different from
> listener to listener. Surely XML is slow and bad and we all agree on that,
> and we'd all like it configurable as to what information is logged, but I
> strongly object to anything being put in those files that is calculated by
> a specific listener.  And that is what is being asked for, essentially.

I've agreed with that from the beginning, sorry if I haven't stated it 
clearly. The main problem is that a visualizer can't have reporting logic, 
which may sound strange, but without access to the raw data source a 
visualizer can't take advantage of DB aggregation. On the other hand, we 
could give the visualizers such access, but then they wouldn't be 
independent of the logging logic. I still have conceptual problems with that.
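To make the DB-aggregation point concrete, here is a minimal sketch (sqlite3 stands in for whatever JDBC store is used; all table and column names here are made up for illustration): a visualizer *without* access to the data source has to pull every raw sample and aggregate in its own code, while one *with* access can let the database do the aggregation.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sample (label TEXT, elapsed_ms INTEGER)")
db.executemany("INSERT INTO sample VALUES (?, ?)",
               [("home", 100), ("home", 300), ("search", 200)])

# Without data-source access: the visualizer fetches every raw row
# and computes the aggregate itself.
rows = db.execute(
    "SELECT elapsed_ms FROM sample WHERE label = 'home'").fetchall()
avg_in_visualizer = sum(r[0] for r in rows) / len(rows)

# With data-source access: the database computes the aggregate,
# and only one value crosses the wire.
avg_in_db = db.execute(
    "SELECT AVG(elapsed_ms) FROM sample WHERE label = 'home'").fetchone()[0]

print(avg_in_visualizer, avg_in_db)  # both 200.0
```

The results are the same; the difference is where the work happens, which is exactly the coupling question raised above.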

>
> Sounds great.  I would vote for just eliminating the XML format and going
> to CSV (for files).

Some people *love* XML, use XML tools fluently for evaluating reports, and 
are used to the format, so it should stay. Fast visualizers are not 
necessary for those people.

> > particular requests or groups of requests to exclude uninteresting
> > requests and to save disk space.
>
> Cool.  While we're there - why not a full config option for each request to
> indicate what should be saved and what should be tossed?  It could be a
> simple button that opens up a dialog screen to configure the report for
> that request or controller.  Or, it could be a new kind of test element -
> Log Config.

Yes! That's exactly what I meant; I'll post it today, I think (I'm checking 
it now).

>
> > Another feature can be "test results desktop" and "test results metadata"
> > - with that you can describe/view/erase your test results. I've
> > encountered this problem in JDBC logging, where I can't just log a test
> > to a named file - there has to be some key to distinguish tests (in JDBC
> > logging it's implemented as a test_id field). It will also help to keep
> > test results in order.
>
> I didn't understand this.

Well - imagine the following situation. You have an application to test, and 
you want to draw a plot of application response time (on the Y axis) vs. the 
number of users (on the X axis) to determine the scalability of the 
application. So you have to run several stress tests with an increasing 
thread count, and you want to log the results to a database. You probably 
want to create only *one* DB, with one table storing the test results. 
However, you also want to make SQL queries selecting only the chosen test 
(say, the n-user run), so you need some way to associate each sample result 
in the table with a given test. This is no big deal when saving to files - 
you just name the files appropriately: 1-user.jmx, 10-users.jmx, ..., 
100-users.jmx.
An extension of that idea is to add fields describing the test - who, when, 
subject of the test, description and so on - to provide some organization. 
Again, when using files it's simple - you could name a file something like 
test_done_2000_10_10_by_MKO_10_users_failed_because_of_application_errors.jmx 
but that's not standardized, not elegant, and impossible for DBs.
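A hedged sketch of that test_id / metadata idea (sqlite3 as a stand-in for the JDBC store; the schema and values are invented for illustration, not JMeter's actual tables): one table describes each test run, every sample row carries the run's test_id, and selecting a single run is then an ordinary WHERE clause.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Metadata: one row per test run (who, when, description).
    CREATE TABLE test_run (
        test_id     INTEGER PRIMARY KEY,
        run_by      TEXT,
        run_date    TEXT,
        description TEXT
    );
    -- Results: every sample is keyed to the run it belongs to.
    CREATE TABLE sample (
        test_id    INTEGER REFERENCES test_run(test_id),
        label      TEXT,
        elapsed_ms INTEGER,
        success    INTEGER
    );
""")

# Two stress-test runs in the *same* database.
db.execute("INSERT INTO test_run VALUES (1, 'MKO', '2000-10-10', '1 user')")
db.execute("INSERT INTO test_run VALUES (2, 'MKO', '2000-10-11', '10 users')")
db.executemany("INSERT INTO sample VALUES (?, ?, ?, ?)", [
    (1, "login", 120, 1),
    (1, "login", 140, 1),
    (2, "login", 480, 1),
    (2, "login", 520, 0),
])

# Select only the chosen test, e.g. the 10-user run (test_id = 2).
rows = db.execute(
    "SELECT AVG(elapsed_ms) FROM sample WHERE test_id = ?", (2,)
).fetchone()
print(rows[0])
```

This is what file naming gives you for free; in a DB the key has to be explicit, and the test_run table is where the who/when/description fields would live.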

What do you think about that?
best regards
Michal Kostrzewa


