Hi Peter,

Comments inline.

Kind regards,
Bronagh

Peter Lin wrote:

I would like to ask JMeter users whether anyone uses JMeter to run
automated tests. If you do, how do you structure your directories and files?
How would people imagine using a reporting tool? If you do use JMeter for
automated tests, can you take a minute to answer these questions?

1. how often do you run the automated tests?
We run our automated tests daily against the nightly build. A subset of these tests forms our integration test suite and is used in acceptance of an official release (fortnightly).

2. how do you structure the files?
Each test plan essentially represents a functional area. Each plan is driven by a data file, and each line in the data file represents an individual test case. The test plans are run using the following Ant task:
<target name="run-tests"> <!-- opening tag was missing; target name illustrative -->
    <jmeter jmeterhome="${home.jmeter}"
            resultlog="${resultlog}">
        <testplans dir="${test.dir}">
            <include name="**/*.jmx"/>
        </testplans>
    </jmeter>
    <xslt in="${resultlog}"
          out="${htmlreport}"
          style="${stylesheet}"/>
</target>

N.B. the stylesheet used is: jmeter-results-detail-report.xsl

Executing via this task means that the test directory structure is essentially lost. It would be great if the directory structure in which the test plans reside (see attached diagram) could be replicated and used to store the result files produced. Perhaps, in addition to the single combined results file, if the user specifies a results file as part of a test plan, the results for that individual plan could also be written there when it is executed automatically (currently this only happens when the test plan is executed via the JMeter GUI), i.e. <stringProp name="filename">${JMETER_HOME}/E2E/Results/Integration/MM7PassthruVaspOrig.jtl</stringProp>
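This is not a feature JMeter or the Ant task offers today; the sketch below just shows how per-plan results could mirror the test-plan directory tree. The directory names, the results root, and the jmeter command line are all illustrative assumptions on my part.

```python
# Sketch: map each .jmx plan to a mirrored .jtl result path, then run
# JMeter per plan so results keep the directory structure. All paths
# and the jmeter invocation are illustrative, not current behaviour.
import subprocess
from pathlib import Path

def result_path(plan, plans_root, results_root):
    """Map a .jmx path under plans_root to a mirrored .jtl path."""
    return results_root / plan.relative_to(plans_root).with_suffix(".jtl")

def run_all(plans_root, results_root):
    for plan in plans_root.rglob("*.jmx"):
        out = result_path(plan, plans_root, results_root)
        out.parent.mkdir(parents=True, exist_ok=True)
        # -n non-GUI mode, -t test plan, -l per-plan result log
        subprocess.run(["jmeter", "-n", "-t", str(plan), "-l", str(out)],
                       check=True)
```

With a layout like testplans/E2E/Integration/MM7.jmx, results would land at Results/E2E/Integration/MM7.jtl.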

Ideally, I would like to group a number of plans into a functional area that would represent a test suite. This grouping could then be identified by the XSLT so that the results could be displayed in suites rather than as a single block of tests.
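One way to get suite grouping without new JMeter support would be to treat each result file's parent directory as the suite name. A minimal sketch, assuming results are laid out one directory per functional area:

```python
# Sketch: group result-file paths by parent directory, treating the
# directory name as the suite name. The layout is an assumption.
from collections import defaultdict
from pathlib import Path

def group_by_suite(result_files):
    """Return {suite_name: [result file names]} keyed by parent dir."""
    suites = defaultdict(list)
    for f in result_files:
        p = Path(f)
        suites[p.parent.name].append(p.name)
    return dict(suites)
```

The XSLT (or whatever renders the report) could then emit one section per suite instead of one flat table.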

I think a big improvement in terms of reporting would be to provide historical results. If the same set of tests is executed nightly, it is very useful to be able to observe trends, particularly for performance tests.
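As a stopgap until the tooling supports history, each nightly run could append one summary row to a flat CSV file and the trend could be charted from that. A rough sketch; the file name and columns are my own invention:

```python
# Sketch: append one summary row per nightly run to a history CSV,
# writing a header only on first use. Columns are illustrative.
import csv
from pathlib import Path

def append_nightly_summary(history_csv, date, total, failures, avg_ms):
    """Append one night's summary so trends can be charted later."""
    is_new = not Path(history_csv).exists()
    with open(history_csv, "a", newline="") as fh:
        writer = csv.writer(fh)
        if is_new:
            writer.writerow(["date", "total", "failures", "avg_ms"])
        writer.writerow([date, total, failures, avg_ms])
```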
3. do you use a naming convention for the directories and files?
See attached diagram

4. how many test plans do you run?
Currently 10 test plans, with a view to increasing this to reflect 600 test cases in the next 4 weeks.

5. which listeners do you use to save the results?
Assertion results
Graph results
Results tree
Aggregate report

6. what kinds of charts and graphs do you want?
TPS (transactions per second) would be a more meaningful measurement for me.
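For reference, TPS can be derived from the sample timestamps in a result log: sample count divided by the elapsed span. A minimal sketch; JMeter's own throughput calculation may differ in detail:

```python
# Sketch: TPS over a list of sample start timestamps in milliseconds.
def tps(timestamps_ms):
    """Samples per second over the span of the given timestamps."""
    n = len(timestamps_ms)
    if n < 2:
        return float(n)
    span_s = (max(timestamps_ms) - min(timestamps_ms)) / 1000.0
    return n / span_s if span_s else float(n)
```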

thanks in advance.


peter lin



---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
