Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Jakarta-jmeter Wiki" 
for change notification.

The "LogAnalysis" page has been changed by aaronforster.
http://wiki.apache.org/jakarta-jmeter/LogAnalysis?action=diff&rev1=61&rev2=62

--------------------------------------------------

  == Generating Appropriate JMeter Logs ==
  The default data logged during JMeter testing is quite sparse - important 
request and response fields are stored, but the full response sent back by the 
server is not stored. This is done to avoid bogging down the test client during 
load testing.
  
+ Using the following JMeter parameters on the command line generates log entries with correspondingly higher detail for the test run.
+ 
  {{{
  -Jjmeter.save.saveservice.data_type=true -Jjmeter.save.saveservice.label=true 
 -Jjmeter.save.saveservice.response_code=true 
-Jjmeter.save.saveservice.response_data=true 
-Jjmeter.save.saveservice.response_message=true 
-Jjmeter.save.saveservice.successful=true 
-Jjmeter.save.saveservice.thread_name=true  -Jjmeter.save.saveservice.time=true
  }}}
  The JMeter timestamp_format parameter can be used to specify the log entry timestamp format. For example:
+ 
  {{{
  -Jjmeter.save.saveservice.timestamp_format="yyyy-MM-dd HH:mm:ss"
  }}}
  As of April 2006, JMeter 2.1.1 must also be run with the following flag to 
ensure logfile uniformity:
+ 
  {{{
  -Dfile_format.testlog=2.0
  }}}
+ This forces JMeter to use the old log format, which generates a <sampleResult> element for all log entries. The newer format instead generates <httpSample> elements for HTTP requests and <sampleResult> elements for JDBC sampler entries, which is inconvenient from an automated log-processing point of view. Refer to this [[http://mail-archives.apache.org/mod_mbox/jakarta-jmeter-user/200604.mbox/<[email protected]>|Jmeter-user thread]] for more details.
  
  The above settings can also be defined as global properties in the 
jmeter.properties configuration file, or in the user.properties file.
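For instance, the equivalent user.properties fragment (a sketch; the property names are exactly those listed above, written without the -J command-line prefix) might look like:

```
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.timestamp_format=yyyy-MM-dd HH:mm:ss
```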
  
  == The JMeter Log Format ==
  The format of JMeter log entries generated when using the flags in the 
previous section is defined in the table below. (Editing note: 4 spaces are 
used to denote one level of XML 'indentation'.)
+ ||'''XML Element''' ||'''Explanation''' ||
+ ||{{{/testResults               }}} ||Root element for XML test log ||
+ ||{{{    @version       }}} ||Version of test results. Currently (JMeter 
2.1.1), set to "1.1" irrespective of testlog format flag. ||
+ ||{{{    /sampleResult/...}}} ||All log data is stored under an array of 
'sampleResult' elements. ||
+ ||{{{        @timeStamp }}} ||Timestamp - See Java method 
System.currentTimeMillis() ||
+ ||{{{        @dataType  }}} ||Datatype - typically "text" ||
+ ||{{{        @threadName        }}} ||Name set for the thread group, with " <iteration>-<thread_id>" affixed at the end. E.g. "Integration Tests Thread Group 1-1" ||
+ ||{{{        @label             }}} ||Label set for the sampler. E.g. "Login to Custom URL using test account credentials" ||
+ ||{{{        @time              }}} ||Time in milliseconds for the request to complete. E.g. "2515" ||
+ ||{{{        @responseMessage   }}} ||Response message. E.g. "OK" ||
+ ||{{{        @responseCode      }}} ||Response code. E.g. "200" ||
+ ||{{{        @success           }}} ||String indicating status of the 
request. Can be "true" or "false" ||
+ ||{{{        /sampleResult/...  }}} ||HTTP Redirects are represented as an 
array of nested 'sampleResult' elements. Only 1 level of nesting occurs (i.e. 
the nested subresults do not nest further). ||
+ ||{{{        /property          }}} ||A string containing POST Data, Query 
Data and Cookie Data ||
+ ||{{{            @xml:space     }}} ||XML attribute indicating whether white space is significant. Set to "preserve" ||
+ ||{{{            @name          }}} ||Set to "samplerData" ||
+ ||{{{        /assertionResult/...       }}} ||Assertion information is stored in an array of assertionResult elements ||
+ ||{{{            @failureMessage        }}} ||The failure message when the 
assertion fails ||
+ ||{{{            @error         }}} ||Set to "true" or "false" to indicate 
error in assertion (stays "false" on assertion failure) ||
+ ||{{{            @failure               }}} ||Set to "true" or "false" to 
indicate whether assertion failed or not ||
+ ||{{{        /binary            }}} ||Data returned in response ||
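To make the table concrete, here is a minimal Python sketch (standard library only; the attribute values are invented for illustration) that extracts the fields described above from a 2.0-format log:

```python
# Hedged sketch: pull the attributes described in the table above out of a
# JMeter 2.0-format test log using only the Python standard library.
import xml.etree.ElementTree as ET

LOG = """<testResults version="1.1">
<sampleResult timeStamp="1146666270000" dataType="text"
    threadName="Integration Tests Thread Group 1-1" label="Login"
    time="2515" responseMessage="OK" responseCode="200" success="true"/>
</testResults>"""

root = ET.fromstring(LOG)
rows = []
for sr in root.findall("sampleResult"):
    rows.append({
        "timeStamp": int(sr.get("timeStamp")),   # ms since the epoch
        "label": sr.get("label"),                # sampler label
        "time": int(sr.get("time")),             # request time in ms
        "success": sr.get("success") == "true",  # string -> bool
    })

print(rows[0]["label"], rows[0]["time"])  # prints: Login 2515
```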
  
  
  
@@ -57, +57 @@

  == Perl Based Method for Log Extraction ==
  Here are some steps to import and process JMeter log data in Excel. Steps #3 
and 4 are painstaking and can be automated by Excel macros - however, this is 
beyond my abilities at the moment:
  
+ 1. First, generate a delimited file for import into Excel.
  
  I use Perl to parse the XML logs and generate a delimited file for import 
into Excel. The heart of the Perl script is the regular expression below. Each 
line in the JMeter logfile must be matched to this expression:
+ 
  {{{
+ # Regular expression extracts relevant fields from the log entry.
  /timeStamp="(\d+)".+?threadName="(.*?)".+?label="(.+?)" 
time="(\d+?)".+?success="(.+?)"/;
  
  # Data in the regex variables is accessed for use in building 
suitably-delimited line
+ my $timestamp = $1; # unix timestamp
  my $threadname = $2; # thread label
+ my $label = $3; # operation label
  my $time = $4; # operation time in milliseconds
  my $success = $5; # boolean success indicator for this operation
  }}}
+ The complexity of the regular expression is required to parse nested log entries that occur during HTTP redirects. A normal log entry line logs one HTTP 'sampler' operation. For example:
+ 
  {{{
  <sampleResult ... time="156" ... />
  }}}
+ However, when the operation had an HTTP redirect (HTTP status code 302), JMeter records the redirects as nested {{{<sampleResult>}}} elements, which still occur on the same line as the log entry: {{{<sampleResult ... time="2750"> <sampleResult ... time="2468" ... /> <sampleResult time="141" .../> <sampleResult ... time="141" .../> </sampleResult>}}}
  
  The outermost {{{<sampleResult>}}} element has time = 2750 milliseconds. This 
is the sum of times of the nested redirect elements. We are only interested in 
the data contained in the outermost element. Hence the regular expression uses 
the non-greedy pattern match operator ( {{{.*?}}} or {{{.+?}}}) to ignore the 
latter {{{<sampleResult>}}} elements.
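The effect of the non-greedy quantifiers can be seen in this small Python sketch (the attribute values are made up; Perl and Python regex semantics agree for these quantifiers):

```python
import re

# One log line containing a redirected request: the outer <sampleResult>
# (time 2750 ms) wraps a nested redirect element (attribute values invented).
line = ('<sampleResult timeStamp="1146666270000" threadName="TG 1-1" '
        'label="Login" time="2750" success="true"> '
        '<sampleResult timeStamp="1146666270100" threadName="TG 1-1" '
        'label="Login" time="2468" success="true"/> </sampleResult>')

# The non-greedy .+? / .*? quantifiers stop at the first (outermost)
# occurrence of each attribute, ignoring the nested elements.
m = re.search(r'timeStamp="(\d+)".+?threadName="(.*?)".+?label="(.+?)" '
              r'time="(\d+?)".+?success="(.+?)"', line)

print(m.group(4))  # prints: 2750 (the outermost time, not 2468)
```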
  
+ On the other hand, Excel 2002 can directly import JMeter XML format logs. However, it has problems with entries for HTTP 302 redirects. The nested log entry example above will generate three rows in Excel, with the total time repeated thrice, i.e.:
+ 
  {{{
+ Login   2750
+ Login   2750
+ Login   2750
  }}}
  2. Convert Timestamps to Excel Format
  
  Once the data is in Excel, I convert the timestamp column from JMeter's Java timestamp format (base year 1970) to the Excel format (base year 1900 or 1904, depending on the Excel version and underlying OS) using one of the following formulas. The formula is applied to the entire timestamp column.
  
  ''For GMT time on Windows''
+ 
  {{{
  =((x/1000)/86400)+(DATEVALUE("1-1-1970") - DATEVALUE("1-1-1900"))
  }}}
  ''For GMT time on Mac OS X''
+ 
  {{{
  =((x/1000)/86400)+(DATEVALUE("1-1-1970") - DATEVALUE("1-1-1904"))
  }}}
  ''For local time on Windows'' (replace t with your current offset from GMT)
+ 
  {{{
  =(((x/1000)-(t*3600))/86400)+(DATEVALUE("1-1-1970") - DATEVALUE("1-1-1900"))
  }}}
  ''For local time on Mac OS X'' (replace t with your current offset from GMT)
+ 
  {{{
  =(((x/1000)-(t*3600))/86400)+(DATEVALUE("1-1-1970") - DATEVALUE("1-1-1904"))
  }}}
  3. Now sort rows on the operation name (i.e. JMeter sampler name)
  
+ 4. Generate suitable reports and graphs manually.
  
+ For instance, one can generate a graph of page load times vs. time for different operations (e.g. login, add one line to the order, etc.). A different series in the graph is needed for each operation type used; this can be quite painstaking to add to a graph when there is a lot of data.
  
+ The graphs generated can be quite useful in visualizing the performance of websites. See the graph below for an example of one such dummy website. The JMeter test that generated the data imposed a steady "normal load" component, but also imposed two short-term, high-volume "surge loads" spread 10 minutes apart. (This was done using a separate JMeter thread group.)
  
  {{attachment:DummyWebsitePerformance.JPG}}
  
@@ -137, +134 @@

  <xsl:output method="html" indent="yes" encoding="US-ASCII" 
doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN" />
  
  <xsl:template match="/">
+   <html>
      <body>
         <xsl:apply-templates/>
      </body>
@@ -155, +152 @@

        <th>responseMessage</th>
        <th>responseCode</th>
        <th>success</th>
+     </tr>
      <xsl:apply-templates/>
    </table>
  </xsl:template>
  
  
+ <xsl:template match="sampleResult">
    <tr>
      <td><xsl:value-of select="@timeStamp"/></td>
      <td><xsl:value-of select="@dataType"/></td>
@@ -174, +171 @@

    </tr>
    <!--<xsl:apply-templates/>-->
  </xsl:template>
+ 
  </xsl:stylesheet>
  }}}
  It should be easy to apply this XSL stylesheet to your JMeter file.  I use a small Java program.  I think you can add an XSL stylesheet processing instruction at the start of your XML file and open it in IE or Firefox directly.  Someone can post on exactly how to do this.
  
  Once within Excel, pivot tables are the way to go.
  
+ 5. You can also use the following package, which mixes an Excel macro, Java transformation and XSL transformation of the .jtl files. It automatically generates Excel graphs using a macro. The package is available here: [[media:scripts_jmeter.zip]]
  
  = Summarizing Huge Datasets =
  == Shell Script to Aggregate Per Minute ==
+ {{attachment:perf-excel2.png}} <<BR>> As a software tester, sometimes you are 
called upon to performance test a web service (see 
[[UserManual/BuildWSTest|BuildWSTest]]) and present results in a nice chart to 
impress your manager. JMeter is commonly used to thrash the server and produce 
insane amounts of throughput data. If you're running 1000 tpm this can be 
rather a lot of data (180,000 transactions for a 3 hour test run). Even using 
the '''Simple Data Writer''', this is beyond the capability of JMeter's inbuilt 
graphics package and is too much to import to Excel.
  
+ My solution is to group throughput per minute and average transaction time 
for each minute.  Attached below is a Bash script for processing a JTL log file 
from JMeter. It reduces a 3-hour test run to 180 data points which is much 
easier to represent with a chart program such as Excel. <<BR>> The script uses 
a few neat awk tricks, such as: <<BR>>
+ 
+  * Rounding Java timestamps to nearest minute
+  * Collect timestamps grouped by minute
+  * Convert Java timestamp to YYYY-MM-dd etc.
+  * Print Throughput for a minute increment
+  * Print Average response time for a minute increment
+  * Do all of the above in an efficient single pass through awk (this was the 
hardest bit!)
+ 
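The same per-minute grouping can be sketched in Python (a rough, single-pass equivalent of the awk approach, not the attached script; the log lines are invented for illustration):

```python
# Rough Python equivalent of the awk tricks listed above: a single pass
# that rounds timestamps down to the minute, counts throughput and sums
# response times per minute.
import re
from collections import defaultdict
from datetime import datetime, timezone

lines = [
    '<sampleResult timeStamp="1146666270000" time="320" success="true"/>',
    '<sampleResult timeStamp="1146666284000" time="280" success="true"/>',
    '<sampleResult timeStamp="1146666331000" time="300" success="true"/>',
]

counts = defaultdict(int)  # requests per minute (throughput)
totals = defaultdict(int)  # summed response time per minute

for line in lines:
    m = re.search(r'timeStamp="(\d+)".*?time="(\d+)"', line)
    if not m:
        continue
    ts_ms, t = int(m.group(1)), int(m.group(2))
    minute = ts_ms // 60000 * 60  # round down to the minute, in epoch seconds
    counts[minute] += 1
    totals[minute] += t

for minute in sorted(counts):
    stamp = datetime.fromtimestamp(minute, tz=timezone.utc)
    print(stamp.strftime("%Y-%m-%d %H:%M"),
          counts[minute], totals[minute] // counts[minute])
```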
+ Script: [[attachment:jtlmin.sh.txt]] <<BR>> An example session, using `jtlmin.sh` to process a JTL file. The file produced, `queryBalance.jtl.OUT` (tab-delimited), can now be used to produce a throughput graph. Response times can also be included on the secondary axis, as in the diagram above.  These graphs were very good at showing when the integration layer was slow to respond and when throughput varied from the original JMeter plan.
+ 
  {{{
  $ jtlmin.sh
+ Usage: jtlmin.sh <filename>
  Summarizes JMeter JTL output into 1-minute blocks
  
  $ jtlmin.sh queryBalance.jtl
@@ -230, +220 @@

  }}}
  Script: [[attachment:jtlmin.sh.txt]] <<BR>>
  
+ NB, here's a script to convert JMeter's Java timestamps:  Script: 
[[attachment:utime2ymd.txt]] <<BR>>
  
  == Java Class to Quickly Summarize JMeter Results ==
+ I used the JMeter Ant task, http://www.programmerplanet.org/pages/projects/jmeter-ant-task.php, to produce very large output files.  I wrote up this Java class to summarize this information.  I hope you find this useful. - Andy G.
  
  Java Class: [[attachment:JMeterSummary.java]]
  
  Sample Output:
+ 
  {{{
  All Urls:
  cnt: 333, avg t: 535 ms, max t: 30755 ms, min t: 10 ms, result codes: 
{200=291, 302=42}, failures: 0, cnt by time: [0.0 s - 0.5 s = 312, 0.5 s - 1.0 
s = 16, 30.5 s - 31.0 s = 5]
@@ -258, +247 @@

  cnt: 21, avg t: 30 ms, max t: 60 ms, min t: 20 ms, result codes: {302=21}, 
failures: 0, cnt by time: [0.0 s - 0.5 s = 21]
  ...
  }}}
+ ||cnt ||number of requests ||
+ ||avg t ||average time per request ||
+ ||max t ||longest request ||
+ ||min t ||shortest request ||
+ ||result codes ||http result code and number of times received ||
+ ||failures ||number of failures ||
+ ||cnt by time ||a breakdown of how many requests returned in the specified time range ||
+ ||avg conn ||average time doing connection overhead (time ms - latency ms = 
conn ms) ||
+ ||max conn ||max connection overhead ||
+ ||min conn ||min connection overhead ||
+ ||elapsed seconds ||total elapsed time of test (last time stamp - first time 
stamp) ||
+ ||cnt per second ||throughput (number of requests / elapsed seconds) ||
+ 
  
  Based on this sort of input:
+ 
  {{{
  <?xml version="1.0" encoding="UTF-8"?>
  <testResults version="1.2">
@@ -280, +270 @@

  <httpSample t="581" lt="481" ts="1184177284718" s="true" 
lb="http://www.website.com/home.html"; rc="200" rm="OK" tn="Thread Group 1-1" 
dt="text"/>
  ...
  }}}
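A minimal Python sketch of the same summarization (the httpSample lines are invented; the real JMeterSummary.java computes more fields, such as connection overhead):

```python
# Hedged sketch of the summary line above: count, average/max/min time,
# result-code histogram and failures from httpSample attributes.
import re
from collections import Counter

samples = [
    '<httpSample t="581" lt="481" ts="1184177284718" s="true" rc="200"/>',
    '<httpSample t="60" lt="40" ts="1184177285718" s="true" rc="302"/>',
    '<httpSample t="10" lt="5" ts="1184177286718" s="false" rc="500"/>',
]

times, codes, failures = [], Counter(), 0
for sample in samples:
    # Leading spaces keep t= / s= from matching inside lt= / ts=.
    m = re.search(r' t="(\d+)".*? s="(\w+)".*? rc="(\d+)"', sample)
    times.append(int(m.group(1)))       # response time in ms
    codes[m.group(3)] += 1              # result-code histogram
    if m.group(2) != "true":            # s="false" counts as a failure
        failures += 1

print(f"cnt: {len(times)}, avg t: {sum(times) // len(times)} ms, "
      f"max t: {max(times)} ms, min t: {min(times)} ms, "
      f"result codes: {dict(codes)}, failures: {failures}")
```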
  == Postgres Script to Quickly Aggregate a CSV-format JTL ==
  If you use CSV format logs, this method of summarizing one or more CSV files is orders of magnitude faster than importing them into the Aggregate Report Listener.  It requires PostgreSQL.  Usage:
+ 
+  . jtlsummary.sh jtl_csv_result_files...
  
  [[attachment:jtlsummary.sh]]
  
  == Extracting JTL files to CSV with Python (JMeter 2.3.x) ==
+ This script does two things. First it filters the JTL file with a regular expression. Then it strips the matching entries and outputs a CSV file. This also includes conversion of the timestamp to a readable format.
  
  The script only works with JTL files from JMeter 2.3.x (JTL version 2.1). 
Please see 
http://jakarta.apache.org/jmeter/usermanual/listeners.html#xmlformat2.1 for 
details.
  
  Usage is:
+ 
+  . ''program.py <JTL input file> <CSV output file> "<regular expression>"''
  
+ {{{#!/usr/bin/python
  
  """
  Description : Split JTL file into a comma delimited CSV
@@ -367, +356 @@

  except:
      raise
  
+ print "Filtering on regular expression : " + reFilter
  cmpFilter = re.compile(reFilter)
  
  for line in f:
@@ -393, +382 @@

  f.close()
  o.close()
  }}}
  == Generating charts with Perl ==
+ (Christoph Meissner) When it comes to exploiting JMeter's logfiles generated by many clients over a long testing period, one might run into time-consuming efforts putting all the data together into charts. The present charts of JMeter relate response time to throughput. If you are working in larger enterprise environments, it might also be interesting to see the relationship to the number of users who caused the requests. This applies even more when your company must make sure that its applications can supply data to a certain number of employees all day. Also, it is very interesting to get a feeling for the range in which response times deviate as soon as your applications get stressed.
  
  To reduce effort I'd like to present one of my Perl scripts here. It parses any number of JMeter logs and aggregates the data into several different charts (examples follow below):
+ ||'''chart type''' ||'''comment''' ||
+ ||stacked chart abs ||cusps for aggregated response times per measure point ||
+ ||stacked chart rel ||cusps for aggregated response times per measure point expressed in percentages ||
+ ||entire user ||aggregated response times opposed to number of active threads 
||
+ ||entire throughput ||aggregated response times opposed to throughput ||
+ ||<measure point> user ||aggregated response times for this measure point 
opposed to number of active threads ||
+ ||<measure point> throughput ||aggregated response times for this measure point opposed to throughput ||
  
  
  Here are exemplary charts showing a test against an address validation tool 
stressed with 25 active threads (''a very small test'' ;) ):
@@ -417, +405 @@

  {{attachment:cm_ChartStacked.png}}
  
  ==== Stacked chart showing response times in % ====
  {{attachment:cm_ChartStackedPct.png}}
  
  ==== Chart opposing total response times to total threads ====
+ x-axis shows the timestamps of the test, left y-axis holds response times, 
right y-axis holds number of active threads (it is also possible to display a 1 
standard deviation line for the response times (see script below)).
  
  {{attachment:cm_entire_users.png}}
  
  ==== Chart opposing total response times to total throughput ====
+ x-axis shows the timestamps of the test, left y-axis holds response times, 
right y-axis holds throughput (it is also possible to display a 1 standard 
deviation line for the response times (see script below))
  
  {{attachment:cm_entire_throughput.png}}
  
  ==== Chart opposing response times of a single measure point to active 
threads ====
  {{attachment:cm_AdressService_users.png}}
  
  ==== Chart opposing response times of a single measure point to throughput 
====
  {{attachment:cm_AdressService_throughput.png}}
  
  === Configure jmeters testplan ===
+ To make JMeter generate appropriate log files, you will have to configure your test plan properly.
  
+ One task will be to name your requests (the pipettes) in a way that allows the script to accumulate response times, active threads and throughput together into measure points. You will also have to tell JMeter to log particular data.
  
  For both requirements look at the example screenshot, please:
  
  {{attachment:cm_jmeter.png}}
  
   1. insert an 'Aggregate Report' Listener to your testplan (top level).
+  1. enter a filename where to save the results
+  1. invoke 'Configure' and check the boxes as shown.
+  1. Change the names of your requests. For instance, in my exemplified test plan all requests for images were named 'GIF', and all requests for JavaScript files were named 'JS'. Special points of interest also get their appropriate name (e.g. 'Login', 'Address Search', ...).
+  If you don't want particular requests to be accumulated, name them starting with '''''garbage''''' (e.g. garbage_gif, garbage_js, ...). This step is necessary because the script collects data into labels of the same name. This way you make sure that you get a chart
   that will show the response times for all images or all 'Login' or all 
'Address Searches' (except those that start with '''''garbage''''')
  
  === The script ===
  The [[attachment:jmetergraph.pl]] requires Chart 2.4.1 to be installed 
(http://search.cpan.org/~chartgrp/Chart-2.4.1/).
  
+ Updated the script as [[attachment:jmetergraphhtml.pl]] to include Mhardy's 
change below and to create an index.html page to view the graphs. Leaving the 
above in case I screwed it up. (aaronforster)
+ 
  You can call the script in this way:
  
  {{{
@@ -480, +455 @@

  
  {{attachment:cm_entire_users_stddev.png}}
  
+ If you pass ''-range'' then a bar chart will be generated showing the average range of the response times.
  
  {{attachment:cm_entire_users_range.png}}
  
  (Not sure exactly where to put this, please forgive me, but this script is great with one minor bug. What a shame to have that ruin people's usage of it. Here's the patch that worked for me:
  
+ [mha...@tkdevvm(192.168.146.130) util]$ diff jmetergraph.pl ~/mike/Desktop/jmetergraph.pl
+ 313,314d312
+ < $glabels{'entire'} = \%entire;
+ <
  
  ...without that the "entire" .pngs don't contain data, Cheers, Mike)
  
  == JMeter Plugin for Hudson ==
+ The [[http://hudson-ci.org/|Hudson]] continuous integration server has a 
[[http://wiki.hudson-ci.org/display/HUDSON/Performance+Plugin|'Performance' 
plugin]] that can execute and report on JMeter and JUnit tests,  and generate 
charts on performance and robustness.
  

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
