Here's an example of a two-sample t-test; I did this in R using RStudio
since it's easy to do.  I made up the numbers.  (Note that t.test runs
Welch's two-sample test by default, as the output below shows; for truly
paired measurements you would add paired=TRUE.)  The whole thing can be
done in Excel, but I haven't done that for a few years (R is much easier).

x = c(1, -2,  3, 4, 5, 6)   # first set of measurements
y = c(5, -6,  9, 8, 4, 6)   # second set of measurements

Question: is there a statistically significant difference between measured
x and measured y?  I will use a 90% confidence interval.

Result:

Give the command: t.test(x, y, conf.level = 0.9)

The returned information:


        Welch Two Sample t-test

data:  x and y
t = -0.59894, df = 7.7117, p-value = 0.5664
alternative hypothesis: true difference in means is not equal to 0
90 percent confidence interval:
 -6.179799  3.179799
sample estimates:
mean of x mean of y
 2.833333  4.333333
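
If you want to use the result programmatically rather than read it off
the console, store the returned object; the numbers printed above are all
components of it.  A minimal sketch (the variable name res is mine):

res = t.test(x, y, conf.level = 0.9)
res$p.value    # 0.5664
res$conf.int   # -6.179799  3.179799
res$estimate   # 2.833333  4.333333 (the means of x and y)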

Note the interval's lower bound is -6.17... and the upper bound is 3.17...

This includes 0.  Consequently, I conclude that there is no statistically
significant difference between the two, even though the sample means are
different.  If the interval did not include 0, I could conclude that there
is a statistically significant difference.  The reason we can't detect a
difference here is the large variability in the data (that's why we need a
lot of data points and tightly controlled test environments).
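
To tie this back to the original question of comparing two JMeter runs,
here is a minimal sketch applying the same test to two result files.  It
assumes the default JMeter CSV output (a header row with an "elapsed"
column holding response times in milliseconds); the file names are made
up.

old_run = read.csv("run_old.csv")
new_run = read.csv("run_new.csv")

# Welch two-sample t-test on the response times of the two runs
res = t.test(old_run$elapsed, new_run$elapsed, conf.level = 0.9)
print(res)

# Same decision rule as above: an interval containing 0 means no
# statistically significant difference at this confidence level.
ci = res$conf.int
if (ci[1] <= 0 && ci[2] >= 0) {
  cat("No statistically significant difference\n")
} else {
  cat("Statistically significant difference detected\n")
}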

Hope this helps,

Bo



                                                                                
     
Bohdan L. Bodnar
Lead Performance Engineer
1-312-871-5163
E-mail: bbod...@us.ibm.com
222 South Riverside Plaza
Chicago, IL 60606
United States





From:   Marek Czernek <mczer...@redhat.com>
To:     user@jmeter.apache.org
Date:   08/31/2018 08:31 AM
Subject:        Re: Best way to compare two results of jmeter



I saw the Grafana integration, but to my mind, any solution that
involves an external DB seems unsuitable for my simple needs. I am testing
the Jenkins plugin, though I have been running into problems with
comparing two runs. I wonder:

 1. Do you need to fire the JMX testplan using the plugin to be able to
    compare results across builds with the performance plugin?
 2. Do you need any Jenkins plugin other than the performance plugin for
    cross-build comparison?

In the worst case, I'll implement a solution similar to what Bo
suggested, i.e., simply run calculations on top of the CSVs. The
database solutions seem great if you really need to work with the data;
for my purposes, I mainly want to see whether there's a performance
difference from the previous build, and I don't care that much about the
visual output.

Cheers,
--

Marek Czernek

JWS/JBCS Associate Quality Engineer, RHCA

Find me at www.halfastack.com


On 08/31/2018 03:18 PM, Alexander Podelko wrote:
> Just saw another solution in that area:
> https://dzone.com/articles/jmeter-elasticsearch-live-monitoring
>
>     On Thursday, August 30, 2018, 10:50:56 AM EDT, Alexander Podelko
>     <apode...@yahoo.com> wrote:
>
>   Hi Marek,
> I have been using the Jenkins Performance Plugin for that purpose for
> some time; I'd say that you get quite a lot for free straight out of
> the box. Pretty decent. Another thing is that I haven't found
> practically any documentation (although it is pretty straightforward
> for simple use, and there are a few posts on how to set it up), and it
> is still not clear to me what to do if I'd need something else from
> it....
>
> Regards,
> Alex
>
>
>     On Thursday, August 30, 2018, 10:11:22 AM EDT, Marek Czernek
>     <mczer...@redhat.com> wrote:
>
>   Hi there,
>
> is there any 'supported' way to compare the results of 2 jmeter runs? I
> googled around and found an old email from 2004 [1] basically saying
> that there is no recommended solution other than a custom-made analysis.
> Have there been any solutions to this problem?
>
> I can also see a Jenkins plugin [2], though I have no idea what state
> the plugin is in, and as such, how viable it is to run. Last but not
> least, there's some Grafana integration blog [3].  Does anyone have any
> other suggestions? How would you compare two results programmatically to
> see if there is degraded performance?
>
> [1]
> http://mail-archives.apache.org/mod_mbox/jmeter-user/200401.mbox/%3c99805b014a26d211bc3100a0c9b72cb8059e2...@exchange-va.noblestar.com%3E
>
> [2]
> https://wiki.jenkins.io/display/JENKINS/Performance+Plugin
>
> [3]
> http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
>
> Cheers,

