Hi,

there are a few points here.

1) Throughput is clearly described here:
   http://jmeter.apache.org/usermanual/glossary.html#Throughput


2) Throughput is part of the Summary Report and the Aggregate Report, which in turn
are part of the standard JMeter code.
So this question seems appropriate here.

3) Instead of using a "Simple Data Writer", you can use the "Write results to file" configuration of a listener. This way you can also re-read the data and visualize it after the test has run, which lets you do comparisons on the same data. Using a Simple Data Writer probably introduces some skew (a few msec, but the numbers become different).

4) Interesting point about Jenkins as a tool to automate reporting.
Is there any pointer/cookbook around?

Thank you

Sergio

On 30/11/2013 11:25, sebb wrote:
On 30 November 2013 10:06, Pierpaolo Bagnasco
<[email protected]> wrote:
Hi, thanks for the reply. I already corrected that formula, but it still
doesn't change anything.
I tried for example counting all samples in each 1000 milliseconds
interval, like:
first sample=1385731060500
last sample=1385731061394
difference=894 milliseconds
That's wrong as well; you need to subtract the first start time from
the last end time.
Or add the last elapsed time to the difference between the two start times.
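The interval calculation sebb describes can be sketched as follows (the sample start times and elapsed values are hypothetical, chosen only to illustrate the arithmetic; real values would come from JMeter.csv):

```python
# Throughput over an interval: divide the sample count by the span from the
# FIRST start time to the LAST end time (start + elapsed), not by the span
# between the first and last start times.
samples = [
    # (start_timestamp_ms, elapsed_ms) -- hypothetical values
    (1385731060500, 42),
    (1385731060650, 55),
    (1385731061394, 80),
]

first_start = min(ts for ts, _ in samples)
last_end = max(ts + elapsed for ts, elapsed in samples)

interval_ms = last_end - first_start                 # 974 ms here
throughput = len(samples) / (interval_ms / 1000.0)   # requests per second
print(round(throughput, 2))
```

Using only start times (894 ms in the example from the thread) ignores the time the last request was still in flight, which inflates the computed throughput.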

However, this is all academic, because the Statistical Aggregate
Report is not a standard JMeter listener.
Queries on it need to be sent to the maintainers of the plugin,
whoever that may be.

samples=277
So I tried with: (277/894)*1000=~309 requests/second. But the first
graphic, in the same period, shows a throughput of ~90.


2013/11/30 sebb <[email protected]>

On 29 November 2013 22:39, Pierpaolo Bagnasco
<[email protected]> wrote:
I'm using the JMeter client to test the throughput of a certain workload
(PHP+MySQL, 1 page) on a certain server. Basically I'm doing a "capacity
test" with an increasing number of threads over time.

I installed the "Statistical Aggregate Report" JMeter plugin and this was
the result (ignore the "Response time" line):
[image: Statistical Aggregate Report throughput graph]

At the same time I used the "Simple Data Writer" listener to write a log
file ("JMeter.csv"). Then I tried to "manually" calculate the throughput
for every second of the test.

Each line of "JMeter.csv" has this format:

timestamp       elapsedtime   responsecode   success   bytes
1385731020607   42            200            true      325
...             ...           ...            ...       ...

The timestamp refers to the time when the request is made by the
client, not when the request is served by the server. So I simply
did: *totaltime = timestamp + elapsedtime*.
That's wrong.

timestamp + elapsedtime = end time *not* total time.

The timestamp is the start time.
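Sebb's point can be shown with the sample line quoted above (timestamp 1385731020607, elapsed 42):

```python
# timestamp is when the request STARTED; adding the elapsed time gives the
# END time of that request, not a "total time".
timestamp_ms = 1385731020607   # request start (from the sample CSV line)
elapsed_ms = 42                # response time
end_time_ms = timestamp_ms + elapsed_ms
print(end_time_ms)  # 1385731020649
```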

In the next step I converted the *totaltime* to a date format, like:
*13:17:01*.

I have more than 14K samples and with Excel I was able to do this
quickly.
Then I counted how many samples there were for each second. Example:

totaltime    samples (requestsServed/second)
13:17:01     204
13:17:02     297
...          ...
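The per-second counting step described above can be sketched like this (the rows are hypothetical stand-ins for lines of JMeter.csv; the real file has ~14K of them):

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical (start_ms, elapsed_ms) rows standing in for JMeter.csv data.
rows = [(1385731021000, 42), (1385731021500, 30), (1385731022100, 25)]

# Bucket each sample by the second in which it ENDED (start + elapsed),
# i.e. by the "totaltime" column described above, then count per bucket.
per_second = Counter(
    datetime.fromtimestamp((ts + el) / 1000.0, tz=timezone.utc).strftime("%H:%M:%S")
    for ts, el in rows
)
for second, count in sorted(per_second.items()):
    print(second, count)
```

Note that counting completions per wall-clock second measures something different from the plugin's throughput formula (samples divided by the first-start-to-last-end interval), which is one reason the two graphs can disagree.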

When I tried to plot the results I obtained the following graphic:
[image: manually calculated per-second throughput graph]

As you can see, it is far different from the first graphic.

Given that the first graphic is correct, what is the mistake in my
formula/procedure for calculating the throughput?
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]





--

Ing. Sergio Boso




