Hi Robin,

We've all had situations where calculations were wrong, and I see where
you're going. But are you sure about total test time?

The average is a very weak statistical indicator, true, unless perhaps the
system is actually very stable, which I've rarely seen on test environments.
This is why I recommend to everybody I know to use the 90th percentile, or
better yet the 95th percentile (too bad it's not configurable, or you could
get this directly in JMeter). In this case, where you suspect something is
slightly affecting the results, it makes sense to add up the response times
of all the samples you are interested in and compare the before and after
totals. Sure - this removes the impact of statistical aberrations on your
comparisons. I recently ran into a use case where the expected difference
was theoretically so small (smaller than the standard deviation) that
comparing the sums made more sense. But the sum, just like the average, is
built from sample times, and sample times measure the performance of the
system under test.
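
For example, something like this rough sketch (everything here is invented
for illustration - it assumes CSV-format JTL result files with a header row
and an "elapsed" column, and it splits naively on commas, so it breaks if
your sampler labels contain commas) could do the before/after comparison
offline:

    // Sum, average and 95th percentile of elapsed times from two JTL files,
    // so that two runs can be compared side by side.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;

    public class CompareRuns {

        // Read the "elapsed" column (ms) from a CSV JTL and return it sorted.
        static long[] elapsedTimes(String jtlFile) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get(jtlFile));
            int col = Arrays.asList(lines.get(0).split(",")).indexOf("elapsed");
            return lines.stream().skip(1)
                    .mapToLong(line -> Long.parseLong(line.split(",")[col]))
                    .sorted().toArray();
        }

        // Nearest-rank percentile over an already-sorted array.
        static long percentile(long[] sorted, double pct) {
            int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
            return sorted[Math.max(0, idx)];
        }

        public static void main(String[] args) throws IOException {
            for (String file : new String[] {"before.jtl", "after.jtl"}) {
                long[] t = elapsedTimes(file);
                long sum = Arrays.stream(t).sum();
                System.out.printf("%s: n=%d sum=%d ms avg=%.1f ms p95=%d ms%n",
                        file, t.length, sum, (double) sum / t.length,
                        percentile(t, 95.0));
            }
        }
    }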

But the total runtime of the test? There are factors that don't depend on
the application that can affect total execution time. Normally, I would
exclude from a benchmark anything that is not strictly needed. What if you
use random timers (Gaussian, uniform) or timers that limit or shape the
throughput in the script configuration? They make a lot of sense to use and
keep in a test script, but they also affect total execution time.
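
To put rough numbers on that (invented for illustration): a thread that
executes 1,000 samples behind a random timer averaging 500 ms spends about
1,000 x 500 ms = 500 s just waiting, and that waiting ends up in the total
runtime whether or not the application got any faster.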

There are tools that monitor the application over time and can show detailed
response times per method. If you feel that the results are being averaged
incorrectly, you should compare the response times reported by JMeter with
the results from such tools.

Overall, I don't see the benefits, but then I don't really understand what
you've noticed. Did you start a thread when you first noticed that
behaviour? I don't remember it, but I would like to read it now.

Adrian

On Wed, Mar 21, 2012 at 3:50 PM, Robin D. Wilson <rwils...@gmail.com> wrote:

> I think it depends a lot on what you are testing. If you are trying to
> benchmark system performance, total test duration can be a good indicator
> (and a quick-glance check) of system performance. For example, my
> performance benchmarks are configured to run (without ramp up) between 10
> and 300 threads (depending on the test), in such a way as to guarantee that
> I am exercising the system at near capacity (for each benchmark). Because I
> am running the benchmarks for each release of our system, I have a history
> of the test performance.
>
> The total test duration is a good "overall" measure of the performance of
> any given benchmark. And it is what I used to figure out that JMeter wasn't
> properly reporting the 'average' sample times - until I changed my config
> to use the following settings:
>
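>        # e.g. in user.properties: use System.currentTimeMillis() instead of
>        # System.nanoTime() for sample timing, and disable the background
>        # thread that recalculates the nano-time offset: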
>        sampleresult.useNanoTime=false
>        sampleresult.nanoThreadSleep=0
>
> I was seeing the same 'average' times, but the total execution time for a
> thread group was increasing with each successive new release of code. This
> suggested that something was slowing things down in my code-base. After I
> made the above config change to JMeter (2.6) I could see that the average
> sample times were actually much higher than my benchmarks had been
> recording.
>
> Without being able to explicitly see the execution duration times (or
> using the average sample times to calculate the test duration), I would
> have missed the fact that my benchmarks were getting worse.
>
> --
> Robin D. Wilson
> Sr. Director of Web Development
> KingsIsle Entertainment, Inc.
> VOICE: 512-777-1861
> www.KingsIsle.com
>
> -----Original Message-----
> From: Adrian Speteanu [mailto:asp.ad...@gmail.com]
> Sent: Wednesday, March 21, 2012 6:03 AM
> To: JMeter Users List
> Subject: Re: Display total execution time for test plan
>
> Hi,
>
> I suspect you weren't interested in the start / end of the test. But
> usually this is how you get total test time :).
>
> It doesn't make sense to have a test that gradually starts 1000 users and
> stops when all of them have finished their planned sessions. It's not even
> useful to measure how long such a test took.
>
> Why: no live application works like this in production conditions. At the
> beginning and end of the test you have fewer than 1000 users logged in.
> What if the ramp-up of the 1000 threads affects the average results, or
> even the total execution time?
>
> Check out Sergio's reply. You simulate what users do - true, but at a macro
> level, and you design your test plan in such a manner as to respect your
> requirements: 1000 sessions logged in and a maximum of 10 hits/s. When you
> have such a test, then you check the statistics from the Aggregate Graph
> and Summary Report, and make some nice graphs of some of the cooler things
> that you monitor. Don't forget CPU, RAM and network usage on the server
> side. That's what you measure and compare before and after a change.
>
> And if a change affects a particular request, focus measurements and
> reporting on that specific request.
>
> It's good to know what one user does, but it's better to know what workload
> your app receives:
>  - 1000 logged in, unique and active sessions
>  - 80% make page views in section X
>  - 10% use the forum (or whatever)
>  - 1% upload files
>  - 2% download stuff during their session
>  ....
> etc - this is just an example...
>
> If you get this right for your particular application, then you need to
> measure the statistics of the response time: avg, median, 90% line. See how
> they evolve during the test (this is even better than looking at the values
> for the entire period) and so on. But all of this makes measuring total
> time irrelevant in 90% of tests or more.
>
> Adrian
>
> On Wed, Mar 21, 2012 at 12:06 AM, sergio <ser...@bosoconsulting.it> wrote:
>
> > Hi Adrian and Eric,
> >
> > maybe I'm missing some point, but to me the total duration of the test is
> > rarely important or predictable.
> >
> > If you need it as a baseline, you can use an aggregate result listener,
> > run some test (maybe with one or two users) and then multiply the number
> > of samples (divided, if appropriate, by the number of loops executed) by
> > the average execution time. That easily gives you the net time needed for
> > a single loop - net of time spent in timers.
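> > For example (numbers invented): 200 samples over 10 loops is 20 samples
> > per loop; at an average of 150 ms each, the net time for a single loop is
> > 20 x 150 ms = 3 s.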
> >
> > But when you start having 1000 users, you get a lot of parallelization,
> > though obviously not 100% (that would be ideal). Also, in some cases, you
> > have to add the ramp-up time.
> >
> > In my experience, we usually end up measuring the behaviour of a few key
> > transactions (e.g. submitting the order, or login/logout) under different
> > situations and loads. The relationship between average, median, 90th
> > percentile and max gives an idea of the way things go. Note that these
> > transactions are also the longest.
> >
> > A static page or an image takes a few msec to download, and most of the
> > time spent is due to network latency, which is not something we can
> > easily optimize.
> >
> > This is my point of view, feel free to share your thoughts.
> > best regards
> >
> > Sergio
> >
> > On 20/03/2012 17:55, Erik Pragt wrote:
> >
> >> Hi Adrian,
> >>
> >> Thanks for the super quick reply. I'm a bit surprised by your first
> >> remark though, so maybe I'm taking the wrong approach here.
> >>
> >> I'm currently developing an application which might have some
> >> performance issues. Our current target is around 1000 simultaneous
> >> logged in users, and around 10 concurrent 'clicks'. My current
> >> approach was to sort of simulate that behavior in JMeter, check how
> >> long it takes for the simulated users to finish their flows, make some
> >> adjustments, test again, and check if my simulated users are faster
> >> than before. Based on this, I need the total execution time, but
> >> apparently this is not the usual approach, else it would certainly
> >> have been in there somewhere.
> >>
> >> Could you recommend a better way to test my scenario? I'm not a
> >> performance rock star at all, so I'm very curious what an effective way
> >> of improving the application would be, using JMeter as the load
> >> generator.
> >>
> >> Kind regards,
> >>
> >> Erik Pragt
> >>
> >> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu <asp.ad...@gmail.com>
> >>  wrote:
> >>
> >>> Hi Erik,
> >>>
> >>> A very interesting idea.
> >>>
> >>> You can find the start / stop time in JMeter's log. When running from a
> >>> console in non-GUI mode, you also get some more statistics than in the
> >>> GUI (such as how long the test ran). You can also schedule a test to
> >>> run for a certain amount of time, or to start / stop at certain hours
> >>> (so you don't have to worry about this stuff).
> >>>
> >>> If, however, you are interested in response times - the sum of all
> >>> requests - then things get more complicated.
> >>>
> >>> Adrian
> >>>
> >>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt <erik.pr...@jworks.nl>
> >>>  wrote:
> >>>
> >>>> Hi all,
> >>>>
> >>>> I've created a test plan to put some load on a flow of pages we have.
> >>>> I'm quite new to JMeter, and I have a small question on how to get the
> >>>> information I'm looking for. I've got a working test plan - I can see
> >>>> the samples, the throughput, etc. - but I can't find anywhere how long
> >>>> it took to execute this test plan, or a single loop of it when I
> >>>> execute it multiple times.
> >>>>
> >>>> Can someone give me a small heads-up on how I can record and view this
> >>>> time?
> >>>>
> >>>> Kind regards,
> >>>>
> >>>> Erik Pragt
> >>>>
> >
> > --
> >
> > Ing. Sergio Boso
> >