I did raise a thread on this list when I noticed the behavior. I even created a 
bug for it (it was summarily closed when they couldn't reproduce the problem). 
I eventually narrowed the problem down to the config settings I showed. (I 
can't remember for sure, but I believe you responded to that thread... I know 
sebb did.)

NOTE: this problem is apparent _only_ when running JMeter on WinXP. I did not 
see the same issue on Win7.

All that being said, I completely disagree that other factors are affecting my 
test times - that assumes a complex test, and that I haven't accounted for 
those factors in my script. My benchmarks are specifically configured for a 
very limited set of test variables, and they are designed to test the same 
thing each time. My test environment is configured so that I can keep other 
variables from influencing my tests. I can virtually guarantee that if I run 
the same test 10 times (or 100 times), the test duration will vary by less than 
1% on each iteration (and less than 1% between any given test runs). If it does 
vary by more than that, I start looking for problems in my test or my code.

As an example, I have tests that test only "login" on my web system. I get the 
home page, POST a login, and get the home page again (after the user has logged 
in). My site is a very high volume web site (millions of pages per day). I need 
to know if I've introduced any delays in the login process - because that will 
adversely affect the end-user experience for my customers. On multiple
occasions I've identified DB queries that had been improperly indexed because 
of this test. Likewise I have specific test cases for 'registrations', 'forgot 
password', 'forums reply' and many other use cases - each one very limited in 
scope, and each one very specific in its test configuration. Each one exercises 
a very specific portion of the system - and each one tells me if my developers 
have screwed up something in the code they've delivered for the 'next' release.

I run each of these test cases for each new version of our system. I get a 
benchmark of the performance of each version. The overall test duration is a 
good 'aggregate' measure of that benchmark. I can guarantee if the test 
duration increases significantly, something is wrong. (And if it goes down 
significantly, something is potentially wrong with the test, or we've done a 
really good job with the code for that release.)
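
To make that "aggregate" check concrete, here is a minimal sketch (the release 
names and durations below are made-up illustrative numbers, not my actual 
benchmark data) of flagging runs whose total duration drifts beyond the roughly 
1% variance I normally see:

```python
def duration_regressions(durations, threshold=0.01):
    """Compare each run's total duration against the previous run and
    return the runs whose relative change exceeds `threshold` (1% here,
    matching the run-to-run variance bound described above)."""
    flagged = []
    for (prev_name, prev), (name, cur) in zip(durations, durations[1:]):
        change = (cur - prev) / prev
        if abs(change) > threshold:
            flagged.append((name, change))
    return flagged

# Hypothetical benchmark history: total test duration in seconds per release.
runs = [("v1.0", 600.0), ("v1.1", 602.0), ("v1.2", 655.0)]
print(duration_regressions(runs))  # flags v1.2 (~8.8% slower) - worth a look
```

v1.1 is within the 1% noise band; v1.2 is not, so that release gets investigated.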

I'll agree, significant changes in the test duration do not tell me explicitly 
what is wrong - but they do flag that I need to look deeper. So it is a good 
bellwether for checking my work.

Please accept that there are ways to test that you might not need - but that 
are still useful to others.

--
Robin D. Wilson
Sr. Director of Web Development
KingsIsle Entertainment, Inc.
VOICE: 512-777-1861
www.KingsIsle.com


-----Original Message-----
From: Adrian Speteanu [mailto:[email protected]] 
Sent: Wednesday, March 21, 2012 9:46 AM
To: JMeter Users List
Subject: Re: Display total execution time for test plan

Hi Robin,

We've all had situations where calculations were wrong, and I see where
you're going. But are you sure about total test time?

The average is a very weak statistical indicator, true, unless the system is
actually very stable, which I've rarely seen on test environments. This is why
I recommend to everybody I know to use the 90th percentile, or better yet the
95th percentile (too bad it's not configurable so you could get this directly
in JMeter). In this case, where you suspect something is slightly affecting
the results, it makes sense to add up the response times of all the samples
you are interested in and compare the before and after results. Sure - this
removes the impact of statistical aberrations on your comparisons. I recently
ran into a use case where the expected difference was theoretically so small
(smaller than the standard deviation) that comparing the sums made more sense.
But the sum, just like the average, reflects sample times, which measure
system-under-test performance.
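
To illustrate (with invented response times, and a nearest-rank percentile
rather than JMeter's exact computation), here is roughly what I mean about the
average versus the percentiles and the sum:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that at
    least p% of all samples are <= it (one common definition; JMeter's
    90% line is likewise derived from ranked samples)."""
    ranked = sorted(samples)
    k = math.ceil(p * len(ranked) / 100)
    return ranked[max(k - 1, 0)]

# Illustrative response times in ms: 19 healthy samples plus one aberration.
times = [95, 96, 97, 98, 98, 99, 99, 100, 100, 100,
         101, 101, 102, 102, 103, 104, 105, 106, 110, 5000]

print(sum(times) / len(times))  # 345.8 - the average is dragged up by one outlier
print(percentile(times, 90))    # 106   - barely moved by the aberration
print(percentile(times, 95))    # 110
print(sum(times))               # 6916  - the sum, for before/after comparisons
```

One aberrant sample triples the average, while the 90th and 95th percentiles
stay honest about what most users experienced.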

But the total runtime of the test? There are factors that don't depend on
the application that might affect total execution time. Normally, I would
like to exclude anything that is not strictly needed from a benchmark. What if
you use random timers (gaussian, uniform) or timers to limit or shape the
throughput in the script configuration? They make a lot of sense to use and
keep in a test script, and they would affect the total execution time.
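
As a toy illustration (all numbers invented) of why timers decouple the total
runtime from the sample times:

```python
# What JMeter records per sampler vs. what the wall clock sees when the
# plan also contains think-time timers between samplers.
sample_times_ms = [120, 130, 125]     # system-under-test response times
timer_delays_ms = [1000, 1000, 1000]  # e.g. uniform/gaussian think time

sum_of_samples = sum(sample_times_ms)                  # 375 ms of real work
total_runtime = sum_of_samples + sum(timer_delays_ms)  # 3375 ms on the clock

# A code change that doubled every response time would move total runtime
# only from 3375 to 3750 ms (~11%), while the sum of samples doubles outright.
print(sum_of_samples, total_runtime)
```

With heavy pacing, the timers dominate the wall clock, so a regression in the
samples themselves is diluted in the total runtime.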

There are tools that monitor the application over time and can show
detailed response times per method. If you feel that the results are
averaged incorrectly, then you should compare the response times in JMeter
with the results from such tools.

Overall, I don't see the benefits, but I don't really understand what
you've noticed. Did you raise a thread when you noticed that behaviour? I
don't remember it, but I would like to read it now.

Adrian

On Wed, Mar 21, 2012 at 3:50 PM, Robin D. Wilson <[email protected]> wrote:

> I think it depends a lot on what you are testing. If you are trying to
> benchmark system performance, total test duration can be a good indicator
> (and a quick-glance check) of system performance. For example, my
> performance benchmarks are configured to run (without ramp up) between 10
> and 300 threads (depending on the test), in such a way as to guarantee that
> I am exercising the system at near capacity (for each benchmark). Because I
> am running the benchmarks for each release of our system, I have a history
> of the test performance.
>
> The total test duration is a good "overall" measure of the performance of
> any given benchmark. And it is what I used to figure out that JMeter wasn't
> properly reporting the 'average' sample times - until I changed my config
> to use the following settings:
>
>        sampleresult.useNanoTime=false
>        sampleresult.nanoThreadSleep=0
>
> I was seeing the same 'average' times, but the total execution time for a
> thread group was increasing with each successive new release of code. This
> suggested that something was slowing things down in my code-base. After I
> made the above config change to JMeter (2.6) I could see that the average
> sample times were actually much higher than my benchmarks had been
> recording.


> Without being able to explicitly see the execution duration times (or
> using the average sample times to calculate the test duration), I would
> have missed the fact that my benchmarks were getting worse.
>
> --
> Robin D. Wilson
> Sr. Director of Web Development
> KingsIsle Entertainment, Inc.
> VOICE: 512-777-1861
> www.KingsIsle.com
>
> -----Original Message-----
> From: Adrian Speteanu [mailto:[email protected]]
> Sent: Wednesday, March 21, 2012 6:03 AM
> To: JMeter Users List
> Subject: Re: Display total execution time for test plan
>
> Hi,
>
> I suspect you weren't interested in the start/end of the test. But usually
> this is how you get the total test time :).
>
> It doesn't make sense to have a test that gradually starts 1000 users and
> stops when they have all finished their planned sessions. It's not even
> useful to measure how long such a test took.
>
> Why: no live application works like this in production conditions. At the
> beginning and end of the test you have fewer than 1000 users logged in. What
> if the ramp-up of the 1000 threads affects the average results, or even the
> total execution time?
>
> Check out Sergio's reply. You simulate what users do - true, but at a macro
> level, and you design your test plan in such a manner as to respect your
> requirements: 1000 sessions logged in and a maximum of 10 hits/s. When you
> have such a test, then you check out the statistics from Aggregate Graph and
> Summary Report + make some nice graphs with some of the cooler things that
> you monitor. Don't forget CPU, RAM, and network usage on the server side.
> That's what you measure and compare before and after a change.
>
> And if a change affects a particular request, focus measurements and
> reporting on that specific request.
>
> It's good to know what one user does, but it's better to know what workload
> your app receives:
>  - 1000 logged in, unique and active sessions
>  - 80% make page views in section X
>  - 10% use the forum (or whatever)
>  - 1% upload files
>  - 2% download stuff during their session
>  ....
> etc - this is just an example...
>
> If you get this right for your particular application, then you need to
> measure the statistics of the response time: avg, median, 90th line. See
> how they evolve during the test (this is even better than looking at the
> values for the entire period) and so on. But all this makes measuring the
> total time irrelevant in 90% of tests or more.
>
> Adrian
>
> On Wed, Mar 21, 2012 at 12:06 AM, sergio <[email protected]> wrote:
>
> > Hi Adrian and Eric,
> >
> > maybe I'm missing some point, but to me the total duration of the test is
> > rarely important or predictable.
> >
> > If you need it as a baseline, you can use an aggregate result listener,
> > run some tests (maybe with one or two users) and then multiply the number
> > of samples (possibly divided by the number of loops executed) by the
> > average execution time.
> > That easily gives you the net time needed for a single loop,
> > excluding time spent in timers.
> >
> > But when you start having 1000 users, you have a lot of parallelization,
> > but obviously not 100% (which would be ideal).
> > Also, in some cases, you have to add the ramp-up time.
> >
> > In my experience, we usually end up measuring the behaviour of a few key
> > transactions (e.g. submitting the order, or login/logout) under different
> > situations and loads.
> > The relationship between the average, median, 90th percentile and max
> > gives an idea of how things are going.
> > Note that these transactions are also the longest.
> >
> > A static page or an image takes a few msec to download, and most of the
> > time spent is due to network latency,
> > which is not something we can easily optimize.
> >
> > This is my point of view, feel free to share your thoughts.
> > best regards
> >
> > Sergio
> >
> > On 20/03/2012 17:55, Erik Pragt wrote:
> >
> >  Hi Adrian,
> >>
> >> Thanks for the super quick reply. I'm a bit surprised by your first
> >> remark though, so maybe I'm taking the wrong approach here.
> >>
> >> I'm currently developing an application which might have some
> >> performance issues. Our current target is around 1000 simultaneous
> >> logged in users, and around 10 concurrent 'clicks'. My current
> >> approach was to sort of simulate that behavior in JMeter, check how
> >> long it takes for the simulated users to finish their flows, make some
> >> adjustments, test again, and check if my simulated users are faster
> >> than before. Based on this, I need the total execution time, but
> >> apparently this is not the usual approach, else it would certainly
> >> have been in there somewhere.
> >>
> >> Could you recommend a better way to test my scenario? I'm not a
> >> performance rock star at all, so I'm very curious what would be an
> >> effective way of improving the application, using JMeter as the load
> >> generator.
> >>
> >> Kind regards,
> >>
> >> Erik Pragt
> >>
> >> On Tue, Mar 20, 2012 at 5:44 PM, Adrian Speteanu<[email protected]>
> >>  wrote:
> >>
> >>> Hi Erik,
> >>>
> >>> A very interesting idea.
> >>>
> >>> You can find the start/stop time in JMeter's log. When running from a
> >>> console in non-GUI mode, you also get some more statistics than in the
> >>> GUI (e.g. how long the test ran). You can also schedule a test to run
> >>> for a certain amount of time, or to start/stop at certain hours (so you
> >>> don't have to worry about this stuff).
> >>>
> >>> If, however, you are interested in response times - the sum of all
> >>> requests - then things get more complicated.
> >>>
> >>> Adrian
> >>>
> >>> On Tue, Mar 20, 2012 at 6:21 PM, Erik Pragt<[email protected]>
> >>>  wrote:
> >>>
> >>>  Hi all,
> >>>>
> >>>> I've created a test plan to put some load on a flow of pages we have.
> >>>> I'm quite new to JMeter, and I have a small question on how to get the
> >>>> information I'm looking for. I've got a working test plan, I can see
> >>>> the samples, the throughput, etc, but I can't find anywhere what the
> >>>> time was to execute this testplan, or a single loop of this testplan
> >>>> when I execute it multiple times.
> >>>>
> >>>> Can someone give me a small heads up how I can record and view this
> >>>> time?
> >>>>
> >>>> Kind regards,
> >>>>
> >>>> Erik Pragt
> >>>>
> >>>> ---------------------------------------------------------------------
> >>>> To unsubscribe, e-mail: [email protected]
> >>>> For additional commands, e-mail: [email protected]
> >>>>
> >>>>
> >
> > --
> >
> > Ing. Sergio Boso
> >
> >
>
>

