Hi,
I can't choose between Method 1 and Method 2. IMO, they are not mutually
exclusive; they should be used in combination:
What I do:
-1- get a clear breakdown of real-life traffic (or estimated traffic):
how many requests of each type, network traffic fluctuations, request
frequency and so on...
-2- get a clear breakdown of the number of users
-3- create a test script that reaches the targets from #1 at the scale of
the test setup (which is not always as big as the production setup)
-4- fine-tune the script until the number of active/inactive user
sessions (server side) matches the production target...
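For step 4, the sanity check behind Method 1 from your email is just Little's Law: average concurrency = arrival rate x average session length. A minimal sketch (the numbers are made-up examples, not from this thread):

```python
# Little's Law sketch: how many concurrent sessions (VUs) a given
# visit rate and visit length imply. Example numbers are invented.

def concurrent_users(visits_per_hour: float, visit_minutes: float) -> float:
    """Average concurrent sessions = arrival rate * average session length."""
    visits_per_minute = visits_per_hour / 60.0
    return visits_per_minute * visit_minutes

# e.g. 1200 visits/hour with a 3-minute average visit
print(concurrent_users(1200, 3))  # 60.0
```

If the server-side session count you observe during the test is far from this figure, the script's pacing (timers) is off.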
--------------------------
Reasoning:
Yeah, it's nice to know user session length and actual behaviour. But in
practice, due to a desire to protect your users' privacy or due to a lack
of data, you only have a rough idea of what you need, not a real grasp of
the numbers. Moreover, you often do have the big numbers, while the
detailed use-cases come from the marketing/sales/client side - and those I
strongly recommend using with caution: consult them, but don't trust them.
In the end, the impact on the server side will be the same if you do your
homework correctly... IMO, this method covers edge cases too: the average
user session might not include that request that makes up only 1% of total
traffic, but that request might still be good to have in the load test
plan.
When you lack good statistics, or when you have a new application with no
data from an existing live environment, simply push the system to the
limit, gradually, until performance becomes unacceptable to you and/or
end-users.
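That gradual push can be automated as a stepped ramp. A sketch, with assumptions: the step sizes and the 2000 ms budget are placeholders, and `run_step` stands in for a real load-generator run (JMeter or whatever you use):

```python
# Hypothetical stepped ramp: raise the user count until a measured
# latency budget is exceeded. run_step is a placeholder for a real
# load-test run that returns observed latency in milliseconds.

def find_capacity(run_step, start_users=10, step=10, max_users=500,
                  budget_ms=2000):
    """Return the highest user count whose latency stayed within budget."""
    last_ok = 0
    users = start_users
    while users <= max_users:
        latency_ms = run_step(users)  # run the test at this load level
        if latency_ms > budget_ms:
            break  # budget blown: stop ramping
        last_ok = users
        users += step
    return last_ok

# Toy stand-in for a real run: latency grows linearly with load.
simulated = lambda users: 500 + users * 10
print(find_capacity(simulated))  # 150
```

The "unacceptable" threshold is whatever you and your end-users agree on; the code only makes the stop condition explicit.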
----------------------------
Other:
Last but not least, you should NOT focus solely on simulating VUs. I still
see this trend of mistrusting synthetic benchmarks, and it's ridiculous.
Yes, some tests do not reflect how the end-user will perceive performance,
BUT those kinds of tests: 1) give the dev team a better understanding of
what the performance problem is - which is obviously needed in order to
FIX IT; 2) can be used to track performance fluctuations of specific
functionalities over time, across multiple app versions; 3) make it easier
for you, the tester, to understand the application's architecture and
performance particularities, which are essential for creating a realistic
test that simulates VUs... so start with these, and end with what you had
in mind before you sent the email.
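Point 2 can be as simple as timing one functionality in isolation and recording the number per build. A minimal sketch, where the lambda is a placeholder for the functionality under test:

```python
import time
import statistics

def benchmark(fn, repeats=5):
    """Run fn several times in isolation; return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Placeholder workload standing in for a specific app functionality.
median_s = benchmark(lambda: sum(range(100_000)))
print(median_s >= 0)  # True
```

Store the result per app version and the trend over releases does point 2 for you.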
Cheers,
Adrian S
On Fri, Nov 2, 2012 at 5:15 PM, Philippe Bossu <[email protected]> wrote:
> Hello,
>
> I have to load test a website that has a known number of visits per day
> and a known visit duration.
>
> What is the formula to compute the number of Virtual Users (Threads) that I
> need to put in my tests?
>
> I read that this number can be computed using two different pieces of information:
>
> - Method1 => Peak visit rate (visits/hour) and Average visit length
> (minutes/visit), it would then be => visitRate/(60/visitLength)
>
>
> - Method 2 => Peak page rate (pages/hour), Testcase size (number of
> pages seen by User) and Testcase duration (in minutes), it would then be
> => (Peak page rate*testcaseDuration)/(60*testcaseSize)
>
> My question is with Method 1: must my test iteration last the "average
> visit length" using Timers, and what else is needed to make it last that long?
>
>
> My question with Method 2: are we talking about my Test Plan duration?
>
>
> Which method is the best one ?
>
>
> Thank you
>
> Regards
>