On 30/07/2009, S.D. <[email protected]> wrote:
> Hi.
>
>  I am having trouble understanding my sample result times.  It's very
>  possible I am just structuring my tests incorrectly.
>
>  I have a test with a number of http samplers.  The first few are just
>  authenticate and redirects and they go very quickly.  One test reads a
>  back end data source and should take about 2.5 seconds to return.
>
>  The sample times seem to increase toward the end of the test.  That is,
>  the initial samples take an expected amount of time to complete, while
>  the rest take progressively longer.
>
>  Setting timers does not seem to have any effect.  The number of threads
>  also does not improve.

What does that mean?
Do you mean that increasing the number of threads does not reduce the
individual sample times? If so, that's not surprising. Increasing the
number of threads may increase throughput - if the server can handle
it - but individual samples are likely to take a bit longer.
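As a rough sanity check, Little's Law ties the three quantities together:
with N threads and no think time, throughput is roughly N divided by the
average response time. A quick sketch (illustrative numbers only, not
measurements from your test):

```python
# Little's Law: throughput ~= concurrency / average response time
# (assumes no think time between samples; numbers are illustrative)

def expected_throughput(threads, avg_response_s):
    """Requests/second a closed-loop test can drive at a given response time."""
    return threads / avg_response_s

# 10 threads against a 2.5 s back-end call:
print(expected_throughput(10, 2.5))   # 4.0 requests/second

# Doubling the threads only doubles throughput if response time stays flat.
# If the server slows to 5 s under the extra load, you gain nothing:
print(expected_throughput(20, 5.0))   # 4.0 requests/second
```

So adding threads is a way to raise load on the server, not a way to make
any individual sample faster.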

>  I have a feeling I am just setting things up
>  wrong, but I'm not sure what.   Is my sample just sitting there waiting
>  for a long time while contending for a thread?

Unlikely, unless you are using lots of threads. Once a sample starts,
there's generally very little interaction between threads (other than
competition for CPU etc.).

> It's a servlet app, so
>  I'm assuming everything should be thread-safe.

Thread-safe does not imply performant.
If the servlet is badly written, it could get slower and slower as
more threads are used.
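A hypothetical illustration of how "thread-safe" can still mean slow: if
the servlet guards its data access with one coarse lock, requests are
serialized, and later arrivals queue up behind earlier ones - which would
produce exactly the pattern of progressively longer sample times you
describe. A minimal sketch (toy timings, not your servlet):

```python
# Hypothetical sketch: a coarse lock makes thread-safe code serialize.
# With one global lock, 10 concurrent "requests" finish one after
# another, so the last caller waits roughly 10x the single-request time.
import threading
import time

lock = threading.Lock()
elapsed = []

def handle_request():
    start = time.monotonic()
    with lock:               # "thread-safe", but fully serialized
        time.sleep(0.05)     # stand-in for the back-end data access
    elapsed.append(time.monotonic() - start)

threads = [threading.Thread(target=handle_request) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each request does the same work, yet the sorted times climb steadily -
# the later ones spent most of their time queued on the lock.
print(sorted(elapsed))
```

If the real data source behaves like that lock (a single pooled
connection, a synchronized cache, a table lock in the database), you
would see the same staircase in JMeter.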

>  I realize everyone's scenarios are different, but I feel like this is
>  something basic I am missing.  I would be very happy with any simple
>  explanations regarding latency/load vs elapsed times.  Any examples as
>  to how you have structured similar tests would also be extremely
>  helpful.
>
>  This is the result of the data access sampler.  The Thread Group is set
>  to 10 users with 10 seconds of ramp-up time.   The Load time and the
>  Latency are either identical or 1 or 2 ms variation.

That's normal for responses that aren't huge. For very large
responses, one would expect the load time to be noticeably larger than
the latency, as presumably the response will require many packets.
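For reference, JMeter's latency is the time to the first byte of the
response and the load (elapsed) time is the time to the last byte. A
minimal sketch of the distinction, using a local toy server that drips
its response out slowly so the two values differ (not your app):

```python
# Sketch: latency = time to first byte, load time = time to last byte.
# A tiny local server sends the start of the response immediately and
# the rest after a delay, so latency and load time visibly diverge.
import socket
import threading
import time

def slow_server(srv):
    conn, _ = srv.accept()
    conn.recv(1024)                             # read the "request"
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nfirst")
    time.sleep(0.2)                             # rest of the body is slow
    conn.sendall(b"-rest-of-body")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=slow_server, args=(srv,), daemon=True).start()

start = time.monotonic()
c = socket.create_connection(srv.getsockname())
c.sendall(b"GET / HTTP/1.0\r\n\r\n")
c.recv(1)                                       # first byte arrives
latency = time.monotonic() - start
while c.recv(4096):                             # drain to the last byte
    pass
load_time = time.monotonic() - start
c.close()
srv.close()

print(f"latency ~{latency*1000:.0f} ms, load time ~{load_time*1000:.0f} ms")
```

With a small response the two timestamps all but coincide, which is why
your Load time and Latency differ by only a millisecond or two.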

>  #   Start time    Thread  Sample label                          Time (ms)  Status   Bytes
>  1   11:30:35.154  1-1     A017D4B-122CCED7210-1AA7FE9-6AA0D37F       2714  Success   8164
>  2   11:30:36.136  1-2     A017D4B-122CCED7602-143153C-187ADA67       3769  Success   8176
>  3   11:30:39.134  1-5     A017D4B-122CCED81BC-8987B7645C76D8         3584  Success   8162
>  4   11:30:38.131  1-4     A017D4B-122CCED7DD2-BD39DC45774B9          6301  Success   8154
>  5   11:30:37.131  1-3     A017D4B-122CCED79E9-31300D3896E19E        10276  Success   8181
>  6   11:30:44.148  1-10    A017D4B-122CCED954F-1C04E10-6A78C28F       4744  Success   8156
>  7   11:30:43.412  1-9     A017D4B-122CCED9165-1489B30-10ED2EE1       8335  Success   8179
>  8   11:30:42.879  1-8     A017D4B-122CCED8D7A-468AE375783718        10552  Success   8164
>  9   11:30:41.141  1-7     A017D4B-122CCED898F-1D0E77E-56485C06      15434  Success   8166
>  10  11:30:40.135  1-6     A017D4B-122CCED85A4-B3EDDB-2208EF21       17770  Success   8171
>
>  Thanks for any assistance ...
>  --sidd
>
>
>
>  ---------------------------------------------------------------------
>  To unsubscribe, e-mail: [email protected]
>  For additional commands, e-mail: [email protected]
>
>
