On 7/1/05, Rabi Lahiri <[EMAIL PROTECTED]> wrote:
> Actually I think the user parameter thing was a bit of a red herring.
> It worked that way several runs in a row but I can't reproduce the
> behavior any longer.
> 
> Here's a simple example that demonstrates the behavior I don't quite 
> understand.
> 
> standard HTTP sampler (not my subclass), uncheck follow redirect and
> uncheck use keepalive.  point to www.google.com port 80.
> 
> Timing is avg/min/max, with 1 loop and varying numbers of threads,
> according to the aggregate report listener:
> 1 thread      = 48 / 48 / 48
> 10 threads  = 47 / 44 / 52
> 25 threads  = 66 / 46 / 107
> 50 threads  = 89 / 39 / 193
> 100 threads = 137 / 46 / 247
> 500 threads = 1236 / 47 / 8076
> 1000 threads = 1670 / 29 / 15379
> 
> I wouldn't expect to see the increasing average times that way - even
> if a few threads spike and make the max value go way up, the general
> performance should still be pretty constant, as observed with the
> minimum timing column.  Also, if instead of 1000 threads in 1 loop, I
> use 10 threads with 100 loops, I get
> 
> avg / min / max
> 56 /  24   / 197
> 
> Which looks much more reasonable.  Since the request is the same all
> the time, shouldn't the timing for 10 threads * 100 loops be very
> similar to that with 1000 threads * 1 loop?

No, they are not at all equivalent.

A test with a single loop never gets a chance to stabilise. Quite
apart from that, 1000 threads * 1 loop is a much higher load on the
JMeter host machine than 10 threads with 100 loops: all 1000 threads
have to be created and started at once, and they then compete
simultaneously for CPU, memory and network resources, which inflates
the measured response times.
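To see why startup contention alone can produce the pattern in the numbers above, here is a toy queueing sketch (plain Java, not JMeter code; the 2-core host and fixed 40 ms service time are made-up assumptions). When all threads fire at once, later requests queue behind earlier ones, so the average climbs steeply even though the fastest sample stays roughly constant:

```java
public class QueueSketch {
    // Average completion time when `tasks` requests start simultaneously
    // on `cores` CPUs, each needing `serviceMs` of processing.
    // Requests are served in simple FIFO waves of `cores` at a time.
    static double avgCompletionMs(int tasks, int cores, int serviceMs) {
        long total = 0;
        for (int i = 1; i <= tasks; i++) {
            // request i finishes after ceil(i/cores) service periods
            total += (long) Math.ceil((double) i / cores) * serviceMs;
        }
        return (double) total / tasks;
    }

    public static void main(String[] args) {
        // 10 threads x 100 loops: only 10 requests compete at any instant
        System.out.println(avgCompletionMs(10, 2, 40));   // 120.0
        // 1000 threads x 1 loop: all 1000 requests compete at once
        System.out.println(avgCompletionMs(1000, 2, 40)); // 10020.0
    }
}
```

The minimum column stays flat in both configurations because the first requests through the queue see no contention; it is only the average and maximum that blow up when everything starts at once.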

> 
> thanks again,
> Rabi
> 
> On 7/1/05, sebb <[EMAIL PROTECTED]> wrote:
> > Which version of Jmeter are you using?
> >
> > I don't understand what's happening here.
> >
> > Try replacing your HTTP Sampler with a JavaTest sampler to see if the
> > timings are affected by the presence of the user parameter
> > pre-processor or not.
> >
> > Or indeed try using the standard HTTP Sampler instead of yours.
> >
> > If either of these shows the timing problem, please create a Bugzilla
> > issue, and then attach the following:
> >
> > - simple test script that has OK timing data
> > - simple test script that causes timing data problems
> > - jmeter.log for both test runs
> > - jtl files for both runs.
> >
> > S.
> >
> > P.S. If you think the changes you made to the HTTP Sampler might be
> > useful to others, perhaps you would consider filing a Bugzilla
> > enhancement request to describe what you have added.
> >
> > On 7/1/05, Rabi Lahiri <[EMAIL PROTECTED]> wrote:
> > > I have the duration assertion as a child of the sampler.
> > >
> > > Something interesting I noticed just now, though - if I use static
> > > arguments and no User parameter preprocessor with the sampler, the
> > > timing data looks fine.  If I add the user parameter preprocessor with
> > > two ${__StringFromFile()} functions (as a child of the sampler), the timing
> > > data gets messed up and looks cumulative among all the threads, even
> > > if I don't actually use the variables from the file (i.e. I leave the
> > > arguments to the sampler as static data).
> > >
> > > Could the thread timers be including the time that each thread is
> > > blocked on I/O or somehow accumulating time because of that?  We have
> > > subclassed the HTTPSampler and AbstractSamplerGui for our project;
> > > otherwise everything is from the standard distribution.
> > >
> > > thanks again,
> > > Rabi
> > >
> > > On 6/30/05, sebb <[EMAIL PROTECTED]> wrote:
> > > > Where have you put the duration assertions in relation to the samplers?
> > > >
> > > > Which version of JMeter are you using?
> > > >
> > > > S
> > > > On 6/30/05, Rabi Lahiri <[EMAIL PROTECTED]> wrote:
> > > > > Hi,
> > > > > I've looked through all the docs I could find and the mail archives
> > > > > and couldn't find an answer to this.  I have a subclass of HTTPSampler
> > > > > and need to implement timeouts for my requests.  I'm using the sampler
> > > > > with the following setup:
> > > > >
> > > > > 500 threads
> > > > > 1 loop
> > > > > parameter "args" is ${__StringFromFile(args.txt)}
> > > > > parameter "method" is ${__StringFromFile(methods.txt)}
> > > > >
> > > > > My service needs to pass args and method dynamically this way.
> > > > > args.txt and methods.txt are 500-line files which constitute the
> > > > > requests I need.  This works fine and I can test the results of each
> > > > > with response assertions, except that duration assertions don't work
> > > > > properly.  The duration assertion appears to be timing the whole set
> > > > > of 500 threads instead of each one individually, so as soon as the
> > > > > group time hits the assertion value every thread fails.  For example,
> > > > > let's say each call takes exactly 200 ms.  If I set the duration
> > > > > assertion to 1100ms, the first five responses succeed but all the rest
> > > > > fail.  I need to be able to set it up so that I can verify that no
> > > > > individual call takes more than, say, 250ms.  Is this possible?  If I
> > > > > need to make a code change, can it be done relatively simply?
> > > > >
> > > > > thanks,
> > > > > Rabi Lahiri
> > > > >
> > > > > ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > > > > For additional commands, e-mail: [EMAIL PROTECTED]
> > > > >
> > > > >
> > > >
> > >
> >
>
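For the duration-assertion symptom described at the bottom of the thread, the difference between per-sample and cumulative timing can be sketched like this (plain Java, not JMeter internals; the numbers mirror Rabi's 200 ms example with an 1100 ms limit, extended to seven samples):

```java
public class DurationCheck {
    public static void main(String[] args) {
        // Hypothetical response times: seven samples of 200 ms each
        long[] durations = {200, 200, 200, 200, 200, 200, 200};
        long limitMs = 1100; // duration assertion threshold

        // Expected behaviour: each sample is judged on its own elapsed time
        int perSamplePasses = 0;
        for (long d : durations) {
            if (d <= limitMs) perSamplePasses++;
        }

        // Observed (buggy) behaviour: elapsed time accumulates across
        // samples, so everything after the limit is crossed fails
        int cumulativePasses = 0;
        long elapsed = 0;
        for (long d : durations) {
            elapsed += d;
            if (elapsed <= limitMs) cumulativePasses++;
        }

        System.out.println("per-sample passes: " + perSamplePasses);  // 7
        System.out.println("cumulative passes: " + cumulativePasses); // 5
    }
}
```

The cumulative variant reproduces exactly what Rabi reported: with a 1100 ms limit and 200 ms calls, the first five responses pass and every later one fails, regardless of its individual timing.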
