If you have a thread group with 10 HTTP Request samplers in it (or 100, or 1000 - the more you have, the more pronounced this concern becomes), and each thread steps through the samplers in sequence, some threads are practically guaranteed to finish sooner than others, since there is no way the server will respond equally quickly to every request.
So, after one loop through your test plan you will have a bell curve of the times it took each thread to complete the loop: a few threads will have finished relatively fast, a few relatively slowly, and the bulk of the threads somewhere in the middle. Let's say the fastest thread completes the full set of requests in 10 seconds, and the slowest completes it in 20 seconds. For argument's sake, assume the differential between fastest and slowest is not due to the capacity of the server, but just an artifact of random chance - the fastest thread caught all the green lights, and the slowest thread caught all the red lights (e.g., I/O blocking, lock waits, network congestion back-offs, etc.).

If each thread restarts the loop as soon as it completes the last one, the fast threads will restart much earlier than the threads on the slower end of the curve. In fact, given that the fastest time was only 10 seconds and the slowest was 20 seconds, there is a non-zero chance that the fastest thread could complete another full loop by the time the slowest thread is completing its first.

If you instead assign the loops based on a pure division of "number of loops / number of threads", you ensure that each thread runs an equal number of loops. But you also ensure that some threads complete their total number of loops well before others do. Moreover, the longer your test runs, the wider the disparity between the threads that reach their full loop count early and the threads that finish last - to the point that the slower threads may be running their last several loops after the faster threads have died off completely, meaning the last few threads run while the machine is under a progressively lighter load.
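One alternative to pre-dividing the loops (and the approach the next message arrives at) is to have every thread draw its next loop from a single shared countdown. Here is a minimal Java sketch of that idea; the class and method names (`CountdownScheduler`, `runTest`) are hypothetical, and the body of each loop is a placeholder for whatever samplers the thread would run:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class CountdownScheduler {

    // Run 'threads' workers that share a pool of 'totalLoops' loops.
    // Each thread claims one loop at a time instead of being handed a
    // fixed quota up front, so fast threads simply claim more loops.
    static int runTest(int threads, int totalLoops) throws InterruptedException {
        AtomicInteger remaining = new AtomicInteger(totalLoops);
        LongAdder completed = new LongAdder();

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                // getAndDecrement() hands each positive value to exactly
                // one thread, so exactly 'totalLoops' loops run in total.
                while (remaining.getAndDecrement() > 0) {
                    // ... run the samplers for one loop here ...
                    completed.increment();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        return completed.intValue();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 threads sharing 1000 loops: all 100 threads keep starting
        // new loops until the shared count runs out.
        System.out.println(runTest(100, 1000));
    }
}
```

With this scheme, no thread goes idle while loops remain; the only period of declining load is the final loop of each still-running thread, which is the best you can do.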
Clearly, you can't prevent this during the last loop for each thread, since every thread must stop once it completes a loop and there are no loops left to run. But if you don't really care how many loops each individual thread runs, you could schedule the loops from a queue: instead of pre-calculating the total loops for each thread, you just maintain the total number of loops the test needs, and each thread decrements that count when it finishes its previous loop. Once the count drops all the way to '0', the threads still running complete their final loops, and the test is done. BUT no thread would start a new loop while the server was under a lighter load (even though the load would still decline steadily as the last loops complete).

Sebb's suggestion of just using a CSV file to count the loops accomplishes this exact requirement. If I want a test to run 100,000 loops, I just supply a CSV file with 100,000 records in it. Then I set the Thread Group to run "Infinite" loops, and set the CSV file to stop on EOF. The test runs until the 100,000th record is retrieved from the CSV file, and once that loop starts, the threads all stop when they finish their current loop because they can't get another record from the CSV. (HINT: if you give the CSV variable a name like 'loop_count', and make your CSV file contain records that count up from 1 to 100,000, you also gain a variable that tells you which loop you were on for a specific set of requests.)

This isn't a huge concern, just a way to make sure that you maintain the same relative server load all through the duration of the test.

--
Robin D. Wilson
Voice: 512-426-3929
rwils...@gmail.com

-----Original Message-----
From: Deepak Goel [mailto:deic...@gmail.com]
Sent: Friday, October 21, 2016 1:46 AM
To: JMeter Users List <firstname.lastname@example.org>
Subject: Re: How would you go about creating a new Thread Group type?
I am not sure I understand your problem correctly... Why would the load be reduced to 99 threads? The threads which haven't finished their earlier cycle are still running... they are spending time finishing their previous request (which would also indicate a problem with the server responding to the request).

Hey
Namaskara~Nalama~Guten Tag~Bonjour

--
Keigu

Deepak
73500 12833
www.simtree.net, dee...@simtree.net
deic...@gmail.com

LinkedIn: www.linkedin.com/in/deicool
Skype: thumsupdeicool
Google talk: deicool
Blog: http://loveandfearless.wordpress.com
Facebook: http://www.facebook.com/deicool

"Contribute to the world, environment and more : http://www.gridrepublic.org "

On Tue, Oct 18, 2016 at 2:38 AM, Robin D. Wilson <rwils...@gmail.com> wrote:
> Just throwing this out there - to see if anyone wants to lead a
> code-level newbie down the right path. I've been using JMeter for 8-9
> years now, and really like it. But one thing that bugs me is the way
> the Thread Group works. Specifically, the idea of the "loops" bothers
> me.
>
> Consider this scenario:
>
> 1) 100 "users" (e.g., threads)
> 2) 10 loops
> 3) Test has 10 HTTP Requests in it
>
> By this logic, you should end up with 100 (users) x 10 loops x 10 HTTP
> Requests = 10,000 requests.
>
> And you do.
>
> BUT, what you actually get when you run the test isn't as clear as you
> think. It is basically 100 threads, each running 10 times, and each
> thread loop running 10 HTTP requests in sequence.
>
> Here's the problem: if thread 1 completes its 10 loops in 100 seconds,
> but threads 2-100 take 200 seconds to complete their 10 loops, the
> load will be reduced to only 99 simultaneous threads (users) for the
> last 100 seconds of the test run. That's not a huge deal, but it can
> affect the perception of the test if you look at it more like threads
> 1-50 finishing in 100 seconds, and threads 51-100 taking an extra 100
> seconds after that to complete.
> Then only half of your test cycle was at the required 'max' load you
> wanted to create, while the other half was at 50% of the required
> load. (It is reasonably possible that I'm just misinterpreting how it
> actually works, but it appears to work like this from my experience,
> since I can see threads continuing to process after other threads
> appear to stop - and it lasts longer than it takes to get through the
> entire cycle, sometimes for several minutes at the end of a test run.)
>
> I'd like to create a new Thread Group that doesn't assign the thread
> to the loop until it starts again at the top of the cycle. So if
> thread-1 finishes really fast, and there are still loops to run, it
> just picks up another loop and runs it. I would always want to
> maintain at least 100 threads executing, so long as there were loops
> left to run. So basically, you multiply the number of loops by the
> number of threads (1000 in my example), and your 100 threads always
> restart until there are no loops left to run. Then the longest period
> where you aren't fully loading the servers is just the last loop for
> every thread.
>
> BUT, I am not 100% sure where I would start in the source code. I have
> programmed before (a long time ago), but if someone could point me in
> the right direction (like, which source files would need to be
> modified, and how I would register my new Thread Group as a different
> option), I think I could figure it out pretty quickly.
>
> --
> Robin D. Wilson
> <mailto:rwils...@gmail.com> rwils...@gmail.com

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
For additional commands, e-mail: user-h...@jmeter.apache.org