On Jun 24, 2011, at 12:53 AM, Deepak Shetty wrote:

>> Can you elaborate some more about why the Thread starvation occurs?
> I'm not sure the terminology accurately describes the issue, but let's say
> you run a test with X threads and you want to simulate Y requests per
> second (the most common use case being a background load while you test
> some new functionality). If the server starts responding more slowly, each
> JMeter thread waits longer, and you can run into a situation where you
> cannot sustain Y requests per second because no threads are available to
> make new requests (hence, I guess, "thread starvation", though the term
> means something else to me).

It means keeping up the Y requests per time unit... and you cannot do that with 
JMeter or any other load-test harness with a similar threading model. That 
threading model is easy to write and easy to understand, but it is 
self-throttling. Event-based models are harder to understand and more difficult 
to implement, but they are open by design and less likely to self-throttle.
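
To make the self-throttling concrete, here is a back-of-envelope sketch (plain
Java, illustration only, not JMeter code; the class name and numbers are
invented): each sampler-style thread has to wait for its own response before it
can fire the next request, so by Little's law a pool of N threads with a mean
response time of R seconds tops out at N / R requests per second, whatever rate
the test plan asked for.

// Illustration only, not JMeter source: a closed-loop worker in the style of
// a JMeter sampler thread. Each thread "sends" a request, blocks until the
// "response" arrives, then loops.
public class ClosedLoopDemo {

    static final int THREADS = 10;           // X in Deepak's description
    static final long RESPONSE_MS = 50;      // pretend server response time
    static final int REQUESTS_PER_THREAD = 20;

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread[] pool = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            pool[i] = new Thread(() -> {
                for (int r = 0; r < REQUESTS_PER_THREAD; r++) {
                    try {
                        // stand-in for "send request and wait for the response"
                        Thread.sleep(RESPONSE_MS);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    // only now is this thread free to issue its next request
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) {
            t.join();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double achieved = (THREADS * REQUESTS_PER_THREAD) / seconds;
        // 10 threads at 50 ms prints roughly 200 req/s; bump RESPONSE_MS to
        // 200 and the same 10 threads max out near 50 req/s.
        System.out.printf("offered ~%.0f requests/second%n", achieved);
    }
}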

Regards,
Kirk

> 
> regards
> deepak
> 
> 
> 
> On Thu, Jun 23, 2011 at 3:34 PM, Barrie Treloar <baerr...@gmail.com> wrote:
> 
>> On Fri, Jun 24, 2011 at 2:50 AM, Kirk <kirk.pepperd...@gmail.com> wrote:
>>> If I'm expecting an incoming tx rate of 200 requests per second and
>>> JMeter doesn't have the threads to sustain it, then I would consider
>>> JMeter to be a bottleneck in the test. This is because the artificial
>>> throttling of the overall system (thread starvation in JMeter) can
>>> result in a load pattern that doesn't properly expose an underlying
>>> bottleneck. This is what I've run into in a couple of accounts. The
>>> problem in these cases is that developers are looking into the
>>> application and not seeing where the real problem is.
>> 
>> I'm newish to the list, so I haven't seen this discussion before.
>> Can you elaborate some more about why the Thread starvation occurs?
>> 
>>> The other issue is that it's hard to set up a JMeter script so that it
>>> sustains an expected workload on a server. This is why I've suggested,
>>> taught, demonstrated and continue to use ThreadGroup in a way that you
>>> yourself called "bizarre". Yet using that model I'm able to simulate
>>> 1000s of users in a single JMeter instance, all doing the right thing
>>> (or as much of a right thing as the current HTTP samplers allow for).
>>> And yup, I've got a ThreadGroup replacement sketched out on my
>>> whiteboard; now to find some cycles to make it real. I think it should
>>> eliminate the need for the Constant Throughput Timer (but, who knows ;-)).
>> 
>> And same with this one: what do you do differently with ThreadGroups, and
>> why?
>> 
>>> Also, it would be really, really nice to normalize the priority behavior
>>> of some of the components, such as the timers. IME, how timers work is
>>> 1) not intuitive, and hence difficult for newbies to get right, and
>>> 2) creates extra work trying to get the timers to behave (i.e., hacking
>>> around it with a Test Action or Simple Controller).
>> 
>> I'm definitely interested in this: what specifically about priorities?
>> I'm in the camp of hacking the timers to get them to behave "correctly".
>> At least it's better than the Perl script we currently have that
>> calculates a bunch of values to try to set ramp-up/throughput times to
>> get what we are looking for.
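
For what it's worth, the back-of-envelope version of that sizing (an assumption
about what such a script computes, not a claim about yours) is just Little's
law again: to hold T requests per second when each iteration spends R seconds
waiting on the response plus D seconds in timers, you need roughly T * (R + D)
threads. For example, 200 req/s with 150 ms responses and a 350 ms constant
timer works out to about 200 * 0.5 = 100 threads; if response time grows while
the thread count stays fixed, the achieved rate falls proportionally.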
>> 


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscr...@jakarta.apache.org
For additional commands, e-mail: jmeter-user-h...@jakarta.apache.org
