Mariusz>also discussion about JMeter threading model and constraints

Frankly speaking, this time I just implemented the thread group the way I
wanted, and I did zero research into past requests.
Thanks for the links above; they discuss a very similar problem, and I
think Kirk would appreciate the new thread group.
However, I'm not sure Kirk uses JMeter often nowadays :-/

A "fun" discovery is my own mail on exactly the same issue from 2015:
https://lists.apache.org/thread/2mnjf3vv94ykc6zlk2qzmkt43z5rxbb0
I've no idea why I dropped the idea back then; however, it is exactly the
same issue I am resolving now.

However, that idea was more involved: it included a proposal to
distinguish "start of test" from "preparation steps in each thread".

---

There are lots of questions like "how many threads do I need?" and "why
does the load not match expectations?".
For instance,
https://lists.apache.org/thread/syjbkcxk3cvp2c1g9hsxm8jcwc8776w1

There are lots of questions like "how do I configure X requests per minute".

Both topics are addressed by my current thread group.

---

Vincent mentions the pacing feature; however, I believe it is very close
to my current proposal as well.
For instance, "pacing of 1 min" means exactly the same as configuring "1
request per minute".
In other words, if you configure the new thread group to "1 request per
minute", it would issue one request every minute on average.
If you configure "0.5 requests per minute", you get the same thing as
"pacing 2 min".
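To make the equivalence concrete: the conversion between pacing and rate is just a reciprocal. A minimal sketch follows; the helper names are mine for illustration and are not part of JMeter:

```python
def pacing_to_rate(pacing_minutes: float) -> float:
    """Per-thread pacing interval (minutes) -> average rate (requests/min).

    Hypothetical helper for illustration only; not a JMeter API.
    """
    return 1.0 / pacing_minutes


def rate_to_pacing(requests_per_minute: float) -> float:
    """Average rate (requests/min) -> equivalent pacing interval (minutes)."""
    return 1.0 / requests_per_minute


print(pacing_to_rate(2.0))  # "pacing 2 min" == 0.5 requests per minute
print(rate_to_pacing(1.0))  # "1 request per minute" == "pacing 1 min"
```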

Currently the "schedule string" supports "rate(X/min)", "random_arrivals(X
min)", "even_arrivals(X min)", and "pause(X min)" calls.
One can achieve the "pacing" feature via "rate(${1/pacing}/min)".
Of course, I could add a "pacing(X min)" macro that would be converted to
"rate(1/X)" in the internal representation; however,
I am not convinced it is worth adding, since it would make it harder for
users to pick the right tool (they would have to choose between rate and
pacing).
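For concreteness, here is a sketch of a schedule string composed from those calls. I am assuming the arrival segments interpolate between the surrounding rates; check the actual grammar of the thread group before relying on this:

```
rate(0/min) random_arrivals(5 min) rate(100/min) even_arrivals(10 min) pause(1 min)
```

Read as: ramp from 0 to 100 requests/min over 5 minutes, hold 100 requests/min for 10 minutes, then pause for 1 minute. The "pacing 2 min" workaround would similarly use rate(0.5/min).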

I think 99.42% of all SLAs I have seen were like "number of users", "max
number of concurrent requests", or "number of requests per second".
I do not think I have ever seen a non-functional requirement like "a single
thread must not issue requests closer than X seconds apart".

In other words, I think I understand what pacing means (thanks to Vincent's
explanation and pictures); however, I do not see which business requirements
would make "pacing" easier to configure and easier to report to management.

The bad thing about a pacing implementation is that it would coordinate
requests from different thread groups, magnifying coordinated-omission
issues.

I would suggest configuring the request rate instead.

Vladimir
