Philippe> But I would keep Thread Group as is for the following reasons:

Thanks for the corner cases; I think I cover all of them below.

>   - Backward compatibility although I see you propose a solution

Yes, the old group stays "forever", and we just deprecate it (i.e. mark it
as deprecated, and eventually we might hide it from the menu by default).
I was wrong when I said that I could remove properties and so on from the
existing thread group.

>- Some simulations want to test a degree of parallelism and see behaviour

That is a good question.
Basically, it boils down to the question of "when do the threads quit?"
In the trivial case, the thread group stops a thread as soon as it has
completed its pass over the test plan.
In other words, in the first implementation the thread group would create
one-time threads.

If you create precise_thread_group { loop_forever { ... testplan... } },
then "loop forever" would prevent threads from terminating,
and you would have the concurrency you want.
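
To make that concrete, here is a rough sketch of the worker lifecycle I
have in mind (hypothetical names, not the code in the PR):

// Sketch only: the group starts a worker at each scheduled arrival time;
// the worker runs the test plan once and quits, unless something like
// loop_forever { ... } keeps it iterating.
public class OneShotWorker implements Runnable {
    private final Runnable testPlanIteration; // one pass over the group's children
    private final boolean loopForever;        // true when wrapped in loop_forever { ... }

    public OneShotWorker(Runnable testPlanIteration, boolean loopForever) {
        this.testPlanIteration = testPlanIteration;
        this.loopForever = loopForever;
    }

    @Override
    public void run() {
        do {
            testPlanIteration.run(); // execute the samplers
        } while (loopForever);       // without looping, the thread terminates right away
    }
}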

However, I would like to make "looping" configurable as well.
In other words, there might be an "arrival schedule" (i.e. thread creation,
see below), and there might be a "looping schedule" that describes how
threads should loop or quit the system.
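
For instance (hypothetical field names, just to illustrate the split), the
group could end up with two fields along these lines:

  arrival schedule: rate(0/min) random_arrivals(1 min) rate(60/min)
  looping schedule: keep looping while the arrival schedule is active,
                    then finish the current iteration and quit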

>- Tests that require rampup. I think it is useful to have this.

I think the new group covers that.
By the way, I created a PR https://github.com/apache/jmeter/pull/674

I think the new group would have one or two free-text fields for the
schedule.
My first idea for the configuration is something like

rate(1/min) even_arrivals(10 min) rate(1/min)

In other words, the schedule is composed of a sequence of steps where
rate(...) sets the arrival rate at that point in time, and the spans
between two rate(...) entries are filled with arrivals accordingly
(interpolating the rate between the two values).

The schedule above means:
1. start with 1 request per minute
2. launch requests at even intervals during 10 min
3. finish with 1 request per minute

In other words, it means "create requests with exactly 1-minute intervals
in between".
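
As a sketch of my reading of the syntax (not necessarily how the PR
computes it): the k-th arrival fires when the integral of the rate reaches
k, so a constant-rate span gives evenly spaced arrivals, and a span between
two different rates gives a linear ramp. Something along these lines, in
Java, where main() reproduces the schedule above and prints arrivals at
roughly 60, 120, ..., 600 seconds:

import java.util.ArrayList;
import java.util.List;

public class EvenArrivalsSketch {

    /** One even_arrivals(durationSec) span going from rateFrom to rateTo (events/sec). */
    record Segment(double rateFrom, double rateTo, double durationSec) {}

    static List<Double> arrivalTimes(List<Segment> schedule) {
        List<Double> times = new ArrayList<>();
        double offset = 0; // absolute start time of the current segment
        double done = 0;   // events already produced by earlier segments (may be fractional)
        for (Segment s : schedule) {
            double total = (s.rateFrom() + s.rateTo()) / 2 * s.durationSec(); // events in span
            for (long k = (long) Math.floor(done) + 1; k <= Math.floor(done + total); k++) {
                times.add(offset + timeOfEvent(s, k - done));
            }
            offset += s.durationSec();
            done += total;
        }
        return times;
    }

    /** Solves rateFrom*t + (rateTo - rateFrom)*t^2 / (2*duration) = n for t within one span. */
    private static double timeOfEvent(Segment s, double n) {
        double a = (s.rateTo() - s.rateFrom()) / (2 * s.durationSec());
        if (Math.abs(a) < 1e-12) {
            return n / s.rateFrom(); // constant rate => evenly spaced arrivals
        }
        return (-s.rateFrom() + Math.sqrt(s.rateFrom() * s.rateFrom() + 4 * a * n)) / (2 * a);
    }

    public static void main(String[] args) {
        // rate(1/min) even_arrivals(10 min) rate(1/min)
        System.out.println(arrivalTimes(List.of(new Segment(1.0 / 60, 1.0 / 60, 600))));
    }
}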

The ramp-up could be represented as follows:

rate(0/min) even_arrivals(1 min) rate(60/min) even_arrivals(10 min)
rate(60/min)

That would mean:
1. start with 0 requests per minute
2. ramp up the load to 60/min during the first minute (by the end of the
ramp, requests arrive roughly every second)
3. keep injecting the load at 60/min for the next 10 minutes
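
To spell out the arithmetic (under my assumption that the rate is
interpolated linearly between two rate(...) points): the ramp produces
roughly (0 + 60) / 2 * 1 = 30 requests during the first minute, and the
steady part produces 60 * 10 = 600 requests, so about 630 requests in
total.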

However, I think randomized (Poisson) arrivals are a better default, so the
same schedule with randomization would look like

rate(0/min) random_arrivals(1 min) rate(60/min) random_arrivals(10 min)
rate(60/min)
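
For the constant-rate spans, random arrivals would behave like a Poisson
process, i.e. exponentially distributed gaps between requests with mean
1/rate. A tiny sketch of that idea (again my illustration, not the PR
code):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomArrivalsSketch {

    // Poisson arrivals over a constant-rate span: draw exponentially
    // distributed gaps with mean 1/rate until the span runs out.
    static List<Double> poissonArrivals(double ratePerSec, double durationSec, Random rnd) {
        List<Double> times = new ArrayList<>();
        double t = -Math.log(1 - rnd.nextDouble()) / ratePerSec; // first gap
        while (t < durationSec) {
            times.add(t);
            t += -Math.log(1 - rnd.nextDouble()) / ratePerSec;   // next gap
        }
        return times;
    }

    public static void main(String[] args) {
        // rate(60/min) random_arrivals(10 min) rate(60/min): ~600 requests on average
        System.out.println(poissonArrivals(1.0, 600, new Random(42)).size());
    }
}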

If you have suggestions for the names or syntax, please comment.

Vladimir
