Hi

In my opinion, we need to keep the old thread group so that we can simulate
both open and closed models, and because sometimes we need to simulate X users
and not only X actions.

About Kotlin, I don't have an opinion.

On Sun, Nov 7, 2021 at 6:16 PM, Vincent Daburon <[email protected]> wrote:

> Hi,
> For me, JMeter's main functionality, including the thread groups, should
> remain pure Java, and I do not appreciate adding the Kotlin language to the
> project.
> The first line of the JMeter overview reads: “The Apache JMeter™ application
> is open source software, a 100% pure Java application”.
>
> Adding so many additional libraries just to display graph lines seems
> disproportionate to me.
>
> Personally, I never use the throughput computation functionality because it
> is quite difficult to understand and it does not work when there are
> optional calls (IF).
>
> I use the notion of PACING, which is much easier and which exists in other
> performance testing tools.
> The pacing is the minimum duration before starting a new iteration.
> With pacing, a dynamic wait pause is computed and added so that the
> iteration reaches the pacing duration.
> E.g.: with a pacing of 2 min and an iteration duration up to the last
> sampler of 1 min 45 sec, the dynamic wait pause will be 15 sec.
>
> For me, this feature has too much impact on the current code base (Kotlin,
> SVG libraries).
>
> I don't think this feature should be added to the JMeter code base in its
> current state.
>
> The classic Thread Group should not be deprecated either, because it suits
> my needs.
>
> Regards.
>
> Vincent DABURON
>
> JMeter user since 2004 and testing professional
>
> On Sun, Nov 7, 2021 at 3:45 PM, Vladimir Sitnikov
> <[email protected]> wrote:
> >
> > > Why not call it « Open Model thread group » instead of precise
> > > throughput thread group?
> >
> > Naming is hard, and I have no idea of the proper name. Suggestions are
> > welcome.
> > I used "precise thread group" just to pick a name and get the thing
> > running.
> >
> > I think "open model thread group" is not exactly right, since after a
> > thread(N) addition the thread group is no longer an "open model".
> > On the other hand, a sufficiently large number of threads in a "closed
> > model" is not really different from an "open model", so the existing
> > thread groups are "open" as well if the user configures big enough
> > thread counts.
> >
> > > Precise is a bit weird as it would mean others are not.
> >
> > The thing is that the new group generates exactly the configured load in
> > terms of the number of samples.
> > For instance, if you configure rate(10/min) random_arrivals(1 min), then
> > you get exactly 10 samples.
> >
> > However, I agree "precise thread group" sounds weird.
> >
> > Vladimir
>
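
To make the pacing calculation quoted above concrete, here is a minimal sketch
in plain Java (illustrative only: the class and method names are mine, and this
is not JMeter code; in a real test plan the same idea would typically live in a
timer or a scripted element):

    import java.time.Duration;

    public class PacingExample {

        // Returns the dynamic pause to insert before starting the next
        // iteration: whatever time remains to reach the pacing target.
        static Duration pacingPause(Duration pacing, Duration iterationDuration) {
            Duration remaining = pacing.minus(iterationDuration);
            // If the iteration already took longer than the pacing,
            // start the next iteration immediately.
            return remaining.isNegative() ? Duration.ZERO : remaining;
        }

        public static void main(String[] args) {
            Duration pacing = Duration.ofMinutes(2);
            Duration iteration = Duration.ofMinutes(1).plusSeconds(45);
            // Prints PT15S, i.e. the 15-second pause from the example above.
            System.out.println(pacingPause(pacing, iteration));
        }
    }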
