Hi Stephen,

The trouble with benchmarks in CI is that the numbers tend to be
unreliable: they depend heavily on the hardware the job happens to run on
and, more generally, on the surrounding environment. Chances are high that
the benchmarks will not produce comparable results from one run to the
next.

It would, however, be good to provide some tooling to run the (same)
benchmarks manually.
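
Something along these lines could work as a starting point. This is just a
sketch: I'm assuming the benchmark is a Google Benchmark style binary that
can emit JSON (the "./benchmark" name and output layout are my invention,
and may not match what Robert built). The idea is to record the commit and
machine alongside the numbers so that manual runs stay comparable later:

#!/usr/bin/env python3
"""Sketch: run a benchmark binary and record the environment alongside
the numbers. Binary name, flags, and output layout are assumptions."""
import json
import platform
import subprocess
from datetime import datetime, timezone

BENCHMARK = "./benchmark"  # hypothetical benchmark executable


def git_sha() -> str:
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip()


def run() -> None:
    # Ask the benchmark to emit JSON (Google Benchmark convention);
    # swap in whatever output format the actual benchmark supports.
    result = subprocess.run(
        [BENCHMARK, "--benchmark_format=json"],
        capture_output=True, text=True, check=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "commit": git_sha(),
        "machine": platform.platform(),
        "processor": platform.processor(),
        "results": json.loads(result.stdout),
    }
    out = f"bench-{record['commit'][:8]}.json"
    with open(out, "w") as fh:
        json.dump(record, fh, indent=2)
    print(f"wrote {out}")


if __name__ == "__main__":
    run()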

When the same benchmarks are run on the same hardware with different
codebases, or on different hardware with the same codebase, the outcomes
can provide interesting and genuinely comparable insights.
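
Comparing two such recordings could then be as simple as the sketch below,
which would also cover the per-PR threshold idea when run by hand against a
baseline. Again, this is hypothetical: the file layout matches the
recording script above, and the 10% tolerance is an arbitrary pick:

#!/usr/bin/env python3
"""Sketch: compare two benchmark result files (e.g. same hardware,
different codebases) and flag regressions beyond a tolerance."""
import json
import sys

TOLERANCE = 0.10  # flag slowdowns larger than 10%; tune to taste


def load(path: str) -> dict:
    with open(path) as fh:
        record = json.load(fh)
    # Google Benchmark JSON keeps individual runs under "benchmarks".
    return {b["name"]: b["real_time"]
            for b in record["results"]["benchmarks"]}


def main(baseline_path: str, candidate_path: str) -> None:
    baseline, candidate = load(baseline_path), load(candidate_path)
    for name in sorted(baseline.keys() & candidate.keys()):
        change = candidate[name] / baseline[name] - 1.0
        marker = " <-- regression?" if change > TOLERANCE else ""
        print(f"{name}: {change:+.1%}{marker}")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])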

Warm regards
--
Sent from my phone. Typos are a kind gift to anyone who happens to find
them.

On Tue, Dec 28, 2021, 07:46 Stephen Webb <[email protected]> wrote:

> Hi,
>
> Robert has created a benchmark that I thought would be nice to integrate
> into CI.
>
> I see that Log4j has some benchmark actions which are currently run
> manually, with results posted to GitHub Pages.
>
> Do you consider this a useful/optimal approach?
>
> Would a threshold that an action could check for each PR be useful?
>
> Regards
> Stephen Webb
>