I've been looking into doing this for log4cxx as well.  I was planning
on just doing it through GitHub Actions, as I am really only concerned
about catching order-of-magnitude changes in the number of log
messages/second.

My understanding of how the GitHub runners work is that whenever you
do a build, a new VM is spun up automatically, which I would expect
to give you dedicated (virtual) resources for a valid test.  Does
anybody know if that is the case, or am I under-thinking it?

The GitHub documentation on the VMs they use is here:
https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
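
For log4cxx, the check I have in mind could be as simple as a timed
loop in a small driver program that the workflow builds and runs.  A
rough sketch (the baseline figure and failure threshold are made up
for illustration):

#include <chrono>
#include <cstdio>
#include <log4cxx/basicconfigurator.h>
#include <log4cxx/logger.h>

int main()
{
    // Default console appender; a real run would log to a file or a
    // null appender so we measure the library, not the terminal.
    log4cxx::BasicConfigurator::configure();
    log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("bench");

    const int iterations = 100000;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        LOG4CXX_INFO(logger, "benchmark message " << i);
    }
    const double elapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();

    const double msgsPerSec = iterations / elapsed;
    std::printf("%.0f messages/second\n", msgsPerSec);

    // Hypothetical baseline: only fail if throughput drops by an
    // order of magnitude, so ordinary VM jitter stays below the bar.
    const double baseline = 1.0e6; // messages/second, made up
    return msgsPerSec < baseline / 10.0 ? 1 : 0;
}

The workflow step would just run this and fail the job on a non-zero
exit status, which should be coarse enough to survive noisy neighbors.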

-Robert Middleton

On Mon, Oct 4, 2021 at 11:54 AM Ralph Goers <[email protected]> wrote:
>
> I’d be surprised if there aren’t some projects that have dedicated servers
> for this kind of thing. However, they may also have directed sponsorships
> for it.
>
> Ralph
>
> > On Oct 4, 2021, at 8:20 AM, Matt Sicker <[email protected]> wrote:
> >
> > CI tooling might help here if we can run the tests on a dedicated agent (or
> > at least one where only a single perf test happens concurrently). Without a
> > dedicated agent, running the tests repeatedly might help smooth out the
> > noisy-neighbor effect.
> >
> > Matt Sicker
> >
> >> On Oct 4, 2021, at 02:48, Ralph Goers <[email protected]> wrote:
> >>
> >> Of course, running the benchmarks under Jenkins or as GitHub Actions 
> >> would be
> >> almost useless since there would be no way to control what other workloads 
> >> were
> >> running at the same time.
> >>
> >> Ralph
> >>
> >>> On Oct 4, 2021, at 12:39 AM, Ralph Goers <[email protected]> 
> >>> wrote:
> >>>
> >>> If they can be run in Jenkins or GitHub Actions then there is hardware
> >>> available. However, we would have no idea what hardware the test is
> >>> running on, although the test could probably find a way to figure it out.
> >>>
> >>> I don’t know of other tooling.
> >>>
> >>> Ralph
> >>>
> >>>> On Oct 4, 2021, at 12:22 AM, Volkan Yazıcı <[email protected]> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> log4j-perf is nicely populated with various JMH benchmarks, yet running
> >>>> them requires manual action. Not to mention that drawing comparisons
> >>>> between runs on varying Log4j, Java, OS, CPU, and concurrency
> >>>> configurations is close to impossible. I am in search of an F/OSS tool
> >>>> to facilitate such performance tests on a regular basis, e.g., once a
> >>>> week. In particular, the recent performance crusade Carter undertook,
> >>>> triggered by Ceki's Log4j-vs-Logback comparison, is a tangible example
> >>>> showing the necessity of such a performance test bed. In this context, I
> >>>> need some suggestions on
> >>>>
> >>>> 1. Are there any (F/OSS?) tools that one can employ to run certain
> >>>> benchmarks, store the results, and generate reports comparing the
> >>>> results with earlier runs?
> >>>> 2. Can Apache provide us VMs to run this tool on?
> >>>> 2. Can Apache provide us VMs to run this tool on?
> >>>>
> >>>>
> >>>> Kind regards.
> >>>
> >>>
> >>>
> >>
> >>
> >
>
>
