We've got an "our infrastructure" section on the wiki. I expect it is
probably not super up to date.
On Thu, Sep 15, 2022 at 9:56 AM Brian Hulette via dev wrote:
Is there somewhere we could document this?
On Thu, Sep 15, 2022 at 6:45 AM Moritz Mack wrote:
Thank you, Andrew!
Exactly what I was looking for, that’s awesome!
On 15.09.22, 06:37, "Alexey Romanenko" wrote:
Ahh, great! I didn’t know that 'beam-perf' label is used for that.
Thanks!
On 14 Sep 2022, at 17:47, Andrew Pilloud wrote:
We do have a dedicated machine for benchmarks. This is a single
machine limited to running one test at a time. Set the
jenkinsExecutorLabel for the job to 'beam-perf' to use it. For
example:
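[Editor's note: the example snippet itself did not survive in the archived message. As a stand-in, here is a hypothetical Jenkins Job DSL sketch — not Andrew's actual configuration. The job name is made up; only the 'beam-perf' label comes from the message above.]

```groovy
// Hypothetical sketch in Jenkins Job DSL: restrict a benchmark job to
// the dedicated single-executor benchmark machine by its node label.
job('beam_PerformanceTests_JMH_Example') {  // job name is made up
  // Pin the job to the 'beam-perf' machine so runs are serialized and
  // not affected by other workloads on shared executors.
  label('beam-perf')
}
```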
I think it depends on the goal of running these benchmarks. Ideally, we would
need to run them on the same dedicated machine(s) and with the same
configuration every time, but I’m not sure that can be achieved with our
current infrastructure.
On the other hand, IIRC, the initial goal of
Good idea. I'm curious about our current benchmarks. Some of them run on
clusters, but I think some of them are running locally and just being
noisy. Perhaps this could improve that. (or if they are running on local
Spark/Flink then maybe the results are not really meaningful anyhow)
On Tue, Sep
Hi team,
I’m looking for some help setting up infrastructure to periodically run Java
microbenchmarks (JMH).
Results of these runs will be added to our community metrics (InfluxDB) to help
us track performance, see [1].
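[Editor's note: for illustration only, a result could be encoded as an
InfluxDB 1.x line-protocol point before being posted to the metrics
database. The measurement, tag, host, and database names below are made
up, not the actual community-metrics schema.]

```shell
# Hypothetical sketch: encode one JMH result as an InfluxDB 1.x
# line-protocol point ("measurement,tag=value field=value").
# All names here are placeholders for illustration.
payload="jmh,benchmark=ExampleBenchmark score=123.4"
echo "$payload"
# A periodic Jenkins job could then POST it (host/db are placeholders):
# curl -XPOST "http://influxdb.example.com:8086/write?db=beam_metrics" \
#   --data-binary "$payload"
```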
To prevent noisy runs, this would require a dedicated Jenkins machine that