Commented on the JIRA. I think this topic isn't so much about
runner-to-runner comparison as about getting organized. For me, working on
a particular runner, IO, or DSL, the results are very helpful for seeing
trends over time.

On Wed, May 16, 2018 at 7:05 AM Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:

> Hi Lukasz,
>
> Thanks, gonna comment in the Jira.
>
> Generally speaking, I'm not a big fan of comparing one runner against
> another, because there are a bunch of parameters that can influence the
> results.
>
> Regards
> JB
>
> On 16/05/2018 15:54, Łukasz Gajowy wrote:
> > Hi all,
> >
> > I created an issue which I believe is interesting in terms of what
> > should be included in the Performance Testing dashboard and what
> > shouldn't. More generally, we have to settle which results should be
> > treated as the official ones. The issue description contains my idea
> > for solving this, but I might be missing something there. If you're
> > interested in this topic and willing to contribute, you're welcome
> > to!
> >
> > Issue link: https://issues.apache.org/jira/browse/BEAM-4298
> >
> > (please note that there's a related issue linked)
> >
> >
> > Best regards,
> > Łukasz Gajowy
>
