gruuya commented on PR #9461:
URL: https://github.com/apache/arrow-datafusion/pull/9461#issuecomment-1987911528

   > The major concern I have is that this PR seems to run the benchmark on 
github runners, as I understand it
   
   True, that is correct. My assumption was that any instability in the base performance would not vary greatly within a single run, since both benchmarks run in the same job over a relatively short interval, but I grant this is not a given.
   
   In addition, for this type of benchmarking (PR-vs-main) we're only interested in relative comparisons, so the longitudinal variance component, which is undoubtedly large, wouldn't come into play (unlike when tracking main's performance over time).
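   To illustrate the idea, here is a minimal sketch (hypothetical helper and made-up timings, not the PR's actual script) of why a shared-runner baseline shift largely cancels out of a PR-vs-main ratio:

   ```python
   # Hypothetical sketch: PR and main benchmark timings are collected within
   # the same CI job, so only their relative ratio matters for the comparison.

   def relative_change(main_secs: float, pr_secs: float) -> float:
       """Return the PR-vs-main runtime ratio; > 1.0 means the PR is slower."""
       return pr_secs / main_secs

   # Made-up timings: runner-wide noise scales both measurements roughly
   # equally, so it cancels in the ratio even if absolute numbers drift.
   main_time = 2.0
   pr_time = 2.2
   print(f"PR is {relative_change(main_time, pr_time):.2f}x the main runtime")
   ```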
   
   That said, I believe the present workflows should be easily extendable to use self-hosted runners once those become available.
   
   > I wonder how we could test it... Maybe I could merge it to my fork and run 
it there 🤔
   
   Oh that's a good idea, I believe I can test it out on our fork and provide 
the details here, thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]