Mousius commented on pull request #9129: URL: https://github.com/apache/tvm/pull/9129#issuecomment-933732684
Hi @areusch,

> i agree CI is not a personal testing environment, but it is sometimes the easiest way for developers to access cloud platforms they don't have e.g. arm, gpu.

I do empathise with this, but I don't think we should design a CI solution around the edge cases; by reducing the overall number of running jobs we can get to these faster when they do arise.

> @Mousius the comment you referenced is a bit more general and i'm not sure this specific issue contributes to CI taking a while to complete. you can monitor CI if you're anxious for the test results.

There are two things this change fixes:

1. Machine availability - we keep machines freer to start a job than they previously were, as we fail out of them faster.
2. Machine saturation - running multiple tasks on a single machine results in `n` slow jobs; the fewer jobs you run, the more compute you have free.

I don't rely on CI for test results, but I can definitely feel the reluctance of waiting for CI to complete once you have a green tick, given your change is then likely delayed to the next day each time.

> in practice this seems most likely to result in cancellation of GPU integration tests, but the number of available GPU executors has not been 0 in the past month. perhaps we should track that stat for a bit now that #9128 is in. i am wondering if maybe it already somewhat addressed this concern.

We should be very careful about treating the number of available executors as a metric for how efficient CI is. When a Jenkins agent is under load from one set of branch builds, it has a negative effect on anything else running on it - so while we may never run out of executors on paper, this change would leave them less loaded and thus more efficient at running CI jobs.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
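The machine-saturation point above can be illustrated with a toy model (the job counts and durations below are illustrative, not measurements from the TVM CI): under fair CPU sharing, `n` equal concurrent jobs all finish at `n * T`, whereas running them one at a time gets the first result back at `T` and halves the average completion time.

```python
# Toy model of "machine saturation": n equal jobs, each needing T
# minutes of CPU, on one machine with fair CPU sharing.
# (Hypothetical numbers, not TVM CI measurements.)

def mean_completion_concurrent(n, t):
    # All n jobs share the CPU equally, so every job finishes at n * t.
    return n * t

def mean_completion_serial(n, t):
    # Jobs run back to back and finish at t, 2t, ..., n*t.
    return sum(i * t for i in range(1, n + 1)) / n

n, t = 4, 30  # e.g. 4 jobs of 30 minutes each
print(mean_completion_concurrent(n, t))  # 120 minutes per job
print(mean_completion_serial(n, t))      # 75.0 minutes on average
```

The model ignores I/O overlap and scheduler overhead, but it shows why fewer simultaneous jobs per agent can mean faster turnaround overall, not just on paper.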
