dongjoon-hyun commented on PR #53042:
URL: https://github.com/apache/spark/pull/53042#issuecomment-3529489807

   > I understand the goal, but it didn't mention this specific coverage run. The pandas tests are kept for daily CIs, and the coverage run is a daily CI. It will add 4 jobs per day, each lasting about 1 hour. Say we have a quota of 20 concurrent jobs; that is 480 job-hours per day in total. Would a 1% addition to this quota break our limit? Or do we have other workflows running at the same time, so that an extra 4 simultaneous jobs would overflow the quota?
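
   For concreteness, here is the quoted estimate worked out as a quick sketch; the 20-job quota, the 4 added jobs, and the ~1-hour duration are all the quoted commenter's assumptions, not measured values:

   ```python
   # Worked version of the quoted estimate (all inputs are assumptions
   # taken from the quote above, not measurements).
   quota_jobs = 20                        # assumed concurrent-job quota
   hours_per_day = 24
   capacity = quota_jobs * hours_per_day  # 480 job-hours available per day

   added_jobs = 4                         # coverage jobs added per day
   added_job_hours = added_jobs * 1       # each assumed to last ~1 hour

   print(f"{added_job_hours / capacity:.2%} of daily capacity")  # -> 0.83%
   ```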
   
   May I ask why you interpret it that way instead of reading the policy literally?
   
   The policy literally says `a job concurrency level less than or equal to 20`. Please look at the Apache Spark CI's behavior. For example, the three most recent Apache Spark commit builds selectively ran:
   - 20 jobs out of 27
   - 19 jobs out of 26
   - 9 jobs out of 19
   
   <img width="524" height="238" alt="Screenshot 2025-11-13 at 11 49 13" src="https://github.com/user-attachments/assets/eb9a4ac9-30ee-4826-a568-0044b0245e6d" />
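
   As a toy illustration of that literal reading (the selection predicate below is purely hypothetical; Spark's real GitHub Actions workflows pick jobs based on which modules changed, which this sketch does not reproduce):

   ```python
   # Hypothetical sketch: filter candidate jobs, then cap the selection at the
   # quota. The is_relevant predicate stands in for Spark's real module-based
   # selection logic.
   MAX_CONCURRENT = 20

   def select_jobs(candidates, is_relevant):
       relevant = [job for job in candidates if is_relevant(job)]
       return relevant[:MAX_CONCURRENT]

   candidates = [f"job-{i}" for i in range(27)]
   picked = select_jobs(candidates, lambda job: True)
   print(f"{len(picked)} jobs out of {len(candidates)}")  # -> 20 jobs out of 27
   ```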
   
   Technically, the Apache Spark PMC wants to optimize the workload even further rather than adopt this proposal, because we are still consuming too many resources, especially during the release period. At this point in time, your proposal goes in quite the opposite direction.

