andygrove opened a new pull request, #4285:
URL: https://github.com/apache/datafusion-comet/pull/4285

   ## Summary
   
   - Gates `runs-on.com` usage behind the `vars.USE_RUNS_ON` repository 
variable so CI falls back to standard GitHub-hosted runners when the ASF cloud 
runners are unavailable (incorporates #4276)
   - On standard runners (7 GB RAM), reduces SBT heap from 6 GB to 3 GB and 
sets `SERIAL_SBT_TESTS=1` to disable parallel test forking — matching what 
`apache/spark` does in its own GitHub Actions workflows to avoid OOM kills
   
   ## Details
   
   The ASF withdrew the `runs-on.com` runners Comet had been using. On the 
standard 7 GB GitHub-hosted runners, Spark 4 jobs are OOM-killed for two reasons:
   1. `-mem 6144` requests nearly all available RAM for the SBT launcher alone
   2. Parallel test forking spawns additional JVMs that push total usage over 
the limit
   
   Apache Spark's own CI solves this by using `SERIAL_SBT_TESTS=1` (sequential 
test execution) and a 4 GB heap cap. This PR takes the same approach with a 3 
GB SBT heap to leave headroom for the native Comet library.
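A minimal sketch of how the reduced heap and serial test execution might be wired into a workflow step. The step name and SBT invocation here are illustrative, not the exact change in this PR:

```yaml
# Hypothetical workflow step: cap the SBT launcher heap at 3 GB and run
# test suites sequentially, mirroring apache/spark's CI settings.
- name: Run Spark SQL tests
  env:
    SERIAL_SBT_TESTS: "1"   # fork one test JVM at a time
  run: |
    # -mem sets the SBT launcher heap in MB; 3072 leaves headroom
    # for the native Comet library on a 7 GB runner
    build/sbt -mem 3072 test
```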
   
   When `vars.USE_RUNS_ON` is set to `'true'` in the repository settings, the 
previous behavior (16-CPU cloud runners, 6 GB SBT heap, parallel tests) is 
restored.
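The gating described above could look roughly like this in the workflow file. The cloud-runner label is a placeholder; the actual labels depend on the repository's `runs-on.com` configuration:

```yaml
jobs:
  spark-sql-tests:
    # Fall back to a standard GitHub-hosted runner unless the repository
    # variable USE_RUNS_ON is set to 'true' in the repo settings.
    runs-on: ${{ vars.USE_RUNS_ON == 'true' && 'runs-on-16-cpu-label' || 'ubuntu-latest' }}
```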
   
   Supersedes #4276.
   
   ## Test plan
   
   - [ ] Verify Spark 4.0/4.1 SQL test jobs no longer get OOM-killed on 
`ubuntu-latest`
   - [ ] Verify Spark 3.4/3.5 jobs still pass (they were less memory-hungry but 
benefit from the same fix)
   - [ ] When `vars.USE_RUNS_ON` is re-enabled, verify jobs use the cloud 
runners with full memory


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

