amuraru opened a new pull request #24618: [SPARK-27734][CORE][SQL][WIP] Add memory based thresholds for shuffle spill
URL: https://github.com/apache/spark/pull/24618
 
 
   
   ## What changes were proposed in this pull request?
   
   When running large shuffles (700 TB of input data, 200k map tasks, 50k reducers on a 300-node cluster), the job regularly OOMs in both the map and reduce phases.
   
   IIUC, ShuffleExternalSorter (map side) and ExternalAppendOnlyMap/ExternalSorter (reduce side) try to max out the available execution memory. This in turn doesn't play nicely with the garbage collector, and executors fail with OutOfMemoryError when the memory allocated by these in-memory structures maxes out the available heap size (in our case we run with 9 cores/executor and 32 GB per executor).
   
   To mitigate this, one can set spark.shuffle.spill.numElementsForceSpillThreshold to force spilling to disk. While this config works, it is not flexible enough: it is expressed in number of elements, and in our case we run multiple shuffles in a single job whose element sizes differ from one stage to another.
   
   
   This patch extends the spill threshold behaviour and adds two new parameters to control the spill based on memory usage (see the configuration sketch after the list below):
   
   - spark.shuffle.spill.map.maxRecordsSizeForSpillThreshold
   - spark.shuffle.spill.reduce.maxRecordsSizeForSpillThreshold
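
   A sketch of how these proposed thresholds might be set. The config keys are taken verbatim from the list above, but since this is a WIP patch the final names and semantics may still change; the byte values are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

// Force a spill once the in-memory shuffle data reaches a byte-size threshold,
// independently of how many records it contains (proposed, WIP config keys).
val spark = SparkSession.builder()
  .appName("size-based-force-spill")
  // Map side: spill the ShuffleExternalSorter buffer at ~2 GiB.
  .config("spark.shuffle.spill.map.maxRecordsSizeForSpillThreshold",
    (2L * 1024 * 1024 * 1024).toString)
  // Reduce side: spill ExternalAppendOnlyMap / ExternalSorter at ~1 GiB.
  .config("spark.shuffle.spill.reduce.maxRecordsSizeForSpillThreshold",
    (1L * 1024 * 1024 * 1024).toString)
  .getOrCreate()
```

   Expressing the threshold in bytes decouples the spill decision from per-record size, so one setting can be shared by stages whose records differ widely in size.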
   
   
   ## How was this patch tested?
   
   - internal testing
   - TODO: extend existing unit tests
   
   
