Github user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19717#discussion_r155853090
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
    @@ -668,7 +668,9 @@ private[spark] object SparkConf extends Logging {
         MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM.key -> Seq(
           AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3")),
         LISTENER_BUS_EVENT_QUEUE_CAPACITY.key -> Seq(
     -      AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", "2.3"))
     +      AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", "2.3")),
     +    "spark.driver.memoryOverhead" -> Seq(
    --- End diff ---
    
    Yes, we do need this. `spark.yarn.executor.memoryOverhead` and `spark.kubernetes.executor.memoryOverhead` were combined into `spark.executor.memoryOverhead`, so the deprecated keys need `AlternateConfig` entries here to keep resolving.
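
    For illustration, here is a minimal, self-contained sketch of the alternate-config fallback pattern the diff extends. The names `AlternateConfig`, `configsWithAlternatives`, and `resolve` mirror `SparkConf`, but this simplified lookup is an assumption for demonstration, not Spark's actual implementation:

    ```scala
    // Sketch of the deprecated-key fallback pattern; a simplified assumption,
    // not the real SparkConf code.
    object AlternateConfigSketch {
      // A deprecated key plus the Spark version in which it was deprecated.
      final case class AlternateConfig(key: String, version: String)

      // New canonical key -> deprecated keys that should still be honored.
      val configsWithAlternatives: Map[String, Seq[AlternateConfig]] = Map(
        "spark.executor.memoryOverhead" -> Seq(
          AlternateConfig("spark.yarn.executor.memoryOverhead", "2.3"),
          AlternateConfig("spark.kubernetes.executor.memoryOverhead", "2.3")),
        "spark.driver.memoryOverhead" -> Seq(
          AlternateConfig("spark.yarn.driver.memoryOverhead", "2.3")))

      // Look up `key`, falling back to the first deprecated alternate that
      // the user has set.
      def resolve(settings: Map[String, String], key: String): Option[String] =
        settings.get(key).orElse {
          configsWithAlternatives.getOrElse(key, Seq.empty)
            .flatMap(alt => settings.get(alt.key))
            .headOption
        }

      def main(args: Array[String]): Unit = {
        val userConf = Map("spark.yarn.executor.memoryOverhead" -> "512m")
        // Prints Some(512m): the deprecated YARN key still satisfies the
        // new combined key.
        println(resolve(userConf, "spark.executor.memoryOverhead"))
      }
    }
    ```

    The design point is that users upgrading from YARN- or Kubernetes-specific keys do not silently lose their settings; the old keys are translated to the new combined key and can emit a deprecation warning naming the version.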

