mridulm commented on code in PR #53190:
URL: https://github.com/apache/spark/pull/53190#discussion_r2594705955


##########
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##########
@@ -445,6 +445,34 @@ package object config {
         "Ensure that memory overhead is a double greater than 0")
       .createWithDefault(0.1)
 
+  private[spark] val EXECUTOR_BURSTY_MEMORY_OVERHEAD_ENABLED =
+    ConfigBuilder("spark.executor.memoryOverheadBursty.enabled")
+      .doc("Whether to enable bursty memory overhead")
+      .version("4.2.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[spark] val EXECUTOR_BURSTY_MEMORY_OVERHEAD_FACTOR =
+    ConfigBuilder("spark.executor.memoryOverheadBurstyFactor")
+      .doc("The bursty control factor controlling the size of the memory overhead space" +
+        " shared with other processes:" +
+        " newMemoryOverhead = oldMemoryOverhead - MIN((onHeap + memoryOverhead) *" +
+        " (this value - 1), oldMemoryOverhead)")
+      .version("4.2.0")
+      .doubleConf
+      .checkValue((v: Double) => v >= 1.0,
+        "The value of the bursty control factor has to be no less than 1")
+      .createWithDefault(1.2)
+
+  private[spark] val EXECUTOR_BURSTY_MEMORY_OVERHEAD = ConfigBuilder(
+    "spark.executor.burstyMemoryOverhead")
+    .doc("The adjusted amount of memoryOverhead to be allocated per executor, in MiB unless" +
+      " otherwise specified. The adjustment happens if" +
+      s" ${EXECUTOR_BURSTY_MEMORY_OVERHEAD_ENABLED.key} is enabled. This parameter exists" +
+      " only for UI display; it has no effect when set directly by the user.")
+    .version("4.2.0")
+    .bytesConf(ByteUnit.MiB)
+    .createOptional
+

Review Comment:
   Isn't this specific to Kubernetes? Or is it usable in other cases?
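   
   For context, a quick worked example of the adjustment formula in the doc string above (the values are illustrative, not defaults):
   
   ```scala
   // newMemoryOverhead = oldMemoryOverhead - MIN((onHeap + memoryOverhead) * (factor - 1),
   //                                             oldMemoryOverhead)
   val onHeapMiB = 4096L     // executor heap
   val overheadMiB = 410L    // configured memory overhead
   val burstyFactor = 1.05   // spark.executor.memoryOverheadBurstyFactor
   
   val sharedMiB = math.min(((onHeapMiB + overheadMiB) * (burstyFactor - 1)).toLong, overheadMiB)
   val newOverheadMiB = overheadMiB - sharedMiB
   // sharedMiB = min((4096 + 410) * 0.05, 410) = min(225, 410) = 225 MiB
   // newOverheadMiB = 410 - 225 = 185 MiB
   // Note: with the default factor of 1.2 the shared portion would exceed overheadMiB,
   // so the MIN clamp would drive newOverheadMiB down to 0.
   ```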



##########
core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala:
##########
@@ -48,7 +51,7 @@ import org.apache.spark.util.Utils
 @Evolving
 @Since("3.1.0")
 class ResourceProfile(
-    val executorResources: Map[String, ExecutorResourceRequest],
+    var executorResources: Map[String, ExecutorResourceRequest],

Review Comment:
   `ResourceProfile` is expected to be immutable, and it is shared across stages, the UI, and past events already emitted to the event log - we cannot mutate it.
   
   Actually, it is not clear to me why this needs to be updated at all - we can simply add request vs limit as new resources in `executorResources`, and adapt resource profile initialization as well as the pod submission code to honor them.
   
   Am I missing something here @CodingCat?
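   
   A minimal sketch of that alternative, with a hypothetical resource name and helper (`"memoryOverheadBursty"` / `withBurstyOverhead` are illustrative, not existing Spark identifiers):
   
   ```scala
   import org.apache.spark.resource.ExecutorResourceRequest
   
   // Record the adjusted overhead as an extra entry in executorResources,
   // leaving the ResourceProfile itself immutable; UI and pod-submission code
   // can look the entry up by name.
   def withBurstyOverhead(
       resources: Map[String, ExecutorResourceRequest],
       adjustedOverheadMiB: Long): Map[String, ExecutorResourceRequest] = {
     resources + ("memoryOverheadBursty" ->
       new ExecutorResourceRequest("memoryOverheadBursty", adjustedOverheadMiB))
   }
   ```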



##########
core/src/main/scala/org/apache/spark/SparkContext.scala:
##########
@@ -238,7 +238,7 @@ class SparkContext(config: SparkConf) extends Logging {
   private var _files: Seq[String] = _
   private var _archives: Seq[String] = _
   private var _shutdownHookRef: AnyRef = _
-  private var _statusStore: AppStatusStore = _
+  private[spark] var _statusStore: AppStatusStore = _

Review Comment:
   Expose a `def` instead - we should not need to modify this outside `SparkContext`.
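   
   A minimal sketch of that suggestion, assuming the backing field declaration otherwise stays as it is today:
   
   ```scala
   // Keep the backing var private and expose a read-only accessor instead of
   // widening the field itself to private[spark].
   private var _statusStore: AppStatusStore = _
   private[spark] def statusStore: AppStatusStore = _statusStore
   ```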



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

