CodingCat commented on code in PR #53190:
URL: https://github.com/apache/spark/pull/53190#discussion_r2601122609


##########
core/src/main/scala/org/apache/spark/status/api/v1/api.scala:
##########
@@ -501,6 +501,30 @@ class ApplicationEnvironmentInfo private[spark] (
     val classpathEntries: collection.Seq[(String, String)],
     val resourceProfiles: collection.Seq[ResourceProfileInfo])
 
+private[spark] object ApplicationEnvironmentInfo {
+  def create(appEnv: ApplicationEnvironmentInfo,
+             newSparkProperties: Map[String, String] = Map(),

Review Comment:
   @dongjoon-hyun I have updated the coding style manually, as I found that running /dev/scalafmt would bring in changes to many other files.
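   For reference, a minimal sketch of the formatting applied, following the 4-space parameter indentation Spark uses for declarations that do not fit on one line; the return type and the `???` body are my assumptions, since the diff excerpt above elides them:
   
   ```scala
   // Sketch only: the return type and body are assumptions; the excerpt above elides them.
   private[spark] object ApplicationEnvironmentInfo {
     def create(
         appEnv: ApplicationEnvironmentInfo,
         newSparkProperties: Map[String, String] = Map()): ApplicationEnvironmentInfo = {
       ??? // body elided in the diff excerpt
     }
   }
   ```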



##########
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##########
@@ -445,6 +445,34 @@ package object config {
         "Ensure that memory overhead is a double greater than 0")
       .createWithDefault(0.1)
 
+  private[spark] val EXECUTOR_BURSTY_MEMORY_OVERHEAD_ENABLED =
+    ConfigBuilder("spark.executor.memoryOverheadBursty.enabled")
+      .doc("Whether to enable memory overhead bursty")
+      .version("4.2.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[spark] val EXECUTOR_BURSTY_MEMORY_OVERHEAD_FACTOR =
+    ConfigBuilder("spark.executor.memoryOverheadBurstyFactor")
+      .doc("the bursty control factor controlling the size of memory overhead 
space shared with" +
+        s" other processes, newMemoryOverhead=oldMemoryOverhead-MIN((onheap + 
memoryoverhead) *" +
+        s" (this value - 1), oldMemoryOverhead)")
+      .version("4.2.0")
+      .doubleConf
+      .checkValue((v: Double) => v >= 1.0,

Review Comment:
   updated
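   
   For clarity, a worked example of the doc-string formula (a sketch only; the object name, helper name, and MiB units are illustrative assumptions, not part of this PR):
   
   ```scala
   // Hypothetical helper illustrating the formula:
   // newMemoryOverhead = oldMemoryOverhead - MIN((onheap + memoryOverhead) * (factor - 1), oldMemoryOverhead)
   object BurstyOverheadExample {
     def newMemoryOverhead(onHeapMiB: Long, overheadMiB: Long, factor: Double): Long = {
       require(factor >= 1.0, "factor must be >= 1.0")
       // Overhead space that may be shared with other processes on the host.
       val shared = ((onHeapMiB + overheadMiB) * (factor - 1)).toLong
       // Cap the reduction so the resulting overhead never goes negative.
       overheadMiB - math.min(shared, overheadMiB)
     }
   
     def main(args: Array[String]): Unit = {
       // 4096 MiB heap, 512 MiB overhead, factor 1.05:
       // shared = (4096 + 512) * 0.05 = 230 MiB, so the overhead request shrinks to 282 MiB.
       println(newMemoryOverhead(4096, 512, 1.05)) // 282
     }
   }
   ```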



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

