Kimahriman commented on a change in pull request #35504:
URL: https://github.com/apache/spark/pull/35504#discussion_r810494736
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -105,6 +105,17 @@ package object config {
.bytesConf(ByteUnit.MiB)
.createOptional
+  private[spark] val DRIVER_MEMORY_OVERHEAD_FACTOR =
+    ConfigBuilder("spark.driver.memoryOverheadFactor")
+      .doc("This sets the Memory Overhead Factor on the driver that will allocate memory to " +
+        "non-JVM memory, which includes off-heap memory allocations, non-JVM tasks, various " +
+        "systems processes, and tmpfs-based local directories.")
+      .version("3.3.0")
+      .doubleConf
+      .checkValue(factor => factor > 0,
Review comment:
https://github.com/apache/spark/pull/35504#discussion_r808320307
I think that's a fraction of existing memory, whereas this is an amount of
additional memory to add, so there's no reason to set an upper bound on it?
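
To illustrate why a factor above 1 is still meaningful, here is a minimal Scala sketch of how an overhead factor is typically applied. The formula and the 384 MiB floor are assumptions based on Spark's documented memory-overhead behavior, not taken from this diff; `OverheadSketch`, `overheadMiB`, and `MinOverheadMiB` are hypothetical names.

```scala
// Sketch: how a memory overhead factor is commonly applied (assumed formula:
// overhead = max(driverMemory * factor, 384 MiB), mirroring Spark's documented default).
object OverheadSketch {
  val MinOverheadMiB = 384L // hypothetical floor for illustration

  def overheadMiB(driverMemoryMiB: Long, factor: Double): Long =
    math.max((driverMemoryMiB * factor).toLong, MinOverheadMiB)

  def main(args: Array[String]): Unit = {
    // The factor scales *additional* memory on top of the heap, so a value > 1
    // simply requests more overhead than heap; only factor > 0 needs enforcing.
    println(overheadMiB(4096, 0.10)) // 409
    println(overheadMiB(4096, 1.50)) // 6144
  }
}
```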