wForget commented on code in PR #1379:
URL: https://github.com/apache/datafusion-comet/pull/1379#discussion_r1953731417


##########
spark/src/main/scala/org/apache/comet/CometSparkSessionExtensions.scala:
##########
@@ -1354,9 +1354,14 @@ object CometSparkSessionExtensions extends Logging {
 
   /** Calculates required memory overhead in MB per executor process for Comet. */
   def getCometMemoryOverheadInMiB(sparkConf: SparkConf): Long = {
-    // `spark.executor.memory` default value is 1g
-    val executorMemoryMiB = ConfigHelpers
-      .byteFromString(sparkConf.get("spark.executor.memory", "1024MB"), ByteUnit.MiB)
+    val executorMemoryMiB = if (cometUnifiedMemoryManagerEnabled(sparkConf)) {

Review Comment:
   I am still confused here. For the unified memory manager, do we still need to multiply off-heap memory by a factor to calculate the Comet overhead memory? Or should we use the executor off-heap memory directly as the Comet overhead memory?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

