tgravescs commented on a change in pull request #28972:
URL: https://github.com/apache/spark/pull/28972#discussion_r451547527



##########
File path: 
core/src/test/scala/org/apache/spark/resource/ResourceProfileSuite.scala
##########
@@ -256,4 +266,29 @@ class ResourceProfileSuite extends SparkFunSuite {
     }.getMessage()
     assert(taskError.contains("The resource amount 0.7 must be either <= 0.5, 
or a whole number."))
   }
+
+  test("ResourceProfile has correct custom executor resources") {
+    val rprof = new ResourceProfileBuilder()
+    val eReq = new ExecutorResourceRequests()
+      .cores(2).memory("4096")
+      .memoryOverhead("2048").pysparkMemory("1024").offHeapMemory("3072")
+      .resource("gpu", 2)
+    rprof.require(eReq)
+
+    assert(ResourceProfile.getCustomExecutorResources(rprof.build).size === 1,
+      "Executor resources should have 1 custom resource")

Review comment:
       Can you add an assert that 
ResourceProfile.allSupportedExecutorResources.size == 5, with a comment 
saying there are currently 5 resources and that the test needs updating if 
more are added?
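
   For illustration only, the suggested guard could look roughly like the 
sketch below. Spark's ResourceProfile is stubbed here so the snippet is 
self-contained; the five resource names are assumptions for the example, 
and the real test would reference the actual object.

```scala
object AllSupportedResourcesCheck {
  // Stub standing in for ResourceProfile.allSupportedExecutorResources;
  // the five names below are illustrative assumptions, not Spark's
  // actual values.
  val allSupportedExecutorResources: Seq[String] =
    Seq("cores", "memory", "memoryOverhead", "pyspark.memory", "offHeap")

  def main(args: Array[String]): Unit = {
    // There are currently 5 supported executor resources; update this
    // test if more are added.
    assert(allSupportedExecutorResources.size == 5)
  }
}
```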

##########
File path: 
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
##########
@@ -196,4 +197,20 @@ object YarnSparkHadoopUtil {
       0
     }
   }
+
+  /**
+   * Get offHeap memory size from [[ExecutorResourceRequest]]
+   * return 0 if MEMORY_OFFHEAP_ENABLED is false.
+   */
+  def executorOffHeapMemorySizeAsMb(sparkConf: SparkConf,

Review comment:
       These functions are almost identical. Let's create a helper function 
checkOffHeapEnabled that has the core logic and just pass in the size we 
want:
   
   def executorOffHeapMemorySizeAsMb(sparkConf: SparkConf,  execRequest: 
ExecutorResourceRequest): Long = {
      checkOffHeapEnabled(sparkConf, execRequest.amount).toLong
   }
   
   The version above would call checkOffHeapEnabled(sparkConf, 
Utils.memoryStringToMb(sparkConf.get(MEMORY_OFFHEAP_SIZE).toString)).
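
   A self-contained sketch of what that shared helper might look like; 
SparkConf is stubbed so the example runs standalone, and the config key 
string stands in for Spark's MEMORY_OFFHEAP_ENABLED constant:

```scala
// Minimal stub of SparkConf so this sketch is self-contained; the real
// helper would take an org.apache.spark.SparkConf.
class SparkConfStub(settings: Map[String, String]) {
  def getBoolean(key: String, default: Boolean): Boolean =
    settings.get(key).map(_.toBoolean).getOrElse(default)
}

object OffHeapHelper {
  // Core logic shared by both call sites: return the requested off-heap
  // size only when spark.memory.offHeap.enabled is set, otherwise 0.
  def checkOffHeapEnabled(conf: SparkConfStub, offHeapSizeMb: Long): Long =
    if (conf.getBoolean("spark.memory.offHeap.enabled", false)) offHeapSizeMb
    else 0L
}
```

   Each executorOffHeapMemorySizeAsMb overload would then just forward its 
own size (execRequest.amount, or the parsed MEMORY_OFFHEAP_SIZE) into this 
one helper.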

##########
File path: 
core/src/main/scala/org/apache/spark/resource/ExecutorResourceRequests.scala
##########
@@ -54,6 +54,20 @@ class ExecutorResourceRequests() extends Serializable {
     this
   }
 
+  /**
+   * Specify off heap memory. The value specified will be converted to MiB.
+   * This value only take effect when MEMORY_OFFHEAP_ENABLED is true

Review comment:
       nit: add a period (.) to the end of the sentence.

##########
File path: 
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
##########
@@ -196,4 +197,20 @@ object YarnSparkHadoopUtil {
       0
     }
   }
+
+  /**
+   * Get offHeap memory size from [[ExecutorResourceRequest]]
+   * return 0 if MEMORY_OFFHEAP_ENABLED is false.
+   */
+  def executorOffHeapMemorySizeAsMb(sparkConf: SparkConf,
+                                    execRequest: ExecutorResourceRequest): 
Long = {

Review comment:
       Indentation should be 4 spaces from the beginning of the def.

##########
File path: python/pyspark/resource/requests.py
##########
@@ -139,6 +140,14 @@ def pysparkMemory(self, amount):
                 ExecutorResourceRequest(self._PYSPARK_MEM, 
_parse_memory(amount))
         return self
 
+    def offheapMemory(self, amount):
+        if self._java_executor_resource_requests is not None:

Review comment:
       Could you update the test in test_resources.py to call the new API?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
