mridulm commented on a change in pull request #26682: [SPARK-29306][CORE] Stage Level Sched: Executors need to track what ResourceProfile they are created with
URL: https://github.com/apache/spark/pull/26682#discussion_r371507008
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/resource/ResourceUtils.scala
 ##########
 @@ -124,6 +124,35 @@ private[spark] object ResourceUtils extends Logging {
       .filter(_.amount > 0)
   }
 
+  // Used to take a fractional amount from a task resource requirement and split
+  // it into a whole integer amount and the number of parts expected. For instance,
+  // if the amount is 0.5, then we get (1, 2) back out.
+  // Returns tuple of (amount, numParts)
+  def calculateAmountAndPartsForFraction(amount: Double): (Int, Int) = {
+    val parts = if (amount <= 0.5) {
+      Math.floor(1.0 / amount).toInt
+    } else if (amount % 1 != 0) {
+      throw new SparkException(
+        s"The resource amount ${amount} must be either <= 0.5, or a whole number.")
+    } else {
+      1
 
 Review comment:
   I am slightly confused about this.
   
   If the number of GPUs on an executor is 2 and spark.task.resource.gpu.amount=0.33, is that an invalid configuration? Or does it mean we run only 2 tasks on that node for that resource profile?
   
   If spark.task.resource.gpu.amount=1 and the executor has 2 GPUs, does it mean the task needs both GPUs? Or only 1?
   
   What does it mean to have spark.task.resource.gpu.amount > 1?
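   To make the questions concrete, here is a standalone sketch of the fraction-splitting logic under review. The `if/else` chain mirrors the diff above; the final tuple construction is an assumption on my part since the snippet is truncated before the return, and `SparkException` is swapped for `IllegalArgumentException` so this compiles without Spark on the classpath.

```scala
// Hypothetical standalone version of calculateAmountAndPartsForFraction.
object FractionDemo {
  // Splits a task resource amount into (integer amount per task,
  // number of tasks that can share one resource unit).
  def calculateAmountAndPartsForFraction(amount: Double): (Int, Int) = {
    val parts = if (amount <= 0.5) {
      // Fractional request: several tasks share one unit of the resource.
      math.floor(1.0 / amount).toInt
    } else if (amount % 1 != 0) {
      // Amounts in (0.5, 1) or non-integral amounts > 1 are rejected.
      throw new IllegalArgumentException(
        s"The resource amount $amount must be either <= 0.5, or a whole number.")
    } else {
      1
    }
    // Assumed return shape, matching the "(amount, numParts)" comment.
    (math.ceil(amount).toInt, parts)
  }

  def main(args: Array[String]): Unit = {
    println(calculateAmountAndPartsForFraction(0.5))  // (1, 2): 2 tasks share a GPU
    println(calculateAmountAndPartsForFraction(0.33)) // (1, 3): floor(1/0.33) = 3
    println(calculateAmountAndPartsForFraction(2.0))  // (2, 1): a task needs 2 GPUs
  }
}
```

   Under this reading, amount=0.33 on a 2-GPU executor would allow 3 tasks per GPU (6 total), amount=1 means each task takes exactly one GPU, and amount > 1 means each task requires multiple whole GPUs.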
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
