tgravescs commented on a change in pull request #27583: [SPARK-29149][YARN] Update YARN cluster manager For Stage Level Scheduling
URL: https://github.com/apache/spark/pull/27583#discussion_r383405902
 
 

 ##########
 File path: 
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ResourceRequestHelper.scala
 ##########
 @@ -227,6 +227,17 @@ private object ResourceRequestHelper extends Logging {
     resourceInformation
   }
 
+  def isYarnCustomResourcesNonEmpty(resource: Resource): Boolean = {
+    try {
+      // Use reflection as this uses APIs only available in Hadoop 3
 
 Review comment:
   
Hadoop 3.1.1 has full GPU support, and some of it was backported to Hadoop 2.10 
as well. I've tested the normal GPU scheduling feature with both of those as 
well as the older Hadoop 2.7 release. With older versions you can still ask 
Spark for GPUs; if YARN doesn't support them, Spark doesn't ask YARN for them 
but still handles them internally. If you are running on nodes with GPUs, Spark 
will still use your discovery script to find them and assign them out. If the 
discovery script doesn't find a GPU and you asked for one, then it fails. 
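To make that behavior concrete, here is a hedged sketch of how a user would request executor GPUs; the config keys follow Spark 3's resource-request scheme, and the discovery script path is a placeholder:

```scala
import org.apache.spark.SparkConf

// Sketch: request one GPU per executor. On a YARN that lacks the GPU
// resource type, Spark skips the YARN-side request but still runs the
// discovery script on each executor to locate and assign GPUs.
// The script path below is illustrative, not from the PR.
val conf = new SparkConf()
  .set("spark.executor.resource.gpu.amount", "1")
  .set("spark.executor.resource.gpu.discoveryScript",
       "/opt/spark/scripts/getGpus.sh")
```

If the script finds no GPU on a node where one was requested, the executor fails as described above.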
   
  This was actually a more recent change that I put in for GPU scheduling, as 
more and more people were asking for support on older versions of Hadoop 
because they don't plan on upgrading to Hadoop 3 for a while. 
   
    I do need to test all that again with the stage level scheduling.
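The reflection guard in the diff above can be sketched roughly as follows; this is a minimal, self-contained illustration of the pattern (probe for an API at runtime so the same jar loads on both Hadoop 2 and Hadoop 3), not the exact PR code. The `Resource.getResources` probe assumes that method is Hadoop 3-only:

```scala
import scala.util.Try

// Hypothetical helper: check via reflection whether a class exposes a
// given zero-argument method, without linking against it at compile time.
def methodExists(className: String, methodName: String): Boolean = Try {
  Class.forName(className).getMethod(methodName)
}.isSuccess

// Resource.getResources() exists only in Hadoop 3's YARN API, so this is
// false on a Hadoop 2 classpath (or when Hadoop is absent entirely).
val hadoop3ResourceApi =
  methodExists("org.apache.hadoop.yarn.api.records.Resource", "getResources")
```

The `Try` wrapper swallows both `ClassNotFoundException` and `NoSuchMethodException`, which is what lets the caller degrade gracefully on older Hadoop versions.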

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
