tgravescs commented on a change in pull request #29906:
URL: https://github.com/apache/spark/pull/29906#discussion_r497506544



##########
File path: 
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala
##########
@@ -523,9 +523,9 @@ class YarnAllocatorSuite extends SparkFunSuite with Matchers with BeforeAndAfter
     handler.getNumUnexpectedContainerRelease should be (2)
   }
 
-  test("blacklisted nodes reflected in amClient requests") {
-    // Internally we track the set of blacklisted nodes, but yarn wants us to send *changes*
-    // to the blacklist.  This makes sure we are sending the right updates.
+  test("excludeOnFailure nodes reflected in amClient requests") {
+    // Internally we track the set of excluded nodes, but yarn wants us to send *changes*
+    // to it.  This makes sure we are sending the right updates.

Review comment:
       In this case this is actually what I wanted. We internally track which nodes are excluded. That set comes from the excludeOnFailure list from the scheduler, from the spark.yarn.exclude.nodes config, and from the allocator logic for excluding nodes that have failures. Yes, we use the YARN blacklist API to actually specify it to YARN, but I don't see any reason our internal code has to keep using the term blacklist except where we call into their API. I could add a comment in the Allocator itself to clarify that until they can update their API.
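
       The pattern described above — tracking the full excluded-node set internally while sending only the *changes* to YARN — can be sketched roughly like this. This is a hedged illustration, not the actual YarnAllocator code; the `ExcludeNodeDelta` object and its method names are hypothetical, and the real allocator passes the resulting additions/removals to the YARN AMRMClient:

```scala
// Hypothetical sketch (not the real YarnAllocator): Spark keeps the full
// set of excluded nodes, but YARN's client API wants only the additions
// and removals since the last sync.
object ExcludeNodeDelta {
  /** Given the set last sent to YARN and the current excluded set,
   *  compute (additions, removals) to report as changes. */
  def delta(previouslySynced: Set[String],
            currentExcluded: Set[String]): (Set[String], Set[String]) = {
    val additions = currentExcluded -- previouslySynced  // newly excluded
    val removals  = previouslySynced -- currentExcluded  // no longer excluded
    (additions, removals)
  }
}
```

       After computing the delta, the allocator would forward the additions and removals to the YARN API and then remember `currentExcluded` as the new synced set, so the next call reports only fresh changes.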
   
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
