[ https://issues.apache.org/jira/browse/SPARK-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432495#comment-16432495 ]
Attila Zsolt Piros commented on SPARK-16630:
--------------------------------------------

Let me illustrate my problem with an example:
- the limit for blacklisted nodes is configured to 2
- we have one node blacklisted close to the YARN allocator ("host1" -> expiryTime1); this is the new code I am working on
- the scheduler requests new executors along with the task-level blacklisted nodes "host2" and "host3" (org.apache.spark.deploy.yarn.YarnAllocator#requestTotalExecutorsWithPreferredLocalities)

So I have to choose 2 nodes to communicate to YARN. My idea is to pass expiryTime2 and expiryTime3 to the YarnAllocator so it can choose the 2 most relevant nodes (the ones that expire later are more relevant). For this, in the case class RequestExecutors the type of the nodeBlacklist field is changed from Set[String] to Map[String, Long].

> Blacklist a node if executors won't launch on it.
> -------------------------------------------------
>
>                 Key: SPARK-16630
>                 URL: https://issues.apache.org/jira/browse/SPARK-16630
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.6.2
>            Reporter: Thomas Graves
>            Priority: Major
>
> On YARN, it's possible that a node is broken or misconfigured such that a
> container won't launch on it, for instance if the Spark external shuffle
> handler didn't get loaded on it, or maybe it's just some other hardware or
> Hadoop configuration issue.
> It would be nice if we could recognize this happening and stop trying to
> launch executors on that node, since otherwise we could end up hitting our
> max number of executor failures and then killing the job.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
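The selection the comment describes, merging the allocator-level blacklist with the scheduler's task-level blacklist and keeping only the configured number of nodes with the latest expiry, can be sketched as follows. This is a hypothetical illustration, assuming both blacklists arrive as Map[String, Long] (node -> expiry time); the object and method names (BlacklistSelection, selectNodesToBlacklist) are made up for the example and are not the actual YarnAllocator API.

```scala
// Hypothetical sketch, not actual Spark code: illustrates picking the
// `limit` most relevant blacklisted nodes (latest expiry) to report to YARN.
object BlacklistSelection {

  def selectNodesToBlacklist(
      allocatorBlacklist: Map[String, Long],  // e.g. "host1" -> expiryTime1
      schedulerBlacklist: Map[String, Long],  // task-level: "host2", "host3"
      limit: Int): Set[String] = {
    // Merge the two blacklists; on a conflict the scheduler's expiry wins.
    val merged = allocatorBlacklist ++ schedulerBlacklist
    merged.toSeq
      .sortBy { case (_, expiry) => -expiry } // latest expiry first
      .take(limit)                            // keep only the configured limit
      .map { case (node, _) => node }
      .toSet
  }

  def main(args: Array[String]): Unit = {
    val allocator = Map("host1" -> 1000L)
    val scheduler = Map("host2" -> 3000L, "host3" -> 2000L)
    // With a limit of 2, host1 (earliest expiry) is dropped.
    println(selectNodesToBlacklist(allocator, scheduler, limit = 2))
  }
}
```

With the example from the comment (limit = 2, "host1" blacklisted at the allocator, "host2" and "host3" at the task level), the two nodes with the latest expiry times are chosen and "host1" is left out, which is exactly why the expiry times need to travel with the node names in RequestExecutors.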