[
https://issues.apache.org/jira/browse/SPARK-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432165#comment-16432165
]
Attila Zsolt Piros commented on SPARK-16630:
--------------------------------------------
I have a question regarding limiting the number of blacklisted nodes according to
the cluster size.
With this change there will be two sources of nodes to be blacklisted:
- one list comes from the scheduler (the existing node-level blacklisting)
- the other is computed here, close to the YARN allocator (stored along with the
expiry times)
I think it makes sense to apply the limit to the complete list (the union) of
blacklisted nodes, am I right?
If the limit applies to the complete list then, when choosing the subset, I think
the newly blacklisted nodes are more up-to-date than the earlier blacklisted
ones.
So I would pass the expiry times from the scheduler to the YARN allocator to pick
the subset of blacklisted nodes to be communicated to YARN (a rough sketch of
what I mean follows). What is your opinion?
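Just to illustrate the idea, here is a minimal sketch (hypothetical names, not the
actual YarnAllocator API) of merging the two blacklists, keeping the later expiry
when a node appears in both, and capping the result at a fraction of the cluster
size while preferring the most recently blacklisted nodes:

object BlacklistMerge {

  /**
   * @param schedulerBlacklist node -> expiry time coming from the task scheduler
   * @param allocatorBlacklist node -> expiry time tracked next to the YARN allocator
   * @param numClusterNodes    current cluster size, used to cap the blacklist
   * @param maxFraction        maximum fraction of the cluster we allow to blacklist
   * @return the subset of blacklisted nodes to communicate to YARN
   */
  def nodesToBlacklist(
      schedulerBlacklist: Map[String, Long],
      allocatorBlacklist: Map[String, Long],
      numClusterNodes: Int,
      maxFraction: Double = 0.5): Set[String] = {
    // Union of both sources; if a node appears in both, keep the later expiry.
    val combined = (schedulerBlacklist.toSeq ++ allocatorBlacklist.toSeq)
      .groupBy { case (node, _) => node }
      .map { case (node, entries) => node -> entries.map(_._2).max }

    val limit = (numClusterNodes * maxFraction).toInt
    if (combined.size <= limit) {
      combined.keySet
    } else {
      // Prefer the most recently blacklisted nodes, i.e. the latest expiry times.
      combined.toSeq
        .sortBy { case (_, expiry) => -expiry }
        .take(limit)
        .map { case (node, _) => node }
        .toSet
    }
  }
}

The real code would of course live in the allocator and get the cluster size from
the resource manager; this is only meant to show the ordering and truncation I
have in mind.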
> Blacklist a node if executors won't launch on it.
> -------------------------------------------------
>
> Key: SPARK-16630
> URL: https://issues.apache.org/jira/browse/SPARK-16630
> Project: Spark
> Issue Type: Improvement
> Components: YARN
> Affects Versions: 1.6.2
> Reporter: Thomas Graves
> Priority: Major
>
> On YARN, it's possible that a node is messed up or misconfigured such that a
> container won't launch on it. For instance, the Spark external shuffle handler
> didn't get loaded on it, or maybe it's just some other hardware or Hadoop
> configuration issue.
> It would be nice if we could recognize this happening and stop trying to launch
> executors on it, since that could end up causing us to hit our max number of
> executor failures and kill the job.