[https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739030#comment-14739030]
Sean Owen commented on SPARK-8119:
----------------------------------
No, it's marked as Fixed for 1.5.0, which remains true. I did a bulk change of
Target=1.5.0 to Target=1.5.1 that changed this one too, but then I noticed
that didn't make sense; the only remaining work is to backport it into 1.4.2,
so I restored the original value.
> HeartbeatReceiver should not adjust application executor resources
> ------------------------------------------------------------------
>
> Key: SPARK-8119
> URL: https://issues.apache.org/jira/browse/SPARK-8119
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.4.0
> Reporter: SaintBacchus
> Assignee: Andrew Or
> Priority: Critical
> Labels: backport-needed
> Fix For: 1.5.0
>
>
> Dynamic allocation lowers the target number of executors when it wants to
> kill some of them.
> But Spark also adjusts this target even when dynamic allocation is disabled.
> This causes the following problem: if an executor dies, Spark will never
> launch a replacement for it.
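The bookkeeping bug described above can be sketched in Scala with a toy model (class and method names are illustrative, not Spark's actual internals): killing an executor through the same code path dynamic allocation uses also lowers the desired total, so the cluster manager never replaces the lost executor.

```scala
// Toy model of the target-executor bookkeeping bug. Names are
// hypothetical; this is not Spark's real scheduler backend.
class ToyBackend(var targetNumExecutors: Int) {
  var live: Set[String] = (1 to targetNumExecutors).map(i => s"exec-$i").toSet

  // Mirrors how the kill path permanently adjusts the requested total.
  def killExecutor(id: String): Unit = {
    targetNumExecutors -= 1
    live -= id
  }

  // The cluster manager only launches executors up to the current target.
  def replenish(): Unit = {
    var next = 0
    while (live.size < targetNumExecutors) {
      next += 1
      live += s"exec-replacement-$next"
    }
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val backend = new ToyBackend(targetNumExecutors = 3)
    backend.killExecutor("exec-2") // e.g. a heartbeat timeout handled this way
    backend.replenish()
    // Target silently dropped from 3 to 2, so nothing is relaunched.
    println(s"live=${backend.live.size}, target=${backend.targetNumExecutors}")
  }
}
```

After the kill, `replenish` sees a target of 2 with 2 live executors and does nothing, which is the "no replacement is ever launched" symptom.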
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it
> wants after calling sc.killExecutor. Even if dynamic allocation is not
> enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in
> HeartbeatReceiver. The intention of the method is to permanently adjust the
> number of executors the application will get. In HeartbeatReceiver, however,
> this is used as a best-effort mechanism to ensure that the timed-out
> executor is dead.
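The distinction described above can be sketched by adding a hypothetical `replace` flag to the kill call (the name and signature here are illustrative, not Spark's exact API): a best-effort kill for a timed-out executor leaves the application's requested total untouched, so a replacement is launched, while a deliberate downscale still lowers the target.

```scala
// Hedged sketch of the fix direction: only a deliberate downscale
// (dynamic allocation) lowers the target; a heartbeat-timeout kill
// passes replace = true and leaves the target alone.
class SketchBackend(var targetNumExecutors: Int) {
  var live: Set[String] = (1 to targetNumExecutors).map(i => s"exec-$i").toSet
  private var nextId = targetNumExecutors

  def killExecutors(ids: Seq[String], replace: Boolean): Unit = {
    if (!replace) targetNumExecutors -= ids.size
    live --= ids
  }

  // Launch replacements back up to the (unchanged) target.
  def replenish(): Unit =
    while (live.size < targetNumExecutors) {
      nextId += 1
      live += s"exec-$nextId"
    }
}

object SketchDemo {
  def main(args: Array[String]): Unit = {
    val backend = new SketchBackend(targetNumExecutors = 3)
    backend.killExecutors(Seq("exec-1"), replace = true) // timed-out executor
    backend.replenish()
    // Target stays at 3, so a replacement executor comes up.
    println(s"live=${backend.live.size}, target=${backend.targetNumExecutors}")
  }
}
```

With `replace = true` the target remains 3 and `replenish` starts a new executor; with `replace = false` the target drops permanently, which is the dynamic-allocation semantics.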
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)