[
https://issues.apache.org/jira/browse/SPARK-32643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17179279#comment-17179279
]
Apache Spark commented on SPARK-32643:
--------------------------------------
User 'agrawaldevesh' has created a pull request for this issue:
https://github.com/apache/spark/pull/29452
> [Cleanup] Consolidate state kept in ExecutorDecommissionInfo with
> TaskSetManager.tidToExecutorKillTimeMapping
> -------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-32643
> URL: https://issues.apache.org/jira/browse/SPARK-32643
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 3.1.0
> Reporter: Devesh Agrawal
> Priority: Minor
>
> The decommissioning state is a bit fragmented across two places in the
> TaskSchedulerImpl:
> *
> [https://github.com/apache/spark/pull/29014/|https://github.com/apache/spark/pull/29014/files]
> stores the incoming decommission info messages in
> _TaskSchedulerImpl.executorsPendingDecommission._
> *
> [https://github.com/apache/spark/pull/28619/|https://github.com/apache/spark/pull/28619/files]
> stores just the executor end time in the map
> _TaskSetManager.tidToExecutorKillTimeMapping_ (which in turn is contained in
> TaskSchedulerImpl).
> While the two pieces of state do not really overlap, keeping this state in
> two places is a code hygiene concern.
> With [https://github.com/apache/spark/pull/29422], TaskSchedulerImpl is
> emerging as the place where all decommissioning bookkeeping is kept within
> the driver. So consolidate the information in _tidToExecutorKillTimeMapping_
> into _ExecutorDecommissionInfo_ (a rough sketch follows below).
>
>
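A minimal Scala sketch of what the consolidation described above could look
like. The names below (ExecutorDecommissionInfoSketch, killTimeMs, and the
local variables) are simplified stand-ins invented for illustration, not
Spark's actual classes or API:

// Consolidated per-executor decommission record: the kill-time bookkeeping that
// TaskSetManager.tidToExecutorKillTimeMapping tracked separately is folded into
// the same record that carries the decommission message.
case class ExecutorDecommissionInfoSketch(
    message: String,
    // Expected time (epoch millis) at which the decommissioned executor will be
    // killed, if the cluster manager provided such a deadline.
    killTimeMs: Option[Long])

object DecommissionConsolidationSketch {
  def main(args: Array[String]): Unit = {
    // TaskSchedulerImpl-style single owner of decommission state, keyed by executor id.
    val executorsPendingDecommission =
      scala.collection.mutable.HashMap.empty[String, ExecutorDecommissionInfoSketch]

    executorsPendingDecommission("exec-1") =
      ExecutorDecommissionInfoSketch(
        "spot instance reclaimed",
        Some(System.currentTimeMillis() + 60 * 1000L))

    // A TaskSetManager could then look the kill time up by executor id instead of
    // maintaining its own task-id -> kill-time map.
    val killTime = executorsPendingDecommission.get("exec-1").flatMap(_.killTimeMs)
    println(s"exec-1 expected kill time: $killTime")
  }
}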
--
This message was sent by Atlassian Jira
(v8.3.4#803005)