Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/11205#issuecomment-186486879
Hi @andrewor14, thanks a lot for your comments.
The reason I introduced another data structure to track each executor's
stage and task counts was mentioned before; I'll paste it here again:
> Executors may never become idle. Currently we use the executor-to-tasks
mapping to identify the status of executors; when the maximum task failure
count is reached, some TaskEnd events may never be delivered, which leaves
the related executor marked as busy forever.
In my testing, the number of TaskEnd events delivered can fall short of the
expected count, which means the executor is never released. So, compared to
the old implementation, I changed the code to clear the related task counts
when the stage completes. That's why I introduced a more complicated data
structure.
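
To illustrate the bookkeeping I mean, here is a minimal Scala sketch (class
and method names such as `ExecutorTaskTracker` are hypothetical, not the
actual code in this PR): task counts are kept per executor and per stage, and
a stage-completion event clears that stage's counts even if some TaskEnd
events were lost.

```scala
import scala.collection.mutable

// Hypothetical sketch of per-executor, per-stage task bookkeeping.
class ExecutorTaskTracker {

  // executorId -> (stageId -> number of running tasks from that stage on the executor)
  private val executorStageTasks =
    mutable.HashMap.empty[String, mutable.HashMap[Int, Int]]

  def onTaskStart(executorId: String, stageId: Int): Unit = {
    val stageTasks =
      executorStageTasks.getOrElseUpdate(executorId, mutable.HashMap.empty[Int, Int])
    stageTasks(stageId) = stageTasks.getOrElse(stageId, 0) + 1
  }

  def onTaskEnd(executorId: String, stageId: Int): Unit = {
    executorStageTasks.get(executorId).foreach { stageTasks =>
      stageTasks.get(stageId).foreach { running =>
        if (running <= 1) stageTasks -= stageId
        else stageTasks(stageId) = running - 1
      }
    }
  }

  // The key difference from counting only TaskEnd events: when a stage
  // completes, drop its counts on every executor, so lost TaskEnd events
  // cannot pin an executor in the "busy" state forever.
  def onStageCompleted(stageId: Int): Unit = {
    executorStageTasks.values.foreach(_ -= stageId)
  }

  // An executor is considered idle when no active stage still has tasks on it.
  def isExecutorIdle(executorId: String): Boolean =
    executorStageTasks.get(executorId).forall(_.isEmpty)
}
```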