Github user GraceH commented on the pull request:
https://github.com/apache/spark/pull/7888#issuecomment-138573813
@andrewor14 I have pushed another proposal. Please let me know your
comments.
* The SparkContext allows the end user to set the `force` flag when calling
`killExecutor(s)`. Dynamic allocation always uses `force = false` to avoid
mistakenly killing an executor that is still busy.
* `killExecutors` logs which executors are busy and cannot be killed when
`force == false`.
* `killExecutors` returns an acknowledgement regardless of whether any
executor is actually killed.
* `onTaskStart` (i.e., `OnExecutorBusy`) rescues an executor from the
`pendingToRemove` list if it is busy but was added to that list by mistake.
* Add a HashMap over the `activeExecutors` that records the number of
running/launched tasks per executor. If the task count is greater than 0, the
executor is busy and `isExecutorBusy` returns true (see the sketch after this
list).
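
To make the above concrete, here is a minimal sketch of how the busy check, the `pendingToRemove` rescue, and the `force` control could fit together. The class and member names (`ExecutorTracker`, `taskCounts`, `onTaskEnd`) are illustrative only, not the actual code in this PR:

```scala
import scala.collection.mutable

// Illustrative sketch only; names and structure are assumptions,
// not the PR's actual classes or methods.
class ExecutorTracker {
  // executorId -> number of running/launched tasks
  private val taskCounts = mutable.HashMap.empty[String, Int]
  // executors marked for removal but not yet confirmed killed
  private val pendingToRemove = mutable.HashSet.empty[String]

  def isExecutorBusy(executorId: String): Boolean =
    taskCounts.getOrElse(executorId, 0) > 0

  def onTaskStart(executorId: String): Unit = {
    taskCounts(executorId) = taskCounts.getOrElse(executorId, 0) + 1
    // rescue the executor if it was queued for removal by mistake
    pendingToRemove -= executorId
  }

  def onTaskEnd(executorId: String): Unit = {
    taskCounts(executorId) = math.max(0, taskCounts.getOrElse(executorId, 0) - 1)
  }

  def killExecutors(executorIds: Seq[String], force: Boolean): Boolean = {
    val toKill = executorIds.filter { id =>
      if (!force && isExecutorBusy(id)) {
        // busy executors are skipped unless the caller forces the kill
        println(s"Executor $id is busy and will not be killed (force == false)")
        false
      } else {
        true
      }
    }
    pendingToRemove ++= toKill
    // acknowledge the request whether or not any executor is actually killed
    true
  }
}
```

In this sketch, dynamic allocation would call `killExecutors(ids, force = false)`, so a busy executor is skipped rather than killed, while an explicit user request could pass `force = true`.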