Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/3825#issuecomment-68650323
@JoshRosen your thinking is that the Master will be in good shape even though
an exception has been thrown? If you can guarantee that, then resuming the
actor while keeping its accumulated state should do the job. Otherwise, things
get more complicated. In the lengthy process of handling exceptions thrown
within the DAGScheduler (https://github.com/apache/spark/pull/186), we ended up
taking the conservative approach of restarting the whole system instead of
trying to restart the DAGScheduler actor with fixed or reconstructed state. I
haven't dug into the details of this PR yet, so I can't say for certain, but
there are probably lessons to be learned from that epic DAGScheduler PR.
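To make the distinction concrete, here is a minimal Akka sketch (the class
and exception names are hypothetical, not from this PR): `Resume` keeps the
child's accumulated state across the failure, while `Restart` discards it
and rebuilds the actor from its `Props`, which is why `Resume` is only safe
if we can guarantee the state is still consistent after the exception.

```scala
import akka.actor.{Actor, OneForOneStrategy, Props}
import akka.actor.SupervisorStrategy.{Restart, Resume}
import scala.concurrent.duration._

// Hypothetical stand-ins, not actual Spark classes.
class RecoverableStateException(msg: String) extends Exception(msg)
class MasterActor extends Actor {
  def receive = { case _ => /* placeholder: handle cluster events */ }
}

class MasterSupervisor extends Actor {
  // Resume keeps the child's accumulated state; Restart throws it away
  // and re-creates the actor from its Props. Resume is only safe when
  // the child's state is known to be consistent after the exception.
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: RecoverableStateException => Resume  // state known to be intact
      case _: Exception                 => Restart // conservative fallback
    }

  private val master = context.actorOf(Props[MasterActor], "master")

  def receive = {
    case msg => master.forward(msg)
  }
}
```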
Something else that we'll need to consider at some point, if other actors
start requiring non-default supervision strategies, is the overall structure
of the supervision hierarchy. Right now, only the DAGScheduler has another
level of supervision, but perhaps Spark actors outside the DAGScheduler
should also be handled under one or more levels of common supervision.
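As a rough sketch of what one level of common supervision could look like
(illustrative only; it reuses the hypothetical `MasterSupervisor` from the
sketch above, and `SparkSupervisor` is not a real Spark class): a single
top-level parent owns the actors that need non-default policies, so a new
actor opts into the shared failure handling just by being created as its
child.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Illustrative only: one top-level parent under which Spark actors that
// need non-default supervision are created, so their failure handling is
// defined in one place instead of per-actor.
class SparkSupervisor extends Actor {
  // Children inherit this actor's supervisorStrategy instead of the
  // user-guardian default (which restarts on Exception).
  private val masterSupervisor =
    context.actorOf(Props[MasterSupervisor], "master-supervisor")
  // ...other supervised subtrees (e.g. a DAGScheduler supervisor) would
  // be created here as siblings.

  def receive = Actor.emptyBehavior
}

object SupervisionHierarchyExample extends App {
  val system = ActorSystem("spark")
  // Everything needing common supervision lives under
  // /user/spark-supervisor in the actor path hierarchy.
  system.actorOf(Props[SparkSupervisor], "spark-supervisor")
}
```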