[
https://issues.apache.org/jira/browse/SPARK-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117114#comment-16117114
]
Sean Owen commented on SPARK-650:
---------------------------------
I can also imagine cases involving legacy code that make this approach hard to
implement. Still, it's possible with enough 'discipline', but that is true of
wrangling any legacy code. I don't think the question of semantics is fully
appreciated here. Is killing the app's other tasks on the same executor
reasonable behavior? How many failures are allowed by default by this new
mechanism? What do you do if init never returns, and how long do you wait? Are
you willing to reschedule the task on another executor? How does it interact
with locality? I know any change raises questions, but this one raises a lot.
It's a conceptual change in Spark, and I'm fairly sure it's not going to happen
three years in. Tasks have never had a special status or lifecycle w.r.t.
executors, and that's a positive thing, really.
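For context, the 'discipline' workaround usually suggested in place of a setup hook is per-JVM lazy initialization: a singleton whose initializer runs once per executor JVM the first time any task touches it. Below is a minimal sketch of that pattern in plain Java with no Spark dependency; the names (`ExecutorSetup`, `ReportingClient`) are hypothetical, and `initCount` exists only to demonstrate that initialization happens once.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-executor setup. The initialization-on-demand holder
// idiom runs the static initializer exactly once per JVM, on first use,
// under the JVM's class-loading synchronization -- safe even if many
// concurrent tasks race to it.
class ExecutorSetup {
    static final AtomicInteger initCount = new AtomicInteger(0);

    private static class Holder {
        static final String CLIENT = init();
    }

    private static String init() {
        initCount.incrementAndGet();      // record that init ran
        return "connected";               // stand-in for e.g. new ReportingClient(...)
    }

    static String client() {
        return Holder.CLIENT;             // triggers init() on first call only
    }
}
```

In a Spark job this would be referenced inside the task closure, e.g. `rdd.mapPartitions(iter -> { String c = ExecutorSetup.client(); ... })`, so each executor initializes lazily on its first task rather than via an explicit hook.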
> Add a "setup hook" API for running initialization code on each executor
> -----------------------------------------------------------------------
>
> Key: SPARK-650
> URL: https://issues.apache.org/jira/browse/SPARK-650
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Reporter: Matei Zaharia
> Priority: Minor
>
> Would be useful to configure things like reporting libraries
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)