Michael Schmeißer commented on SPARK-650:

Then somebody should please explain to me how this doesn't matter, or rather
how certain use cases are supposed to be solved. We need to initialize each JVM
and connect it to our logging system, set correlation IDs, initialize contexts,
and so on. I guess most users have just implemented workarounds as we did, but
in an enterprise environment this is really not a preferable long-term solution
to me. Plus, I think it would not be hard to implement this feature for someone
who has knowledge of the Spark executor.
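For context, the workaround alluded to above is usually a JVM-wide lazy singleton: since Spark offers no setup-hook API, tasks call an idempotent init method whose body runs once per executor JVM on first use. A minimal sketch (the class name `ExecutorSetup` and the `initCount` field are illustrative, not Spark API):

```java
// Hypothetical workaround sketch: absent a setup-hook API (SPARK-650),
// per-executor initialization is commonly funneled through a class whose
// guarded init runs once per JVM, the first time any task touches it.
public final class ExecutorSetup {
    private static volatile boolean initialized = false;
    static int initCount = 0; // exposed only to illustrate the run-once behavior

    public static synchronized void ensureInitialized() {
        if (!initialized) {
            // One-time per-JVM setup would go here:
            // attach the logging backend, set correlation IDs, init contexts...
            initCount++;
            initialized = true;
        }
    }

    private ExecutorSetup() {} // no instances; purely static holder
}
```

Each task (or each partition, via `rdd.mapPartitions`) then calls `ExecutorSetup.ensureInitialized()` before doing real work; only the first call per JVM pays the setup cost. The drawback, as noted above, is that every job must remember to do this, which is exactly what a real setup-hook API would make unnecessary.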

> Add a "setup hook" API for running initialization code on each executor
> -----------------------------------------------------------------------
>                 Key: SPARK-650
>                 URL: https://issues.apache.org/jira/browse/SPARK-650
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Matei Zaharia
>            Priority: Minor
> Would be useful to configure things like reporting libraries

This message was sent by Atlassian JIRA
