[ https://issues.apache.org/jira/browse/SPARK-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-8134.
------------------------------
Resolution: Invalid
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
As I say, you should email user@. JIRA is not the place for questions.
> Lifecycle of RDDs in Cluster envs
> ---------------------------------
>
> Key: SPARK-8134
> URL: https://issues.apache.org/jira/browse/SPARK-8134
> Project: Spark
> Issue Type: New Feature
> Components: Spark Core
> Affects Versions: 1.3.1
> Reporter: sid
>
> Is there a way to implement a simple driver lifecycle like this?
> init() - connect to JMS
> RDD - read a file and pump out messages
> close() - close JMS
> Currently we are hacking RDDs to create singleton connections (worker-node
> agnostic), but then we are unable to close them.
> Also, when I call SparkContext.close(), it immediately kills the jobs on the
> worker nodes.
> There is NO WAY of knowing when to close the context in the driver.
> I looked into SparkListener, and that doesn't help solve our problem.
> I assumed the driver would stay alive until the jobs were done and then clean
> up. The driver simply spawns jobs and ends.
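For what it's worth, the init/pump/close lifecycle described above is usually
expressed with foreachPartition, which opens and closes the connection once per
partition on the executor. Because foreachPartition is an action, the driver
blocks until every partition has finished, so stopping the context afterwards
is safe. A minimal sketch in Scala; JmsProducer is a hypothetical stand-in for
a javax.jms wrapper, and the broker URL and input path are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical JMS wrapper; a real one would hold a javax.jms.Connection.
class JmsProducer(brokerUrl: String) {
  def send(msg: String): Unit = println(s"send to $brokerUrl: $msg") // stub
  def close(): Unit = println("closing JMS connection")              // stub
}

object JmsPump {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("jms-pump"))
    try {
      // foreachPartition is an action: it runs the job and blocks until done.
      sc.textFile("hdfs:///input/messages").foreachPartition { lines =>
        val producer = new JmsProducer("tcp://broker:61616") // init(): connect to JMS
        try {
          lines.foreach(producer.send)                       // pump out messages
        } finally {
          producer.close()                                   // close(), even on failure
        }
      }
    } finally {
      sc.stop() // safe here: the action above has already completed
    }
  }
}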
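If the singleton-per-executor hack is kept instead, the usual way to get a
close() is a lazily initialized per-JVM holder that registers a JVM shutdown
hook, so the connection is released when the executor process exits. Again a
sketch under the same hypothetical JmsProducer:

// One connection per executor JVM, closed via a shutdown hook on JVM exit.
object ExecutorJms {
  private var producer: JmsProducer = _

  def get(brokerUrl: String): JmsProducer = synchronized {
    if (producer == null) {
      producer = new JmsProducer(brokerUrl)
      sys.addShutdownHook(producer.close()) // runs as the executor JVM shuts down
    }
    producer
  }
}

Used from tasks as, e.g.:

rdd.foreachPartition { lines =>
  val p = ExecutorJms.get("tcp://broker:61616") // shared across tasks in this JVM
  lines.foreach(p.send)
}

The trade-off is that the connection lives for the executor's lifetime rather
than per partition, which is why the foreachPartition form above is generally
preferable.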