Hi. We are testing Spark Streaming. It looks awesome!

We are trying to figure out how to submit a new version of a run-forever job.
We have a job that streams metrics from a bunch of servers, applies
transformations like .reduceByWindow, and then stores the results in HDFS.
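
To make it concrete, here is a minimal sketch of the kind of job we mean
(the host, port, and HDFS paths are made up):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object MetricsJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("metrics-stream")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Made-up source: each line is one numeric metric sample.
        val metrics = ssc.socketTextStream("metrics-host", 9999).map(_.toDouble)

        // Sum each 60-second window, sliding every 10 seconds.
        val windowed = metrics.reduceByWindow(_ + _, Seconds(60), Seconds(10))

        // Each window's result lands in a timestamped directory under this prefix.
        windowed.saveAsTextFiles("hdfs:///metrics/agg")

        ssc.start()
        ssc.awaitTermination()
      }
    }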

If we submit the new version while the original is still running, there will be
two jobs fighting for the same stream, and the resulting metrics will be
incomplete in each window. If we stop the original job and then submit the new
one, the metrics arriving during the gap will be lost.
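
The closest thing we have found is a graceful stop, which drains the data
already received before exiting, but it still leaves a gap while the new job
starts up. Roughly something like this (just a sketch; the marker file and
poll timeout are made up):

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Poll for a marker file; when it appears, drain in-flight batches
    // and exit, so the replacement job can be submitted right after.
    val stopMarker = new Path("hdfs:///metrics/STOP")  // made-up path
    val fs = FileSystem.get(ssc.sparkContext.hadoopConfiguration)

    var done = false
    while (!done) {
      // Returns true if the context terminated within the timeout (ms).
      done = ssc.awaitTerminationOrTimeout(10000)
      if (!done && fs.exists(stopMarker)) {
        // Finish processing all data already received, then stop.
        ssc.stop(stopSparkContext = true, stopGracefully = true)
        done = true
      }
    }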

Any thoughts?

Alternatively, could two jobs share the same stream and aggregate it between them?

Thanks,


