outside of Spark Streaming (a few other challenges, like avoiding IO re-execution
and event-stream recovery, will also need to be handled outside), so I really
hope to have strong control over this part.
How does RDD data checkpoint cleaning happen? Would updateStateByKey be a
particular case where
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-data-checkpoint-cleaning-tp14847p14935.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
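[Editor's note: the behaviour described later in this thread, where only the two newest RDD checkpoint folders survive and older ones are continuously recycled, amounts to a retain-last-N cleanup policy. The sketch below is not Spark's actual implementation; it is a minimal simulation of that policy, and the `rdd-<batch>` folder naming and `retain=2` default are assumptions for illustration.]

```python
import os
import shutil
import tempfile

def write_checkpoint(base_dir, batch_id, retain=2):
    """Write a new checkpoint folder, then delete all but the newest
    `retain` folders -- mimicking the recycling observed on Linux,
    where only two RDD checkpoint folders exist at any time."""
    path = os.path.join(base_dir, f"rdd-{batch_id}")
    os.makedirs(path)
    # Sort checkpoint folders oldest-first; batch ids increase monotonically.
    folders = sorted(os.listdir(base_dir), key=lambda name: int(name.split("-")[1]))
    for stale in folders[:-retain]:
        shutil.rmtree(os.path.join(base_dir, stale))

base = tempfile.mkdtemp()
for batch in range(10):
    write_checkpoint(base, batch)
print(sorted(os.listdir(base)))  # only the two newest folders remain
```

If cleanup silently fails (as apparently happened on Windows), every `rdd-<batch>` folder from the loop would still be on disk, which matches the "continuously increasing" symptom reported in this thread.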
the RDDs
continuously increasing. When I ran on Linux, only two RDD folders were
there and continuously being recycled.
Metadata checkpoints were being cleaned in both scenarios.
Thanks,
Rod
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-data-checkpoint-cleaning-tp14847p14939.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.