Calling unpersist on an RDD in a Spark Streaming application does not
actually unpersist the blocks from memory and/or disk. After the RDD has
been processed inside a foreachRDD(rdd => ...) call, I attempt to unpersist
the RDD since it no longer needs to be kept in memory or on disk. This is
mainly a problem with dynamic allocation: after a batch has been processed,
we want the application to release its executors (giving the cores and
memory back to the cluster while waiting for the next batch to arrive), but
executors that still hold cached blocks are not released.
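
Roughly, the pattern looks like this (a minimal sketch; the input source,
batch interval, storage level, and config values are illustrative, not taken
from my actual job):

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UnpersistSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("streaming-unpersist-sketch")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")

    val ssc = new StreamingContext(conf, Seconds(30))

    // Hypothetical source; any DStream shows the same pattern.
    val lines = ssc.socketTextStream("localhost", 9999)

    lines.foreachRDD { rdd =>
      rdd.persist(StorageLevel.MEMORY_AND_DISK)

      // ... process the batch ...
      val count = rdd.count()
      println(s"processed $count records")

      // Expectation: the cached blocks are dropped here so that idle
      // executors can be reclaimed by dynamic allocation; what I observe
      // is that the blocks remain in memory/disk.
      rdd.unpersist(blocking = true)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}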

Is this a known issue? It's not major, since nothing actually breaks; it
just prevents dynamic allocation from working as well as it could when
combined with streaming.

Thanks,
Mark.


