Hi,

AFAIK, the blocks of mini-batch RDDs are checked after each job finishes, and
older blocks are automatically removed (see:
https://github.com/apache/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala#L463
).
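
For intuition, here is a rough, self-contained sketch of that retention rule
(illustrative only, not the actual Spark source; the names and the
epoch-millis time representation are mine):

import scala.collection.mutable

// Illustrative only: batch times are modeled as epoch millis and blocks
// as plain strings; Spark's real logic lives in DStream.clearMetadata.
val rememberDurationMs = 5 * 60 * 1000L          // e.g. remember 5 minutes
val generatedRDDs = mutable.Map[Long, String]()  // batch time -> block id

def clearOldBlocks(currentTimeMs: Long): Unit = {
  // Everything older than (current time - remember duration) is dropped;
  // in Spark, the corresponding RDDs are also unpersisted at this point.
  val old = generatedRDDs.filter { case (t, _) =>
    t <= currentTimeMs - rememberDurationMs
  }
  old.keys.foreach(generatedRDDs.remove)
}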

You can control this behaviour to some extent with StreamingContext#remember.
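
For example (a minimal sketch; the app name, batch interval, and remember
duration below are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

val conf = new SparkConf().setAppName("RememberExample")
val ssc = new StreamingContext(conf, Seconds(10))

// Keep generated RDDs (and their blocks) for at least 5 minutes;
// anything older becomes eligible for cleanup after each batch.
ssc.remember(Minutes(5))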

// maropu


On Fri, Jan 20, 2017 at 3:17 AM, Andrew Milkowski <amgm2...@gmail.com>
wrote:

> hello
>
> using Spark 2.0.2, and while running a sample streaming app with Kinesis
> I noticed (in the admin UI Storage tab) that "Stream Blocks" for each
> worker keeps climbing
>
> then also (on the same UI page), in the Blocks section, I see blocks such
> as the one below
>
> input-0-1484753367056
>
> that is marked as Memory Serialized
>
> and does not seem to be "released"
>
> the above eventually consumes executor memory, leading to out-of-memory
> exceptions on some executors
>
> is there a way to "release" these blocks and free them up? the app is a sample m/r
>
> I attempted rdd.unpersist(false) in the code, but that did not free up
> the memory
>
> thanks much in advance!
>



-- 
---
Takeshi Yamamuro
