[ https://issues.apache.org/jira/browse/SPARK-19364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Milkowski updated SPARK-19364:
-------------------------------------
    Description: 
*** update *** we found that the situation below occurs when we encounter:

"com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException: 
Can't update checkpoint - instance doesn't hold the lease for this shard"

https://github.com/awslabs/amazon-kinesis-client/issues/108

we use an S3 directory (and DynamoDB) to store checkpoints, but if this 
exception occurs the blocks should not get stuck; processing should continue 
gracefully
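
for reference, this is roughly where the exception surfaces (a minimal sketch 
against the KCL v1 checkpointer API, not the actual Spark receiver code; 
CheckpointHelper/safeCheckpoint are hypothetical names):

    import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;
    import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
    import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;

    public class CheckpointHelper {
        // checkpoint without letting a lost lease wedge processing
        public static void safeCheckpoint(IRecordProcessorCheckpointer checkpointer) {
            try {
                checkpointer.checkpoint();
            } catch (ShutdownException e) {
                // another worker took the lease for this shard (the case in
                // awslabs/amazon-kinesis-client#108); skip this checkpoint and
                // let the new lease holder continue, rather than getting stuck
            } catch (KinesisClientLibException e) {
                // throttling / DynamoDB dependency problems; log and retry later
            }
        }
    }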

running standard Kinesis stream ingestion with a Java Spark app that creates a 
DStream; after running for some time, some stream blocks seem to persist forever 
and are never cleaned up, and this eventually leads to memory depletion on the 
workers
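
for context, the ingestion is set up roughly like this (a minimal sketch of the 
spark-streaming-kinesis-asl API for 2.0.x; app name, stream name, endpoint, 
region, and batch interval below are placeholders, not our real values):

    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
    import org.apache.spark.SparkConf;
    import org.apache.spark.storage.StorageLevel;
    import org.apache.spark.streaming.Duration;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kinesis.KinesisUtils;

    SparkConf conf = new SparkConf().setAppName("kinesis-ingest");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(2000));

    // app name doubles as the DynamoDB lease/checkpoint table name
    JavaReceiverInputDStream<byte[]> stream = KinesisUtils.createStream(
            ssc, "myKinesisApp", "myStream",
            "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
            InitialPositionInStream.LATEST,
            new Duration(2000),               // KCL checkpoint interval
            StorageLevel.MEMORY_ONLY_SER());  // shows as "Memory Serialized" in the UI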

we even tried cleaning the RDDs explicitly with the following:

        // grab the ContextCleaner (org.apache.spark.ContextCleaner) from the
        // underlying SparkContext; cleaner() returns an Option, so get()
        // assumes the cleaner is enabled
        final ContextCleaner cleaner = ssc.sparkContext().sc().cleaner().get();

        filtered.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> rdd) throws Exception {
                // force a blocking cleanup of this batch's RDD
                cleaner.doCleanupRDD(rdd.id(), true);
            }
        });

despite the above, the blocks still persist; this can be seen in the Spark admin UI

for instance

input-0-1485362233945   1       ip-<>:34245     Memory Serialized       1442.5 KB

the above block stays and is never cleaned up
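
our reading of the Spark source (so an assumption, not verified): 
ContextCleaner.doCleanupRDD() ends up calling SparkContext.unpersistRDD(), 
which only removes blocks with RDD block ids (rdd_<id>_<partition>); the 
receiver-generated blocks shown above use stream block ids of the form 
input-<streamId>-<uniqueId> and are only dropped by the streaming machinery 
itself (e.g. via spark.streaming.unpersist after a batch completes), which 
would explain why the explicit cleanup never removes them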

  was:
running standard Kinesis stream ingestion with a Java Spark app that creates a 
DStream; after running for some time, some stream blocks seem to persist forever 
and are never cleaned up, and this eventually leads to memory depletion on the 
workers

we even tried cleaning the RDDs explicitly with the following:

        final ContextCleaner cleaner = ssc.sparkContext().sc().cleaner().get();

        filtered.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> rdd) throws Exception {
                cleaner.doCleanupRDD(rdd.id(), true);
            }
        });

despite the above, the blocks still persist; this can be seen in the Spark admin UI

for instance

input-0-1485362233945   1       ip-<>:34245     Memory Serialized       1442.5 KB

the above block stays and is never cleaned up


> Some Stream Blocks in Storage Persist Forever
> ---------------------------------------------
>
>                 Key: SPARK-19364
>                 URL: https://issues.apache.org/jira/browse/SPARK-19364
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.2
>         Environment: ubuntu unix
> spark 2.0.2
> application is java
>            Reporter: Andrew Milkowski
>


