[
https://issues.apache.org/jira/browse/SPARK-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266625#comment-15266625
]
Sean Owen commented on SPARK-15060:
-----------------------------------
Yes, it's not recursive; it's a dependency graph. Yes, relying on Java
serialization for a very deep DAG can cause a stack overflow. But you're just
saying "serialize the RDD info". That's already what happens -- though it
doesn't go to a file. On its face that has the same problem. Are you suggesting
a different serialization? The issue is, I think, that even if this were fixed,
very deep lineages cause other problems. You should be checkpointing in a case
like this.
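
For reference, a minimal sketch of the periodic-checkpointing approach
suggested above, applied to the reproduction from the report below (the
checkpoint directory path and the 10-round interval are assumptions, not
values from this thread):

    sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")  // assumed path

    var rdd = sc.makeRDD(1 to 10, 10)
    for (i <- 1 to 1000) {
      rdd = rdd.map(x => x)
      if (i % 10 == 0) {   // roughly the 10~20-round cadence from SPARK-5955
        rdd.checkpoint()
        rdd.count()        // run a job so the checkpoint materializes and the lineage is cut
      }
    }
    rdd.reduce(_ + _)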
> Fix stack overflow when executing long lineage transform without checkpoint
> --------------------------------------------------------------------------
>
> Key: SPARK-15060
> URL: https://issues.apache.org/jira/browse/SPARK-15060
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 1.5.2, 1.6.1, 2.0.0
> Reporter: Zheng Tan
>
> When executing a long-lineage RDD transform, it is easy to get a stack overflow
> exception on the driver end. This can be reproduced by the following example:
> var rdd = sc.makeRDD(1 to 10, 10)
> for (_ <- 1 to 1000) {
>   rdd = rdd.map(x => x)
> }
> rdd.reduce(_ + _)
> SPARK-5955 solves this problem by checkpointing the RDD every 10~20 rounds. This
> is not so convenient, since it requires checkpointing data to HDFS.
> Another solution is to cut off the recursive RDD dependencies on the driver end
> and re-assemble them on the executor end.
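
A note on the HDFS objection above: RDD.localCheckpoint() (present in the
affected versions) truncates the lineage using executor-local storage instead
of a reliable file system. A sketch under the same assumptions as the earlier
example:

    var rdd = sc.makeRDD(1 to 10, 10)
    for (i <- 1 to 1000) {
      rdd = rdd.map(x => x)
      if (i % 10 == 0) {
        rdd.localCheckpoint()  // lineage truncated on executors; no HDFS required
        rdd.count()            // run a job to materialize the checkpoint
      }
    }
    rdd.reduce(_ + _)

The trade-off is weaker fault tolerance: locally checkpointed partitions are
lost if an executor fails.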