[ https://issues.apache.org/jira/browse/SPARK-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298622#comment-14298622 ]

Tien-Dung LE edited comment on SPARK-5499 at 1/30/15 1:47 PM:
--------------------------------------------------------------

I tried with checkpoint() but had the same error. Here is the code:

{code}
    import org.apache.spark.rdd.RDD

    var pair: RDD[(Long, Long)] = sc.parallelize(Array((1L, 2L)))
    var newPair: RDD[(Long, Long)] = null

    for (i <- 1 to 1000) {
      newPair = pair.map(_.swap).persist()
      pair = newPair
      println(s"$i: count = ${pair.count()}")

      if (i % 100 == 0) {
        pair.checkpoint()
      }
    }
{code}
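For what it's worth, RDD.checkpoint() is lazy: it only marks the RDD, and the data is actually written at the end of the next job run on it; the API docs also recommend calling it before any job has executed on that RDD. In the snippet above count() runs before checkpoint(), so the lineage may never get truncated. A variant that persists and checkpoints before forcing an action might behave differently (just a sketch; the checkpoint directory path is a placeholder and must be set before checkpoint() is used):

{code}
    sc.setCheckpointDir("/tmp/spark-checkpoints") // placeholder path; required before checkpoint()

    for (i <- 1 to 1000) {
      newPair = pair.map(_.swap)
      pair = newPair

      if (i % 100 == 0) {
        pair.persist()    // keep the data so the checkpoint write does not recompute the lineage
        pair.checkpoint() // mark for checkpointing; materialized by the next action
        pair.count()      // force the checkpoint now, truncating the lineage
      }
      println(s"$i: count = ${pair.count()}")
    }
{code}

The persist() before checkpoint() avoids computing the RDD twice, once for the count and once for the checkpoint write.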


> iterative computing with 1000 iterations causes stage failure
> -------------------------------------------------------------
>
>                 Key: SPARK-5499
>                 URL: https://issues.apache.org/jira/browse/SPARK-5499
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Tien-Dung LE
>
> I got an error "org.apache.spark.SparkException: Job aborted due to stage 
> failure: Task serialization failed: java.lang.StackOverflowError" when 
> executing an action with 1000 transformations.
> Here is a code snippet to reproduce the error:
> {code}
>   import org.apache.spark.rdd.RDD
>
>   var pair: RDD[(Long, Long)] = sc.parallelize(Array((1L, 2L)))
>   var newPair: RDD[(Long, Long)] = null
>
>   for (i <- 1 to 1000) {
>     newPair = pair.map(_.swap)
>     pair = newPair
>   }
>   println("Count = " + pair.count())
> {code}


