[ https://issues.apache.org/jira/browse/SPARK-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14494093#comment-14494093 ]

Xinghao Pan commented on SPARK-6808:
------------------------------------

While I understand that behavior, there are a couple of reasons why I believe 
it is a bug.

Firstly, this behavior should only occur in the *first* iteration after 
checkpointing, not in *every* subsequent iteration. It persists even when the 
later versions of the RDD are fully cached in memory, at which point Spark 
should no longer need to read from disk or over the network.
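
To make the first point concrete, here is a minimal diagnostic sketch (assuming 
a spark-shell session where sc is bound and R is the cached RDD from the snippet 
quoted below) of how to check that the post-checkpoint RDD really is fully 
cached in memory:

    R.foreachPartition(_ => {})   // materialize R into the block manager
    sc.getRDDStorageInfo
      .filter(_.id == R.id)
      .foreach(info => println(
        s"RDD ${info.id}: ${info.numCachedPartitions}/${info.numPartitions} partitions cached, " +
        s"${info.memSize} bytes in memory, ${info.diskSize} bytes on disk"))

If every partition is reported as cached in memory, tasks in subsequent 
iterations should be scheduled PROCESS_LOCAL rather than NODE_LOCAL.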

Secondly, if this were the intended behavior, then the additional identity map 
described at the end of the issue should have no effect.
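
For reference, that identity-map workaround slots into the reproduction loop as 
follows. This is only a sketch: numPartitions, checkpointIter and the checkpoint 
directory are placeholders standing in for whatever values the original test 
used, and sc is assumed to be a live SparkContext (e.g. in spark-shell).

    import org.apache.spark.rdd.RDD

    val numPartitions = 16                          // placeholder value
    val checkpointIter = 10                         // placeholder value
    sc.setCheckpointDir("/tmp/spark-checkpoints")   // placeholder path

    var R: RDD[(Long, Int)] = sc.parallelize(0 until numPartitions, numPartitions)
      .mapPartitions(_ => new Array[(Long, Int)](10000000).map(_ => (0L, 0)).toSeq.iterator)
      .cache()

    var iteration = 0
    while (iteration < 50) {
      R = R.zipPartitions(R)((x, y) => x)
        .map(x => x)          // extra identity map; partially restores PROCESS_LOCAL scheduling
        .cache()
      if ((iteration + 1) % checkpointIter == 0) R.checkpoint()
      R.foreachPartition(_ => {})
      iteration += 1
    }

If the scheduling after checkpointing were simply the intended behavior, adding 
this no-op map should not change the observed locality level.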



> Checkpointing after zipPartitions results in NODE_LOCAL execution
> -----------------------------------------------------------------
>
>                 Key: SPARK-6808
>                 URL: https://issues.apache.org/jira/browse/SPARK-6808
>             Project: Spark
>          Issue Type: Bug
>          Components: GraphX, Spark Core
>    Affects Versions: 1.2.1, 1.3.0
>         Environment: EC2 Ubuntu r3.8xlarge machines
>            Reporter: Xinghao Pan
>
> I'm encountering a weird issue where a simple iterative zipPartitions runs 
> PROCESS_LOCAL before checkpointing, but turns NODE_LOCAL for all iterations 
> after checkpointing.
> Here's an example snippet of code:
>     var R : RDD[(Long,Int)] = sc.parallelize((0 until numPartitions), numPartitions)
>       .mapPartitions(_ => new Array[(Long,Int)](10000000).map(i => (0L,0)).toSeq.iterator)
>       .cache()
>     sc.setCheckpointDir(checkpointDir)
>     var iteration = 0
>     while (iteration < 50){
>       R = R.zipPartitions(R)((x,y) => x).cache()
>       if ((iteration+1) % checkpointIter == 0) R.checkpoint()
>       R.foreachPartition(_ => {})
>       iteration += 1
>     }
> I've also tried unpersisting the old RDDs and increasing spark.locality.wait, 
> but neither helps.
> Strangely, adding a simple identity map
> R = R.map(x => x).cache()
> after the zipPartitions appears to partially mitigate the issue.
> The problem was originally triggered when I attempted to checkpoint after 
> doing joinVertices in GraphX, but the above example shows that the issue is 
> in Spark core too.



