dongjoon-hyun commented on a change in pull request #25856: [SPARK-29182][Core]
Cache preferred locations of checkpointed RDD
URL: https://github.com/apache/spark/pull/25856#discussion_r327438672
##########
File path: core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala
##########
@@ -82,14 +83,28 @@ private[spark] class ReliableCheckpointRDD[T: ClassTag](
Array.tabulate(inputFiles.length)(i => new CheckpointRDDPartition(i))
}
+ // Cache of preferred locations of checkpointed files.
+ private[spark] val cachedPreferredLocations: mutable.HashMap[Int, Seq[String]] =
+   mutable.HashMap.empty
Review comment:
If ALS runs long enough, this PR can cause job failures.
Say a Spark job starts with cached preferred locations (A1, A2, A3) while we
do a rolling restart of some HDFS data nodes at the same time.
HDFS can go through the following states:
```
1. A1, A2, A3
2. , A2, A3 (<= A1 is shutdown)
3. B1, A2, A3 (<= HDFS create a new replica B1)
4. B1, , A3
5. B1, B2, A3
6. B1, B2,
7. B1, B2, B3
```
The above is by-design HDFS behavior: as data nodes shut down, the name node
re-replicates their blocks onto other nodes. Since this PR never invalidates
the cache, the cached hosts (A1, A2, A3) go stale, and the Spark job starts
to fail at step 6.
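The failure mode can be sketched with a minimal, self-contained model (hypothetical names, not Spark's actual `ReliableCheckpointRDD` code paths): a `getOrElseUpdate`-style cache returns the locations observed at first lookup forever, even after HDFS has moved every replica.

```scala
import scala.collection.mutable

// Hypothetical sketch of the proposed caching behavior, not Spark code:
// `liveLocations` stands in for what the HDFS name node currently reports,
// while `cachedPreferredLocations` mirrors the never-invalidated cache
// introduced in the diff above.
object StaleLocationDemo {
  // Current replica hosts per partition, as a (simulated) name node reports.
  var liveLocations: Map[Int, Seq[String]] = Map(0 -> Seq("A1", "A2", "A3"))

  // The cache from the diff: filled on first lookup, never refreshed.
  private val cachedPreferredLocations = mutable.HashMap.empty[Int, Seq[String]]

  def getPreferredLocations(split: Int): Seq[String] =
    cachedPreferredLocations.getOrElseUpdate(split, liveLocations(split))

  def main(args: Array[String]): Unit = {
    // First lookup caches (A1, A2, A3).
    println(getPreferredLocations(0))

    // Rolling restart completes: every replica now lives on new hosts.
    liveLocations = Map(0 -> Seq("B1", "B2", "B3"))

    // The cache still answers with the old hosts, none of which
    // hold the block any more -- the locality hints are now stale.
    println(getPreferredLocations(0))
  }
}
```

Under this sketch, once steps 1 through 7 above complete, the cached answer and the live answer share no host at all, which is the point at which the job starts failing.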
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services