viirya commented on a change in pull request #25856: [SPARK-29182][Core] Cache preferred locations of checkpointed RDD
URL: https://github.com/apache/spark/pull/25856#discussion_r327387373
##########
File path: core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala
##########
@@ -82,14 +83,28 @@ private[spark] class ReliableCheckpointRDD[T: ClassTag](
Array.tabulate(inputFiles.length)(i => new CheckpointRDDPartition(i))
}
+ // Cache of preferred locations of checkpointed files.
+  private[spark] val cachedPreferredLocations: mutable.HashMap[Int, Seq[String]] =
+    mutable.HashMap.empty
Review comment:
This is a good point. Although the cached preferred locations can become outdated, I think this only implies that the locations chosen to launch tasks might not be the best options.
For example, suppose the cached preferred locations for partition 1 are [hostA, hostB, hostC]. If hostA dies and the Spark executor there dies with it, we will consider the other preferred locations, hostB and hostC.
If only the data node on hostA dies but the Spark executor there still works, Spark could still choose hostA to launch the task. In that case, hostA is just not the best option.
Yeah, I think the assumption is not so strong; the locations only need to be relatively stable during job execution. That is why this is a config for users to enable or not, rather than a feature that is always turned on.
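To illustrate the trade-off being discussed, here is a minimal, hypothetical sketch of the caching pattern in the diff above: the expensive locality lookup runs once per partition and its result is reused afterwards, even if the cluster topology has changed in the meantime. The `computeLocations` helper and the host names are assumptions for illustration, not Spark's actual file-system query.

```scala
import scala.collection.mutable

object PreferredLocationCache {
  // Cache of preferred locations, keyed by partition index,
  // mirroring the cachedPreferredLocations map in the diff above.
  private val cachedPreferredLocations: mutable.HashMap[Int, Seq[String]] =
    mutable.HashMap.empty

  var lookups = 0 // counts how often the expensive lookup actually runs

  // Hypothetical stand-in for querying the file system for block locations.
  private def computeLocations(partitionIndex: Int): Seq[String] = {
    lookups += 1
    Seq("hostA", "hostB", "hostC")
  }

  // First call for a partition computes and caches; later calls hit the cache,
  // which is where staleness can creep in if a host (e.g. hostA) later dies.
  def getPreferredLocations(partitionIndex: Int): Seq[String] =
    cachedPreferredLocations.getOrElseUpdate(partitionIndex, computeLocations(partitionIndex))
}
```

Repeated calls for the same partition return the cached sequence without recomputing, which is exactly why an outdated entry only degrades locality rather than breaking correctness: the scheduler can still fall back to the remaining hosts in the list.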
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]