dongjoon-hyun commented on a change in pull request #31876:
URL: https://github.com/apache/spark/pull/31876#discussion_r618072319



##########
File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala
##########
@@ -52,8 +55,41 @@ private[spark] sealed trait MapStatus {
    * partitionId of the task or taskContext.taskAttemptId is used.
    */
   def mapId: Long
+
 }
 
+private[spark] class MapStatusLocationFactory(conf: SparkConf) {
+  private val locationExtension = classOf[Location]
+  private val (locationConstructor, locationName) = {
+    conf.get(config.SHUFFLE_LOCATION_PLUGIN_CLASS).map { className =>
+      val clazz = Utils.classForName(className)
+      require(locationExtension.isAssignableFrom(clazz),
+        s"$className is not a subclass of ${locationExtension.getName}.")
+      (clazz.getConstructor(), className)
+    }.orNull
+  }
+
+  private lazy val locationCache: LoadingCache[Location, Location] = CacheBuilder.newBuilder()
+    .maximumSize(10000)
+    .build(
+      new CacheLoader[Location, Location]() {
+        override def load(loc: Location): Location = loc
+      }
+    )
+
+  def load(in: ObjectInput): Location = {
+    try {
+      Option(locationConstructor).map { ctr =>
+        val loc = ctr.newInstance().asInstanceOf[Location]
+        loc.readExternal(in)
+        locationCache.get(loc)

Review comment:
       I expected a loading-cache usage, but this does not look like it actually uses the loading cache.
   Could you explain why we have this cache, @Ngone51? IIUC, it is only used once here, there appears to be no cache hit, and the value is already re-read via `loc.readExternal(in)`.
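
   For context, the usual reason to keep a `Location -> Location` loading cache like this is interning: every deserialized `Location` is freshly allocated by `readExternal`, and the cache maps each value back to one canonical instance so that equal locations across many map statuses share a single object. The sketch below illustrates that pattern with hypothetical names (`Location`, `LocationInterner` are not the PR's actual classes) and a plain `ConcurrentHashMap` standing in for Guava's `LoadingCache`:

   ```scala
   import java.util.concurrent.ConcurrentHashMap

   // Hypothetical stand-in for the PR's Location type.
   case class Location(host: String, port: Int)

   object LocationInterner {
     // Interning cache: maps each value-equal Location to one canonical
     // instance, so duplicates created during deserialization can be
     // dropped and their memory reclaimed.
     private val cache = new ConcurrentHashMap[Location, Location]()

     def intern(loc: Location): Location =
       cache.computeIfAbsent(loc, l => l)
   }

   // Two separately allocated but equal locations intern to one object.
   val a = LocationInterner.intern(Location("host1", 7337))
   val b = LocationInterner.intern(Location("host1", 7337))
   assert(a eq b)
   ```

   Note this only saves memory if the fresh object from `readExternal` is discarded in favor of the cached one; the deserialization cost itself is not avoided, which is exactly the question raised above.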




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
