Ngone51 commented on a change in pull request #31763:
URL: https://github.com/apache/spark/pull/31763#discussion_r589899892



##########
File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala
##########
@@ -52,6 +52,13 @@ private[spark] sealed trait MapStatus {
    * partitionId of the task or taskContext.taskAttemptId is used.
    */
   def mapId: Long
+
+  /**
+   * Extra metadata for map status. This could be used by different ShuffleManager implementation
+   * to store information they need. For example, a Remote Shuffle Service ShuffleManager could
+   * store shuffle server information and let reducer task know where to fetch shuffle data.
+   */
+  def metadata: Option[Serializable]

Review comment:
       Then, it'd be a totally different topic, right? IIUC, SPARK-25299 could also benefit custom shuffle managers if SPARK-25299 (custom storage) is pluggable with a custom shuffle manager. Ideally, a custom shuffle manager should be able to plug in different storages. We might not think deeply about how to support custom shuffle managers while working on SPARK-25299, but we need to keep the "pluggable" part in mind. After SPARK-25299 is completed, we can then start to enhance the support for custom shuffle managers.
   
   That being said, I think you are still free to raise a separate discussion on supporting custom shuffle managers, for example, what the shortcomings of the current framework are and what should be improved. I just wonder whether the community has enough bandwidth to work on these two significant projects concurrently.
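   For context, here is a minimal, self-contained sketch of how the proposed `metadata` slot could be consumed by a Remote Shuffle Service style ShuffleManager, as the doc comment in the diff above describes. This is not Spark code: `MapStatusLike`, `ShuffleServerInfo`, and `RemoteShuffleMapStatus` are hypothetical stand-ins (the real `MapStatus` is a sealed, `private[spark]` trait), and the host/port values are made up for illustration.

```scala
// Not Spark code: a self-contained sketch of the proposed Option[Serializable]
// metadata slot. All names below are hypothetical stand-ins for illustration.

// Stand-in for the relevant slice of the MapStatus trait touched by this PR.
trait MapStatusLike {
  def mapId: Long
  def metadata: Option[Serializable]
}

// Hypothetical payload telling reducers which shuffle server to fetch from.
case class ShuffleServerInfo(host: String, port: Int) extends Serializable

// Hypothetical map status produced by a remote-shuffle ShuffleManager.
class RemoteShuffleMapStatus(
    override val mapId: Long,
    server: ShuffleServerInfo) extends MapStatusLike {
  override def metadata: Option[Serializable] = Some(server)
}

// Reducer side: recover the shuffle server location from the map status.
object MetadataExample extends App {
  val status: MapStatusLike =
    new RemoteShuffleMapStatus(0L, ShuffleServerInfo("shuffle-svc-1", 7337))
  status.metadata.collect { case s: ShuffleServerInfo =>
    println(s"fetch shuffle data from ${s.host}:${s.port}")
  }
}
```

   The point of the sketch is only that reducers would pattern-match the opaque `Serializable` payload back into whatever type the custom ShuffleManager attached on the map side.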



