GitHub user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5286#discussion_r27496946
  
    --- Diff: core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockManager.scala ---
    @@ -39,25 +41,18 @@ import org.apache.spark.storage._
     // Note: Changes to the format in this file should be kept in sync with
     // org.apache.spark.network.shuffle.StandaloneShuffleBlockManager#getSortBasedShuffleBlockData().
     private[spark]
    -class IndexShuffleBlockManager(conf: SparkConf) extends ShuffleBlockManager {
    +class IndexShuffleBlockManager(conf: SparkConf) extends ShuffleBlockResolver {
     
       private lazy val blockManager = SparkEnv.get.blockManager
     
       private val transportConf = SparkTransportConf.fromSparkConf(conf)
     
    -  /**
    -   * Mapping to a single shuffleBlockId with reduce ID 0.
    -   * */
    -  def consolidateId(shuffleId: Int, mapId: Int): ShuffleBlockId = {
    -    ShuffleBlockId(shuffleId, mapId, 0)
    -  }
    -
       def getDataFile(shuffleId: Int, mapId: Int): File = {
    -    blockManager.diskBlockManager.getFile(ShuffleDataBlockId(shuffleId, mapId, 0))
    +    blockManager.diskBlockManager.getFile(ShuffleDataBlockId(shuffleId, mapId, NOOP_REDUCE_ID))
    --- End diff ---
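
    (For context: NOOP_REDUCE_ID in the hunk above is presumably just a named
    sentinel replacing the hard-coded reduce ID of 0, roughly as sketched below;
    the companion-object placement and the comment are assumptions, not quoted
    from the patch.)

        private[spark] object IndexShuffleBlockManager {
          // Sort-based shuffle writes a single data file per map task, so the
          // reduce ID in the block name carries no information; one sentinel
          // value stands in for it everywhere.
          val NOOP_REDUCE_ID = 0
        }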
    
    Yeah - I think there is actually a third option, which is just to make it so
    the BlockObjectWriter doesn't have to be passed a BlockId (it almost doesn't
    even need one). That way we entirely avoid using the BlockId to name this
    file. I'm punting it to future work, though.
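
    A rough sketch of what that third option might look like - a writer handed
    the destination file directly, so no BlockId is ever needed for naming. The
    class name, constructor, and methods below are illustrative, not Spark's
    actual BlockObjectWriter API:

        import java.io.{BufferedOutputStream, File, FileOutputStream, OutputStream}

        // Hypothetical writer keyed only by the destination file; the caller
        // never has to mint a BlockId just to name the shuffle output.
        private[spark] class FileObjectWriter(file: File, bufferSize: Int = 32 * 1024) {

          private var out: OutputStream = _

          def open(): this.type = {
            out = new BufferedOutputStream(new FileOutputStream(file), bufferSize)
            this
          }

          def write(bytes: Array[Byte]): Unit = out.write(bytes)

          def close(): Unit = if (out != null) out.close()
        }

    The shuffle code would then name the file itself (e.g. via
    getDataFile(shuffleId, mapId)) and construct the writer from that File,
    with no BlockId involved.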

