Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1799#discussion_r15898826
  
    --- Diff: core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
    @@ -640,9 +713,122 @@ private[spark] class ExternalSorter[K, V, C](
        */
       def iterator: Iterator[Product2[K, C]] = partitionedIterator.flatMap(pair => pair._2)
     
    +  /**
    +   * Write all the data added into this ExternalSorter into a file in the disk store, creating
    +   * an .index file for it as well with the offsets of each partition. This is called by the
    +   * SortShuffleWriter and can go through an efficient path of just concatenating binary files
    +   * if we decided to avoid merge-sorting.
    +   *
    +   * @param blockId block ID to write to. The index file will be blockId.name + ".index".
    +   * @param context a TaskContext for a running Spark task, for us to update shuffle metrics.
    +   * @return array of lengths, in bytes, of each partition of the file (used by map output tracker)
    +   */
    +  def writePartitionedFile(blockId: BlockId, context: TaskContext): Array[Long] = {
    +    val outputFile = blockManager.diskBlockManager.getFile(blockId)
    +
    +    // Track location of each range in the output file
    +    val offsets = new Array[Long](numPartitions + 1)
    +    val lengths = new Array[Long](numPartitions)
    +
    +    // Statistics
    +    var totalBytes = 0L
    +    var totalTime = 0L
    +
    +    if (bypassMergeSort && partitionWriters != null) {
    +      // We decided to write separate files for each partition, so just concatenate them. To keep
    +      // this simple we spill out the current in-memory collection so that everything is in files.
    +      spillToPartitionFiles(if (aggregator.isDefined) map else buffer)
    +      partitionWriters.foreach(_.commitAndClose())
    +      var out: FileOutputStream = null
    +      var in: FileInputStream = null
    +      try {
    +        out = new FileOutputStream(outputFile)
    +        for (i <- 0 until numPartitions) {
    +          val file = partitionWriters(i).fileSegment().file
    --- End diff --
    
    I find the part that uses fileSegments slightly convoluted, but we can deal with it later.
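    
    For reference, a minimal standalone sketch of the straight concatenation path, assuming one spill file per partition; the method name and file handling here are illustrative stand-ins, not the actual ExternalSorter internals:
    
        import java.io.{File, FileInputStream, FileOutputStream}
    
        // Concatenate one file per partition into a single output file and
        // record the length of each partition's byte range. The offsets for the
        // .index file can then be computed as a running sum over these lengths.
        def concatPartitionFiles(partitionFiles: Seq[File], outputFile: File): Array[Long] = {
          val lengths = new Array[Long](partitionFiles.size)
          val out = new FileOutputStream(outputFile).getChannel
          try {
            for ((file, i) <- partitionFiles.zipWithIndex) {
              val in = new FileInputStream(file).getChannel
              try {
                val size = in.size()
                var pos = 0L
                // transferTo may copy fewer bytes than requested, so loop until done
                while (pos < size) {
                  pos += in.transferTo(pos, size - pos, out)
                }
                lengths(i) = size
              } finally {
                in.close()
              }
            }
          } finally {
            out.close()
          }
          lengths
        }
    
    Using FileChannel.transferTo keeps the copy in the kernel where the OS supports it, which fits the "efficient path of just concatenating binary files" described in the doc comment.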

