cloud-fan commented on a change in pull request #28895:
URL: https://github.com/apache/spark/pull/28895#discussion_r443999460
##########
File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala
##########
@@ -335,28 +335,12 @@ private[spark] abstract class MapOutputTracker(conf: SparkConf) extends Logging
    *         tuples describing the shuffle blocks that are stored at that block manager.
    */
   def getMapSizesByExecutorId(
-      shuffleId: Int,
-      startPartition: Int,
-      endPartition: Int)
-    : Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])]
-
-  /**
-   * Called from executors to get the server URIs and output sizes for each shuffle block that
-   * needs to be read from a given range of map output partitions (startPartition is included but
-   * endPartition is excluded from the range) and is produced by
-   * a range of mappers (startMapIndex, endMapIndex, startMapIndex is included and
-   * the endMapIndex is excluded).
-   *
-   * @return A sequence of 2-item tuples, where the first item in the tuple is a BlockManagerId,
-   *         and the second item is a sequence of (shuffle block id, shuffle block size, map index)
-   *         tuples describing the shuffle blocks that are stored at that block manager.
-   */
-  def getMapSizesByRange(
       shuffleId: Int,
       startMapIndex: Int,
       endMapIndex: Int,
       startPartition: Int,
-      endPartition: Int): Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])]
+      endPartition: Int)
+    : Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])]
Review comment:
unnecessary change
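
For readers following the thread, a minimal usage sketch of the merged method as declared in the hunk above. The wrapper object and the names `numMaps` and `reduceId` are illustrative assumptions, not code from this PR; the sketch only shows how callers of the removed getMapSizesByRange map onto the single getMapSizesByExecutorId signature.

// Sketch only: exercises the merged getMapSizesByExecutorId signature shown in
// the hunk above. Placed in package org.apache.spark because MapOutputTracker
// is private[spark]. The helper name and its parameters are hypothetical.
package org.apache.spark

import org.apache.spark.storage.{BlockId, BlockManagerId}

object MapSizesSketch {
  /** Sizes of all map outputs (mappers 0 until numMaps) feeding one reduce partition. */
  def sizesForReducer(
      tracker: MapOutputTracker,
      shuffleId: Int,
      numMaps: Int,
      reduceId: Int): Iterator[(BlockManagerId, Seq[(BlockId, Long, Int)])] = {
    tracker.getMapSizesByExecutorId(
      shuffleId,
      0,             // startMapIndex (inclusive)
      numMaps,       // endMapIndex (exclusive)
      reduceId,      // startPartition (inclusive)
      reduceId + 1)  // endPartition (exclusive)
  }
}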