Ngone51 commented on a change in pull request #30164:
URL: https://github.com/apache/spark/pull/30164#discussion_r520301662
##########
File path:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala
##########
@@ -657,6 +681,28 @@ class BlockManagerMasterEndpoint(
}
}
+ private def getShufflePushMergerLocations(
+ numMergersNeeded: Int,
+ hostsToFilter: Set[String]): Seq[BlockManagerId] = {
+ val activeBlockManagers = blockManagerIdByExecutor.groupBy(_._2.host)
+ .mapValues(_.head).values.map(_._2).toSet
+ val filteredActiveBlockManagers = activeBlockManagers
+ .filterNot(x => hostsToFilter.contains(x.host))
+ val filteredActiveMergers = filteredActiveBlockManagers.map(
+ x => BlockManagerId(x.executorId, x.host,
+ StorageUtils.externalShuffleServicePort(conf)))
+
+ // Enough mergers are available as part of active executors list
+ if (filteredActiveMergers.size >= numMergersNeeded) {
+ filteredActiveMergers.toSeq
+ } else {
+ // Delta mergers added from inactive mergers list to the active mergers list
+ val filteredDeadMergers = shuffleMergerLocations.values
Review comment:
"dead mergers" / "active" / "inactive" sound confused. "dead" or
"inactive" doesn't mean the merger is in bad status that can't work normally
but just mean there're no executors on the same host, right?
How about renaming `filteredActiveMergers` to `filteredMergersWithExecutors`
and `filteredDeadMergers` to `filteredMergersWithoutExecutors`?
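
For context, the pattern under discussion (keep one block manager per host, then
drop excluded hosts) can be sketched in isolation. This is a hedged, standalone
illustration, not Spark code: `Manager`, `byExecutor`, and the sample data are
hypothetical stand-ins for `BlockManagerId` and `blockManagerIdByExecutor`.

```scala
// Hypothetical stand-in for BlockManagerId: just an executor id and a host.
case class Manager(executorId: String, host: String)

object MergerSelectionSketch {
  def main(args: Array[String]): Unit = {
    // executorId -> Manager, mirroring the shape of blockManagerIdByExecutor.
    val byExecutor = Map(
      "exec-1" -> Manager("exec-1", "hostA"),
      "exec-2" -> Manager("exec-2", "hostA"), // second executor on hostA
      "exec-3" -> Manager("exec-3", "hostB"),
      "exec-4" -> Manager("exec-4", "hostC"))
    val hostsToFilter = Set("hostC")

    // Keep one entry per host, as in the diff's groupBy(_._2.host).mapValues(_.head).
    val onePerHost = byExecutor.groupBy(_._2.host)
      .map { case (_, group) => group.head._2 }
      .toSet
    // Drop hosts the caller asked to exclude. Per the review comment, these are
    // "mergers with executors", i.e. mergers co-located with a live executor.
    val mergersWithExecutors = onePerHost.filterNot(m => hostsToFilter.contains(m.host))

    println(mergersWithExecutors.map(_.host).toSeq.sorted.mkString(","))
    // prints "hostA,hostB": hostA is deduplicated, hostC is filtered out
  }
}
```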
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]