GitHub user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2366#discussion_r17904273
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -787,31 +789,88 @@ private[spark] class BlockManager(
       }
     
       /**
    +   * Get peer block managers in the system.
    +   */
    +  private def getPeers(forceFetch: Boolean): HashSet[BlockManagerId] = {
     +    val cachedPeersTtl = conf.getInt("spark.storage.cachedPeersTtl", 60 * 1000) // milliseconds
     +    val timeout = System.currentTimeMillis - lastPeerFetchTime > cachedPeersTtl
    +
    +    cachedPeers.synchronized {
    +      if (cachedPeers.isEmpty || forceFetch || timeout) {
    +        cachedPeers.clear()
    +        cachedPeers ++= master.getPeers(blockManagerId).sortBy(_.hashCode)
    +        lastPeerFetchTime = System.currentTimeMillis
    +        logDebug("Fetched peers from master: " + cachedPeers.mkString("[", 
",", "]"))
    +      }
    +    }
    +    cachedPeers
    +  }
    +
    +  /**
        * Replicate block to another node.
        */
    -  @volatile var cachedPeers: Seq[BlockManagerId] = null
       private def replicate(blockId: BlockId, data: ByteBuffer, level: StorageLevel): Unit = {
     +    val maxReplicationFailures = conf.getInt("spark.storage.maxReplicationFailures", 1)
    +    val numPeersToReplicateTo = level.replication - 1
    +    val peersReplicatedTo = new HashSet[BlockManagerId]
    +    val peersFailedToReplicateTo = new HashSet[BlockManagerId]
         val tLevel = StorageLevel(
            level.useDisk, level.useMemory, level.useOffHeap, level.deserialized, 1)
    -    if (cachedPeers == null) {
    -      cachedPeers = master.getPeers(blockManagerId, level.replication - 1)
    +    val startTime = System.nanoTime
    +    val random = new Random(blockId.hashCode)
    +
    +    var forceFetchPeers = false
    +    var failures = 0
    +    var done = false
    +
    +    // Get a random peer
    +    def getRandomPeer(): Option[BlockManagerId] = {
     +      val peers = getPeers(forceFetchPeers) -- peersReplicatedTo -- peersFailedToReplicateTo
     +      if (!peers.isEmpty) Some(peers.toSeq(random.nextInt(peers.size))) else None
    --- End diff --
    
    @mridulm You read that correctly, but @rxin's point is different. He is 
wondering whether converting the `HashSet` to a `Seq` (so that we can select 
one by index) every time we select a peer could be expensive if the set of 
peers is large (say, in a cluster with 1000s of nodes). I am not entirely 
convinced that the computation would be expensive even when the list of nodes 
is O(1000), but I will still make an attempt to make this more efficient. In 
fact, the repeated calculation of `peers` in line 828 every time a peer needs 
to be selected can also be avoided.
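
    One possible way to avoid both costs (a sketch only, against the patch as 
it stands; the helper names `peerCandidates`, `nextPeerIndex`, and 
`refreshPeerCandidates` are made up for illustration and are not in the 
patch): materialize and shuffle the eligible peers once inside `replicate`, 
and only redo the set differences when the candidate list has to be 
refreshed, instead of doing `HashSet` -> `Seq` on every selection.

    ```scala
    // Sketch: shuffle the eligible peers once and then draw from the resulting
    // Seq by index, instead of recomputing the set differences and converting
    // the HashSet to a Seq on every call to getRandomPeer(). Reuses the patch's
    // getPeers, peersReplicatedTo, peersFailedToReplicateTo, and random.
    var peerCandidates: Seq[BlockManagerId] = Seq.empty
    var nextPeerIndex = 0

    def refreshPeerCandidates(forceFetch: Boolean): Unit = {
      val eligible = getPeers(forceFetch) -- peersReplicatedTo -- peersFailedToReplicateTo
      peerCandidates = random.shuffle(eligible.toSeq)
      nextPeerIndex = 0
    }

    def getRandomPeer(): Option[BlockManagerId] = {
      // Refresh lazily when the current candidate list is exhausted; after a
      // replication failure the caller can call refreshPeerCandidates(true).
      if (nextPeerIndex >= peerCandidates.size) {
        refreshPeerCandidates(forceFetchPeers)
      }
      if (nextPeerIndex < peerCandidates.size) {
        val peer = peerCandidates(nextPeerIndex)
        nextPeerIndex += 1
        Some(peer)
      } else {
        None
      }
    }
    ```

    That keeps the per-selection work O(1) and confines the set differences 
and the shuffle to the (rare) refresh path; whether that bookkeeping is worth 
it for O(1000) peers is exactly the question above.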

