Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12113#discussion_r62119690
  
    --- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
    @@ -296,10 +290,89 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
       protected val mapStatuses = new ConcurrentHashMap[Int, Array[MapStatus]]().asScala
       private val cachedSerializedStatuses = new ConcurrentHashMap[Int, Array[Byte]]().asScala
     
    +  private val maxRpcMessageSize = RpcUtils.maxMessageSizeBytes(conf)
    +
    +  // Kept in sync with cachedSerializedStatuses explicitly.
    +  // This is required so that the Broadcast variable remains in scope until we remove
    +  // the shuffleId explicitly or implicitly.
    +  private val cachedSerializedBroadcast = new HashMap[Int, Broadcast[Array[Byte]]]()
    +
    +  // This is to prevent multiple serializations of the same shuffle, which happens when
    +  // there is a request storm as a shuffle starts.
    +  private val shuffleIdLocks = new ConcurrentHashMap[Int, AnyRef]()
    +
    +  // Requests for map output statuses
    +  private val mapOutputRequests = new LinkedBlockingQueue[GetMapOutputMessage]
    +
    +  // Thread pool used for handling map output status requests. This is a separate thread pool
    +  // to ensure we don't block the normal dispatcher threads.
    +  private val threadpool: ThreadPoolExecutor = {
    +    val numThreads = conf.getInt("spark.shuffle.mapOutput.dispatcher.numThreads", 8)
    +    val pool = ThreadUtils.newDaemonFixedThreadPool(numThreads, "map-output-dispatcher")
    +    for (i <- 0 until numThreads) {
    +      pool.execute(new MessageLoop)
    +    }
    +    pool
    +  }
    +
    +  def post(message: GetMapOutputMessage): Unit = {
    +    mapOutputRequests.offer(message)
    +  }
    +
    +  /** Message loop used for dispatching messages. */
    +  private class MessageLoop extends Runnable {
    +    override def run(): Unit = {
    +      try {
    +        while (true) {
    +          try {
    +            val data = mapOutputRequests.take()
    +            if (data == PoisonPill) {
    +              // Put PoisonPill back so that other MessageLoops can see it.
    +              mapOutputRequests.offer(PoisonPill)
    +              return
    +            }
    +            val context = data.context
    +            val shuffleId = data.shuffleId
    +            val hostPort = context.senderAddress.hostPort
    +            logDebug("Handling request to send map output locations for shuffle " + shuffleId +
    +              " to " + hostPort)
    +            val mapOutputStatuses = getSerializedMapOutputStatuses(shuffleId)
    +            val serializedSize = mapOutputStatuses.length
    +            if (serializedSize > maxRpcMessageSize) {
    +              val msg = s"Map output statuses were $serializedSize bytes which " +
    +                s"exceeds spark.rpc.message.maxSize ($maxRpcMessageSize bytes)."
    +
    +              // For SPARK-1244 we'll opt for just logging an error and then sending it to
    +              // the sender. A bigger refactoring (SPARK-1239) will ultimately remove this
    --- End diff --
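
    For readers skimming the diff, the MessageLoop/PoisonPill dispatch above can be sketched in isolation. This is a minimal standalone sketch of the pattern, not the patch's code: the object name, `Message` types, and `run` helper are all illustrative. The key idea is that a single sentinel message is re-offered by whichever loop dequeues it, so one `PoisonPill` shuts down the whole pool.

    ```scala
    import java.util.concurrent.{Executors, LinkedBlockingQueue, TimeUnit}
    import java.util.concurrent.atomic.AtomicInteger

    object MessageLoopSketch {
      sealed trait Message
      final case class Request(id: Int) extends Message
      case object PoisonPill extends Message // sentinel: tells each loop to exit

      /** Runs numRequests requests through numThreads loops; returns how many were handled. */
      def run(numThreads: Int, numRequests: Int): Int = {
        val queue   = new LinkedBlockingQueue[Message]()
        val handled = new AtomicInteger(0)

        class MessageLoop extends Runnable {
          override def run(): Unit = {
            while (true) {
              queue.take() match {
                case PoisonPill =>
                  // Re-offer the pill so the remaining loops also see it and exit.
                  queue.offer(PoisonPill)
                  return
                case Request(_) =>
                  handled.incrementAndGet()
              }
            }
          }
        }

        val pool = Executors.newFixedThreadPool(numThreads)
        (0 until numThreads).foreach(_ => pool.execute(new MessageLoop))
        (1 to numRequests).foreach(i => queue.offer(Request(i)))
        queue.offer(PoisonPill) // posted once; the loops propagate it to each other
        pool.shutdown()
        pool.awaitTermination(10, TimeUnit.SECONDS)
        handled.get
      }

      def main(args: Array[String]): Unit =
        println(run(numThreads = 2, numRequests = 5)) // all 5 requests handled
    }
    ```

    Because the queue is FIFO and the pill is offered last, every request is dequeued before any loop sees the pill, and `awaitTermination` ensures all handlers have finished before the count is read.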
    
    We could, but one of the reasons I like the broadcast size being configurable is that it allows a fallback in case something goes wrong with broadcasts, or someone simply doesn't want to use them: you can configure the broadcast min size to be very large and get the previous behavior back. We've been running this for quite some time now with no issues, but I'm sure what we run doesn't cover all use cases out there.
    
    I guess we could still allow that, but then you'd have to set the min broadcast size to something like maxSize - 1. So I'm fine with either; would you like me to change it?
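
    For context, the fallback being discussed amounts to a size-based dispatch. The helper and names below are an illustrative sketch of the behavior described in the comment, not the patch's exact code or config keys: statuses at or above the configured threshold go over a broadcast variable, smaller ones over direct RPC, so a very large threshold effectively disables broadcasts and restores the old path.

    ```scala
    object BroadcastFallbackSketch {
      // Illustrative: choose how to ship serialized map statuses based on size.
      def chooseTransport(serializedSize: Int, minSizeForBroadcast: Int): String =
        if (serializedSize >= minSizeForBroadcast) "broadcast" else "direct-rpc"

      def main(args: Array[String]): Unit = {
        // Typical setting: large statuses go over a broadcast variable.
        println(chooseTransport(10 * 1024 * 1024, minSizeForBroadcast = 512 * 1024))
        // "Disable" broadcasts by setting the threshold very large: previous behavior.
        println(chooseTransport(10 * 1024 * 1024, minSizeForBroadcast = Int.MaxValue))
      }
    }
    ```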


