Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12113#discussion_r62361365
  
    --- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
    @@ -296,10 +290,86 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
       protected val mapStatuses = new ConcurrentHashMap[Int, Array[MapStatus]]().asScala
       private val cachedSerializedStatuses = new ConcurrentHashMap[Int, Array[Byte]]().asScala
     
    +  private val maxRpcMessageSize = RpcUtils.maxMessageSizeBytes(conf)
    +
    +  // Kept in sync with cachedSerializedStatuses explicitly
    +  // This is required so that the Broadcast variable remains in scope until we remove
    +  // the shuffleId explicitly or implicitly.
    +  private val cachedSerializedBroadcast = new HashMap[Int, Broadcast[Array[Byte]]]()
    +
    +  // This is to prevent multiple serializations of the same shuffle, which happens when
    +  // there is a request storm when a shuffle starts.
    +  private val shuffleIdLocks = new ConcurrentHashMap[Int, AnyRef]()
    +
    +  // requests for map output statuses
    +  private val mapOutputRequests = new LinkedBlockingQueue[GetMapOutputMessage]
    +
    +  // Thread pool used for handling map output status requests. This is a separate thread pool
    +  // to ensure we don't block the normal dispatcher threads.
    +  private val threadpool: ThreadPoolExecutor = {
    +    val numThreads = conf.getInt("spark.shuffle.mapOutput.dispatcher.numThreads", 8)
    +    val pool = ThreadUtils.newDaemonFixedThreadPool(numThreads, "map-output-dispatcher")
    +    for (i <- 0 until numThreads) {
    +      pool.execute(new MessageLoop)
    +    }
    +    pool
    +  }
    +
    +  // Make sure that we aren't going to exceed the max RPC message size by making sure
    +  // we use broadcast to send large map output statuses.
    +  if (minSizeForBroadcast > maxRpcMessageSize) {
    +    val msg = s"spark.shuffle.mapOutput.minSizeForBroadcast ($minSizeForBroadcast bytes) must " +
    +      s"be <= spark.rpc.message.maxSize ($maxRpcMessageSize bytes) to prevent sending an rpc " +
    +      "message that is too large."
    +    logError(msg)
    +    throw new IllegalArgumentException(msg)
    --- End diff --
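
    For context on the hunk above: requests are enqueued on a LinkedBlockingQueue and
    drained by a fixed pool of daemon threads, so the (potentially slow) serialization
    work never ties up the normal RPC dispatcher threads. Below is a minimal standalone
    sketch of that dispatcher pattern; DispatcherSketch, GetStatuses, and the PoisonPill
    sentinel are illustrative names for this sketch, not necessarily the PR's actual types.

        import java.util.concurrent.{ExecutorService, Executors, LinkedBlockingQueue, ThreadFactory}

        object DispatcherSketch {
          sealed trait Message
          final case class GetStatuses(shuffleId: Int) extends Message
          case object PoisonPill extends Message // sentinel telling the loops to exit

          private val requests = new LinkedBlockingQueue[Message]()

          // Each loop blocks on the queue and does the slow work there, off the
          // threads that received the RPC in the first place.
          private class MessageLoop extends Runnable {
            override def run(): Unit = {
              var running = true
              while (running) {
                requests.take() match {
                  case PoisonPill =>
                    requests.offer(PoisonPill) // re-enqueue so sibling loops exit too
                    running = false
                  case GetStatuses(shuffleId) =>
                    println(s"serializing and answering statuses for shuffle $shuffleId")
                }
              }
            }
          }

          // Stand-in for Spark's ThreadUtils.newDaemonFixedThreadPool.
          private val daemonFactory = new ThreadFactory {
            override def newThread(r: Runnable): Thread = {
              val t = new Thread(r, "map-output-dispatcher")
              t.setDaemon(true)
              t
            }
          }

          private val numThreads = 8
          private val pool: ExecutorService = Executors.newFixedThreadPool(numThreads, daemonFactory)
          (0 until numThreads).foreach(_ => pool.execute(new MessageLoop))

          def post(message: Message): Unit = requests.offer(message)

          def stop(): Unit = {
            requests.offer(PoisonPill)
            pool.shutdown()
          }
        }

    The shuffleIdLocks map in the diff is complementary: the first thread to serialize a
    given shuffle's statuses holds a per-shuffle lock, so a request storm for one shuffle
    produces a single serialization instead of many.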
    
    I think this is all personal opinion, and in some situations I agree, but in
    this case a human set what I would consider an advanced config to a bad value,
    and we should tell them upfront rather than have them expect one thing and get
    another. If they are changing advanced configs they are probably tuning their
    job, and thus shouldn't be frustrated by us telling them they are doing
    something wrong. It doesn't come like this out of the box when a user is
    running a simple wordcount or SQL statement.
    
    If this is all that is keeping this from being committed, though, I will change
    it, as to me it's not worth discussing anymore.
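
    For what it's worth, the behavior under discussion is a plain validate-at-construction
    check, which is easy to exercise in isolation. A minimal sketch, assuming the two
    values have already been read from the conf; the config key names match the diff,
    but the TrackerConfigCheck wrapper class is illustrative only:

        // Minimal sketch of the fail-fast check from the diff. The config key
        // names match the PR; this wrapper class is illustrative only.
        class TrackerConfigCheck(minSizeForBroadcast: Long, maxRpcMessageSize: Long) {
          if (minSizeForBroadcast > maxRpcMessageSize) {
            val msg = s"spark.shuffle.mapOutput.minSizeForBroadcast ($minSizeForBroadcast bytes) must " +
              s"be <= spark.rpc.message.maxSize ($maxRpcMessageSize bytes) to prevent sending an rpc " +
              "message that is too large."
            throw new IllegalArgumentException(msg)
          }
        }

        object TrackerConfigCheck {
          def main(args: Array[String]): Unit = {
            // Fine: broadcast threshold well below the RPC cap.
            new TrackerConfigCheck(minSizeForBroadcast = 512L * 1024, maxRpcMessageSize = 128L * 1024 * 1024)
            // Throws IllegalArgumentException at construction, before any job runs:
            new TrackerConfigCheck(minSizeForBroadcast = 256L * 1024 * 1024, maxRpcMessageSize = 128L * 1024 * 1024)
          }
        }

    Failing in the constructor means a bad pairing of these two advanced configs surfaces
    at startup, rather than as an oversized RPC partway through a job.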


