Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11505#discussion_r55564271
  
    --- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
    @@ -386,28 +384,36 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
           fractionThreshold: Double)
         : Option[Array[BlockManagerId]] = {
     
    -    if (mapStatuses.contains(shuffleId)) {
    -      val statuses = mapStatuses(shuffleId)
    -      if (statuses.nonEmpty) {
    -        // HashMap to add up sizes of all blocks at the same location
    -        val locs = new HashMap[BlockManagerId, Long]
    -        var totalOutputSize = 0L
    -        var mapIdx = 0
    -        while (mapIdx < statuses.length) {
    -          val status = statuses(mapIdx)
    -          val blockSize = status.getSizeForBlock(reducerId)
    -          if (blockSize > 0) {
    -            locs(status.location) = locs.getOrElse(status.location, 0L) + blockSize
    -            totalOutputSize += blockSize
    +    val statuses = mapStatuses.get(shuffleId).orNull
    +    if (statuses != null) {
    +      statuses.synchronized {
    +        if (statuses.nonEmpty) {
    +          // HashMap to add up sizes of all blocks at the same location
    +          val locs = new HashMap[BlockManagerId, Long]
    +          var totalOutputSize = 0L
    +          var mapIdx = 0
    +          while (mapIdx < statuses.length) {
    +            val status = statuses(mapIdx)
    +            // status may be null here if we are called between registerShuffle, which creates an
    +            // array with null entries for each output, and registerMapOutputs, which populates it
    +            // with valid status entries. This is possible if one thread schedules a job which
    +            // depends on an RDD which is currently being computed by another thread.
    +            if (status != null) {
    --- End diff --
    
    I'm a bit late to this, but I'll note that there is a pattern that can be
    applied on the user side to avoid both the race and unnecessary
    recomputation of the same RDD. Putting the RDDs themselves into a
    Guava-style loading cache (caching the RDD, not the data produced by the
    action) allows the RDD to be constructed just once while multiple threads
    run their actions against that single cached RDD. That way all but the
    first attempt to evaluate an action runs on an RDD whose map outputs are
    already available, stages can be skipped, etc.
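    
    Roughly the kind of thing I have in mind (a sketch only; `RddCache`,
    `buildRdd`, and the key/value types are placeholders for whatever the
    application actually uses, not an existing API):
    
    ```scala
    import com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache}
    
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD
    
    // Sketch: cache the RDD itself, keyed by whatever identifies it, so that
    // concurrent threads all get the same RDD instance. Only the first action
    // computes the shuffle map outputs; later actions reuse them and can skip
    // the corresponding stages.
    class RddCache(sc: SparkContext) {
    
      // Placeholder for however the application builds the RDD for a given key.
      private def buildRdd(path: String): RDD[(String, Int)] =
        sc.textFile(path)
          .map(line => (line, 1))
          .reduceByKey(_ + _)
          .cache()
    
      private val rdds: LoadingCache[String, RDD[(String, Int)]] =
        CacheBuilder.newBuilder()
          .build[String, RDD[(String, Int)]](
            new CacheLoader[String, RDD[(String, Int)]] {
              override def load(path: String): RDD[(String, Int)] = buildRdd(path)
            })
    
      // Multiple threads can call this concurrently; they all act on the one
      // cached RDD rather than each constructing and recomputing their own copy.
      def countFor(path: String): Long = rdds.get(path).count()
    }
    ```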
    
    Some of the same ideas are covered in a different context in
    https://issues.apache.org/jira/browse/SPARK-11838

