Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2143#discussion_r16735777
  
    --- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
    @@ -76,6 +76,20 @@ private[spark] class ContextCleaner(sc: SparkContext) extends Logging {
       private val blockOnCleanupTasks = sc.conf.getBoolean(
         "spark.cleaner.referenceTracking.blocking", true)
     
    +  /**
    +   * Whether to disable blocking on shuffle cleanup tasks. This override is effective only when
    +   * blocking on cleanup tasks is enabled.
    +   *
    +   * When the context cleaner is configured to block on every delete request, it can throw timeout
    +   * exceptions on cleanup of shuffle blocks, as reported in SPARK-3139. To avoid that, this
    +   * parameter disables blocking on shuffle cleanups even when `blockOnCleanupTasks` is true.
    +   * Note that this does not affect the cleanup of RDDs and broadcasts.
    +   * This is intended to be a temporary workaround, until the real Akka issue (referred to in
    +   * the comment above `blockOnCleanupTasks`) is resolved.
    +   */
    +  private val disableBlockOnShuffleCleanupTasks = sc.conf.getBoolean(
    +    "spark.cleaner.referenceTracking.disableBlockingForShuffles", true)
    --- End diff --
    
    Yes, I was following Josh's argument. I found this more intuitive, but I have no strong opinions. I'll go with whatever you guys decide.
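    For readers skimming the thread, the interplay of the two settings reduces to a single boolean: shuffle cleanup blocks only when blocking is enabled globally and the shuffle override is switched off. A minimal self-contained sketch of that logic (`SimpleConf` and `shouldBlockOnShuffleCleanup` are illustrative stand-ins, not Spark's actual `SparkConf`/`ContextCleaner` API):

```scala
// Stand-in for SparkConf: a map of string settings with typed defaults.
class SimpleConf(settings: Map[String, String]) {
  def getBoolean(key: String, default: Boolean): Boolean =
    settings.get(key).map(_.toBoolean).getOrElse(default)
}

object CleanerBlockingDemo {
  // Combines the two flags the way the doc comment above describes:
  // the shuffle override only matters when blocking is enabled at all.
  def shouldBlockOnShuffleCleanup(conf: SimpleConf): Boolean = {
    val blockOnCleanupTasks =
      conf.getBoolean("spark.cleaner.referenceTracking.blocking", true)
    val disableBlockingForShuffles =
      conf.getBoolean("spark.cleaner.referenceTracking.disableBlockingForShuffles", true)
    blockOnCleanupTasks && !disableBlockingForShuffles
  }

  def main(args: Array[String]): Unit = {
    // Defaults: blocking on, shuffle override on -> shuffle cleanup does not block.
    println(shouldBlockOnShuffleCleanup(new SimpleConf(Map.empty)))
    // Explicitly re-enable blocking for shuffles.
    println(shouldBlockOnShuffleCleanup(new SimpleConf(
      Map("spark.cleaner.referenceTracking.disableBlockingForShuffles" -> "false"))))
  }
}
```

    With the proposed defaults, shuffle cleanups fall back to non-blocking behavior (avoiding the SPARK-3139 timeouts) while RDD and broadcast cleanups keep whatever `blockOnCleanupTasks` says.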


