GitHub user sitalkedia commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18150#discussion_r121217764
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -190,6 +190,12 @@ class DAGScheduler(
       /**
        * Number of consecutive stage attempts allowed before a stage is aborted.
        */
    +  private[scheduler] val unRegisterOutputOnHostOnFetchFailure =
    +    sc.getConf.getBoolean("spark.fetch.failure.unRegister.output.on.host", true)
    --- End diff --
    
    @jiangxb1987 - Do we want to set the default to false? If we believe this is the correct and expected behavior, we should default it to true, and if we see issues in practice we can turn it off.
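    
    As a quick sketch (not part of this diff), a job that hits problems could opt out explicitly, assuming the config key stays as written above; turning it off would presumably fall back to unregistering only the failing executor's outputs rather than the whole host's:
    
        import org.apache.spark.{SparkConf, SparkContext}
        
        // Hypothetical opt-out: disable host-wide output unregistration on
        // fetch failure via the flag introduced in this diff.
        val conf = new SparkConf()
          .setAppName("fetch-failure-flag-example")
          .set("spark.fetch.failure.unRegister.output.on.host", "false")
        val sc = new SparkContext(conf)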

