Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4770#discussion_r25380094
  
    --- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
    @@ -715,12 +715,8 @@ private[spark] object Utils extends Logging {
     
       /** Get the Yarn approved local directories. */
       private def getYarnLocalDirs(conf: SparkConf): String = {
    -    // Hadoop 0.23 and 2.x have different Environment variable names for the
    -    // local dirs, so lets check both. We assume one of the 2 is set.
    -    // LOCAL_DIRS => 2.X, YARN_LOCAL_DIRS => 0.23.X
    -    val localDirs = Option(conf.getenv("YARN_LOCAL_DIRS"))
    -      .getOrElse(Option(conf.getenv("LOCAL_DIRS"))
    -      .getOrElse(""))
    +    //YarnLocalDirs must be inside container directory. Since it will be automatically deleted when container shut downs.
    +    val localDirs = Option(System.getProperty("user.dir")).getOrElse(""))
    --- End diff --
    
    If the problem is with shuffle files accumulating, as I suggested before, 
my understanding is that `ContextCleaner` would take care of this. Maybe your 
application is not releasing RDDs for garbage collection, in which case the 
cleaner wouldn't be able to do much. Or maybe the cleaner has a bug, or wasn't 
supposed to do that in the first place.
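    
    To make that concrete, here is a minimal sketch of the cleanup path I mean (local mode, illustrative names and numbers only; `ContextCleaner` is driven by the `spark.cleaner.referenceTracking` setting, which is on by default):
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    object ShuffleCleanupSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("shuffle-cleanup-sketch").setMaster("local[2]"))
    
        // The shuffle below (reduceByKey) writes intermediate files under the
        // configured local dirs.
        var grouped = sc.parallelize(1 to 100000)
          .map(i => (i % 100, i))
          .reduceByKey(_ + _)
        grouped.count()
    
        // Dropping the last reference makes the RDD and its shuffle dependency
        // eligible for garbage collection. ContextCleaner tracks them through
        // weak references and, once they are collected, asynchronously removes
        // the corresponding shuffle files from the local dirs.
        grouped = null
        System.gc()        // nudge the JVM so the cleaner's reference queue fires sooner
        Thread.sleep(5000) // give the asynchronous cleaner a moment (illustration only)
    
        sc.stop()
      }
    }
    ```
    
    The explicit `System.gc()` and sleep are only there to make the asynchronous cleanup observable in a toy example; in a real application, ordinary GC activity drives it.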
    
    But the point here is that your patch is not correct. It breaks two 
existing features.


