GitHub user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/4620#issuecomment-74908505
  
    Sorry, but this patch is not correct. As @growse mentions, when 
`SPARK_LOCAL_DIRS` is not set, this code will try to change the permissions of 
`/tmp` on Unix machines. It will also use `/tmp` as the local dir for the 
driver in client mode, which is exactly what the original change was trying to 
avoid.
    
    The correct fix here, if you really care about cleaning up the extra 
directory, is to export a different env variable from the `Worker` 
([here](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala#L134))
 and handle that variable specially in `getOrCreateLocalRootDirs`. When that 
new env variable is set, the code would behave just like the 
`isRunningInYarnContainer()` case above the change you're making.
    
    @srowen the current code shouldn't create a cascade of directories, but it 
does create a two-level-deep "spark-xxxx" hierarchy for executors in standalone 
mode.
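    
    (Roughly `<local root>/spark-<uuid-1>/spark-<uuid-2>/...`, where 
presumably the outer directory comes from the `Worker` via 
`SPARK_LOCAL_DIRS` and the inner one is created by 
`getOrCreateLocalRootDirs` in the executor.)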

