Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/7051#issuecomment-116875310
  
    @JoshRosen and I discussed this offline. Apparently this feature has 
been broken for a long time. As of recently, each Spark executor gets its 
own unique directory, so a shared cache no longer makes sense. The 
proposal is simply to remove this feature since it (1) doesn't work, (2) is very 
complex, and (3) cannot be fixed given the fundamental executor temp 
dir constraints.

