GitHub user tdas commented on the pull request:

    https://github.com/apache/spark/pull/2056#issuecomment-53469866
  
    This could be an acceptable solution, but we don't know whether 10 * the default
    Akka timeout is a good threshold either. The error is happening because we made
    the ContextCleaner block on each deletion event (RDD, broadcast, or shuffle),
    and shuffle cleanups can take a long time to complete. Given that the RC is
    imminent, it's better not to change the behavior for all blocks but to make a
    narrower change in the ContextCleaner: don't wait for shuffle cleanups to complete.
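
A minimal sketch of the narrower change described above: block on RDD and broadcast removals, but fire shuffle removals asynchronously so a slow shuffle deletion cannot stall the cleaning thread. This is an illustration only, not the actual Spark patch; the names (CleanerSketch, remoteRemove, askTimeout) are hypothetical stand-ins for the ContextCleaner internals.

    // Illustrative sketch only: a cleaner that waits for RDD/broadcast
    // removals but does not block on shuffle removals.
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._
    import scala.concurrent.ExecutionContext.Implicits.global

    object CleanerSketch {
      sealed trait CleanupTask
      case class CleanRDD(id: Int) extends CleanupTask
      case class CleanBroadcast(id: Long) extends CleanupTask
      case class CleanShuffle(id: Int) extends CleanupTask

      // Stand-in for the remote "remove" call (which goes over Akka in Spark).
      def remoteRemove(task: CleanupTask): Future[Boolean] = Future {
        Thread.sleep(10) // pretend the deletion takes a while
        true
      }

      // Analogous to the ask timeout the cleaner would otherwise hit.
      val askTimeout = 30.seconds

      def handleTask(task: CleanupTask): Unit = task match {
        case CleanShuffle(_) =>
          // Narrower change: do not wait for shuffle cleanups; just log failures.
          remoteRemove(task).failed.foreach(e => println(s"shuffle cleanup failed: $e"))
        case _ =>
          // RDD and broadcast cleanups still block, bounded by the timeout.
          Await.result(remoteRemove(task), askTimeout)
      }
    }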


