GitHub user bbossy opened a pull request:

    https://github.com/apache/spark/pull/11207

    [SPARK-12583] Fix Mesos shuffle service

    Delete shuffle files once a framework is no longer running:
    
    Instead of relying on a connection being disconnected or a heartbeat 
signal being lost, the Mesos shuffle service periodically checks whether the 
framework (Spark application) is still running:
    
    Every ```spark.storage.blockManagerSlaveTimeoutMs / 4``` milliseconds, the 
Mesos shuffle service retrieves the leading master's ```/master/state.json``` 
and verifies that the reply actually came from the leading master. For each 
framework still reported as running, it updates a "last seen" timestamp in its 
internal state (Spark applications on Mesos register with the external shuffle 
service using their framework id). It then deletes the temporary files of all 
previously registered frameworks that have not been reported as running within 
```spark.storage.blockManagerSlaveTimeoutMs```.
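
    The timeout bookkeeping described above can be sketched as a small 
last-seen map. This is an illustrative sketch only; the names 
(```FrameworkCleaner```, ```markRunning```, ```prune```) are hypothetical and 
not the PR's actual API:

```scala
import scala.collection.mutable

// Hypothetical sketch of the timeout-based cleanup: track when each
// registered framework was last reported as running, and expire the ones
// not seen within the timeout so their shuffle files can be deleted.
class FrameworkCleaner(timeoutMs: Long) {
  private val lastSeen = mutable.Map.empty[String, Long]

  // Called when a Spark application registers with its framework id.
  def register(frameworkId: String, nowMs: Long): Unit =
    lastSeen(frameworkId) = nowMs

  // Called after each successful poll of the leading master's
  // /master/state.json: refresh every registered framework that the
  // master still reports as running.
  def markRunning(runningIds: Set[String], nowMs: Long): Unit =
    runningIds.foreach { id =>
      if (lastSeen.contains(id)) lastSeen(id) = nowMs
    }

  // Expire frameworks not seen within the timeout and return their ids,
  // so the caller can delete their temporary shuffle files.
  def prune(nowMs: Long): Seq[String] = {
    val expired = lastSeen.collect {
      case (id, seen) if nowMs - seen > timeoutMs => id
    }.toSeq
    expired.foreach(lastSeen.remove)
    expired
  }
}
```

    In this sketch, a framework that misses enough consecutive polls (the 
service polls four times per timeout period) is pruned on the next cleanup 
pass.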


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/bbossy/spark SPARK-12583-mesos-shuffle-service-fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11207
    
----
commit e387def7f2b1a82d90f337606f49b562401d0714
Author: Bertrand Bossy <[email protected]>
Date:   2016-02-12T17:08:53Z

    SPARK-12583: Fix mesos shuffle service
    
    Delete shuffle files once a framework is no longer running

----

