GitHub user jerryshao opened a pull request:

    https://github.com/apache/spark/pull/19032

    [SPARK-17321][YARN] Avoid writing shuffle metadata to disk if NM recovery 
is disabled

    ## What changes were proposed in this pull request?
    
    In the current code, even if NM recovery is not enabled, `YarnShuffleService` 
will write shuffle metadata to the first NM local dir; if that dir is on a bad 
disk, `YarnShuffleService` will fail to start. To solve this, on the Spark side, 
if NM recovery is not enabled then Spark will not persist data into leveldb. In 
that case the YARN shuffle service can still serve shuffle data, but loses the 
ability to recover after an NM restart (which is fine, because an NM failure 
kills the containers and their applications anyway).
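    The proposed behavior can be sketched as follows. This is an illustrative 
Java snippet, not the actual `YarnShuffleService` code: the class and method 
names below are hypothetical, and only the decision logic (skip the on-disk 
metadata file when YARN provides no recovery path) reflects the change described 
above.

```java
import java.io.File;

// Hypothetical sketch: persist shuffle metadata only when NM recovery
// is enabled, i.e. when YARN hands the auxiliary service a recovery path.
public class ShuffleRecoverySketch {
    private final File recoveryPath; // null when NM recovery is disabled

    public ShuffleRecoverySketch(File recoveryPath) {
        this.recoveryPath = recoveryPath;
    }

    /** Returns the metadata file to persist to, or null to skip persistence. */
    public File registeredExecutorFile() {
        if (recoveryPath == null) {
            // NM recovery disabled: write nothing to disk. The service still
            // serves shuffle data, but cannot recover after an NM restart --
            // acceptable, since an NM failure kills the containers anyway.
            return null;
        }
        return new File(recoveryPath, "registeredExecutors.ldb");
    }

    public static void main(String[] args) {
        ShuffleRecoverySketch off = new ShuffleRecoverySketch(null);
        ShuffleRecoverySketch on =
            new ShuffleRecoverySketch(new File("/tmp/nm-recovery"));
        System.out.println(off.registeredExecutorFile());
        System.out.println(on.registeredExecutorFile());
    }
}
```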
    
    ## How was this patch tested?
    
    Tested on a local cluster with NM recovery off and on, checking whether the 
recovery folder is created. A MiniCluster UT isn't added because MiniCluster 
always sets the NM port to 0, while NM recovery requires a non-ephemeral port.
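    For reference, NM recovery is toggled in `yarn-site.xml` roughly as below. 
The property names are the standard YARN ones; the directory and port values 
are illustrative only.

```xml
<!-- yarn-site.xml fragment for the manual test: flip recovery.enabled
     between true and false for the two runs. Paths/ports are examples. -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/nm-recovery</value>
</property>
<property>
  <!-- NM recovery requires a fixed (non-ephemeral) NM port -->
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:45454</value>
</property>
```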


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jerryshao/apache-spark SPARK-17321

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19032.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19032
    
----
commit 5abbe75072cf3f172f0b2e448941b94d72268c90
Author: jerryshao <[email protected]>
Date:   2017-08-24T03:28:48Z

    Avoid writing shuffle metadata to disk if NM recovery is disabled
    
    Change-Id: Id062d71589f46052706058c151c706dae38b1e6e

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
