GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/14997
[SPARK-16711] YarnShuffleService doesn't re-init properly on YARN rolling upgrade
This is the branch-2.0 version of this patch. The differences are in how
YarnShuffleService finds the location to put the DB; branch-2.0 does not use
the YARN NM recovery path like master does.
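For illustration, a minimal sketch (hypothetical class and method names, not
the actual patch code) of what finding an existing DB across the NM local
dirs can look like on branch-2.0, where there is no recovery path to rely on:

    import java.io.File;

    public class ShuffleDbLocator {
      // Return an existing DB file if a previous service instance left one
      // behind in any of the NM local dirs; otherwise place it in the first dir.
      static File findDbFile(String[] localDirs, String dbName) {
        for (String dir : localDirs) {
          File candidate = new File(dir, dbName);
          if (candidate.exists()) {
            return candidate;
          }
        }
        return new File(localDirs[0], dbName);
      }
    }

Scanning every dir before defaulting matters because the NM can be configured
with several local dirs, and the DB has to be found again after a restart
regardless of which dir it originally landed in.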
Tested manually on an 8-node YARN cluster and ran unit tests. Manual tests
verified the DBs were created properly and were found again if they already
existed. Verified that during a rolling upgrade credentials were reloaded and
the running application was not affected.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tgravescs/spark SPARK-16711-branch2.0
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/14997.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #14997
----
commit 1f33218f9d18fd0c009af054126356deaa7cf48f
Author: Thomas Graves <[email protected]>
Date: 2016-09-02T17:42:13Z
[SPARK-16711] YarnShuffleService doesn't re-init properly on YARN rolling upgrade
The Spark YARN shuffle service doesn't re-initialize the application
credentials early enough, which causes any other Spark executors trying to
fetch from that node during a rolling upgrade to fail with
"java.lang.NullPointerException: Password cannot be null if SASL is enabled".
Right now the Spark shuffle service relies on the YARN NodeManager to
re-register the applications, but unfortunately that happens after we open the
port for other executors to connect. If other executors connect before the
re-registration, they get a NullPointerException, which isn't a retryable
exception, so they fail pretty quickly. To solve this I added another LevelDB
file so that the service can save and re-initialize all the applications
before opening the port for other executors to connect to. Adding another
LevelDB was simpler from a code-structure point of view.
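To make the mechanism concrete, here is an illustrative sketch only (class and
method names are hypothetical, not the code from the patch): each
application's SASL secret is written to LevelDB when the app registers, and
all saved secrets are replayed into the secret store during serviceInit(),
before the listening port is opened. The LevelDB calls are from the
leveldbjni library the shuffle service already depends on.

    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import org.iq80.leveldb.DB;
    import org.iq80.leveldb.DBIterator;
    import org.iq80.leveldb.Options;
    import static org.fusesource.leveldbjni.JniDBFactory.factory;

    public class SecretRecoveryDb {
      private final DB db;

      public SecretRecoveryDb(File dbFile) throws Exception {
        Options options = new Options();
        options.createIfMissing(true);
        db = factory.open(dbFile, options);
      }

      // Called when the NM registers an application with the shuffle service.
      public void saveSecret(String appId, String secret) {
        db.put(appId.getBytes(StandardCharsets.UTF_8),
               secret.getBytes(StandardCharsets.UTF_8));
      }

      // Called from serviceInit(), before the transport server is created, so
      // fetches arriving right after an NM restart can still authenticate.
      public void reloadSecrets(Map<String, String> secretsByAppId) throws Exception {
        try (DBIterator it = db.iterator()) {
          it.seekToFirst();
          while (it.hasNext()) {
            Map.Entry<byte[], byte[]> e = it.next();
            secretsByAppId.put(new String(e.getKey(), StandardCharsets.UTF_8),
                               new String(e.getValue(), StandardCharsets.UTF_8));
          }
        }
      }
    }

The key ordering point is that reloadSecrets() runs before the port is bound,
so an executor can never connect to a node whose secrets haven't been
restored yet.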
Most of the code changes move things into a common util class.
The patch was tested manually on a YARN cluster where a rolling upgrade was
happening while a Spark job was running. Without the patch I consistently got
the NullPointerException; with the patch the job gets a few Connection refused
exceptions, but the retries kick in and it succeeds.
Author: Thomas Graves <[email protected]>
Closes #14718 from tgravescs/SPARK-16711.
Conflicts:
common/network-shuffle/pom.xml
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockResolver.java
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
commit 5cae33916953011a38f7f495caba0f333112a088
Author: Thomas Graves <[email protected]>
Date: 2016-09-07T15:49:21Z
Fix finddb
commit 40e6be3acd6889b3b965fba520df87af1d5d671a
Author: Thomas Graves <[email protected]>
Date: 2016-09-07T15:54:42Z
remove unused vars from merge
----