GitHub user a1k0n opened a pull request:
https://github.com/apache/spark/pull/11505
[SPARK-13631] [CORE] Thread-safe getLocationsWithLargestOutputs
## What changes were proposed in this pull request?
If a job being scheduled in one thread depends on an RDD whose shuffle is
currently executing in another thread, Spark can throw a
NullPointerException. This patch synchronizes access to `mapStatuses` and
skips null status entries (which correspond to in-progress shuffle tasks).
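A minimal sketch of the pattern described above, not the exact patch: the names `getLocationsWithLargestOutputs` and `mapStatuses` come from the PR title and description, while the surrounding structure (a `MapOutputTracker`-style map from shuffle id to an array of map statuses) is an assumption for illustration.

```scala
// Assumed shape: mapStatuses maps a shuffleId to an Array[MapStatus];
// individual entries may still be null while their shuffle tasks run.
val statuses = mapStatuses.get(shuffleId).orNull
if (statuses != null) {
  // Hold the array's lock while iterating so a shuffle completing in
  // another thread cannot mutate entries out from under us.
  statuses.synchronized {
    var mapIdx = 0
    while (mapIdx < statuses.length) {
      val status = statuses(mapIdx)
      // Skip in-progress tasks (null statuses) instead of dereferencing
      // them, which is what previously caused the NullPointerException.
      if (status != null) {
        // ... accumulate output sizes per location as before ...
      }
      mapIdx += 1
    }
  }
}
```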
## How was this patch tested?
Our client code's unit test suite, which reliably reproduced the race
condition with 10 threads, shows that this fixes it. I have not found a
minimal test case to add to Spark, but I will attempt to do so if desired.
The same test case was tripping up on SPARK-4454, which was fixed by
making other DAGScheduler code thread-safe.
@shivaram @srowen
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/a1k0n/spark SPARK-13631
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/11505.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #11505
----
commit 7fad8fa5734559b87cf05b030a8e4c716880c60f
Author: Andy Sloane <[email protected]>
Date: 2016-03-04T00:16:35Z
[SPARK-13631] [CORE] Thread-safe getLocationsWithLargestOutputs
If a job is being scheduled in one thread which has a dependency on an
RDD currently executing a shuffle in another thread, Spark would throw a
NullPointerException.
----