GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/8703
Check partitionId's range in ExternalSorter#spill()
See this thread for background:
http://search-hadoop.com/m/q3RTt0rWvIkHAE81
We should check the range of the partition Id and raise an exception with a
meaningful message when it is out of range.
Alternatively, we could use abs() and modulo to force the partition Id into
the legitimate range. However, the expectation is that the user should correct
the logic error in their code.
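A minimal sketch of the kind of guard being proposed (the names
`checkPartitionId` and `numPartitions` are illustrative, not the actual
ExternalSorter internals, which take the partition count from the
partitioner):

```scala
// Hypothetical helper: validate a partition Id before spilling a record.
// Throws IllegalArgumentException with a descriptive message instead of
// silently writing to a bogus partition.
object PartitionIdCheck {
  def checkPartitionId(partitionId: Int, numPartitions: Int): Unit = {
    require(partitionId >= 0 && partitionId < numPartitions,
      s"partition Id $partitionId should be in the range [0, $numPartitions)")
  }
}
```

A user-side Partitioner returning a negative or too-large Id would then fail
fast with this message rather than corrupting the spill files.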
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/8703.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #8703
----
commit 70eb93eb6fa269c26f82e1125aeea69a589d0428
Author: tedyu <[email protected]>
Date: 2015-09-10T18:27:01Z
Check partitionId's range in ExternalSorter#spill()
commit 63dfe1191ed9f04560cfd39fe01108fcef462673
Author: tedyu <[email protected]>
Date: 2015-09-10T18:29:11Z
Correct indentation
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]