GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/16677
[WIP][SQL] Use map output statistics to improve global limit's parallelism
## What changes were proposed in this pull request?
A logical `Limit` is actually executed by two physical operators,
`LocalLimit` and `GlobalLimit`.
Most of the time, before `GlobalLimit`, we perform a shuffle exchange
that moves all data into a single partition. When the limit number is very
big, we shuffle a lot of data into that single partition and significantly
reduce parallelism, in addition to paying the cost of the shuffle.
This change tries to perform `GlobalLimit` without shuffling data to a single
partition. Instead, we run only the map stage of the shuffle and collect
statistics on the number of rows in each partition. The shuffled data are
then all read back locally rather than fetched from remote executors.
Once we know the number of output rows in each partition, we take only the
required number of rows from the locally shuffled data.
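The per-partition bookkeeping this describes can be sketched roughly as
follows (a simplified illustration, not the actual Spark implementation;
`rows_to_take` is a hypothetical helper name):

```python
def rows_to_take(partition_row_counts, limit):
    """Given map-output statistics (rows per partition) and a global limit,
    decide how many rows to read from each partition locally, walking
    partitions in order until the limit is satisfied."""
    takes = []
    remaining = limit
    for count in partition_row_counts:
        take = min(count, remaining)  # never take more than the partition has
        takes.append(take)
        remaining -= take
    return takes

# Three partitions with 100, 50, and 200 rows; a limit of 180 is satisfied
# by taking all of the first two partitions and 30 rows of the third.
print(rows_to_take([100, 50, 200], 180))
```

Because each partition only contributes rows it already holds, no data needs
to move to a single partition and the downstream stage keeps its parallelism.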
## How was this patch tested?
Jenkins tests.
Please review http://spark.apache.org/contributing.html before opening a
pull request.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/viirya/spark-1
improve-global-limit-parallelism
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/16677.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #16677
----
commit e067b10274179c1307a04e2a94c141147867c58f
Author: Liang-Chi Hsieh <[email protected]>
Date: 2017-01-23T08:21:04Z
Use map output statistices to improve global limit's parallelism.
----