GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/15726
[SPARK-18107][SQL][FOLLOW-UP] Insert overwrite statement runs much slower in spark-sql than it does in hive-client
## What changes were proposed in this pull request?
As reported in the JIRA, the insert overwrite statement runs much more slowly
in Spark than in hive-client.
We addressed this issue for static partitions in #15667. This is a
follow-up PR to #15667 that addresses dynamic partitions.
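For context, a minimal sketch of the two cases (the table and column names here are hypothetical, not taken from this PR): a static-partition insert fixes the partition value in the statement itself, while a dynamic-partition insert lets Spark derive the partition values from the query output.

```sql
-- Hypothetical table `dst`, partitioned by `part`.
-- Static partition: the partition value is named explicitly
-- (the case addressed in #15667).
INSERT OVERWRITE TABLE dst PARTITION (part = '2016-11-01')
SELECT key, value FROM src;

-- Dynamic partition: partition values come from the query result
-- (the case this follow-up addresses).
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE dst PARTITION (part)
SELECT key, value, part FROM src;
```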
## How was this patch tested?
Jenkins tests.
There are existing tests that use the insert overwrite statement; those tests
should pass. I also added a new test specifically for insert overwrite into a
dynamic partition.
As for the performance issue, since I don't have a Hive 2.0 environment, the
reporter will need to verify it. Please refer to the JIRA.
Please review
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark before
opening a pull request.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/viirya/spark-1 improve-hive-insertoverwrite-followup
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/15726.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #15726
----
commit a0060abc5b413de3eaef3b91aecfc1ed29a2f008
Author: Liang-Chi Hsieh <[email protected]>
Date: 2016-11-01T14:03:35Z
Address dynamic partition.
commit eae8f1ad1d8240c236a73066610747f3e7ef3669
Author: Liang-Chi Hsieh <[email protected]>
Date: 2016-11-02T02:26:23Z
Add comments.
----
---