GitHub user ooq opened a pull request:
https://github.com/apache/spark/pull/14337
[SPARK-16699][SQL] Fix performance bug in hash aggregate on long string keys
## What changes were proposed in this pull request?
In the following code in `VectorizedHashMapGenerator.scala`:
```
def hashBytes(b: String): String = {
  val hash = ctx.freshName("hash")
  s"""
     |int $result = 0;
     |for (int i = 0; i < $b.length; i++) {
     |  ${genComputeHash(ctx, s"$b[i]", ByteType, hash)}
     |  $result = ($result ^ (0x9e3779b9)) + $hash + ($result << 6) + ($result >>> 2);
     |}
   """.stripMargin
}
```
When `b = input.getBytes()`, the current 2.0 code generates a loop whose condition re-evaluates `getBytes()` on every iteration, so `getBytes()` is called n times, n being the length of the input. Since `getBytes()` involves a memory copy, it is expensive and causes a performance degradation.
The fix is to evaluate `getBytes()` once, before the for loop.
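In generated-Java terms, the fix amounts to hoisting the byte array out of the loop so the copy happens once. A minimal sketch (class name, method names, and the sample input are illustrative, not the actual generated code):

```java
public class HashBytesDemo {
    // Same mixing step the generated loop body applies per byte.
    static int hashBytes(byte[] bytes) {
        int result = 0;
        // The loop condition reads bytes.length on the hoisted array,
        // not input.getBytes().length, so no copy per iteration.
        for (int i = 0; i < bytes.length; i++) {
            int hash = bytes[i];
            result = (result ^ (0x9e3779b9)) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }

    public static void main(String[] args) {
        String input = "spark";
        // Before the fix, the equivalent of input.getBytes() sat inside
        // the loop condition and ran once per iteration.
        byte[] bytes = input.getBytes();
        System.out.println(hashBytes(bytes));
    }
}
```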
## How was this patch tested?
Performance bug, no additional test added.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ooq/spark SPARK-16699
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/14337.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #14337
----
commit 9102d50c12bf12ea4c404c71743457d1406a8428
Author: Qifan Pu <[email protected]>
Date: 2016-07-24T22:38:48Z
Fix SPARK-16699
----