[jira] [Commented] (SPARK-16699) Fix performance bug in hash aggregate on long string keys
[ https://issues.apache.org/jira/browse/SPARK-16699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391239#comment-15391239 ]

Dongjoon Hyun commented on SPARK-16699:
---------------------------------------

Hi, [~qifan]. Nice catch! By the way, usually only committers set the `Fix Version` field. You had better leave it blank next time. :)

> Fix performance bug in hash aggregate on long string keys
> ---------------------------------------------------------
>
>                 Key: SPARK-16699
>                 URL: https://issues.apache.org/jira/browse/SPARK-16699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0
>            Reporter: Qifan Pu
>             Fix For: 2.0.0
>
> In the following code in `VectorizedHashMapGenerator.scala`:
> ```
> def hashBytes(b: String): String = {
>   val hash = ctx.freshName("hash")
>   s"""
>      |int $result = 0;
>      |for (int i = 0; i < $b.length; i++) {
>      |  ${genComputeHash(ctx, s"$b[i]", ByteType, hash)}
>      |  $result = ($result ^ (0x9e3779b9)) + $hash + ($result << 6) + ($result >>> 2);
>      |}
>    """.stripMargin
> }
> ```
> When b = input.getBytes(), the current 2.0 code results in getBytes() being
> called n times, n being the length of the input. getBytes() involves a memory
> copy, is thus expensive, and causes a performance degradation.
> The fix is to evaluate getBytes() once, before the for loop.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
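To illustrate the fix, here is a hedged, simplified sketch of what the *generated* Java should look like after hoisting: the expensive `getBytes()` call is evaluated once before the loop, instead of appearing in the loop bound where it would be re-evaluated on every iteration. This is not Spark's actual generated code (which hashes each byte via `genComputeHash` and operates on `UTF8String`); the class and method names here are illustrative only.

```java
// Illustrative only: a plain-Java analogue of the hoisting fix described
// in SPARK-16699. Not Spark's actual generated code.
public class HashBytesFix {
    static int hashBytes(String input) {
        // Fixed pattern: getBytes() is called exactly once, before the loop.
        // The buggy pattern was equivalent to writing
        //   for (int i = 0; i < input.getBytes().length; i++) { ... }
        // which copies the backing array on every iteration.
        byte[] bytes = input.getBytes();
        int result = 0;
        for (int i = 0; i < bytes.length; i++) {
            // Same mixing step as the generated code in the issue description.
            result = (result ^ 0x9e3779b9) + bytes[i] + (result << 6) + (result >>> 2);
        }
        return result;
    }
}
```

The hash result is unchanged by the hoisting; only the number of `getBytes()` calls drops from n to 1.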
[jira] [Commented] (SPARK-16699) Fix performance bug in hash aggregate on long string keys
[ https://issues.apache.org/jira/browse/SPARK-16699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391191#comment-15391191 ]

Apache Spark commented on SPARK-16699:
--------------------------------------

User 'ooq' has created a pull request for this issue:
https://github.com/apache/spark/pull/14337