[
https://issues.apache.org/jira/browse/HIVE-15682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857110#comment-15857110
]
Ferdinand Xu commented on HIVE-15682:
-------------------------------------
Hi [~xuefuz]
{noformat}
select count(*) from (select request_lat from dwh.fact_trip where datestr >
'2017-01-27' order by request_lat) x;
Origin: 246.56, 342.78, 216.40, 216.587, 270.805, 449.232, 233.406 AVG: 282.25
patch: 125.21, 123.22, 166.31, 168.30, 120.428, 119.21, 120.385 AVG: 134.72
{noformat}
What data scale did you use to evaluate the performance? We could also
evaluate this patch using TPC-DS and TPCx-BB.
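For reference, the difference between the two code paths in the description can be sketched as follows. This is an illustrative stand-alone sketch, not Hive's actual {{SparkReduceRecordHandler}} API: the class and method names here are hypothetical. The old path wraps every single value in a throwaway singleton iterator, while the optimized path hands the (key, value) pair to the handler directly, avoiding one object allocation per input row.

```java
import java.util.Collections;
import java.util.Iterator;

// Hypothetical sketch of the two processing paths (names are illustrative,
// not Hive's real API).
public class DummyIteratorSketch {
    static long rowsViaIterator;
    static long rowsDirect;

    // Pre-patch style: callers allocate a dummy Iterator per input row,
    // even though each iterator only ever yields one value.
    static void processKeyValues(String key, Iterator<String> values) {
        while (values.hasNext()) {
            values.next();
            rowsViaIterator++;
        }
    }

    // Post-patch style: the single (key, value) pair is processed directly,
    // with no per-row iterator allocation.
    static void processKeyValue(String key, String value) {
        rowsDirect++;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            // Old path: wrap the single value in a throwaway iterator.
            processKeyValues("k" + i, Collections.singletonList("v" + i).iterator());
            // New path: pass the pair straight through.
            processKeyValue("k" + i, "v" + i);
        }
        // Both paths see exactly the same rows; only the allocation differs.
        System.out.println(rowsViaIterator == rowsDirect);
    }
}
```

Both paths are functionally equivalent, which is why the refactoring is safe; the win is purely in removing per-row garbage.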
> Eliminate per-row based dummy iterator creation
> -----------------------------------------------
>
> Key: HIVE-15682
> URL: https://issues.apache.org/jira/browse/HIVE-15682
> Project: Hive
> Issue Type: Improvement
> Components: Spark
> Affects Versions: 2.2.0
> Reporter: Xuefu Zhang
> Assignee: Xuefu Zhang
> Fix For: 2.2.0
>
> Attachments: HIVE-15682.patch
>
>
> HIVE-15580 introduced a dummy iterator per input row which can be eliminated.
> This is because {{SparkReduceRecordHandler}} is able to handle single key
> value pairs. We can refactor this part of the code 1. to remove the need for an
> iterator and 2. to optimize the code path for per (key, value) based (instead
> of (key, value iterator)) processing. It would also be great if we could
> measure the performance after the optimizations and compare it to the
> performance prior to HIVE-15580.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)