[ https://issues.apache.org/jira/browse/HIVE-21338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785473#comment-16785473 ]

Hive QA commented on HIVE-21338:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12961291/HIVE-21338.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16358/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16358/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16358/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-03-06 10:12:54.031
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16358/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-03-06 10:12:54.056
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 9dc28db HIVE-21340: CBO: Prune non-key columns feeding into a SemiJoin (Vineet Garg, reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 9dc28db HIVE-21340: CBO: Prune non-key columns feeding into a SemiJoin (Vineet Garg, reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-03-06 10:13:03.767
+ rm -rf ../yetus_PreCommit-HIVE-Build-16358
+ mkdir ../yetus_PreCommit-HIVE-Build-16358
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16358
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16358/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelOptUtil.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveProject.java: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: does not exist in index
error: a/ql/src/test/queries/clientpositive/cbo_limit.q: does not exist in index
error: a/ql/src/test/results/clientpositive/keep_uniform.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/cbo_limit.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/keep_uniform.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/vector_binary_join_groupby.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/vectorization_0.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/vectorized_date_funcs.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/llap/vectorized_shufflejoin.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/parquet_vectorization_0.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query16.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query23.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query32.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query38.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query92.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query94.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query95.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query96.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/spark/query97.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query16.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query23.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query32.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query38.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query92.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query94.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query95.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query96.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query97.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query16.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query23.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query32.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query38.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query92.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query94.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query95.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query96.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/cbo_query97.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query16.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query23.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query32.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query38.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query92.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query94.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query95.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query96.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/constraints/query97.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query16.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query23.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query32.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query38.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query92.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query94.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query95.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query96.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/query97.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/spark/cbo_limit.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/spark/vectorization_0.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/spark/vectorized_shufflejoin.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/vector_binary_join_groupby.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/vectorized_date_funcs.q.out: does not exist in index
error: a/ql/src/test/results/clientpositive/vectorized_shufflejoin.q.out: does not exist in index
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:548: trailing whitespace.
        Map 11 
/data/hiveptest/working/scratch/build.patch:557: trailing whitespace.
        Map 16 
/data/hiveptest/working/scratch/build.patch:566: trailing whitespace.
        Map 17 
/data/hiveptest/working/scratch/build.patch:575: trailing whitespace.
        Map 18 
/data/hiveptest/working/scratch/build.patch:584: trailing whitespace.
        Map 9 
warning: squelched 42 whitespace errors
warning: 47 lines add whitespace errors.
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc4325027835741795463.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc4325027835741795463.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc5523507329277339427.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveLexer.g
Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
warning(200): IdentifiersParser.g:424:5: Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 10

As a result, alternative(s) 10 were disabled for that input
Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HintParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HintParser.g
org/apache/hadoop/hive/ql/parse/HintParser.g
Generating vector expression code
Generating vector expression test code
Processing annotations
Annotations processed
Processing annotations
No elements to process
[ERROR] COMPILATION ERROR : 
[ERROR] /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java:[1931,37] HiveSortLimitRemoveRule() has private access in org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveSortLimitRemoveRule
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hive-exec: Compilation failure
[ERROR] /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java:[1931,37] HiveSortLimitRemoveRule() has private access in org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveSortLimitRemoveRule
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] 
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-exec
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16358
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12961291 - PreCommit-HIVE-Build

> Remove order by and limit for aggregates
> ----------------------------------------
>
>                 Key: HIVE-21338
>                 URL: https://issues.apache.org/jira/browse/HIVE-21338
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Planning
>            Reporter: Vineet Garg
>            Assignee: Vineet Garg
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21338.1.patch, HIVE-21338.2.patch, HIVE-21338.3.patch, HIVE-21338.4.patch
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If a query is guaranteed to produce at most one row, the LIMIT and ORDER BY can be removed. This saves an unnecessary vertex for the LIMIT/ORDER BY (a contrasting SQL sketch follows the quoted plan below).
> {code:sql}
> explain select count(*) cs from store_sales where ss_ext_sales_price > 100.00 
> order by cs limit 100
> {code}
> {code}
> STAGE PLANS:
>   Stage: Stage-1
>     Tez
>       DagId: vgarg_20190227131959_2914830f-eab6-425d-b9f0-b8cb56f8a1e9:4
>       Edges:
>         Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE)
>         Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
>       DagName: vgarg_20190227131959_2914830f-eab6-425d-b9f0-b8cb56f8a1e9:4
>       Vertices:
>         Map 1
>             Map Operator Tree:
>                 TableScan
>                   alias: store_sales
>                   filterExpr: (ss_ext_sales_price > 100) (type: boolean)
>                   Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: (ss_ext_sales_price > 100) (type: boolean)
>                     Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         aggregations: count()
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                         Reduce Output Operator
>                           sort order:
>                           Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                           value expressions: _col0 (type: bigint)
>             Execution mode: vectorized
>         Reducer 2
>             Execution mode: vectorized
>             Reduce Operator Tree:
>               Group By Operator
>                 aggregations: count(VALUE._col0)
>                 mode: mergepartial
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                 Reduce Output Operator
>                   key expressions: _col0 (type: bigint)
>                   sort order: +
>                   Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                   TopN Hash Memory Usage: 0.1
>         Reducer 3
>             Execution mode: vectorized
>             Reduce Operator Tree:
>               Select Operator
>                 expressions: KEY.reducesinkkey0 (type: bigint)
>                 outputColumnNames: _col0
>                 Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                 Limit
>                   Number of rows: 100
>                   Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                   File Output Operator
>                     compressed: false
>                     Statistics: Num rows: 1 Data size: 120 Basic stats: COMPLETE Column stats: NONE
>                     table:
>                         input format: org.apache.hadoop.mapred.SequenceFileInputFormat
>                         output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
>                         serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>   Stage: Stage-0
>     Fetch Operator
>       limit: 100
>       Processor Tree:
>         ListSink
> {code}
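
[Editor's note] The quoted description above shows the plan for the single-row case. As a purely illustrative sketch (not part of the patch or of the QA run above), the queries below contrast a global aggregate, which returns at most one row so the trailing ORDER BY/LIMIT cannot change the result, with a grouped aggregate, where they still matter; the ss_store_sk column is assumed here only for the second variant.

{code:sql}
-- Global aggregate: count(*) without GROUP BY yields exactly one row,
-- so ORDER BY and LIMIT cannot affect the result and may be dropped.
select count(*) cs
from store_sales
where ss_ext_sales_price > 100.00
order by cs
limit 100;

-- Grouped aggregate: the row count is no longer bounded by one,
-- so the ORDER BY/LIMIT must be kept (ss_store_sk assumed for illustration).
select ss_store_sk, count(*) cs
from store_sales
where ss_ext_sales_price > 100.00
group by ss_store_sk
order by cs
limit 100;
{code}

For the first query, the vertex the description expects to become unnecessary is the ORDER BY/LIMIT stage (Reducer 3 in the plan above).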



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
