[ https://issues.apache.org/jira/browse/HIVE-22489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17010073#comment-17010073 ]
Hive QA commented on HIVE-22489:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990119/HIVE-22489.10.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20101/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20101/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20101/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-01-07 20:32:43.033
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-20101/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-01-07 20:32:43.036
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 8a4392f HIVE-22652: TopNKey push through Group by with Grouping sets (Krisztian Kasa, reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 8a4392f HIVE-22652: TopNKey push through Group by with Grouping sets (Krisztian Kasa, reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-01-07 20:32:43.759
+ rm -rf ../yetus_PreCommit-HIVE-Build-20101
+ mkdir ../yetus_PreCommit-HIVE-Build-20101
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-20101
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-20101/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: patch failed: ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_limit.q.out:99
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_limit.q.out' with conflicts.
error: patch failed: ql/src/test/results/clientpositive/spark/cbo_limit.q.out:8
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/spark/cbo_limit.q.out' cleanly.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:48738: trailing whitespace.
z
/data/hiveptest/working/scratch/build.patch:48742: trailing whitespace.
z
/data/hiveptest/working/scratch/build.patch:48840: trailing whitespace.
z
/data/hiveptest/working/scratch/build.patch:48844: trailing whitespace.
z
/data/hiveptest/working/scratch/build.patch:48937: trailing whitespace.
z
error: patch failed: ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_limit.q.out:99
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_limit.q.out' with conflicts.
error: patch failed: ql/src/test/results/clientpositive/spark/cbo_limit.q.out:8
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/spark/cbo_limit.q.out' cleanly.
U ql/src/test/results/clientpositive/llap/vector_groupby_grouping_sets_limit.q.out
warning: squelched 20 whitespace errors
warning: 25 lines add whitespace errors.
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-20101
+ exit 1
'
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12990119 - PreCommit-HIVE-Build
> Reduce Sink operator should order nulls by parameter
> -----------------------------------------------------
>
> Key: HIVE-22489
> URL: https://issues.apache.org/jira/browse/HIVE-22489
> Project: Hive
> Issue Type: Bug
> Components: Query Planning
> Reporter: Krisztian Kasa
> Assignee: Krisztian Kasa
> Priority: Major
> Attachments: HIVE-22489.1.patch, HIVE-22489.10.patch,
> HIVE-22489.10.patch, HIVE-22489.2.patch, HIVE-22489.3.patch,
> HIVE-22489.3.patch, HIVE-22489.4.patch, HIVE-22489.5.patch,
> HIVE-22489.6.patch, HIVE-22489.7.patch, HIVE-22489.8.patch,
> HIVE-22489.9.patch, HIVE-22489.9.patch
>
>
> When the property hive.default.nulls.last is set to true and no null order is
> explicitly specified in the ORDER BY clause of a query, the null ordering should
> be NULLS LAST. However, some of the Reduce Sink operators still order nulls first.
> {code}
> SET hive.default.nulls.last=true;
> EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = src2.key) ORDER BY src1.key LIMIT 5;
> {code}
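> A possible workaround until the fix lands may be to state the null ordering
> explicitly, since Hive's ORDER BY accepts NULLS FIRST/NULLS LAST. This is a
> sketch of the same repro query with an explicit null order; it has not been
> verified against this patch:
> {code}
> SET hive.default.nulls.last=true;
> SELECT src1.key, src2.value
> FROM src src1 JOIN src src2 ON (src1.key = src2.key)
> ORDER BY src1.key ASC NULLS LAST
> LIMIT 5;
> {code}
> If the explicit form is honored, the Reduce Sink operators ordering src1.key
> should emit null sort order z rather than a in the plan below.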
> {code}
> PREHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = src2.key) ORDER BY src1.key
> PREHOOK: type: QUERY
> PREHOOK: Input: default@src
> #### A masked pattern was here ####
> POSTHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = src2.key) ORDER BY src1.key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
> #### A masked pattern was here ####
> OPTIMIZED SQL: SELECT `t0`.`key`, `t2`.`value`
> FROM (SELECT `key`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t0`
> INNER JOIN (SELECT `key`, `value`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t2` ON `t0`.`key` = `t2`.`key`
> ORDER BY `t0`.`key`
> STAGE DEPENDENCIES:
> Stage-1 is a root stage
> Stage-0 depends on stages: Stage-1
> STAGE PLANS:
> Stage: Stage-1
> Tez
> #### A masked pattern was here ####
> Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
> #### A masked pattern was here ####
> Vertices:
> Map 1
> Map Operator Tree:
> TableScan
> alias: src1
> filterExpr: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: COMPLETE Column stats: COMPLETE
> GatherStats: false
> Filter Operator
> isSamplingPred: false
> predicate: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: COMPLETE Column stats: COMPLETE
> Select Operator
> expressions: key (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 500 Data size: 43500 Basic stats: COMPLETE Column stats: COMPLETE
> Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: a
> sort order: +
> Map-reduce partition columns: _col0 (type: string)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: COMPLETE Column stats: COMPLETE
> tag: 0
> auto parallelism: true
> Execution mode: vectorized, llap
> LLAP IO: no inputs
> Path -> Alias:
> #### A masked pattern was here ####
> Path -> Partition:
> #### A masked pattern was here ####
> Partition
> base file name: src
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> properties:
> COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
> #### A masked pattern was here ####
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
> #### A masked pattern was here ####
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> properties:
> COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
> #### A masked pattern was here ####
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
> #### A masked pattern was here ####
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> name: default.src
> name: default.src
> Truncated Path -> Alias:
> /src [src1]
> Map 4
> Map Operator Tree:
> TableScan
> alias: src2
> filterExpr: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 89000 Basic stats: COMPLETE Column stats: COMPLETE
> GatherStats: false
> Filter Operator
> isSamplingPred: false
> predicate: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 89000 Basic stats: COMPLETE Column stats: COMPLETE
> Select Operator
> expressions: key (type: string), value (type: string)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 500 Data size: 89000 Basic stats: COMPLETE Column stats: COMPLETE
> Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: a
> sort order: +
> Map-reduce partition columns: _col0 (type: string)
> Statistics: Num rows: 500 Data size: 89000 Basic stats: COMPLETE Column stats: COMPLETE
> tag: 1
> value expressions: _col1 (type: string)
> auto parallelism: true
> Execution mode: vectorized, llap
> LLAP IO: no inputs
> Path -> Alias:
> #### A masked pattern was here ####
> Path -> Partition:
> #### A masked pattern was here ####
> Partition
> base file name: src
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> properties:
> COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
> #### A masked pattern was here ####
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
> #### A masked pattern was here ####
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>
> input format: org.apache.hadoop.mapred.TextInputFormat
> output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
> properties:
> COLUMN_STATS_ACCURATE {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
> #### A masked pattern was here ####
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
> #### A masked pattern was here ####
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> name: default.src
> name: default.src
> Truncated Path -> Alias:
> /src [src2]
> Reducer 2
> Execution mode: llap
> Needs Tagging: false
> Reduce Operator Tree:
> Merge Join Operator
> condition map:
> Inner Join 0 to 1
> keys:
> 0 _col0 (type: string)
> 1 _col0 (type: string)
> outputColumnNames: _col0, _col2
> Position of Big Table: 1
> Statistics: Num rows: 791 Data size: 140798 Basic stats: COMPLETE Column stats: COMPLETE
> Select Operator
> expressions: _col0 (type: string), _col2 (type: string)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 791 Data size: 140798 Basic stats: COMPLETE Column stats: COMPLETE
> Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: z
> sort order: +
> Statistics: Num rows: 791 Data size: 140798 Basic stats: COMPLETE Column stats: COMPLETE
> tag: -1
> value expressions: _col1 (type: string)
> auto parallelism: false
> Reducer 3
> Execution mode: vectorized, llap
> Needs Tagging: false
> Reduce Operator Tree:
> Select Operator
> expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 791 Data size: 140798 Basic stats: COMPLETE Column stats: COMPLETE
> File Output Operator
> compressed: false
> GlobalTableId: 0
> #### A masked pattern was here ####
> NumFilesPerFileSink: 1
> Statistics: Num rows: 791 Data size: 140798 Basic stats: COMPLETE Column stats: COMPLETE
> #### A masked pattern was here ####
> table:
> input format: org.apache.hadoop.mapred.SequenceFileInputFormat
> output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
> properties:
> columns _col0,_col1
> columns.types string:string
> escape.delim \
> hive.serialization.extend.additional.nesting.levels true
> serialization.escape.crlf true
> serialization.format 1
> serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> TotalFiles: 1
> GatherStats: false
> MultiFileSpray: false
> Stage: Stage-0
> Fetch Operator
> limit: -1
> Processor Tree:
> ListSink
> {code}
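> Reading the plan above: in EXPLAIN output the null sort order field uses a
> for nulls-first and z for nulls-last (assuming the usual Hive plan encoding).
> The mismatch is then visible directly in the operators; an annotated excerpt:
> {code}
> Map 1 / Map 4, Reduce Output Operator:
>   null sort order: a    -- nulls first, despite hive.default.nulls.last=true
> Reducer 2, Reduce Output Operator:
>   null sort order: z    -- nulls last, as expected
> {code}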
--
This message was sent by Atlassian Jira
(v8.3.4#803005)