[jira] [Commented] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274077#comment-16274077 ] Hive QA commented on HIVE-16890:

Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900089/HIVE-16890.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 11493 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8074/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8074/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8074/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12900089 - PreCommit-HIVE-Build

> org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
> ---
>
> Key: HIVE-16890
> URL: https://issues.apache.org/jira/browse/HIVE-16890
> Project: Hive
> Issue Type: Improvement
> Components: Serializers/Deserializers
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Trivial
> Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch, HIVE-16890.1.patch
>
> Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a superfluous wrapper and then immediately unwraps it. Don't bother wrapping in this scenario.
> {code}
> public void set(HiveVarchar val, int len) {
>   set(val.getValue(), len);
> }
>
> public void set(String val, int maxLength) {
>   value.set(HiveBaseChar.enforceMaxLength(val, maxLength));
> }
>
> public HiveVarchar getHiveVarchar() {
>   return new HiveVarchar(value.toString(), -1);
> }
>
> // enforceMaxLength() calls getHiveVarchar(), which creates a new HiveVarchar object wrapping a String.
> // That object is passed to set(HiveVarchar val, int len), and the String is immediately pulled back out.
> public void enforceMaxLength(int maxLength) {
>   // Might be possible to truncate the existing Text value, for now just do something simple.
>   if (value.getLength() > maxLength && getCharacterLength() > maxLength)
>     set(getHiveVarchar(), maxLength);
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
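The refactor the description asks for can be illustrated with a self-contained analog. This is a sketch only: the class and method names mirror {{HiveVarcharWritable}}, but Hive's {{Text}} and {{HiveBaseChar}} dependencies are stubbed out with a {{StringBuilder}} and a hypothetical substring-based truncation helper, so it is not the actual Hive code.

```java
// Illustrative analog of the pattern described above. The key change is in
// enforceMaxLength(int): pass the underlying String straight to
// set(String, int) instead of wrapping it in a new HiveVarchar-style object
// that is immediately unwrapped again.
public class VarcharWritableSketch {
    private final StringBuilder value = new StringBuilder();

    // Hypothetical stand-in for HiveBaseChar.enforceMaxLength(String, int).
    static String enforceMaxLength(String val, int maxLength) {
        return val.length() > maxLength ? val.substring(0, maxLength) : val;
    }

    public void set(String val, int maxLength) {
        value.setLength(0);
        value.append(enforceMaxLength(val, maxLength));
    }

    // Before: set(getHiveVarchar(), maxLength) built a wrapper object only to
    // pull its String back out. After: no intermediate wrapper is allocated.
    public void enforceMaxLength(int maxLength) {
        if (value.length() > maxLength) {
            set(value.toString(), maxLength);
        }
    }

    public String get() {
        return value.toString();
    }

    public static void main(String[] args) {
        VarcharWritableSketch w = new VarcharWritableSketch();
        w.set("superfluous", 20);
        w.enforceMaxLength(5);
        System.out.println(w.get()); // super
    }
}
```

The behavior is unchanged; only the temporary allocation on the truncation path goes away, which matters when enforceMaxLength runs once per row.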
[jira] [Commented] (HIVE-18043) Vectorization: Support List type in MapWork
[ https://issues.apache.org/jira/browse/HIVE-18043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274052#comment-16274052 ] Colin Ma commented on HIVE-18043:

Hi [~kgyrtkirk], vector_complex_join is fixed in the latest patch [HIVE-18043.005.patch]; the QA result shows vector_complex_join passing.

> Vectorization: Support List type in MapWork
> ---
>
> Key: HIVE-18043
> URL: https://issues.apache.org/jira/browse/HIVE-18043
> Project: Hive
> Issue Type: Improvement
> Reporter: Colin Ma
> Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-18043.001.patch, HIVE-18043.002.patch, HIVE-18043.003.patch, HIVE-18043.004.patch, HIVE-18043.005.patch
>
> Support for complex types in vectorization was finished in HIVE-16589, but the List type is still not supported in MapWork. It should be supported to improve performance when vectorization is enabled.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-18043) Vectorization: Support List type in MapWork
[ https://issues.apache.org/jira/browse/HIVE-18043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274043#comment-16274043 ] Zoltan Haindrich commented on HIVE-18043:

[~colin_mjj] it looks to me that this patch has broken TestCliDriver#testCliDriver[vector_complex_join]; could you please take a look?

> Vectorization: Support List type in MapWork
> ---
>
> Key: HIVE-18043
> URL: https://issues.apache.org/jira/browse/HIVE-18043
> Project: Hive
> Issue Type: Improvement
> Reporter: Colin Ma
> Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-18043.001.patch, HIVE-18043.002.patch, HIVE-18043.003.patch, HIVE-18043.004.patch, HIVE-18043.005.patch
>
> Support for complex types in vectorization was finished in HIVE-16589, but the List type is still not supported in MapWork. It should be supported to improve performance when vectorization is enabled.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274036#comment-16274036 ] Hive QA commented on HIVE-16890:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} serde: The patch generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 7m 57s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / f68ebdc |
| Default Java | 1.8.0_111 |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8074/yetus/patch-asflicense-problems.txt |
| modules | C: serde U: serde |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8074/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
> ---
>
> Key: HIVE-16890
> URL: https://issues.apache.org/jira/browse/HIVE-16890
> Project: Hive
> Issue Type: Improvement
> Components: Serializers/Deserializers
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Trivial
> Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch, HIVE-16890.1.patch
>
> Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a superfluous wrapper and then immediately unwraps it. Don't bother wrapping in this scenario.
> {code}
> public void set(HiveVarchar val, int len) {
>   set(val.getValue(), len);
> }
>
> public void set(String val, int maxLength) {
>   value.set(HiveBaseChar.enforceMaxLength(val, maxLength));
> }
>
> public HiveVarchar getHiveVarchar() {
>   return new HiveVarchar(value.toString(), -1);
> }
>
> // enforceMaxLength() calls getHiveVarchar(), which creates a new HiveVarchar object wrapping a String.
> // That object is passed to set(HiveVarchar val, int len), and the String is immediately pulled back out.
> public void enforceMaxLength(int maxLength) {
>   // Might be possible to truncate the existing Text value, for now just do something simple.
>   if (value.getLength() > maxLength && getCharacterLength() > maxLength)
>     set(getHiveVarchar(), maxLength);
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274020#comment-16274020 ] Hive QA commented on HIVE-18088:

Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900118/HIVE-18088.3.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 11493 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=247)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers1 (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=234)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerTotalTasks (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8073/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8073/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8073/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12900118 - PreCommit-HIVE-Build

> Add WM event traces at query level for debugging
> ---
>
> Key: HIVE-18088
> URL: https://issues.apache.org/jira/browse/HIVE-18088
> Project: Hive
> Issue Type: Sub-task
> Affects Versions: 3.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, HIVE-18088.3.patch, HIVE-18088.WIP.patch
>
> For debugging and testing purposes, expose workload manager events via the /jmx endpoint and print a summary at query scope.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Resolved] (HIVE-18194) Migrate existing ACID tables to use write id per table rather than global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek resolved HIVE-18194.

Resolution: Duplicate

> Migrate existing ACID tables to use write id per table rather than global transaction id
> ---
>
> Key: HIVE-18194
> URL: https://issues.apache.org/jira/browse/HIVE-18194
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2, Transactions
> Affects Versions: 3.0.0
> Reporter: anishek
> Assignee: anishek
> Fix For: 3.0.0
>
> Dependent upon HIVE-18192.
> Migrate existing ACID tables which have the older definition of the primary key to the new version. This will require changing the metadata tables / sequences that are going to be used to generate the write id for a given table.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Assigned] (HIVE-18194) Migrate existing ACID tables to use write id per table rather than global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek reassigned HIVE-18194:

> Migrate existing ACID tables to use write id per table rather than global transaction id
> ---
>
> Key: HIVE-18194
> URL: https://issues.apache.org/jira/browse/HIVE-18194
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2, Transactions
> Affects Versions: 3.0.0
> Reporter: anishek
> Assignee: anishek
> Fix For: 3.0.0
>
> Dependent upon HIVE-18192.
> Migrate existing ACID tables which have the older definition of the primary key to the new version. This will require changing the metadata tables / sequences that are going to be used to generate the write id for a given table.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Assigned] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek reassigned HIVE-18193:

> Migrate existing ACID tables to use write id per table rather than global transaction id
> ---
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2, Transactions
> Affects Versions: 3.0.0
> Reporter: anishek
> Assignee: anishek
> Fix For: 3.0.0
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id metatables/sequences so that any new operations on these tables work seamlessly without any conflicting data in existing base/delta files.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Assigned] (HIVE-18192) Introduce WriteId per table rather than using global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek reassigned HIVE-18192:

> Introduce WriteId per table rather than using global transaction id
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2, Transactions
> Affects Versions: 3.0.0
> Reporter: anishek
> Assignee: anishek
> Fix For: 3.0.0
>
> To support ACID replication, we will be introducing a per-table write id which will replace the transaction id in the primary key for each row in an ACID table.
> The current primary key is determined via
>
> which will move to
>
> A persistable map of global txn id -> table -> write id for that table has to be maintained to continue to allow snapshot isolation.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
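The bookkeeping the description calls for can be sketched in miniature. This is an illustrative sketch only, not Hive code: an in-memory map stands in for the persisted metastore tables/sequences, and the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a txn-id -> table -> write-id map of the kind the description says
// must be persisted, plus a per-table write-id sequence. Each table's write ids
// advance independently of the global transaction id.
public class WriteIdAllocator {
    private final Map<String, AtomicLong> perTableSequence = new HashMap<>();
    private final Map<Long, Map<String, Long>> txnToTableWriteId = new HashMap<>();

    // Allocate the next write id for (txnId, table) and remember the mapping so
    // that snapshot isolation can later translate txn ids back to write ids.
    public long allocate(long txnId, String table) {
        long writeId = perTableSequence
            .computeIfAbsent(table, t -> new AtomicLong(0))
            .incrementAndGet();
        txnToTableWriteId
            .computeIfAbsent(txnId, t -> new HashMap<>())
            .put(table, writeId);
        return writeId;
    }

    // Returns the write id allocated to this txn for this table, or null.
    public Long lookup(long txnId, String table) {
        Map<String, Long> m = txnToTableWriteId.get(txnId);
        return m == null ? null : m.get(table);
    }

    public static void main(String[] args) {
        WriteIdAllocator a = new WriteIdAllocator();
        System.out.println(a.allocate(100, "t1")); // 1
        System.out.println(a.allocate(101, "t1")); // 2: t1's sequence, not the txn id
        System.out.println(a.allocate(101, "t2")); // 1: each table has its own sequence
    }
}
```

The point of the structure is visible in the output: table t2 gets write id 1 even though its transaction id is 101, which is what makes per-table ids replayable on a replica with different global txn ids.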
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273978#comment-16273978 ] Hive QA commented on HIVE-18088:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 75 new + 570 unchanged - 34 fixed = 645 total (was 604) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 6s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / f68ebdc |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8073/yetus/diff-checkstyle-ql.txt |
| asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8073/yetus/patch-asflicense-problems.txt |
| modules | C: common ql itests/hive-unit U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8073/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
> Add WM event traces at query level for debugging
> ---
>
> Key: HIVE-18088
> URL: https://issues.apache.org/jira/browse/HIVE-18088
> Project: Hive
> Issue Type: Sub-task
> Affects Versions: 3.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, HIVE-18088.3.patch, HIVE-18088.WIP.patch
>
> For debugging and testing purposes, expose workload manager events via the /jmx endpoint and print a summary at query scope.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-18144) Runtime type inference error when join three table for different column type
[ https://issues.apache.org/jira/browse/HIVE-18144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273968#comment-16273968 ] Pengcheng Xiong commented on HIVE-18144:

LGTM. I saw you put 2.1 and 2.2 as affected versions. How about master?

> Runtime type inference error when join three table for different column type
> ---
>
> Key: HIVE-18144
> URL: https://issues.apache.org/jira/browse/HIVE-18144
> Project: Hive
> Issue Type: Bug
> Components: Query Planning
> Affects Versions: 2.1.1, 2.2.0
> Reporter: Wang Haihua
> Assignee: Wang Haihua
> Attachments: HIVE-18144.1.patch
>
> A union operation over three or more tables with different column types may cause a type inference error at task execution time.
> E.g. t1 (with an int column) union all t2 (with an int column) union all t3 (with a bigint column) should finally be {{bigint}}:
> the RowSchema of t1 union t2, which we call {{leftOp}}, should be int, and leftOp union t3 should finally be bigint.
> This means the RowSchema of leftOp should be {{bigint}} instead of {{int}}.
> However, in {{SemanticAnalyzer.java}} we see that the RowSchema of leftOp ends up as {{int}}, which is wrong:
> {code}
> (_col0: int|{t01-subquery1}diff_long_type,_col1: int|{t01-subquery1}id2,_col2: bigint|{t01-subquery1}id3)
> {code}
> Impacted code in SemanticAnalyzer.java:
> {code}
> if (!(leftOp instanceof UnionOperator)) {
>   Operator oldChild = leftOp;
>   leftOp = (Operator) leftOp.getParentOperators().get(0);
>   leftOp.removeChildAndAdoptItsChildren(oldChild);
> }
> // make left a child of right
> List> child = new ArrayList>();
> child.add(leftOp);
> rightOp.setChildOperators(child);
> List> parent = leftOp.getParentOperators();
> parent.add(rightOp);
> UnionDesc uDesc = ((UnionOperator) leftOp).getConf();
> // Here we should set RowSchema of leftOp to unionoutRR's, or else the RowSchema of leftOp is wrong.
> // leftOp.setSchema(new RowSchema(unionoutRR.getColumnInfos()));
> uDesc.setNumInputs(uDesc.getNumInputs() + 1);
> return putOpInsertMap(leftOp, unionoutRR);
> {code}
> Operations to reproduce:
> {code}
> create table test_union_different_type(id bigint, id2 bigint, id3 bigint, name string);
> set hive.auto.convert.join=true;
> insert overwrite table test_union_different_type select 1, 2, 3, "test_union_different_type";
> select
>   t01.diff_long_type as diff_long_type,
>   t01.id2 as id2,
>   t00.id as id,
>   t01.id3 as id3
> from test_union_different_type t00
> left join
> (
>   select 1 as diff_long_type, 30 as id2, id3 from test_union_different_type
>   union ALL
>   select 2 as diff_long_type, 20 as id2, id3 from test_union_different_type
>   union ALL
>   select id as diff_long_type, id2, 30 as id3 from test_union_different_type
> ) t01
> on t00.id = t01.diff_long_type
> ;
> {code}
> Stack trace:
> {code}
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"id2":null,"id3":null,"name":null}
> at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:169)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"id2":null,"id3":null,"name":null}
> at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:499)
> at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:160)
> ... 8 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.IntWritable
> at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:465)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
> at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
> at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:149)
> at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:489)
> ... 9
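The widening rule at issue can be shown in isolation. This is an illustrative sketch, not the SemanticAnalyzer code: only int and bigint are modeled, and the names are hypothetical.

```java
// Sketch of the type inference the report describes: when union branches
// disagree on a column's type (int vs bigint), the union's row schema must
// carry the widened type. If the left-deep union's schema stays int while a
// later branch supplies bigint, a downstream operator ends up trying to cast
// LongWritable to IntWritable at runtime, as in the stack trace above.
public class UnionTypeWidening {
    enum T { INT, BIGINT }

    // Common type of two branches: bigint absorbs int.
    static T commonType(T a, T b) {
        return (a == T.BIGINT || b == T.BIGINT) ? T.BIGINT : T.INT;
    }

    // Fold the common type across all union branches, left to right, the way
    // a left-deep union (t1 union t2) union t3 is assembled.
    static T unionSchemaType(T... branchTypes) {
        T result = T.INT;
        for (T t : branchTypes) {
            result = commonType(result, t);
        }
        return result;
    }

    public static void main(String[] args) {
        // t1(int) union all t2(int) union all t3(bigint) -> bigint
        System.out.println(unionSchemaType(T.INT, T.INT, T.BIGINT)); // BIGINT
    }
}
```

The bug is that the intermediate leftOp keeps the INT schema instead of this folded result; the commented-out `leftOp.setSchema(...)` line in the quoted code is where the widened schema would be applied.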
[jira] [Updated] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18188:

Resolution: Fixed
Fix Version/s: 3.0.0
Status: Resolved (was: Patch Available)

Patch fixes the test failures. Committed to master.

> Fix TestSSL failures in master
> ---
>
> Key: HIVE-18188
> URL: https://issues.apache.org/jira/browse/HIVE-18188
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Fix For: 3.0.0
>
> Attachments: HIVE-18188.1.patch
>
> HIVE-18170 broke TestSSL tests.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273959#comment-16273959 ] Hive QA commented on HIVE-18173:

Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900084/HIVE-18173.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11493 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_exists] (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_exists] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_exists] (batchId=123)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8072/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8072/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8072/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12900084 - PreCommit-HIVE-Build

> Improve plans for correlated subqueries with non-equi predicate
> ---
>
> Key: HIVE-18173
> URL: https://issues.apache.org/jira/browse/HIVE-18173
> Project: Hive
> Issue Type: Improvement
> Components: Query Planning
> Reporter: Vineet Garg
> Assignee: Vineet Garg
> Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch
>
> HIVE-17767 optimized the plan to not generate a value generator (i.e. an extra join with the outer query to fetch correlated columns) for EQUAL and NOT EQUAL predicates, e.g.
> {code:sql}
> select * from src b where b.key in (select key from src a where b.value <> a.value)
> {code}
> This should be improved and implemented for the rest of the predicates, e.g. LESS THAN.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (HIVE-16295) Add support for using Hadoop's OutputCommitter
[ https://issues.apache.org/jira/browse/HIVE-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273948#comment-16273948 ] Aaron Fabbri commented on HIVE-16295:

Just FYI for watchers: the S3 Output Committer has been merged to trunk in Hadoop Common (HADOOP-13786).

> Add support for using Hadoop's OutputCommitter
> ---
>
> Key: HIVE-16295
> URL: https://issues.apache.org/jira/browse/HIVE-16295
> Project: Hive
> Issue Type: Sub-task
> Reporter: Sahil Takiar
> Assignee: Sahil Takiar
>
> Hive doesn't have integration with Hadoop's {{OutputCommitter}}; it uses a {{NullOutputCommitter}} and its own commit logic spread across {{FileSinkOperator}}, {{MoveTask}}, and {{Hive}}.
> The Hadoop community is building an {{OutputCommitter}} that integrates with S3Guard and does a safe, coordinated commit of data on S3 inside individual tasks (HADOOP-13786). If Hive can integrate with this new {{OutputCommitter}} there would be a lot of benefits to Hive-on-S3:
> * Data is only written once; directly committing data at a task level means no renames are necessary
> * The commit is done safely, in a coordinated manner; duplicate tasks (from task retries or speculative execution) should not step on each other

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
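The committer contract the bullets describe ("written once", "duplicate tasks should not step on each other") can be sketched without Hadoop at all. This is a hedged, Hadoop-free sketch under stated assumptions: a local-filesystem rename stands in for the S3 committer's coordinated commit, and every class and method name here is hypothetical, not the HADOOP-13786 API.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: each task attempt writes to its own temp file, then "commits" by
// renaming to the final name. Files.move without REPLACE_EXISTING throws
// FileAlreadyExistsException if the target exists, so the first attempt to
// commit wins and a duplicate attempt (retry / speculative execution) is
// discarded instead of stepping on the committed output.
public class DirectCommitSketch {
    static Path writeTaskOutput(Path dir, String taskId, int attempt, String data)
            throws IOException {
        Path tmp = dir.resolve(taskId + "_attempt" + attempt + ".tmp");
        Files.write(tmp, data.getBytes());
        return tmp;
    }

    static boolean commitTask(Path tmp, Path finalPath) throws IOException {
        try {
            Files.move(tmp, finalPath); // fails if another attempt already committed
            return true;
        } catch (FileAlreadyExistsException e) {
            Files.deleteIfExists(tmp);  // abort the duplicate attempt's output
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commit-sketch");
        Path out = dir.resolve("part-0000");
        Path a1 = writeTaskOutput(dir, "task0", 1, "rows from attempt 1");
        Path a2 = writeTaskOutput(dir, "task0", 2, "rows from attempt 2");
        System.out.println(commitTask(a1, out)); // true
        System.out.println(commitTask(a2, out)); // false: duplicate is discarded
    }
}
```

On a real object store the rename is not free or atomic, which is exactly why the HADOOP-13786 committer replaces it with a coordinated multipart-upload completion; this sketch only shows the commit semantics Hive would rely on.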
[jira] [Assigned] (HIVE-18052) Run p-tests on mm tables
[ https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom reassigned HIVE-18052: - Assignee: Steve Yeom > Run p-tests on mm tables > > > Key: HIVE-18052 > URL: https://issues.apache.org/jira/browse/HIVE-18052 > Project: Hive > Issue Type: Task >Reporter: Steve Yeom >Assignee: Steve Yeom > Attachments: HIVE-18052.1.patch, HIVE-18052.2.patch, > HIVE-18052.3.patch, HIVE-18052.4.patch, HIVE-18052.5.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273935#comment-16273935 ] Hive QA commented on HIVE-18173: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s{color} | {color:red} ql: The patch generated 26 new + 488 unchanged - 18 fixed = 514 total (was 506) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 508d7e6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8072/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8072/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8072/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Improve plans for correlated subqueries with non-equi predicate > --- > > Key: HIVE-18173 > URL: https://issues.apache.org/jira/browse/HIVE-18173 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch > > > HIVE-17767 optimized the plan to not generate a value generator (i.e. an extra join > with the outer query to fetch correlated columns) for EQUAL and NOT EQUAL > predicates, e.g. > {code:sql} > select * from src b where b.key in (select key from src a where b.value <> > a.value) > {code} > This should be improved and implemented for the rest of the predicates, e.g.
LESS > THAN, etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273920#comment-16273920 ] Hive QA commented on HIVE-18188: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900085/HIVE-18188.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 11493 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8071/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8071/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8071/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12900085 - PreCommit-HIVE-Build > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18144) Runtime type inference error when join three table for different column type
[ https://issues.apache.org/jira/browse/HIVE-18144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273915#comment-16273915 ] Wang Haihua commented on HIVE-18144: The failed tests seem unrelated. cc [~pxiong], could you please review it? Thanks > Runtime type inference error when join three table for different column type > - > > Key: HIVE-18144 > URL: https://issues.apache.org/jira/browse/HIVE-18144 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.1.1, 2.2.0 >Reporter: Wang Haihua >Assignee: Wang Haihua > Attachments: HIVE-18144.1.patch > > > A union operation over three or more tables with different column types > may cause a type inference error during task execution. > E.g. t1 (with an int column) union all t2 (with an int column) union all t3 (with > a bigint column) should finally be {{bigint}}. > The RowSchema of the union of t1 with t2, which we call {{leftOp}}, should be int; then leftOp > union t3 should finally be bigint. > This means the RowSchema of leftOp should be {{bigint}} instead of {{int}}. > However we see in {{SemanticAnalyzer.java}} that the RowSchema of leftOp ends up as > {{int}}, which is wrong: > {code} > (_col0: int|{t01-subquery1}diff_long_type,_col1: > int|{t01-subquery1}id2,_col2: bigint|{t01-subquery1}id3) > {code} > Impacted code in SemanticAnalyzer.java: > {code} > if (!(leftOp instanceof UnionOperator)) { > Operator oldChild = leftOp; > leftOp = (Operator) leftOp.getParentOperators().get(0); > leftOp.removeChildAndAdoptItsChildren(oldChild); > } > // make left a child of right > List<Operator<? extends OperatorDesc>> child = > new ArrayList<Operator<? extends OperatorDesc>>(); > child.add(leftOp); > rightOp.setChildOperators(child); > List<Operator<? extends OperatorDesc>> parent = leftOp > .getParentOperators(); > parent.add(rightOp); > UnionDesc uDesc = ((UnionOperator) leftOp).getConf(); > // Here we should set the RowSchema of leftOp to unionoutRR's, or else the > RowSchema of leftOp is wrong.
> // leftOp.setSchema(new RowSchema(unionoutRR.getColumnInfos())); > uDesc.setNumInputs(uDesc.getNumInputs() + 1); > return putOpInsertMap(leftOp, unionoutRR); > {code} > Operation to reproduce: > {code} > create table test_union_different_type(id bigint, id2 bigint, id3 bigint, > name string); > set hive.auto.convert.join=true; > insert overwrite table test_union_different_type select 1, 2, 3, > "test_union_different_type"; > select > t01.diff_long_type as diff_long_type, > t01.id2 as id2, > t00.id as id, > t01.id3 as id3 > from test_union_different_type t00 > left join > ( > select 1 as diff_long_type, 30 as id2, id3 from test_union_different_type > union ALL > select 2 as diff_long_type, 20 as id2, id3 from test_union_different_type > union ALL > select id as diff_long_type, id2, 30 as id3 from test_union_different_type > ) t01 > on t00.id = t01.diff_long_type > ; > {code} > Stack trace: > {code} > Diagnostic Messages for this Task: > Error: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row {"id":1,"id2":null,"id3":null,"name":null} > at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:169) > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row {"id":1,"id2":null,"id3":null,"name":null} > at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:499) > at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:160) > ... 8 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected > exception from MapJoinOperator : org.apache.hadoop.io.LongWritable cannot be > cast to org.apache.hadoop.io.IntWritable > at > org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:465) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130) > at > org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:149) > at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:489)
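The {{LongWritable cannot be cast to IntWritable}} failure above is the stale-schema bug in miniature: a downstream operator compiled against an {{int}} RowSchema casts a column that actually holds a {{long}}. The minimal illustration below uses plain Java wrappers (IntBox/LongBox are hypothetical stand-ins for Hadoop's IntWritable/LongWritable, so it runs without Hadoop on the classpath).

```java
// Minimal illustration (not Hive code) of why a stale "int" RowSchema breaks
// at runtime. IntBox/LongBox stand in for Hadoop's IntWritable/LongWritable.
class Box {}
class IntBox extends Box { int v; IntBox(int v) { this.v = v; } }
class LongBox extends Box { long v; LongBox(long v) { this.v = v; } }

class SchemaDemo {
    // Downstream operator compiled against the (wrong) int schema.
    static int readAsInt(Box column) {
        return ((IntBox) column).v; // ClassCastException if the column is a LongBox
    }

    // Feeding a long value through the int-typed reader fails, mirroring the
    // MapJoinOperator stack trace in the report.
    static boolean failsOnLong() {
        try {
            readAsInt(new LongBox(3L));
            return false;
        } catch (ClassCastException expected) {
            return true;
        }
    }
}
```

Setting leftOp's RowSchema from unionoutRR, as the commented-out line in the patch suggests, makes the declared type ({{bigint}}) match the runtime value, so the cast is never generated.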
[jira] [Assigned] (HIVE-18191) Vectorization: When text input format is vectorized, TableScanOperator needs to not try to gather statistics
[ https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-18191: --- > Vectorization: When text input format is vectorized, TableScanOperator needs > to not try to gather statistics > > > Key: HIVE-18191 > URL: https://issues.apache.org/jira/browse/HIVE-18191 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > That is, to not try to use the row-mode gatherStats method... -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273871#comment-16273871 ] Hive QA commented on HIVE-18188: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 508d7e6 | | Default Java | 1.8.0_111 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-8071/yetus/whitespace-eol.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8071/yetus/patch-asflicense-problems.txt | | modules | C: service U: service | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8071/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273852#comment-16273852 ] Hive QA commented on HIVE-18068: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900080/HIVE-18068.04.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=239) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic3] (batchId=60) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_intervals] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_timeseries] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_10] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_gby_join] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_queries] (batchId=98) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=178) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_gby_join] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join] (batchId=121) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query11] (batchId=249) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query4] (batchId=249) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query74] (batchId=249) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query11] (batchId=247) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query4] (batchId=247) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query74] (batchId=247) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8070/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8070/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8070/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 31 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12900080 - PreCommit-HIVE-Build > Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, > HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18045) can VectorizedOrcAcidRowBatchReader be used all the time
[ https://issues.apache.org/jira/browse/HIVE-18045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18045: -- Priority: Blocker (was: Major) > can VectorizedOrcAcidRowBatchReader be used all the time > > > Key: HIVE-18045 > URL: https://issues.apache.org/jira/browse/HIVE-18045 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Blocker > > Can we use VectorizedOrcAcidRowBatchReader for non-vectorized queries? > It would just need a wrapper on top of it to turn VRBs into rows. > This would mean there is just 1 acid reader to maintain - not 2. > Would this be an issue for sorted reader/SMB support? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
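The "wrapper on top of it to turn VRBs into rows" idea in HIVE-18045 is a standard batch-to-row adapter. A sketch in plain Java (hypothetical names, not the Hive API; rows are modeled as long[] for simplicity) shows how one vectorized reader could serve a row-mode consumer:

```java
import java.util.Iterator;

// Sketch only: a thin adapter exposing a batch-producing reader as a row
// iterator, so a single vectorized reader can back both vectorized and
// row-mode execution. Names and types here are illustrative, not Hive's.
class RowAdapter implements Iterator<long[]> {
    private final Iterator<long[][]> batches; // each batch is an array of rows
    private long[][] current = new long[0][];
    private int pos = 0;

    RowAdapter(Iterator<long[][]> batches) {
        this.batches = batches;
    }

    @Override
    public boolean hasNext() {
        // Skip exhausted (or empty) batches until a row is available.
        while (pos >= current.length) {
            if (!batches.hasNext()) {
                return false;
            }
            current = batches.next();
            pos = 0;
        }
        return true;
    }

    @Override
    public long[] next() {
        return current[pos++];
    }
}
```

With such an adapter there is only one ACID read path to maintain; the open question the ticket raises (sorted reader/SMB support) is about whether batch boundaries interact badly with merge-ordered reads, which the adapter itself does not address.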
[jira] [Updated] (HIVE-17361) Support LOAD DATA for transactional tables
[ https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17361: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) committed to master thanks Alan for the review > Support LOAD DATA for transactional tables > -- > > Key: HIVE-17361 > URL: https://issues.apache.org/jira/browse/HIVE-17361 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Wei Zheng >Assignee: Eugene Koifman >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-17361.07.patch, HIVE-17361.08.patch, > HIVE-17361.09.patch, HIVE-17361.1.patch, HIVE-17361.10.patch, > HIVE-17361.11.patch, HIVE-17361.12.patch, HIVE-17361.14.patch, > HIVE-17361.16.patch, HIVE-17361.17.patch, HIVE-17361.19.patch, > HIVE-17361.2.patch, HIVE-17361.20.patch, HIVE-17361.21.patch, > HIVE-17361.23.patch, HIVE-17361.24.patch, HIVE-17361.25.patch, > HIVE-17361.3.patch, HIVE-17361.4.patch > > > LOAD DATA was not supported since ACID was introduced. Need to fill this gap > between ACID table and regular hive table. > Current Documentation is under [DML > Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations] > and [Loading files into > tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]: > \\ > * Load Data performs very limited validations of the data, in particular it > uses the input file name which may not be in 0_0 which can break some > read logic. (Certainly will for Acid). > * It does not check the schema of the file. This may be a non issue for Acid > which requires ORC which is self describing so Schema Evolution may handle > this seamlessly. (Assuming Schema is not too different). > * It does check that _InputFormat_S are compatible. > * Bucketed (and thus sorted) tables don't support Load Data (but only if > hive.strict.checks.bucketing=true (default)). 
Will keep this restriction for > Acid. > * Load Data supports OVERWRITE clause > * What happens to file permissions/ownership: rename vs copy differences > \\ > The implementation will follow the same idea as in HIVE-14988 and use a > base_N/ dir for OVERWRITE clause. > \\ > How is minor compaction going to handle delta/base with original files? > Since delta_8_8/_meta_data is created before files are moved, delta_8_8 > becomes visible before it's populated. Is that an issue? > It's not since txn 8 is not committed. > h3. Implementation Notes/Limitations (patch 25) > * bucketed/sorted tables are not supported > * input file names must be of the form 0_0/0_0_copy_1 - enforced. > (HIVE-18125) > * Load Data creates a delta_x_x/ that contains new files > * Load Data w/Overwrite creates a base_x/ that contains new files > * A '_metadata_acid' file is placed in the target directory to indicate it > requires special handling on read > * The input files must be 'plain' ORC files, i.e. not contain acid metadata > columns as would be the case if these files were copied from another Acid > table. In the latter case, the ROW_IDs embedded in the data may not make > sense in the target table (if it's in a different cluster, for example). > Such files may also have a mix of committed and aborted data. > ** this could be relaxed later by adding info to the _metadata_acid file to > ignore existing ROW_IDs on read. > * ROW_IDs are attached dynamically at read time and made permanent by > compaction. This is done the same way as handling of files that were > written to a table before it was converted to Acid. > * Vectorization is supported -- This message was sent by Atlassian JIRA (v6.4.14#64029)
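The base_N/delta_x_x layout the notes rely on has a simple visibility rule: a reader takes the highest base and only deltas whose transaction range lies above it. A toy Java sketch (simplified directory names, not Hive's actual AcidUtils logic; real deltas also carry a max transaction and statement id) of that selection:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only (not Hive's reader): pick the highest base_N, then keep only
// delta_x_y directories whose minimum transaction x is above N. Directory
// names are simplified; real ACID deltas encode more metadata.
class AcidDirs {
    static List<String> visible(List<String> dirs) {
        long bestBase = -1;
        for (String d : dirs) {
            if (d.startsWith("base_")) {
                bestBase = Math.max(bestBase, Long.parseLong(d.substring(5)));
            }
        }
        List<String> out = new ArrayList<>();
        if (bestBase >= 0) {
            out.add("base_" + bestBase);
        }
        for (String d : dirs) {
            if (d.startsWith("delta_")) {
                // delta_<min>_<max>: parse the minimum transaction id.
                long min = Long.parseLong(d.substring(6, d.indexOf('_', 6)));
                if (min > bestBase) {
                    out.add(d);
                }
            }
        }
        return out;
    }
}
```

This is why LOAD DATA with OVERWRITE can simply write a new base_x/: every older base and delta becomes invisible to readers in one atomic directory creation, with cleanup left to the cleaner.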
[jira] [Commented] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.
[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273845#comment-16273845 ] Lefty Leverenz commented on HIVE-14792: --- Doc note: This adds *hive.optimize.update.table.properties.from.serde* and *hive.optimize.update.table.properties.from.serde.list* to HiveConf.java, so they need to be documented in the wiki. * [Avro SerDe | https://cwiki.apache.org/confluence/display/Hive/AvroSerDe] * [Configuration Properties -- SerDes | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-SerDes] Added TODOC2.2 and TODOC2.4 labels. > AvroSerde reads the remote schema-file at least once per mapper, per table > reference. > - > > Key: HIVE-14792 > URL: https://issues.apache.org/jira/browse/HIVE-14792 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Labels: TODOC2.2, TODOC2.4 > Fix For: 3.0.0, 2.4.0, 2.2.1 > > Attachments: HIVE-14792.1.patch > > > Avro tables that use "external" schema files stored on HDFS can cause > excessive calls to {{FileSystem::open()}}, especially for queries that spawn > large numbers of mappers. > This is because of the following code in {{AvroSerDe::initialize()}}: > {code:title=AvroSerDe.java|borderStyle=solid} > public void initialize(Configuration configuration, Properties properties) > throws SerDeException { > // ... 
> if (hasExternalSchema(properties) > || columnNameProperty == null || columnNameProperty.isEmpty() > || columnTypeProperty == null || columnTypeProperty.isEmpty()) { > schema = determineSchemaOrReturnErrorSchema(configuration, properties); > } else { > // Get column names and sort order > columnNames = Arrays.asList(columnNameProperty.split(",")); > columnTypes = > TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty); > schema = getSchemaFromCols(properties, columnNames, columnTypes, > columnCommentProperty); > > properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), > schema.toString()); > } > // ... > } > {code} > For tables using {{avro.schema.url}}, every time the SerDe is initialized > (i.e. at least once per mapper), the schema file is read remotely. For > queries with thousands of mappers, this leads to a stampede to the handful > (3?) datanodes that host the schema-file. In the best case, this causes > slowdowns. > It would be preferable to distribute the Avro-schema to all mappers as part > of the job-conf. The alternatives aren't exactly appealing: > # One can't rely solely on the {{column.list.types}} stored in the Hive > metastore. (HIVE-14789). > # {{avro.schema.literal}} might not always be usable, because of the > size-limit on table-parameters. The typical size of the Avro-schema file is > between 0.5-3MB, in my limited experience. Bumping the max table-parameter > size isn't a great solution. > If the {{avro.schema.file}} were read during query-planning, and made > available as part of table-properties (but not serialized into the > metastore), the downstream logic will remain largely intact. I have a patch > that does this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
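The direction the HIVE-14792 description proposes — resolve the remote schema once at planning time and carry it to mappers in the job configuration — can be sketched with plain Java properties. The helper names below are hypothetical (this is not the actual patch); only the {{avro.schema.url}}/{{avro.schema.literal}} property names come from the Avro SerDe.

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed fix's shape (hypothetical helper, not the patch):
// read avro.schema.url once up front, stash the result in the job properties
// as avro.schema.literal, and let every mapper read the property instead of
// reopening the remote schema file.
class SchemaPropagation {
    static final AtomicInteger remoteReads = new AtomicInteger();

    // Stands in for the remote FileSystem::open() + parse of the .avsc file.
    static String fetchRemoteSchema(String url) {
        remoteReads.incrementAndGet();
        return "{\"type\":\"record\",\"name\":\"t\",\"fields\":[]}";
    }

    // Called once during query planning.
    static void resolveOnce(Properties jobProps) {
        String url = jobProps.getProperty("avro.schema.url");
        if (url != null && jobProps.getProperty("avro.schema.literal") == null) {
            jobProps.setProperty("avro.schema.literal", fetchRemoteSchema(url));
        }
    }

    // Each mapper reads the literal from the conf, never the remote file.
    static String schemaForMapper(Properties jobProps) {
        return jobProps.getProperty("avro.schema.literal");
    }
}
```

With thousands of mappers this turns thousands of opens against the handful of datanodes hosting the schema file into a single read, at the cost of a few hundred KB to a few MB in the serialized job conf — the trade-off the description weighs against the table-parameter size limit.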
[jira] [Updated] (HIVE-18037) Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x
[ https://issues.apache.org/jira/browse/HIVE-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated HIVE-18037: - Attachment: HIVE-18037.003.patch > Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x > > > Key: HIVE-18037 > URL: https://issues.apache.org/jira/browse/HIVE-18037 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: 3.0.0 > > Attachments: HIVE-18037.001.patch, HIVE-18037.002.patch, > HIVE-18037.003.patch > > > Apache Slider has been migrated to Hadoop-3.x and is referred to as YARN > Service (YARN-4692). Most of the classic Slider features are now going to be > supported in a first-class manner by core YARN. It includes several new > features like a RESTful API. Command line equivalents of classic Slider are > supported by YARN Service as well. > This jira will take care of all changes required to Slider LLAP packaging and > scripts to make it work against Hadoop 3.x. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.
[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lefty Leverenz updated HIVE-14792: -- Labels: TODOC2.2 TODOC2.4 (was: TODOC3.0) > AvroSerde reads the remote schema-file at least once per mapper, per table > reference. > - > > Key: HIVE-14792 > URL: https://issues.apache.org/jira/browse/HIVE-14792 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Labels: TODOC2.2, TODOC2.4 > Fix For: 3.0.0, 2.4.0, 2.2.1 > > Attachments: HIVE-14792.1.patch > > > Avro tables that use "external" schema files stored on HDFS can cause > excessive calls to {{FileSystem::open()}}, especially for queries that spawn > large numbers of mappers. > This is because of the following code in {{AvroSerDe::initialize()}}: > {code:title=AvroSerDe.java|borderStyle=solid} > public void initialize(Configuration configuration, Properties properties) > throws SerDeException { > // ... > if (hasExternalSchema(properties) > || columnNameProperty == null || columnNameProperty.isEmpty() > || columnTypeProperty == null || columnTypeProperty.isEmpty()) { > schema = determineSchemaOrReturnErrorSchema(configuration, properties); > } else { > // Get column names and sort order > columnNames = Arrays.asList(columnNameProperty.split(",")); > columnTypes = > TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty); > schema = getSchemaFromCols(properties, columnNames, columnTypes, > columnCommentProperty); > > properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), > schema.toString()); > } > // ... > } > {code} > For tables using {{avro.schema.url}}, every time the SerDe is initialized > (i.e. at least once per mapper), the schema file is read remotely. For > queries with thousands of mappers, this leads to a stampede to the handful > (3?) datanodes that host the schema-file. In the best case, this causes > slowdowns. 
> It would be preferable to distribute the Avro-schema to all mappers as part > of the job-conf. The alternatives aren't exactly appealing: > # One can't rely solely on the {{column.list.types}} stored in the Hive > metastore. (HIVE-14789). > # {{avro.schema.literal}} might not always be usable, because of the > size-limit on table-parameters. The typical size of the Avro-schema file is > between 0.5-3MB, in my limited experience. Bumping the max table-parameter > size isn't a great solution. > If the {{avro.schema.file}} were read during query-planning, and made > available as part of table-properties (but not serialized into the > metastore), the downstream logic will remain largely intact. I have a patch > that does this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.
[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lefty Leverenz updated HIVE-14792: -- Labels: TODOC3.0 (was: ) > AvroSerde reads the remote schema-file at least once per mapper, per table > reference. > - > > Key: HIVE-14792 > URL: https://issues.apache.org/jira/browse/HIVE-14792 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Labels: TODOC3.0 > Fix For: 3.0.0, 2.4.0, 2.2.1 > > Attachments: HIVE-14792.1.patch > > > Avro tables that use "external" schema files stored on HDFS can cause > excessive calls to {{FileSystem::open()}}, especially for queries that spawn > large numbers of mappers. > This is because of the following code in {{AvroSerDe::initialize()}}: > {code:title=AvroSerDe.java|borderStyle=solid} > public void initialize(Configuration configuration, Properties properties) > throws SerDeException { > // ... > if (hasExternalSchema(properties) > || columnNameProperty == null || columnNameProperty.isEmpty() > || columnTypeProperty == null || columnTypeProperty.isEmpty()) { > schema = determineSchemaOrReturnErrorSchema(configuration, properties); > } else { > // Get column names and sort order > columnNames = Arrays.asList(columnNameProperty.split(",")); > columnTypes = > TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty); > schema = getSchemaFromCols(properties, columnNames, columnTypes, > columnCommentProperty); > > properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), > schema.toString()); > } > // ... > } > {code} > For tables using {{avro.schema.url}}, every time the SerDe is initialized > (i.e. at least once per mapper), the schema file is read remotely. For > queries with thousands of mappers, this leads to a stampede to the handful > (3?) datanodes that host the schema-file. In the best case, this causes > slowdowns. 
> It would be preferable to distribute the Avro-schema to all mappers as part > of the job-conf. The alternatives aren't exactly appealing: > # One can't rely solely on the {{column.list.types}} stored in the Hive > metastore. (HIVE-14789). > # {{avro.schema.literal}} might not always be usable, because of the > size-limit on table-parameters. The typical size of the Avro-schema file is > between 0.5-3MB, in my limited experience. Bumping the max table-parameter > size isn't a great solution. > If the {{avro.schema.file}} were read during query-planning, and made > available as part of table-properties (but not serialized into the > metastore), the downstream logic will remain largely intact. I have a patch > that does this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273834#comment-16273834 ] Hive QA commented on HIVE-18068: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 420 unchanged - 2 fixed = 420 total (was 422) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} root: The patch generated 0 new + 420 unchanged - 2 fixed = 420 total (was 422) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / c03001e | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8070/yetus/patch-asflicense-problems.txt | | modules | C: ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8070/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, > HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17870) Update NoDeleteRollingFileAppender to use Log4j2 api
[ https://issues.apache.org/jira/browse/HIVE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273798#comment-16273798 ] Hive QA commented on HIVE-17870: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900075/HIVE-17870.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 11482 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8069/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8069/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8069/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests 
failed {noformat} This message is automatically generated. ATTACHMENT ID: 12900075 - PreCommit-HIVE-Build > Update NoDeleteRollingFileAppender to use Log4j2 api > > > Key: HIVE-17870 > URL: https://issues.apache.org/jira/browse/HIVE-17870 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Andrew Sherman > Attachments: HIVE-17870.1.patch, HIVE-17870.2.patch > > > NoDeleteRollingFileAppender is still using the Log4j v1 API. Since Hive has already > moved to Log4j2, this class should be updated to the Log4j2 API as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18190) Consider looking at ORC file schema rather than using _metadata_acid file
[ https://issues.apache.org/jira/browse/HIVE-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273786#comment-16273786 ] Eugene Koifman commented on HIVE-18190: --- probably best to do HIVE-18045 first to simplify the reader before doing this > Consider looking at ORC file schema rather than using _metadata_acid file > - > > Key: HIVE-18190 > URL: https://issues.apache.org/jira/browse/HIVE-18190 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > See if it's possible to just look at the schema of the file in base_ or > delta_ to see if it has Acid metadata columns. If not, it's an 'original' > file and needs ROW_IDs generated. > see more discussion at https://reviews.apache.org/r/64131/ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18190) Consider looking at ORC file schema rather than using _metadata_acid file
[ https://issues.apache.org/jira/browse/HIVE-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18190: -- Description: See if it's possible to just look at the schema of the file in base_ or delta_ to see if it has Acid metadata columns. If not, it's an 'original' file and needs ROW_IDs generated. see more discussion at https://reviews.apache.org/r/64131/ was:See if it's possible to just look at the schema of the file in base_ or delta_ to see if it has Acid metadata columns. If not, it's an 'original' file and needs ROW_IDs generated. > Consider looking at ORC file schema rather than using _metadata_acid file > - > > Key: HIVE-18190 > URL: https://issues.apache.org/jira/browse/HIVE-18190 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > See if it's possible to just look at the schema of the file in base_ or > delta_ to see if it has Acid metadata columns. If not, it's an 'original' > file and needs ROW_IDs generated. > see more discussion at https://reviews.apache.org/r/64131/ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18190) Consider looking at ORC file schema rather than using _metadata_acid file
[ https://issues.apache.org/jira/browse/HIVE-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-18190: - > Consider looking at ORC file schema rather than using _metadata_acid file > - > > Key: HIVE-18190 > URL: https://issues.apache.org/jira/browse/HIVE-18190 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > See if it's possible to just look at the schema of the file in base_ or > delta_ to see if it has Acid metadata columns. If not, it's an 'original' > file and needs ROW_IDs generated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
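One way to read the HIVE-18190 proposal above: a file written through the ACID path carries a fixed set of top-level metadata columns in its ORC schema, while an 'original' file does not, so the schema alone distinguishes the two. A minimal sketch of that test follows; the field names are taken from Hive's ACID ORC layout, but the schema is passed in as a plain list rather than read via the real ORC Reader API, so treat this as an illustration, not the eventual patch.

```java
import java.util.Arrays;
import java.util.List;

public class AcidSchemaCheck {
    // Top-level field names Hive's ACID ORC writer places in every acid file.
    static final List<String> ACID_FIELDS = Arrays.asList(
        "operation", "originalTransaction", "bucket",
        "rowId", "currentTransaction", "row");

    // With the real ORC API the field list would come from
    // Reader.getSchema().getFieldNames(); here it is passed in directly.
    static boolean hasAcidColumns(List<String> topLevelFields) {
        return topLevelFields.equals(ACID_FIELDS);
    }

    public static void main(String[] args) {
        // A file written by an ACID transaction keeps its embedded ROW_IDs:
        System.out.println(hasAcidColumns(ACID_FIELDS));                 // prints: true
        // An 'original' file needs ROW_IDs generated at read time:
        System.out.println(hasAcidColumns(Arrays.asList("id", "name"))); // prints: false
    }
}
```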
[jira] [Commented] (HIVE-17361) Support LOAD DATA for transactional tables
[ https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273778#comment-16273778 ] Alan Gates commented on HIVE-17361: --- +1 based on discussion in review board. > Support LOAD DATA for transactional tables > -- > > Key: HIVE-17361 > URL: https://issues.apache.org/jira/browse/HIVE-17361 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Wei Zheng >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-17361.07.patch, HIVE-17361.08.patch, > HIVE-17361.09.patch, HIVE-17361.1.patch, HIVE-17361.10.patch, > HIVE-17361.11.patch, HIVE-17361.12.patch, HIVE-17361.14.patch, > HIVE-17361.16.patch, HIVE-17361.17.patch, HIVE-17361.19.patch, > HIVE-17361.2.patch, HIVE-17361.20.patch, HIVE-17361.21.patch, > HIVE-17361.23.patch, HIVE-17361.24.patch, HIVE-17361.25.patch, > HIVE-17361.3.patch, HIVE-17361.4.patch > > > LOAD DATA has not been supported since ACID was introduced. This gap between > ACID tables and regular Hive tables needs to be filled. > Current documentation is under [DML > Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations] > and [Loading files into > tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]: > \\ > * Load Data performs very limited validation of the data; in particular, it > uses the input file name, which may not be in the 0_0 form, which can break some > read logic. (It certainly will for Acid.) > * It does not check the schema of the file. This may be a non-issue for Acid, > which requires ORC; since ORC is self-describing, Schema Evolution may handle > this seamlessly (assuming the schema is not too different). > * It does check that the _InputFormat_ classes are compatible. > * Bucketed (and thus sorted) tables don't support Load Data (but only if > hive.strict.checks.bucketing=true (default)). Will keep this restriction for > Acid. 
> * Load Data supports the OVERWRITE clause > * What happens to file permissions/ownership: rename vs copy differences > \\ > The implementation will follow the same idea as in HIVE-14988 and use a > base_N/ dir for the OVERWRITE clause. > \\ > How is minor compaction going to handle delta/base with original files? > Since delta_8_8/_meta_data is created before files are moved, delta_8_8 > becomes visible before it's populated. Is that an issue? > It's not, since txn 8 is not committed. > h3. Implementation Notes/Limitations (patch 25) > * bucketed/sorted tables are not supported > * input file names must be of the form 0_0/0_0_copy_1 - enforced. > (HIVE-18125) > * Load Data creates a delta_x_x/ that contains the new files > * Load Data w/Overwrite creates a base_x/ that contains the new files > * A '_metadata_acid' file is placed in the target directory to indicate it > requires special handling on read > * The input files must be 'plain' ORC files, i.e. they must not contain acid metadata > columns, as would be the case if these files were copied from another Acid > table. In the latter case, the ROW_IDs embedded in the data may not make > sense in the target table (if it's in a different cluster, for example). > Such files may also have a mix of committed and aborted data. > ** this could be relaxed later by adding info to the _metadata_acid file to > ignore existing ROW_IDs on read. > * ROW_IDs are attached dynamically at read time and made permanent by > compaction. This is done the same way as the handling of files that were > written to a table before it was converted to Acid. > * Vectorization is supported -- This message was sent by Atlassian JIRA (v6.4.14#64029)
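The directory-naming rule in the implementation notes above (plain LOAD DATA writes a delta_x_x/ directory, LOAD DATA OVERWRITE writes a base_x/ directory) can be captured in a tiny helper. This is a sketch that mirrors the unpadded names used in the discussion (delta_8_8), not Hive's actual directory-formatting code.

```java
public class AcidLoadTarget {
    // Naming rule from the notes: plain LOAD DATA lands files in delta_x_x/,
    // LOAD DATA OVERWRITE in base_x/, where x is the write's transaction id.
    static String targetDir(long txnId, boolean overwrite) {
        return overwrite
            ? "base_" + txnId
            : "delta_" + txnId + "_" + txnId;
    }

    public static void main(String[] args) {
        System.out.println(targetDir(8, false)); // prints: delta_8_8
        System.out.println(targetDir(8, true));  // prints: base_8
    }
}
```

The base_x/ form also answers the visibility question above: readers ignore a base/delta directory until its transaction commits, so an empty delta_8_8/ appearing early is harmless.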
[jira] [Commented] (HIVE-17870) Update NoDeleteRollingFileAppender to use Log4j2 api
[ https://issues.apache.org/jira/browse/HIVE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273771#comment-16273771 ] Hive QA commented on HIVE-17870: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 36s{color} | {color:red} ql: The patch generated 2 new + 9 unchanged - 2 fixed = 11 total (was 11) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s{color} | {color:red} root: The patch generated 2 new + 9 unchanged - 2 fixed = 11 total (was 11) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 11s{color} | {color:red} The patch generated 4 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / c03001e | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8069/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8069/yetus/diff-checkstyle-root.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8069/yetus/patch-asflicense-problems.txt | | modules | C: ql . testutils/ptest2 U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8069/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Update NoDeleteRollingFileAppender to use Log4j2 api > > > Key: HIVE-17870 > URL: https://issues.apache.org/jira/browse/HIVE-17870 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Andrew Sherman > Attachments: HIVE-17870.1.patch, HIVE-17870.2.patch > > > NoDeleteRollingFileAppender is still using the Log4j v1 API. Since Hive has already > moved to Log4j2, this class should be updated to the Log4j2 API as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18088: - Attachment: HIVE-18088.3.patch Jsonified triggers which was a TODO. Also fixes tests. > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, > HIVE-18088.3.patch, HIVE-18088.WIP.patch > > > For debugging and testing purpose, expose workload manager events via /jmx > endpoint and print summary at the scope of query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273712#comment-16273712 ] Hive QA commented on HIVE-18068: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900080/HIVE-18068.04.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=239) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic3] (batchId=60) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_intervals] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_timeseries] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_10] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_gby_join] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_queries] (batchId=98) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=178) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_gby_join] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join] (batchId=121) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query11] (batchId=249) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query4] (batchId=249) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query74] (batchId=249) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query11] (batchId=247) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query4] (batchId=247) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query74] (batchId=247) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8068/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8068/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8068/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 31 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12900080 - PreCommit-HIVE-Build > Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, > HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273683#comment-16273683 ] Hive QA commented on HIVE-18068: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 22s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle 
{color} | {color:green} 0m 37s{color} | {color:green} ql: The patch generated 0 new + 420 unchanged - 2 fixed = 420 total (was 422) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 34s{color} | {color:green} root: The patch generated 0 new + 420 unchanged - 2 fixed = 420 total (was 422) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / c03001e | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8068/yetus/patch-asflicense-problems.txt | | modules | C: ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8068/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, > HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18189) Order by position does not work when cbo is disabled
[ https://issues.apache.org/jira/browse/HIVE-18189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18189: -- Status: Patch Available (was: Open) > Order by position does not work when cbo is disabled > > > Key: HIVE-18189 > URL: https://issues.apache.org/jira/browse/HIVE-18189 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Daniel Dai >Assignee: Daniel Dai > Attachments: HIVE-18189.1.patch > > > Investigating a failed query: > {code} > set hive.cbo.enable=false; > set hive.orderby.position.alias=true; > select distinct age from student order by 1 desc limit 20; > {code} > The query does not sort the output correctly when CBO is > disabled. I found two issues: > 1. The "order by position" rewrite is broken by HIVE-16774. > 2. In particular, a select distinct query never works with "order by > position". -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18189) Order by position does not work when cbo is disabled
[ https://issues.apache.org/jira/browse/HIVE-18189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18189: -- Attachment: HIVE-18189.1.patch > Order by position does not work when cbo is disabled > > > Key: HIVE-18189 > URL: https://issues.apache.org/jira/browse/HIVE-18189 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Daniel Dai >Assignee: Daniel Dai > Attachments: HIVE-18189.1.patch > > > Investigating a failed query: > {code} > set hive.cbo.enable=false; > set hive.orderby.position.alias=true; > select distinct age from student order by 1 desc limit 20; > {code} > The query does not sort the output correctly when CBO is > disabled. I found two issues: > 1. The "order by position" rewrite is broken by HIVE-16774. > 2. In particular, a select distinct query never works with "order by > position". -- This message was sent by Atlassian JIRA (v6.4.14#64029)
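The failing query above hinges on one rewrite: when hive.orderby.position.alias=true, the ordinal in ORDER BY must be resolved to the corresponding select-list expression before the sort is planned. A toy illustration of that resolution step — not Hive's parser logic; {{selectExprs}} is a hypothetical stand-in for the select clause:

```java
import java.util.Arrays;
import java.util.List;

public class OrderByPositionSketch {
    // Map a 1-based ORDER BY ordinal onto the select list; for
    // "select distinct age ... order by 1", position 1 must resolve to "age".
    static String resolvePosition(List<String> selectExprs, int position) {
        if (position < 1 || position > selectExprs.size()) {
            throw new IllegalArgumentException("ORDER BY position out of range: " + position);
        }
        return selectExprs.get(position - 1);
    }

    public static void main(String[] args) {
        System.out.println(resolvePosition(Arrays.asList("age"), 1)); // prints: age
    }
}
```

The bug report amounts to this mapping being skipped (or applied to the wrong operator tree) on the non-CBO path, so the ordinal 1 is treated as the constant 1 and the output order is unchanged.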
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273629#comment-16273629 ] Hive QA commented on HIVE-18088: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900088/HIVE-18088.2.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 36 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerSlowQueryExecutionTime (batchId=234) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerTotalTasks (batchId=234) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers1 (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers2 (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=237) 
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedFiles (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomReadOps (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesWrite (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime (batchId=237) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerTotalTasks (batchId=237) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers1 (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=234) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerTotalTasks 
(batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8067/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8067/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8067/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 36 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12900088 - PreCommit-HIVE-Build > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Af
[jira] [Updated] (HIVE-18036) Stats: Remove usage of clone() methods
[ https://issues.apache.org/jira/browse/HIVE-18036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bertalan Kondrat updated HIVE-18036: Attachment: HIVE-18036.3.patch > Stats: Remove usage of clone() methods > -- > > Key: HIVE-18036 > URL: https://issues.apache.org/jira/browse/HIVE-18036 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Bertalan Kondrat > Attachments: HIVE-18036.2.patch, HIVE-18036.3.patch, HIVE-18036.patch > > > {{Statistics}} and {{ColStats}} implement Cloneable; however, they never > throw CloneNotSupportedException, so the resulting try/catch blocks are just > noise -- This message was sent by Atlassian JIRA (v6.4.14#64029)
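The try/catch blocks the description calls noise come from `clone()` declaring the checked `CloneNotSupportedException`. A common way to remove them is a copy constructor. A minimal sketch under that assumption — the `Statistics` class here is a simplified stand-in, not Hive's actual class:

```java
// Simplified stand-in for a stats holder (not Hive's Statistics class).
// A copy constructor copies state without the checked
// CloneNotSupportedException that Cloneable/clone() forces callers to handle.
class Statistics {
    private final long numRows;
    private final long dataSize;

    Statistics(long numRows, long dataSize) {
        this.numRows = numRows;
        this.dataSize = dataSize;
    }

    // Copy constructor: callers write new Statistics(other) with no try/catch.
    Statistics(Statistics other) {
        this(other.numRows, other.dataSize);
    }

    long getNumRows() { return numRows; }
    long getDataSize() { return dataSize; }
}
```

With `clone()`, every call site needs a `catch (CloneNotSupportedException e)` branch even though the exception can never actually be thrown; the copy constructor removes that dead path.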
[jira] [Assigned] (HIVE-18189) Order by position does not work when cbo is disabled
[ https://issues.apache.org/jira/browse/HIVE-18189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-18189: - > Order by position does not work when cbo is disabled > > > Key: HIVE-18189 > URL: https://issues.apache.org/jira/browse/HIVE-18189 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Daniel Dai >Assignee: Daniel Dai > > Investigating a failed query: > {code} > set hive.cbo.enable=false; > set hive.orderby.position.alias=true; > select distinct age from student order by 1 desc limit 20; > {code} > The query does not sort the output correctly when cbo is > disabled. I found two issues: > 1. The "order by position" query is broken by HIVE-16774 > 2. In particular, a select distinct query never works for an "order by position" > query -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14069) update curator version to 2.12.0
[ https://issues.apache.org/jira/browse/HIVE-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-14069: -- Status: Open (was: Patch Available) Actually, since Hive master now depends on Hadoop 3, there is no need to try to shade this; we can simply change the Curator version used by Hive to match Hadoop. > update curator version to 2.12.0 > - > > Key: HIVE-14069 > URL: https://issues.apache.org/jira/browse/HIVE-14069 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Thejas M Nair >Assignee: Jason Dere > Attachments: HIVE-14069.1.patch, HIVE-14069.2.patch, > HIVE-14069.3.patch, HIVE-14069.4.patch > > > curator-2.10.0 has several bug fixes over the current version (2.6.0); > updating would help improve stability. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18037) Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x
[ https://issues.apache.org/jira/browse/HIVE-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated HIVE-18037: - Attachment: HIVE-18037.002.patch Attaching 002 patch after incorporating all RB comments from 001 patch. [~sershe], new RB for this 002 patch - https://reviews.apache.org/r/64229/ > Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x > > > Key: HIVE-18037 > URL: https://issues.apache.org/jira/browse/HIVE-18037 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: 3.0.0 > > Attachments: HIVE-18037.001.patch, HIVE-18037.002.patch > > > Apache Slider has been migrated to Hadoop-3.x and is referred to as YARN > Service (YARN-4692). Most of the classic Slider features are now going to be > supported in a first-class manner by core YARN. It includes several new > features like a RESTful API. Command line equivalents of classic Slider are > supported by YARN Service as well. > This jira will take care of all changes required to Slider LLAP packaging and > scripts to make it work against Hadoop 3.x. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273581#comment-16273581 ] Hive QA commented on HIVE-18088: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 1s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 77 new + 562 unchanged - 34 fixed = 639 total (was 596) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / c03001e | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8067/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8067/yetus/patch-asflicense-problems.txt | | modules | C: common ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8067/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, > HIVE-18088.WIP.patch > > > For debugging and testing purpose, expose workload manager events via /jmx > endpoint and print summary at the scope of query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18037) Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x
[ https://issues.apache.org/jira/browse/HIVE-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha updated HIVE-18037: - Status: Open (was: Patch Available) > Migrate Slider LLAP package to YARN Service framework for Hadoop 3.x > > > Key: HIVE-18037 > URL: https://issues.apache.org/jira/browse/HIVE-18037 > Project: Hive > Issue Type: Sub-task >Reporter: Gour Saha >Assignee: Gour Saha > Fix For: 3.0.0 > > Attachments: HIVE-18037.001.patch > > > Apache Slider has been migrated to Hadoop-3.x and is referred to as YARN > Service (YARN-4692). Most of the classic Slider features are now going to be > supported in a first-class manner by core YARN. It includes several new > features like a RESTful API. Command line equivalents of classic Slider are > supported by YARN Service as well. > This jira will take care of all changes required to Slider LLAP packaging and > scripts to make it work against Hadoop 3.x. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17552) Enable bucket map join by default
[ https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273548#comment-16273548 ] Hive QA commented on HIVE-17552: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900053/HIVE-17552.5.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_11] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_12] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_13] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_14] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_15] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_1] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_3] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_4] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_5] (batchId=170) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_7] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_8] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_9] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_6] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_cache] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=224) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) org.apache.hive.service.cli.thrift.TestThriftCLIServiceWithHttp.testExecuteStatement (batchId=231) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8066/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8066/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8066/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase 
Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 31 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12900053 - PreCommit-HIVE-Build > Enable bucket map join by default > - > > Key: HIVE-17552 > URL: https://issues.apache.org/jira/browse/HIVE-17552 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17552.1.patch, HIVE-17552.2.patch, > HIVE-17552.3.patch, HIVE-17552.4.patch, HIVE-17552.5.patch > > > Currently bucket map join is disabled by default; however, it is potentially > the most optimal join we have. Need to enable it by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17870) Update NoDeleteRollingFileAppender to use Log4j2 api
[ https://issues.apache.org/jira/browse/HIVE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273495#comment-16273495 ] Aihua Xu commented on HIVE-17870: - Thanks [~asherman]. The patch looks good to me. +1. > Update NoDeleteRollingFileAppender to use Log4j2 api > > > Key: HIVE-17870 > URL: https://issues.apache.org/jira/browse/HIVE-17870 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Andrew Sherman > Attachments: HIVE-17870.1.patch, HIVE-17870.2.patch > > > NoDeleteRollingFileAppender still uses the Log4j v1 API. Since we have already > moved to Log4j 2 in Hive, we should update it to use the Log4j v2 API as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18098) Add support for Export/Import for Acid tables
[ https://issues.apache.org/jira/browse/HIVE-18098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18098: -- Description: How should this work? For regular tables export just copies the files under the table root to a specified directory. This doesn't make sense for Acid tables: * Some data may belong to aborted transactions * Transaction IDs are embedded into data/file names. You'd have to export delta/ and base/, each of which may have files with the same names, e.g. bucket_0. * On import these IDs won't make sense in a different cluster or even a different table (which may have delta_x_x for the same x, but different data of course). * Export creates a _metadata file (column types, storage format, etc.). Perhaps it can include info about aborted IDs (if the whole file can't be skipped). * Even importing into the same table on the same cluster may be a problem. For example delta_5_5/ existed at the time of export and was included in the export. But 2 days later it may not exist because it was compacted and cleaned. * If importing back into the same table on the same cluster, the data could be imported into a different transaction (assuming per-table writeIDs) w/o having to remap the IDs in the rows themselves. * Support Import Overwrite? * Support Import as a new txn with remapping of ROW__IDs? The new writeID can be stored in a delta_x_x/_meta_data and ROW__IDs can be remapped at read time (like isOriginal) and made permanent by compaction. * It doesn't seem reasonable to import acid data into a non-acid table. Perhaps import can work similar to Load Data: look at the file imported; if it has Acid columns, leave a note in the delta_x_x/_meta_data to indicate that these columns should be skipped and new ROW__IDs assigned at read time. was: How should this work? For regular tables export just copies the files under the table root to a specified directory. 
This doesn't make sense for Acid tables: * Some data may belong to aborted transactions * Transaction IDs are embedded into data/file names. You'd have to export delta/ and base/, each of which may have files with the same names, e.g. bucket_0. * On import these IDs won't make sense in a different cluster or even a different table (which may have delta_x_x for the same x, but different data of course). * Export creates a _metadata file (column types, storage format, etc.). Perhaps it can include info about aborted IDs (if the whole file can't be skipped). * Even importing into the same table on the same cluster may be a problem. For example delta_5_5/ existed at the time of export and was included in the export. But 2 days later it may not exist because it was compacted and cleaned. * If importing back into the same table on the same cluster, the data could be imported into a different transaction (assuming per-table writeIDs) w/o having to remap the IDs in the rows themselves. * Support Import Overwrite? * Support Import as a new txn with remapping of ROW__IDs? The new writeID can be stored in a delta_x_x/_meta_data and ROW__IDs can be remapped at read time (like isOriginal) and made permanent by compaction. * It doesn't seem reasonable to import acid data into a non-acid table * Perhaps import can work similar to Load Data: look at the file imported; if it has Acid columns, leave a note in the delta_x_x/_meta_data to indicate that these columns should be skipped and new ROW__IDs assigned at read time. > Add support for Export/Import for Acid tables > - > > Key: HIVE-18098 > URL: https://issues.apache.org/jira/browse/HIVE-18098 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > How should this work? > For regular tables export just copies the files under the table root to a > specified directory. 
> This doesn't make sense for Acid tables: > * Some data may belong to aborted transactions > * Transaction IDs are embedded into data/file names. You'd have to export > delta/ and base/, each of which may have files with the same names, e.g. > bucket_0. > * On import these IDs won't make sense in a different cluster or even a > different table (which may have delta_x_x for the same x, but > different data of course). > * Export creates a _metadata file (column types, storage format, etc.). Perhaps it > can include info about aborted IDs (if the whole file can't be skipped). > * Even importing into the same table on the same cluster may be a problem. > For example delta_5_5/ existed at the time of export and was included in the > export. But 2 days later it may not exist because it was compacted and > cleaned. > * If importing back into the same table on the same cluster, the data could > be imported into a differ
[jira] [Updated] (HIVE-18098) Add support for Export/Import for Acid tables
[ https://issues.apache.org/jira/browse/HIVE-18098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18098: -- Description: How should this work? For regular tables export just copies the files under the table root to a specified directory. This doesn't make sense for Acid tables: * Some data may belong to aborted transactions * Transaction IDs are embedded into data/file names. You'd have to export delta/ and base/, each of which may have files with the same names, e.g. bucket_0. * On import these IDs won't make sense in a different cluster or even a different table (which may have delta_x_x for the same x, but different data of course). * Export creates a _metadata file (column types, storage format, etc.). Perhaps it can include info about aborted IDs (if the whole file can't be skipped). * Even importing into the same table on the same cluster may be a problem. For example delta_5_5/ existed at the time of export and was included in the export. But 2 days later it may not exist because it was compacted and cleaned. * If importing back into the same table on the same cluster, the data could be imported into a different transaction (assuming per-table writeIDs) w/o having to remap the IDs in the rows themselves. * Support Import Overwrite? * Support Import as a new txn with remapping of ROW__IDs? The new writeID can be stored in a delta_x_x/_meta_data and ROW__IDs can be remapped at read time (like isOriginal) and made permanent by compaction. * It doesn't seem reasonable to import acid data into a non-acid table * Perhaps import can work similar to Load Data: look at the file imported; if it has Acid columns, leave a note in the delta_x_x/_meta_data to indicate that these columns should be skipped and new ROW__IDs assigned at read time. was: How should this work? For regular tables export just copies the files under the table root to a specified directory. 
This doesn't make sense for Acid tables: * Some data may belong to aborted transactions * Transaction IDs are embedded into data/file names. You'd have to export delta/ and base/, each of which may have files with the same names, e.g. bucket_0. * On import these IDs won't make sense in a different cluster or even a different table (which may have delta_x_x for the same x, but different data of course). * Export creates a _metadata file (column types, storage format, etc.). Perhaps it can include info about aborted IDs (if the whole file can't be skipped). * Even importing into the same table on the same cluster may be a problem. For example delta_5_5/ existed at the time of export and was included in the export. But 2 days later it may not exist because it was compacted and cleaned. * If importing back into the same table on the same cluster, the data could be imported into a different transaction (assuming per-table writeIDs) w/o having to remap the IDs in the rows themselves. * Support Import Overwrite? * Support Import as a new txn with remapping of ROW__IDs? The new writeID can be stored in a delta_x_x/_meta_data and ROW__IDs can be remapped at read time (like isOriginal) and made permanent by compaction. * It doesn't seem reasonable to import acid data into a non-acid table > Add support for Export/Import for Acid tables > - > > Key: HIVE-18098 > URL: https://issues.apache.org/jira/browse/HIVE-18098 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > How should this work? > For regular tables export just copies the files under the table root to a > specified directory. > This doesn't make sense for Acid tables: > * Some data may belong to aborted transactions > * Transaction IDs are embedded into data/file names. You'd have to export > delta/ and base/, each of which may have files with the same names, e.g. > bucket_0. 
> * On import these IDs won't make sense in a different cluster or even a > different table (which may have delta_x_x for the same x, but > different data of course). > * Export creates a _metadata file (column types, storage format, etc.). Perhaps it > can include info about aborted IDs (if the whole file can't be skipped). > * Even importing into the same table on the same cluster may be a problem. > For example delta_5_5/ existed at the time of export and was included in the > export. But 2 days later it may not exist because it was compacted and > cleaned. > * If importing back into the same table on the same cluster, the data could > be imported into a different transaction (assuming per-table writeIDs) w/o > having to remap the IDs in the rows themselves. > * Support Import Overwrite? > * Support Import as a new txn with remapping of ROW__IDs? The new writeID can > be stored in
[jira] [Commented] (HIVE-17988) Replace patch utility usage with git apply in ptest
[ https://issues.apache.org/jira/browse/HIVE-17988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273429#comment-16273429 ] Sergio Peña commented on HIVE-17988: It looks good. Btw, there is another smart-apply-patch.py file under hive/dev-support. Shouldn't we modify that file too? > Replace patch utility usage with git apply in ptest > --- > > Key: HIVE-17988 > URL: https://issues.apache.org/jira/browse/HIVE-17988 > Project: Hive > Issue Type: Improvement > Components: Testing Infrastructure >Reporter: Zoltan Haindrich >Assignee: Daniel Voros > Attachments: HIVE-17988.1.patch, HIVE-17988.2.patch > > > It would be great to replace the standard diff util because {{git}} can do a > 3-way merge - which in most cases is successful. > This could reduce the number of ptest results which error out because of build > failures. > {code} > error: patch failed: > ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java:7003 > Falling back to three-way merge... > Applied patch to > 'ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java' cleanly. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17552) Enable bucket map join by default
[ https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273413#comment-16273413 ] Hive QA commented on HIVE-17552: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / c03001e | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8066/yetus/patch-asflicense-problems.txt | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8066/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Enable bucket map join by default > - > > Key: HIVE-17552 > URL: https://issues.apache.org/jira/browse/HIVE-17552 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17552.1.patch, HIVE-17552.2.patch, > HIVE-17552.3.patch, HIVE-17552.4.patch, HIVE-17552.5.patch > > > Currently bucket map join is disabled by default; however, it is potentially > the most optimal join we have. We need to enable it by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-16890: - Status: Patch Available (was: Open) > org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous > Wrapper > --- > > Key: HIVE-16890 > URL: https://issues.apache.org/jira/browse/HIVE-16890 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Reporter: BELUGA BEHR >Assignee: Naveen Gangam >Priority: Trivial > Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch, > HIVE-16890.1.patch > > > Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a > superfluous wrapper and then immediately unwraps it. Don't bother wrapping > in this scenario. > {code} > public void set(HiveVarchar val, int len) { > set(val.getValue(), len); > } > public void set(String val, int maxLength) { > value.set(HiveBaseChar.enforceMaxLength(val, maxLength)); > } > public HiveVarchar getHiveVarchar() { > return new HiveVarchar(value.toString(), -1); > } > // Here calls getHiveVarchar() which creates a new HiveVarchar object with > a string in it > // The object is passed to set(HiveVarchar val, int len) > // The string is pulled out > public void enforceMaxLength(int maxLength) { > // Might be possible to truncate the existing Text value, for now just do > something simple. > if (value.getLength()>maxLength && getCharacterLength()>maxLength) > set(getHiveVarchar(), maxLength); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273405#comment-16273405 ] Prasanth Jayachandran commented on HIVE-18088: -- For some reason RB doesn't allow me to upload the rebased patch (it says the diff is invalid although it looks valid). I cannot create a new RB entry either; will retry after some time. > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, > HIVE-18088.WIP.patch > > > For debugging and testing purposes, expose workload manager events via the /jmx > endpoint and print a summary at query scope. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-16890: - Attachment: HIVE-16890.1.patch > org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous > Wrapper > --- > > Key: HIVE-16890 > URL: https://issues.apache.org/jira/browse/HIVE-16890 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Reporter: BELUGA BEHR >Assignee: Naveen Gangam >Priority: Trivial > Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch, > HIVE-16890.1.patch > > > Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a > superfluous wrapper and then immediately unwraps it. Don't bother wrapping > in this scenario. > {code} > public void set(HiveVarchar val, int len) { > set(val.getValue(), len); > } > public void set(String val, int maxLength) { > value.set(HiveBaseChar.enforceMaxLength(val, maxLength)); > } > public HiveVarchar getHiveVarchar() { > return new HiveVarchar(value.toString(), -1); > } > // Here calls getHiveVarchar() which creates a new HiveVarchar object with > a string in it > // The object is passed to set(HiveVarchar val, int len) > // The string is pulled out > public void enforceMaxLength(int maxLength) { > // Might be possible to truncate the existing Text value, for now just do > something simple. > if (value.getLength()>maxLength && getCharacterLength()>maxLength) > set(getHiveVarchar(), maxLength); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam reassigned HIVE-16890: Assignee: BELUGA BEHR (was: Naveen Gangam) > org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous > Wrapper > --- > > Key: HIVE-16890 > URL: https://issues.apache.org/jira/browse/HIVE-16890 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch, > HIVE-16890.1.patch > > > Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a > superfluous wrapper and then immediately unwraps it. Don't bother wrapping > in this scenario. > {code} > public void set(HiveVarchar val, int len) { > set(val.getValue(), len); > } > public void set(String val, int maxLength) { > value.set(HiveBaseChar.enforceMaxLength(val, maxLength)); > } > public HiveVarchar getHiveVarchar() { > return new HiveVarchar(value.toString(), -1); > } > // Here calls getHiveVarchar() which creates a new HiveVarchar object with > a string in it > // The object is passed to set(HiveVarchar val, int len) > // The string is pulled out > public void enforceMaxLength(int maxLength) { > // Might be possible to truncate the existing Text value, for now just do > something simple. > if (value.getLength()>maxLength && getCharacterLength()>maxLength) > set(getHiveVarchar(), maxLength); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam reassigned HIVE-16890: Assignee: Naveen Gangam (was: BELUGA BEHR) > org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous > Wrapper > --- > > Key: HIVE-16890 > URL: https://issues.apache.org/jira/browse/HIVE-16890 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Reporter: BELUGA BEHR >Assignee: Naveen Gangam >Priority: Trivial > Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch > > > Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a > superfluous wrapper and then immediately unwraps it. Don't bother wrapping > in this scenario. > {code} > public void set(HiveVarchar val, int len) { > set(val.getValue(), len); > } > public void set(String val, int maxLength) { > value.set(HiveBaseChar.enforceMaxLength(val, maxLength)); > } > public HiveVarchar getHiveVarchar() { > return new HiveVarchar(value.toString(), -1); > } > // Here calls getHiveVarchar() which creates a new HiveVarchar object with > a string in it > // The object is passed to set(HiveVarchar val, int len) > // The string is pulled out > public void enforceMaxLength(int maxLength) { > // Might be possible to truncate the existing Text value, for now just do > something simple. > if (value.getLength()>maxLength && getCharacterLength()>maxLength) > set(getHiveVarchar(), maxLength); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16890) org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous Wrapper
[ https://issues.apache.org/jira/browse/HIVE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-16890: - Status: Open (was: Patch Available) the last patch did not get picked up by pre-commit runs. Will re-add the patch. > org.apache.hadoop.hive.serde2.io.HiveVarcharWritable - Adds Superfluous > Wrapper > --- > > Key: HIVE-16890 > URL: https://issues.apache.org/jira/browse/HIVE-16890 > Project: Hive > Issue Type: Improvement > Components: Serializers/Deserializers >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16890.1.patch, HIVE-16890.1.patch > > > Class {{org.apache.hadoop.hive.serde2.io.HiveVarcharWritable}} creates a > superfluous wrapper and then immediately unwraps it. Don't bother wrapping > in this scenario. > {code} > public void set(HiveVarchar val, int len) { > set(val.getValue(), len); > } > public void set(String val, int maxLength) { > value.set(HiveBaseChar.enforceMaxLength(val, maxLength)); > } > public HiveVarchar getHiveVarchar() { > return new HiveVarchar(value.toString(), -1); > } > // Here calls getHiveVarchar() which creates a new HiveVarchar object with > a string in it > // The object is passed to set(HiveVarchar val, int len) > // The string is pulled out > public void enforceMaxLength(int maxLength) { > // Might be possible to truncate the existing Text value, for now just do > something simple. > if (value.getLength()>maxLength && getCharacterLength()>maxLength) > set(getHiveVarchar(), maxLength); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
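[Editor's note] The cleanup proposed in HIVE-16890 can be illustrated with a minimal, self-contained sketch. This is not the actual Hive class: {{VarcharWritableSketch}} and its {{StringBuilder}} field are hypothetical stand-ins for {{HiveVarcharWritable}} and its {{Text}} value, and the static {{enforceMaxLength}} mirrors {{HiveBaseChar.enforceMaxLength}}. The point is that {{enforceMaxLength(int)}} can truncate via the underlying string directly, instead of allocating a {{HiveVarchar}} with {{getHiveVarchar()}} only to unwrap it immediately in {{set(HiveVarchar, int)}}.

```java
// Hypothetical simplified sketch of the proposed change; the real classes live in
// org.apache.hadoop.hive.serde2.io and org.apache.hadoop.hive.common.type.
public class VarcharWritableSketch {
    // Stand-in for the org.apache.hadoop.io.Text field named "value".
    private final StringBuilder value = new StringBuilder();

    // Mirrors HiveBaseChar.enforceMaxLength: truncate to maxLength characters.
    static String enforceMaxLength(String val, int maxLength) {
        return (maxLength > 0 && val.length() > maxLength)
            ? val.substring(0, maxLength) : val;
    }

    public void set(String val, int maxLength) {
        value.setLength(0);
        value.append(enforceMaxLength(val, maxLength));
    }

    // Before: set(getHiveVarchar(), maxLength) allocated a wrapper object only to
    // call getValue() on it immediately. After: operate on the String directly.
    public void enforceMaxLength(int maxLength) {
        if (value.length() > maxLength) {
            set(value.toString(), maxLength); // no intermediate wrapper allocation
        }
    }

    public String get() {
        return value.toString();
    }
}
```

The behavior is unchanged; only the throwaway wrapper allocation disappears.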
[jira] [Updated] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18088: - Attachment: HIVE-18088.2.patch Same patch rebased > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.2.patch, > HIVE-18088.WIP.patch > > > For debugging and testing purpose, expose workload manager events via /jmx > endpoint and print summary at the scope of query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18187) Add jamon generated-sources as source folder
[ https://issues.apache.org/jira/browse/HIVE-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273393#comment-16273393 ] Hive QA commented on HIVE-18187: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900054/HIVE-18187.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8065/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8065/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8065/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12900054 - PreCommit-HIVE-Build > Add jamon generated-sources as source folder > > > Key: HIVE-18187 > URL: https://issues.apache.org/jira/browse/HIVE-18187 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Bertalan Kondrat >Assignee: Bertalan Kondrat >Priority: Minor > Attachments: HIVE-18187.patch > > > In IDEA we should manually add the {{target/generated-jamon}} folder as a > source folder to be able to build in the IDE without compilation errors -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273385#comment-16273385 ] Sergey Shelukhin commented on HIVE-18188: - +1 pending tests > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18170) User mapping not initialized correctly on start
[ https://issues.apache.org/jira/browse/HIVE-18170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18170: - Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > User mapping not initialized correctly on start > --- > > Key: HIVE-18170 > URL: https://issues.apache.org/jira/browse/HIVE-18170 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 3.0.0 > > Attachments: HIVE-18170.1.patch, HIVE-18170.2.patch, > HIVE-18170.3.patch, HIVE-18170.4.patch > > > User mapping throws NPE as it is not initialized during HS2 start. > Initial RP is notified in WM c'tor but wm thread has not started yet > resulting in NPE accessing user-pool mapping. > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.isManaged(WorkloadManager.java:1866) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManagerFederation.getSession(WorkloadManagerFederation.java:43) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:169) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2230) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1882) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1613) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1358) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1351) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:252) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:344) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_121] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_121] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) > ~[hadoop-common-2.8.1.jar:?] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_121] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_121] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
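[Editor's note] The hazard described in HIVE-18170 — the initial resource plan is delivered in the WorkloadManager constructor before the WM thread has started, so the user-pool mapping is still null when isManaged() runs — is a classic initialization-order race. A minimal sketch of one way to close that window, assuming nothing about the actual fix in the patch (the names Manager, userPoolMapping, and ready are illustrative only):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of guarding against reads before the worker thread has
// applied the initial plan; not the actual WorkloadManager code.
public class InitOrderSketch {
    static class Manager {
        private final AtomicReference<String> userPoolMapping = new AtomicReference<>();
        private final CountDownLatch ready = new CountDownLatch(1);

        Manager(String initialPlan) {
            // Apply the initial resource plan on the worker thread itself,
            // and record when it has been applied.
            Thread worker = new Thread(() -> {
                userPoolMapping.set(initialPlan);
                ready.countDown();
            });
            worker.setDaemon(true);
            worker.start();
        }

        boolean isManaged(String user) {
            try {
                ready.await(); // block until the mapping is initialized: no NPE window
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return userPoolMapping.get() != null;
        }
    }
}
```

Without the latch (or an equivalent happens-before guard), a query arriving between constructor return and the worker's first iteration would observe a null mapping, which matches the stack trace above.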
[jira] [Updated] (HIVE-18170) User mapping not initialized correctly on start
[ https://issues.apache.org/jira/browse/HIVE-18170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18170: - Attachment: HIVE-18170.4.patch Rebased patch committed to master. Thanks for the reviews! > User mapping not initialized correctly on start > --- > > Key: HIVE-18170 > URL: https://issues.apache.org/jira/browse/HIVE-18170 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 3.0.0 > > Attachments: HIVE-18170.1.patch, HIVE-18170.2.patch, > HIVE-18170.3.patch, HIVE-18170.4.patch > > > User mapping throws NPE as it is not initialized during HS2 start. > Initial RP is notified in WM c'tor but wm thread has not started yet > resulting in NPE accessing user-pool mapping. > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.isManaged(WorkloadManager.java:1866) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManagerFederation.getSession(WorkloadManagerFederation.java:43) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:169) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2230) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1882) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1613) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1358) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > 
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1351) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:252) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:344) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_121] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_121] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) > ~[hadoop-common-2.8.1.jar:?] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_121] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_121] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273366#comment-16273366 ] Prasanth Jayachandran commented on HIVE-18188: -- [~kgyrtkirk]/[~sershe] Can someone please take a look? The failure was in getActiveResourcePlan(), which should not be called when the workload manager is not enabled. > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18173: --- Attachment: (was: HIVE-18173.2.patch) > Improve plans for correlated subqueries with non-equi predicate > --- > > Key: HIVE-18173 > URL: https://issues.apache.org/jira/browse/HIVE-18173 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch > > > HIVE-17767 optimized plan to not generate value generator (i.e. an extra join > with outer query to fetch correlated columns) for EQUAL and NOT EQUAL > predicates e.g. > {code:sql} > select * from src b where b.key in (select key from src a where b.value <> > a.value) > {code} > This should be improved and implemented for rest of the predicates e.g. LESS > THAN etc -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18173: --- Status: Patch Available (was: Open) > Improve plans for correlated subqueries with non-equi predicate > --- > > Key: HIVE-18173 > URL: https://issues.apache.org/jira/browse/HIVE-18173 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch > > > HIVE-17767 optimized plan to not generate value generator (i.e. an extra join > with outer query to fetch correlated columns) for EQUAL and NOT EQUAL > predicates e.g. > {code:sql} > select * from src b where b.key in (select key from src a where b.value <> > a.value) > {code} > This should be improved and implemented for rest of the predicates e.g. LESS > THAN etc -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18188: - Status: Patch Available (was: Open) > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18173: --- Attachment: HIVE-18173.2.patch > Improve plans for correlated subqueries with non-equi predicate > --- > > Key: HIVE-18173 > URL: https://issues.apache.org/jira/browse/HIVE-18173 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch > > > HIVE-17767 optimized plan to not generate value generator (i.e. an extra join > with outer query to fetch correlated columns) for EQUAL and NOT EQUAL > predicates e.g. > {code:sql} > select * from src b where b.key in (select key from src a where b.value <> > a.value) > {code} > This should be improved and implemented for rest of the predicates e.g. LESS > THAN etc -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18188: - Attachment: HIVE-18188.1.patch > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18188.1.patch > > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18173) Improve plans for correlated subqueries with non-equi predicate
[ https://issues.apache.org/jira/browse/HIVE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18173: --- Status: Open (was: Patch Available) > Improve plans for correlated subqueries with non-equi predicate > --- > > Key: HIVE-18173 > URL: https://issues.apache.org/jira/browse/HIVE-18173 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18173.1.patch, HIVE-18173.2.patch > > > HIVE-17767 optimized plan to not generate value generator (i.e. an extra join > with outer query to fetch correlated columns) for EQUAL and NOT EQUAL > predicates e.g. > {code:sql} > select * from src b where b.key in (select key from src a where b.value <> > a.value) > {code} > This should be improved and implemented for rest of the predicates e.g. LESS > THAN etc -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18188) Fix TestSSL failures in master
[ https://issues.apache.org/jira/browse/HIVE-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-18188: > Fix TestSSL failures in master > -- > > Key: HIVE-18188 > URL: https://issues.apache.org/jira/browse/HIVE-18188 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > > HIVE-18170 broke TestSSL tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-15433) setting hive.warehouse.subdir.inherit.perms in HIVE won't overwrite it in hive configuration
[ https://issues.apache.org/jira/browse/HIVE-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vlad Gudikov reassigned HIVE-15433: --- Assignee: Vlad Gudikov (was: Alina Abramova) > setting hive.warehouse.subdir.inherit.perms in HIVE won't overwrite it in > hive configuration > > > Key: HIVE-15433 > URL: https://issues.apache.org/jira/browse/HIVE-15433 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 1.0.0, 1.2.0, 2.0.0 >Reporter: Alina Abramova >Assignee: Vlad Gudikov > Fix For: 1.2.0 > > Attachments: HIVE-15433-branch-1.2.patch, HIVE-15433.1.patch > > > Setting hive.warehouse.subdir.inherit.perms in HIVE won't have any effect. It > will always take the default value from HiveConf until you define it in > hive-site.xml. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18068: --- Attachment: HIVE-18068.04.patch > Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, > HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18170) User mapping not initialized correctly on start
[ https://issues.apache.org/jira/browse/HIVE-18170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273328#comment-16273328 ] Sergey Shelukhin commented on HIVE-18170: - +1 > User mapping not initialized correctly on start > --- > > Key: HIVE-18170 > URL: https://issues.apache.org/jira/browse/HIVE-18170 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18170.1.patch, HIVE-18170.2.patch, > HIVE-18170.3.patch > > > User mapping throws NPE as it is not initialized during HS2 start. > Initial RP is notified in WM c'tor but wm thread has not started yet > resulting in NPE accessing user-pool mapping. > {code} > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.isManaged(WorkloadManager.java:1866) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManagerFederation.getSession(WorkloadManagerFederation.java:43) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:169) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2230) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1882) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1613) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1358) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1351) > 
~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:252) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:344) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_121] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_121] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) > ~[hadoop-common-2.8.1.jar:?] > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) > ~[hive-service-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_121] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_121] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_121] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
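The initialization race in the stack trace above can be sketched outside Hive. The class and field names below are invented for illustration (this is not the actual WorkloadManager code): a mapping populated by a background thread must not be readable before that thread has applied the initial resource plan, and one common remedy is to gate readers on an initialization latch rather than risk dereferencing a not-yet-populated structure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Illustrative sketch only: "UserPoolMapping" and its members are invented
// names, not the real WorkloadManager internals.
public class UserPoolMapping {
    private final Map<String, String> userToPool = new ConcurrentHashMap<>();
    private final CountDownLatch initialized = new CountDownLatch(1);

    // Called by the background WM thread once the initial resource plan
    // has actually been applied.
    public void applyInitialPlan(Map<String, String> mapping) {
        userToPool.putAll(mapping);
        initialized.countDown(); // readers may proceed only after this point
    }

    // Callers block until initialization completes instead of reading a
    // not-yet-populated mapping and hitting an NPE.
    public String poolFor(String user) {
        try {
            initialized.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return userToPool.getOrDefault(user, "default");
    }
}
```

With this shape, a query arriving before the WM thread has run simply waits for the plan instead of failing with a NullPointerException.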
[jira] [Commented] (HIVE-18005) Improve size estimation for array() to be not 0
[ https://issues.apache.org/jira/browse/HIVE-18005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273324#comment-16273324 ] Vineet Garg commented on HIVE-18005: Looks good to me. +1 > Improve size estimation for array() to be not 0 > --- > > Key: HIVE-18005 > URL: https://issues.apache.org/jira/browse/HIVE-18005 > Project: Hive > Issue Type: Sub-task > Components: Statistics >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-18005.01.patch, HIVE-18005.01.patch, > HIVE-18005.02.patch > > > This happens only when the array does not come from a column and contains > no column references: > {code} > EXPLAIN > SELECT sort_array(array("b", "d", "c", "a")),array("1","2") FROM t > ... > Statistics: Num rows: 1 Data size: 0 Basic stats: COMPLETE > Column stats: COMPLETE > ListSink > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
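The direction of the fix can be illustrated with a toy estimator. All constants below (object overheads, per-character width) are invented for illustration and do not match Hive's actual StatsRulesProcFactory logic; the point is only that a constant array() expression should receive a non-zero data-size estimate summed from rough per-element sizes.

```java
import java.util.List;

// Toy estimator; the overhead constants are illustrative, not Hive's.
public class ConstantArraySizeEstimator {
    static long estimateString(String s) {
        return 16 + 2L * s.length(); // assumed object header + UTF-16 chars
    }

    // A constant array() with no column references should still get a
    // non-zero data-size estimate instead of 0.
    static long estimateConstantArray(List<String> elements) {
        long total = 32; // assumed list-object overhead
        for (String e : elements) {
            total += estimateString(e);
        }
        return total;
    }
}
```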
[jira] [Updated] (HIVE-17870) Update NoDeleteRollingFileAppender to use Log4j2 api
[ https://issues.apache.org/jira/browse/HIVE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-17870: -- Attachment: HIVE-17870.2.patch Fix minor style issues found by Yetus > Update NoDeleteRollingFileAppender to use Log4j2 api > > > Key: HIVE-17870 > URL: https://issues.apache.org/jira/browse/HIVE-17870 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Andrew Sherman > Attachments: HIVE-17870.1.patch, HIVE-17870.2.patch > > > NoDeleteRollingFileAppender is still using the Log4j v1 API. Since we already > moved to Log4j 2 in Hive, we should update this appender to use the Log4j 2 API as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18088: - Attachment: HIVE-18088.1.patch I was not able to get the tests to work reliably as qfile tests, so I piggybacked on the existing JDBC tests for triggers. Also added optional JSON printing of the event trace. > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.WIP.patch > > > For debugging and testing purposes, expose workload manager events via the /jmx > endpoint and print a summary at query scope. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18088: - Status: Patch Available (was: Open) > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.1.patch, HIVE-18088.WIP.patch > > > For debugging and testing purposes, expose workload manager events via the /jmx > endpoint and print a summary at query scope. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18187) Add jamon generated-sources as source folder
[ https://issues.apache.org/jira/browse/HIVE-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273294#comment-16273294 ] Hive QA commented on HIVE-18187: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 7m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 7dfbbd8 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8065/yetus/patch-asflicense-problems.txt | | modules | C: service U: service | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8065/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Add jamon generated-sources as source folder > > > Key: HIVE-18187 > URL: https://issues.apache.org/jira/browse/HIVE-18187 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Bertalan Kondrat >Assignee: Bertalan Kondrat >Priority: Minor > Attachments: HIVE-18187.patch > > > In IDEA we should manually add the {{target/generated-jamon}} folder as a > source folder to be able to build in the IDE without compilation errors -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18068: --- Attachment: HIVE-18068.03.patch > Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.03.patch, HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17361) Support LOAD DATA for transactional tables
[ https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273282#comment-16273282 ] Alan Gates commented on HIVE-17361: --- Put a couple of comments in review board. > Support LOAD DATA for transactional tables > -- > > Key: HIVE-17361 > URL: https://issues.apache.org/jira/browse/HIVE-17361 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Wei Zheng >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-17361.07.patch, HIVE-17361.08.patch, > HIVE-17361.09.patch, HIVE-17361.1.patch, HIVE-17361.10.patch, > HIVE-17361.11.patch, HIVE-17361.12.patch, HIVE-17361.14.patch, > HIVE-17361.16.patch, HIVE-17361.17.patch, HIVE-17361.19.patch, > HIVE-17361.2.patch, HIVE-17361.20.patch, HIVE-17361.21.patch, > HIVE-17361.23.patch, HIVE-17361.24.patch, HIVE-17361.25.patch, > HIVE-17361.3.patch, HIVE-17361.4.patch > > > LOAD DATA has not been supported since ACID was introduced. This gap > between ACID tables and regular Hive tables needs to be filled. > Current Documentation is under [DML > Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations] > and [Loading files into > tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]: > \\ > * Load Data performs very limited validations of the data; in particular it > uses the input file name, which may not be in the 0_0 form, which can break some > read logic. (It certainly will for Acid.) > * It does not check the schema of the file. This may be a non-issue for Acid, > which requires ORC, which is self-describing, so Schema Evolution may handle > this seamlessly (assuming the schema is not too different). > * It does check that _InputFormat_S are compatible. > * Bucketed (and thus sorted) tables don't support Load Data (but only if > hive.strict.checks.bucketing=true (default)). Will keep this restriction for > Acid. 
> * Load Data supports OVERWRITE clause > * What happens to file permissions/ownership: rename vs copy differences > \\ > The implementation will follow the same idea as in HIVE-14988 and use a > base_N/ dir for the OVERWRITE clause. > \\ > How is minor compaction going to handle delta/base with original files? > Since delta_8_8/_meta_data is created before files are moved, delta_8_8 > becomes visible before it's populated. Is that an issue? > It's not, since txn 8 is not committed. > h3. Implementation Notes/Limitations (patch 25) > * bucketed/sorted tables are not supported > * input file names must be of the form 0_0/0_0_copy_1 - enforced. > (HIVE-18125) > * Load Data creates a delta_x_x/ that contains the new files > * Load Data w/Overwrite creates a base_x/ that contains the new files > * A '_metadata_acid' file is placed in the target directory to indicate it > requires special handling on read > * The input files must be 'plain' ORC files, i.e. they must not contain acid metadata > columns, as would be the case if these files were copied from another Acid > table. In the latter case, the ROW_IDs embedded in the data may not make > sense in the target table (if it's in a different cluster, for example). > Such files may also have a mix of committed and aborted data. > ** this could be relaxed later by adding info to the _metadata_acid file to > ignore existing ROW_IDs on read > * ROW_IDs are attached dynamically at read time and made permanent by > compaction. This is done the same way as the handling of files that were > written to a table before it was converted to Acid. > * Vectorization is supported -- This message was sent by Atlassian JIRA (v6.4.14#64029)
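The delta/base layout from the notes above can be sketched as follows. The zero-padding width, directory-name format, and marker-file handling here are simplified guesses for illustration, not Hive's exact writeId/statementId encoding:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified sketch of the layout LOAD DATA produces on an ACID table.
public class AcidLoadLayout {
    public static Path loadData(Path tableDir, long writeId, boolean overwrite) {
        String name = overwrite
            ? String.format("base_%07d", writeId)                 // LOAD DATA ... OVERWRITE
            : String.format("delta_%07d_%07d", writeId, writeId); // plain LOAD DATA
        try {
            Path target = tableDir.resolve(name);
            Files.createDirectories(target);
            Files.createFile(target.resolve("000000_0"));       // enforced input file name
            Files.createFile(target.resolve("_metadata_acid")); // marks dir for special read handling
            return target;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Helper for experimentation: a throwaway table directory.
    public static Path demoTableDir() {
        try {
            return Files.createTempDirectory("acid_load_demo");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The design point the notes make is visible here: a plain load lands in a new delta_x_x/ (additive), while OVERWRITE lands in a new base_x/ (replacing), and the _metadata_acid marker tells readers the files lack embedded ROW_IDs.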
[jira] [Commented] (HIVE-18036) Stats: Remove usage of clone() methods
[ https://issues.apache.org/jira/browse/HIVE-18036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273263#comment-16273263 ] Hive QA commented on HIVE-18036: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900047/HIVE-18036.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8064/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8064/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8064/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-11-30 19:37:21.333 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8064/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-11-30 19:37:21.336 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive a5d5473..98b2d2e branch-2 -> origin/branch-2 5f532ac..b921f99 branch-2.2 -> origin/branch-2.2 + git reset --hard HEAD HEAD is now at 7dfbbd8 HIVE-14792: AvroSerde reads the remote schema-file at least once per mapper, per table reference. (Mithun Radhakrishnan, reviewed by Aihua Xu) + git clean -f -d Removing common/src/java/org/apache/hadoop/hive/conf/HiveConf.java.orig Removing ql/src/test/queries/clientpositive/testSetQueryString.q Removing ql/src/test/results/clientpositive/testSetQueryString.q.out + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 7dfbbd8 HIVE-14792: AvroSerde reads the remote schema-file at least once per mapper, per table reference. (Mithun Radhakrishnan, reviewed by Aihua Xu) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-11-30 19:37:29.845 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8064/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: patch -p0 patching file ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java patching file ql/src/java/org/apache/hadoop/hive/ql/optimizer/stats/annotation/StatsRulesProcFactory.java Hunk #1 succeeded at 276 (offset 10 lines). Hunk #2 succeeded at 502 (offset 10 lines). Hunk #3 succeeded at 535 (offset 10 lines). Hunk #4 succeeded at 834 (offset 10 lines). Hunk #5 succeeded at 1578 (offset 18 lines). 
Hunk #6 succeeded at 1670 (offset 18 lines). + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: protoc version: 250, detected platform: linux/amd64 protoc-jar: executing: [/tmp/protoc923109931172528862.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.Genera
[jira] [Commented] (HIVE-18166) Result of hive.query.string is encoded.
[ https://issues.apache.org/jira/browse/HIVE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273259#comment-16273259 ] Hive QA commented on HIVE-18166: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900044/HIVE-18166.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11482 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8063/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8063/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8063/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12900044 - PreCommit-HIVE-Build > Result of hive.query.string is encoded. 
> --- > > Key: HIVE-18166 > URL: https://issues.apache.org/jira/browse/HIVE-18166 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18166.1.patch, HIVE-18166.2.patch > > > set hive.query.string returns an encoded string: > hive.query.string=%0A%0Aselect+*+from+t1 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
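The escaping shown in the report ({{%0A%0Aselect+*+from+t1}}) is standard application/x-www-form-urlencoded encoding, where %0A is a newline and '+' is a space, so the symptom can be reproduced and undone with the JDK alone. The class name below is invented for illustration and is not part of the Hive patch:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Illustrative decoder for the symptom above: the stored value is
// form-URL-encoded, so decoding restores the original query text.
// Uses the Charset overload of URLDecoder.decode (Java 10+).
public class QueryStringDecoder {
    public static String decode(String encoded) {
        return URLDecoder.decode(encoded, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode("%0A%0Aselect+*+from+t1"));
    }
}
```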
[jira] [Commented] (HIVE-18088) Add WM event traces at query level for debugging
[ https://issues.apache.org/jira/browse/HIVE-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273237#comment-16273237 ] Prasanth Jayachandran commented on HIVE-18088: -- Added optional JSON printing, which will look like: {code} { "queryId" : "pjayachandran_20171130112601_901e788a-1b0c-4255-9b27-e6d31bdf8d6e", "queryStartTime" : 1512069961112, "queryEndTime" : 1512069968854, "queryCompleted" : true, "queryWmEvents" : [ { "wmTezSessionInfo" : { "sessionId" : "4876920a-dcf3-40bf-ab61-d9efbb515ae1", "poolName" : "default.bi", "clusterPercent" : 80.0 }, "eventTimestamp" : 1512069963983, "eventType" : "GET" }, { "wmTezSessionInfo" : { "sessionId" : "4876920a-dcf3-40bf-ab61-d9efbb515ae1", "poolName" : "default.etl", "clusterPercent" : 20.0 }, "eventTimestamp" : 1512069965254, "eventType" : "MOVE" }, { "wmTezSessionInfo" : { "sessionId" : "4876920a-dcf3-40bf-ab61-d9efbb515ae1", "poolName" : null, "clusterPercent" : 0.0 }, "eventTimestamp" : 1512069968889, "eventType" : "RETURN" } ], "appliedTriggers" : [ "{ name: slow_in_bi, expression: ELAPSED_TIME > 1000, action: MOVE TO default.etl }" ], "appliedTriggersNames" : [ "slow_in_bi" ], "desiredCounters" : [ "ELAPSED_TIME" ], "currentCounters" : { "ELAPSED_TIME" : 7739 } } {code} > Add WM event traces at query level for debugging > > > Key: HIVE-18088 > URL: https://issues.apache.org/jira/browse/HIVE-18088 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18088.WIP.patch > > > For debugging and testing purposes, expose workload manager events via the /jmx > endpoint and print a summary at query scope. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18068) Upgrade to Calcite 1.15
[ https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18068: --- Summary: Upgrade to Calcite 1.15 (was: Replace LocalInterval by Interval in Druid storage handler) > Upgrade to Calcite 1.15 > --- > > Key: HIVE-18068 > URL: https://issues.apache.org/jira/browse/HIVE-18068 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-18068.2.patch, HIVE-18068.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18025) Push resource plan changes to tez/unmanaged sessions
[ https://issues.apache.org/jira/browse/HIVE-18025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273203#comment-16273203 ] Prasanth Jayachandran commented on HIVE-18025: -- [~kgyrtkirk] Thanks for reporting. Will take a look. > Push resource plan changes to tez/unmanaged sessions > > > Key: HIVE-18025 > URL: https://issues.apache.org/jira/browse/HIVE-18025 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 3.0.0 > > Attachments: HIVE-18025.1.patch, HIVE-18025.2.patch, > HIVE-18025.3.patch, HIVE-18025.4.patch > > > This is to remove MetastoreGlobalTriggersFetcher and make changes so that > DDLTask can push RP changes to tez session pool manager as well for updating > the triggers. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18025) Push resource plan changes to tez/unmanaged sessions
[ https://issues.apache.org/jira/browse/HIVE-18025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273199#comment-16273199 ] Zoltan Haindrich commented on HIVE-18025: - [~prasanth_j]: it looks like this patch has broken those cases in {{TestSSL}} > Push resource plan changes to tez/unmanaged sessions > > > Key: HIVE-18025 > URL: https://issues.apache.org/jira/browse/HIVE-18025 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 3.0.0 > > Attachments: HIVE-18025.1.patch, HIVE-18025.2.patch, > HIVE-18025.3.patch, HIVE-18025.4.patch > > > This is to remove MetastoreGlobalTriggersFetcher and make changes so that > DDLTask can push RP changes to the tez session pool manager as well for updating > the triggers. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18166) Result of hive.query.string is encoded.
[ https://issues.apache.org/jira/browse/HIVE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273154#comment-16273154 ] Hive QA commented on HIVE-18166: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle 
{color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 7dfbbd8 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-8063/yetus/patch-asflicense-problems.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8063/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Result of hive.query.string is encoded. > --- > > Key: HIVE-18166 > URL: https://issues.apache.org/jira/browse/HIVE-18166 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18166.1.patch, HIVE-18166.2.patch > > > set hive.query.string returns an encoded string: > hive.query.string=%0A%0Aselect+*+from+t1 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.
[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mithun Radhakrishnan updated HIVE-14792: Resolution: Fixed Fix Version/s: 2.2.1 2.4.0 3.0.0 Status: Resolved (was: Patch Available) > AvroSerde reads the remote schema-file at least once per mapper, per table > reference. > - > > Key: HIVE-14792 > URL: https://issues.apache.org/jira/browse/HIVE-14792 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Fix For: 3.0.0, 2.4.0, 2.2.1 > > Attachments: HIVE-14792.1.patch > > > Avro tables that use "external" schema files stored on HDFS can cause > excessive calls to {{FileSystem::open()}}, especially for queries that spawn > large numbers of mappers. > This is because of the following code in {{AvroSerDe::initialize()}}: > {code:title=AvroSerDe.java|borderStyle=solid} > public void initialize(Configuration configuration, Properties properties) > throws SerDeException { > // ... > if (hasExternalSchema(properties) > || columnNameProperty == null || columnNameProperty.isEmpty() > || columnTypeProperty == null || columnTypeProperty.isEmpty()) { > schema = determineSchemaOrReturnErrorSchema(configuration, properties); > } else { > // Get column names and sort order > columnNames = Arrays.asList(columnNameProperty.split(",")); > columnTypes = > TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty); > schema = getSchemaFromCols(properties, columnNames, columnTypes, > columnCommentProperty); > > properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), > schema.toString()); > } > // ... > } > {code} > For tables using {{avro.schema.url}}, every time the SerDe is initialized > (i.e. at least once per mapper), the schema file is read remotely. For > queries with thousands of mappers, this leads to a stampede to the handful > (3?) datanodes that host the schema-file. 
In the best case, this causes > slowdowns. > It would be preferable to distribute the Avro-schema to all mappers as part > of the job-conf. The alternatives aren't exactly appealing: > # One can't rely solely on the {{column.list.types}} stored in the Hive > metastore. (HIVE-14789). > # {{avro.schema.literal}} might not always be usable, because of the > size-limit on table-parameters. The typical size of the Avro-schema file is > between 0.5-3MB, in my limited experience. Bumping the max table-parameter > size isn't a great solution. > If the {{avro.schema.file}} were read during query-planning, and made > available as part of table-properties (but not serialized into the > metastore), the downstream logic will remain largely intact. I have a patch > that does this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
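The fix direction the report describes (dereference the schema URL once at planning time and carry the result in table properties) can be sketched generically. The property keys below follow the Avro SerDe convention quoted above, but the method, its name, and the single-read guarantee are illustrative, not the actual HIVE-14792 patch:

```java
import java.util.Properties;
import java.util.function.Function;

// Sketch: inline avro.schema.url as avro.schema.literal during query
// planning, so per-mapper SerDe initialization never re-opens the remote
// schema file. "AvroSchemaInliner" is an invented name for illustration.
public class AvroSchemaInliner {
    static final String SCHEMA_URL = "avro.schema.url";
    static final String SCHEMA_LITERAL = "avro.schema.literal";

    public static void inline(Properties tableProps, Function<String, String> fetchRemote) {
        String url = tableProps.getProperty(SCHEMA_URL);
        if (url != null && tableProps.getProperty(SCHEMA_LITERAL) == null) {
            // single remote read; the literal then travels with the job conf
            tableProps.setProperty(SCHEMA_LITERAL, fetchRemote.apply(url));
        }
    }
}
```

Because the literal is set only when absent, repeated initialization is idempotent: thousands of mappers see the inlined schema rather than stampeding the datanodes hosting the schema file.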
[jira] [Commented] (HIVE-14792) AvroSerde reads the remote schema-file at least once per mapper, per table reference.
[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273151#comment-16273151 ] Mithun Radhakrishnan commented on HIVE-14792: - Committed to {{master}}, {{branch-2}}, and {{branch-2.2}}. Thank you for the review, [~aihuaxu]! > AvroSerde reads the remote schema-file at least once per mapper, per table > reference. > - > > Key: HIVE-14792 > URL: https://issues.apache.org/jira/browse/HIVE-14792 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-14792.1.patch > > > Avro tables that use "external" schema files stored on HDFS can cause > excessive calls to {{FileSystem::open()}}, especially for queries that spawn > large numbers of mappers. > This is because of the following code in {{AvroSerDe::initialize()}}: > {code:title=AvroSerDe.java|borderStyle=solid} > public void initialize(Configuration configuration, Properties properties) > throws SerDeException { > // ... > if (hasExternalSchema(properties) > || columnNameProperty == null || columnNameProperty.isEmpty() > || columnTypeProperty == null || columnTypeProperty.isEmpty()) { > schema = determineSchemaOrReturnErrorSchema(configuration, properties); > } else { > // Get column names and sort order > columnNames = Arrays.asList(columnNameProperty.split(",")); > columnTypes = > TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty); > schema = getSchemaFromCols(properties, columnNames, columnTypes, > columnCommentProperty); > > properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), > schema.toString()); > } > // ... > } > {code} > For tables using {{avro.schema.url}}, every time the SerDe is initialized > (i.e. at least once per mapper), the schema file is read remotely. For > queries with thousands of mappers, this leads to a stampede to the handful > (3?) datanodes that host the schema-file. 
In the best case, this causes > slowdowns. > It would be preferable to distribute the Avro-schema to all mappers as part > of the job-conf. The alternatives aren't exactly appealing: > # One can't rely solely on the {{column.list.types}} stored in the Hive > metastore. (HIVE-14789). > # {{avro.schema.literal}} might not always be usable, because of the > size-limit on table-parameters. The typical size of the Avro-schema file is > between 0.5-3MB, in my limited experience. Bumping the max table-parameter > size isn't a great solution. > If the {{avro.schema.file}} were read during query-planning, and made > available as part of table-properties (but not serialized into the > metastore), the downstream logic will remain largely intact. I have a patch > that does this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
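The proposed fix (resolve the schema once at planning time, then propagate it as {{avro.schema.literal}} so every per-mapper SerDe initialization reuses the literal instead of re-opening the remote file) can be sketched roughly as follows. This is an illustrative stand-in, not Hive's actual {{AvroSerDe}} code; the class, method, and counter are assumptions for the sketch.

```java
import java.util.Properties;

// Sketch of the proposed behavior: fetch the remote schema once, cache it
// under avro.schema.literal in the table properties, and let all later
// initializations (one per mapper) short-circuit on the cached literal.
// All names here are illustrative, not Hive's actual API.
public class SchemaResolutionSketch {
    static int remoteReads = 0;

    // Stand-in for an HDFS open()/read of the avro.schema.url file.
    static String fetchRemoteSchema(String url) {
        remoteReads++;
        return "{\"type\":\"record\",\"name\":\"t\",\"fields\":[]}";
    }

    static String determineSchema(Properties tableProps) {
        String literal = tableProps.getProperty("avro.schema.literal");
        if (literal != null) {
            return literal;                       // no remote call needed
        }
        String schema = fetchRemoteSchema(tableProps.getProperty("avro.schema.url"));
        // Propagate via the job-conf/table properties so downstream
        // SerDe initializations see the literal instead of the URL.
        tableProps.setProperty("avro.schema.literal", schema);
        return schema;
    }
}
```

With this shape, a query spawning thousands of mappers performs one remote read during planning instead of one per mapper, which removes the stampede on the datanodes hosting the schema file.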
[jira] [Commented] (HIVE-18186) Fix wrong assertion in TestHiveMetaStoreAlterColumnPar test
[ https://issues.apache.org/jira/browse/HIVE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273118#comment-16273118 ] Hive QA commented on HIVE-18186: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900040/HIVE-18186.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8062/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8062/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8062/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12900040 - PreCommit-HIVE-Build > Fix wrong assertion in TestHiveMetaStoreAlterColumnPar test > --- > > Key: HIVE-18186 > URL: https://issues.apache.org/jira/browse/HIVE-18186 > Project: Hive > Issue Type: Test > Components: Test >Reporter: Bertalan Kondrat >Assignee: Bertalan Kondrat >Priority: Minor > Attachments: HIVE-18186.patch > > > HIVE-17942 introduced a new test, {{TestHiveMetaStoreAlterColumnPar}}, but it > uses a wrong assertion, which absorbs all exceptions. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
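The actual broken assertion is not shown in this thread; the following is a generic, hypothetical illustration of the anti-pattern being described, where a broad catch around the checked action means the "test" can never fail:

```java
// Generic illustration (NOT the actual TestHiveMetaStoreAlterColumnPar code)
// of an assertion that absorbs all exceptions: the broken variant reports
// success whether or not the action under test throws.
public class AssertionAbsorption {
    static boolean brokenCheck(Runnable action) {
        try {
            action.run();
            return true;
        } catch (Throwable t) {
            return true;   // failure swallowed: the check always "passes"
        }
    }

    static boolean fixedCheck(Runnable action) {
        try {
            action.run();
            return true;
        } catch (Throwable t) {
            return false;  // failure surfaces to the test runner
        }
    }
}
```

The broken variant returns the same result for a throwing and a non-throwing action, which is exactly why such a test silently passes even when the code under test is wrong.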
[jira] [Commented] (HIVE-18186) Fix wrong assertion in TestHiveMetaStoreAlterColumnPar test
[ https://issues.apache.org/jira/browse/HIVE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273088#comment-16273088 ] Janaki Lahorani commented on HIVE-18186: Thanks [~k0b3rit] for identifying and fixing the issue with the test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18186) Fix wrong assertion in TestHiveMetaStoreAlterColumnPar test
[ https://issues.apache.org/jira/browse/HIVE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273085#comment-16273085 ] Vihang Karajgaonkar commented on HIVE-18186: [~k0b3rit] Thanks for fixing the test. +1 (pending tests) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18186) Fix wrong assertion in TestHiveMetaStoreAlterColumnPar test
[ https://issues.apache.org/jira/browse/HIVE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273005#comment-16273005 ] Hive QA commented on HIVE-18186: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 2efb7d3 | | Default Java | 1.8.0_111 | | modules | C: itests/hive-unit U: itests/hive-unit | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8062/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18036) Stats: Remove usage of clone() methods
[ https://issues.apache.org/jira/browse/HIVE-18036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272979#comment-16272979 ] Hive QA commented on HIVE-18036: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900047/HIVE-18036.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8061/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8061/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8061/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-11-30 17:16:07.477 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8061/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-11-30 17:16:07.479 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 2efb7d3 HIVE-17972: Implement Parquet vectorization reader for Map type (Colin Ma, reviewed by Ferdinand Xu) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 2efb7d3 HIVE-17972: Implement Parquet vectorization reader for Map type (Colin Ma, reviewed by Ferdinand Xu) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-11-30 17:16:14.270 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8061/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: patch -p0 patching file ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java patching file ql/src/java/org/apache/hadoop/hive/ql/optimizer/stats/annotation/StatsRulesProcFactory.java Hunk #1 succeeded at 276 (offset 10 lines). Hunk #2 succeeded at 502 (offset 10 lines). Hunk #3 succeeded at 535 (offset 10 lines). Hunk #4 succeeded at 834 (offset 10 lines). Hunk #5 succeeded at 1578 (offset 18 lines). Hunk #6 succeeded at 1670 (offset 18 lines). 
+ [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: protoc version: 250, detected platform: linux/amd64 protoc-jar: executing: [/tmp/protoc5358375346517534044.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.General). log4j:WARN Please initialize the log4j system properly. DataNucleus Enhancer (version 4.1.17) for API "JDO" DataNucleus Enhancer : Classpath >> /usr/share/maven/boot/plexus-classworlds-2.x.jar ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MDatabase ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MFieldSchema ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MType ENHANCED (Per
[jira] [Updated] (HIVE-18187) Add jamon generated-sources as source folder
[ https://issues.apache.org/jira/browse/HIVE-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bertalan Kondrat updated HIVE-18187: Status: Patch Available (was: Open) > Add jamon generated-sources as source folder > > > Key: HIVE-18187 > URL: https://issues.apache.org/jira/browse/HIVE-18187 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Bertalan Kondrat >Assignee: Bertalan Kondrat >Priority: Minor > Attachments: HIVE-18187.patch > > > In IDEA we currently have to add the {{target/generated-jamon}} folder manually as a > source folder to be able to build in the IDE without compilation errors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
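One way to make the generated folder visible to IDEs without a manual step is to register it in the build itself. A minimal sketch using {{build-helper-maven-plugin}} follows; the plugin version and the exact pom this belongs in are assumptions, not taken from the patch:

```xml
<!-- Sketch: register target/generated-jamon as an additional source root
     so IDEs importing the Maven model pick it up automatically. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>add-jamon-generated-sources</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>${project.build.directory}/generated-jamon</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
```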
[jira] [Updated] (HIVE-18187) Add jamon generated-sources as source folder
[ https://issues.apache.org/jira/browse/HIVE-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bertalan Kondrat updated HIVE-18187: Attachment: HIVE-18187.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-13567) Enable auto-gather column stats by default
[ https://issues.apache.org/jira/browse/HIVE-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272971#comment-16272971 ] Hive QA commented on HIVE-13567: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12900025/HIVE-13567.23wip08.patch {color:green}SUCCESS:{color} +1 due to 44 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 11481 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_13] (batchId=64) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_boolean] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_complex_join] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_mapjoin] (batchId=56) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_13] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainanalyze_2] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_nonvec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_nonvec_part_all_primitive] (batchId=170) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_vec_part_all_primitive] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_nonvec_part] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_nonvec_part_all_primitive] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part_all_primitive] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vecrow_part] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vecrow_part_all_primitive] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[unionDistinct_3] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_complex_join] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_mapjoin] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_insert_into_bucketed_table] (batchId=156) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3] (batchId=102) org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[bucket_num_reducers_acid2] (batchId=90) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=93) org.apache.hadoop.hive.ql.TestAcidOnTez.testBucketedAcidInsertWithRemoveUnion (batchId=224) 
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=227) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=233) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=233) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=233) org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel (batchId=231) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8060/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8060/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8060/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 37 tests failed {noformat} This message is automatically generated. ATTACH
[jira] [Updated] (HIVE-17552) Enable bucket map join by default
[ https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-17552: -- Attachment: HIVE-17552.5.patch > Enable bucket map join by default > - > > Key: HIVE-17552 > URL: https://issues.apache.org/jira/browse/HIVE-17552 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17552.1.patch, HIVE-17552.2.patch, > HIVE-17552.3.patch, HIVE-17552.4.patch, HIVE-17552.5.patch > > > Currently bucket map join is disabled by default; however, it is potentially the > most efficient join we have. We need to enable it by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17856) MM tables - IOW is not ACID compliant
[ https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272964#comment-16272964 ] Steve Yeom commented on HIVE-17856: --- It looks like all pre-commit test failures are gone with the last patch 17. > MM tables - IOW is not ACID compliant > - > > Key: HIVE-17856 > URL: https://issues.apache.org/jira/browse/HIVE-17856 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Steve Yeom > Labels: mm-gap-1 > Attachments: HIVE-17856.1.patch, HIVE-17856.10.patch, > HIVE-17856.11.patch, HIVE-17856.12.patch, HIVE-17856.13.patch, > HIVE-17856.14.patch, HIVE-17856.15.patch, HIVE-17856.16.patch, > HIVE-17856.17.patch, HIVE-17856.2.patch, HIVE-17856.3.patch, > HIVE-17856.4.patch, HIVE-17856.5.patch, HIVE-17856.6.patch, > HIVE-17856.7.patch, HIVE-17856.8.patch, HIVE-17856.9.patch > > > The following tests were removed from mm_all during "integration"... I should > have never allowed such a manner of integration. > MM logic should have been kept intact until ACID logic could catch up. Alas, > here we are. 
> {noformat} > drop table iow0_mm; > create table iow0_mm(key int) tblproperties("transactional"="true", > "transactional_properties"="insert_only"); > insert overwrite table iow0_mm select key from intermediate; > insert into table iow0_mm select key + 1 from intermediate; > select * from iow0_mm order by key; > insert overwrite table iow0_mm select key + 2 from intermediate; > select * from iow0_mm order by key; > drop table iow0_mm; > drop table iow1_mm; > create table iow1_mm(key int) partitioned by (key2 int) > tblproperties("transactional"="true", > "transactional_properties"="insert_only"); > insert overwrite table iow1_mm partition (key2) > select key as k1, key from intermediate union all select key as k1, key from > intermediate; > insert into table iow1_mm partition (key2) > select key + 1 as k1, key from intermediate union all select key as k1, key > from intermediate; > select * from iow1_mm order by key, key2; > insert overwrite table iow1_mm partition (key2) > select key + 3 as k1, key from intermediate union all select key + 4 as k1, > key from intermediate; > select * from iow1_mm order by key, key2; > insert overwrite table iow1_mm partition (key2) > select key + 3 as k1, key + 3 from intermediate union all select key + 2 as > k1, key + 2 from intermediate; > select * from iow1_mm order by key, key2; > drop table iow1_mm; > {noformat} > {noformat} > drop table simple_mm; > create table simple_mm(key int) stored as orc tblproperties > ("transactional"="true", "transactional_properties"="insert_only"); > insert into table simple_mm select key from intermediate; > -insert overwrite table simple_mm select key from intermediate; > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)