[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Kolbasov updated HIVE-17751:
--------------------------------------
    Attachment: HIVE-17751.07.patch

> Separate HMS Client and HMS server into separate sub-modules
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
> Issue Type: Sub-task
> Components: Standalone Metastore
> Reporter: Vihang Karajgaonkar
> Assignee: Alexander Kolbasov
> Priority: Major
> Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, HIVE-17751.03.patch, HIVE-17751.04.patch, HIVE-17751.06-standalone-metastore.patch, HIVE-17751.07.patch
>
> External applications that interface with HMS should ideally include only an HMS client library instead of one big library that also contains the server. A thin client library would make cross-version support for external applications easier. We should subdivide the standalone module into either three sub-modules (one for common classes, one for client classes, and one for the server) or two sub-modules (one for the client and one for the server) so that we can generate separate jars for the HMS client and the HMS server.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
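The three-module split proposed above might look roughly like the following in Maven terms; the module and artifact names below are illustrative assumptions, not anything decided on this issue. The point is that an external application would then depend only on the client artifact instead of the whole standalone-metastore jar:

```xml
<!-- Hypothetical sketch: parent aggregator for a split standalone
     metastore. Module and artifactId names are illustrative only. -->
<modules>
  <module>metastore-common</module>  <!-- shared Thrift/API classes -->
  <module>metastore-client</module>  <!-- thin HMS client -->
  <module>metastore-server</module>  <!-- HMS server implementation -->
</modules>

<!-- An external application would then declare only: -->
<!--
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-standalone-metastore-client</artifactId>
  <version>${hive.version}</version>
</dependency>
-->
```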
[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Kolbasov updated HIVE-17751:
--------------------------------------
    Attachment: (was: HIVE-17751.05.patch)
[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Kolbasov updated HIVE-17751:
--------------------------------------
    Attachment: (was: HIVE-17751.06.patch)
[jira] [Commented] (HIVE-20048) Consider enabling hive.fetch.task.aggr by default

[ https://issues.apache.org/jira/browse/HIVE-20048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529407#comment-16529407 ]

Gopal V commented on HIVE-20048:
--------------------------------
-1 on this, because local tasks aren't vectorized (and HS2 has to do more work, which limits concurrency). This would actually slow down queries on everything other than MRv2 (where we spin up a second job to do the aggregation task, which is what this bypasses).

> Consider enabling hive.fetch.task.aggr by default
>
> Key: HIVE-20048
> URL: https://issues.apache.org/jira/browse/HIVE-20048
> Project: Hive
> Issue Type: Task
> Components: Physical Optimizer
> Reporter: Ashutosh Chauhan
> Priority: Major
>
> This optimization has been in the code base for a long time. We should consider enabling it by default.
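Given the -1 above, a per-session experiment may be more appropriate than changing the default; a minimal sketch (the table name `src` is illustrative, not part of this issue):

```sql
-- Toggle the final fetch-side aggregation for one session only, rather
-- than by default; per the comment above, the fetch task runs in HS2
-- and is not vectorized, so measure before adopting.
set hive.fetch.task.aggr=true;
select count(*) from src;   -- final aggregation would run in the fetch task
set hive.fetch.task.aggr=false;
```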
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529392#comment-16529392 ]

Hive QA commented on HIVE-17751:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929899/HIVE-17751.06.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12312/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12312/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12312/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12929899/HIVE-17751.06.patch was found in seen patch url's cache and a test was probably run already on it. Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929899 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch

[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529391#comment-16529391 ]

Hive QA commented on HIVE-19532:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929895/HIVE-19532.14.patch

{color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 14645 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=50)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[change_allowincompatible_vectorization_false_date] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats] (batchId=175)
org.apache.hadoop.hive.ql.metadata.TestHive.testTable (batchId=292)
org.apache.hadoop.hive.ql.metadata.TestHive.testThriftTable (batchId=292)
org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testTable (batchId=293)
org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testThriftTable (batchId=293)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithConcurrentDropPartition (batchId=238)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithConcurrentDropTable (batchId=238)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithDropPartitionedTable (batchId=238)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConcatenatePartitionedTable (batchId=238)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testIncrementalLoadFailAndRetry (batchId=238)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testStatus (batchId=238)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12311/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12311/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12311/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929895 - PreCommit-HIVE-Build

> fix tests for master-txnstats branch
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch, HIVE-19532.14.patch, HIVE-19532.15.patch
[jira] [Commented] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error

[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529383#comment-16529383 ]

Ganesha Shreedhara commented on HIVE-19850:
-------------------------------------------
I believe we should also consider the events in context.eventOperatorSet while updating event operators with the cloned table scan. [~hagleitn] [~jdere] Please review the patch.

> Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error
>
> Key: HIVE-19850
> URL: https://issues.apache.org/jira/browse/HIVE-19850
> Project: Hive
> Issue Type: Bug
> Components: Tez
> Affects Versions: 3.0.0
> Reporter: Ganesha Shreedhara
> Assignee: Ganesha Shreedhara
> Priority: Major
> Attachments: HIVE-19850.patch
>
> When multiple views are combined with union all, the following error occurs if dynamic partition pruning is enabled in Tez.
>
> {code:java}
> Exception in thread "main" java.lang.AssertionError: No work found for tablescan TS[8]
>     at org.apache.hadoop.hive.ql.parse.GenTezUtils.processAppMasterEvent(GenTezUtils.java:408)
>     at org.apache.hadoop.hive.ql.parse.TezCompiler.generateTaskTree(TezCompiler.java:383)
>     at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:205)
>     at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10371)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
>     at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:239)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:347)
>     at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1203)
>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1257)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1140)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1130)
>     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:204)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:433)
>     at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:894)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:825)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:726)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:223)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136){code}
>
> *Steps to reproduce:*
>
> set hive.execution.engine=tez;
> set hive.tez.dynamic.partition.pruning=true;
>
> CREATE TABLE t1(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
> CREATE TABLE t2(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
> CREATE TABLE t3(key string, value string, c_int int, c_float float, c_boolean boolean) partitioned by (dt string);
>
> insert into table t1 partition(dt='2018') values ('k1','v1',1,1.0,true);
> insert into table t2 partition(dt='2018') values ('k2','v2',2,2.0,true);
> insert into table t3 partition(dt='2018') values ('k3','v3',3,3.0,true);
>
> CREATE VIEW `view1` AS select `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` from `t1` union all select `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` from `t2`;
> CREATE VIEW `view2` AS select `t2`.`key`,`t2`.`value`,`t2`.`c_int`,`t2`.`c_float`,`t2`.`c_boolean`,`t2`.`dt` from `t2` union all select `t3`.`key`,`t3`.`value`,`t3`.`c_int`,`t3`.`c_float`,`t3`.`c_boolean`,`t3`.`dt` from `t3`;
> create table t4 as select key,value,c_int,c_float,c_boolean,dt from t1 union all select v1.key,v1.value,v1.c_int,v1.c_float,v1.c_boolean,v1.dt from view1 v1 join view2 v2 on v1.dt=v2.dt;
> CREATE VIEW `view3` AS select `t4`.`key`,`t4`.`value`,`t4`.`c_int`,`t4`.`c_float`,`t4`.`c_boolean`,`t4`.`dt` from `t4` union all select `t1`.`key`,`t1`.`value`,`t1`.`c_int`,`t1`.`c_float`,`t1`.`c_boolean`,`t1`.`dt` from `t1`;
>
> select count(0) from view2 v2 join view3 v3 on v2.dt=v3.dt; -- Throws 'No work found for tablescan' error
[jira] [Updated] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error

[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ganesha Shreedhara updated HIVE-19850:
--------------------------------------
    Attachment: HIVE-19850.patch
[jira] [Updated] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error

[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ganesha Shreedhara updated HIVE-19850:
--------------------------------------
    Attachment: (was: HIVE-19850.patch)
[jira] [Issue Comment Deleted] (HIVE-19850) Dynamic partition pruning in Tez is leading to 'No work found for tablescan' error

[ https://issues.apache.org/jira/browse/HIVE-19850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ganesha Shreedhara updated HIVE-19850:
--------------------------------------
    Comment: was deleted

(was: Found that the value for the TableScanOperator is not being retrieved from rootToWorkMap even though its operatorId/name/identifier matches. It is not a good idea to override equals/hashCode in the Operator class, since that can break other things, so I added one more level of comparison of the table scan operators based on operatorId for when the object comparison fails. Please review and let me know if there is a better way of handling this.)
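The idea in the deleted comment above can be sketched in a few lines of plain Java. This is a hypothetical stand-in, not Hive's actual `Operator` or `GenTezUtils` code: `Op` and `findWork` are illustrative names, and the map plays the role of `rootToWorkMap`. A map keyed on object identity misses a cloned table scan, while a second-level comparison on `operatorId` still locates the work:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RootToWorkLookup {

    // Illustrative stand-in for an operator. No equals/hashCode override,
    // so map lookups use object identity -- mirroring why overriding them
    // in Hive's Operator class was considered too risky.
    static final class Op {
        final String operatorId;
        Op(String operatorId) { this.operatorId = operatorId; }
    }

    static String findWork(Map<Op, String> rootToWork, Op ts) {
        String work = rootToWork.get(ts);   // first level: identity lookup
        if (work != null) {
            return work;
        }
        // Second level: fall back to matching the table scan by operatorId.
        for (Map.Entry<Op, String> e : rootToWork.entrySet()) {
            if (e.getKey().operatorId.equals(ts.operatorId)) {
                return e.getValue();
            }
        }
        return null;  // would correspond to "No work found for tablescan"
    }

    public static void main(String[] args) {
        Map<Op, String> rootToWork = new LinkedHashMap<>();
        Op original = new Op("TS[8]");
        rootToWork.put(original, "Map 1");

        // A clone made during compilation is a distinct object with the same id.
        Op cloned = new Op("TS[8]");
        System.out.println(rootToWork.get(cloned));        // null: identity miss
        System.out.println(findWork(rootToWork, cloned));  // Map 1: id fallback
    }
}
```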
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch

[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529374#comment-16529374 ]

Hive QA commented on HIVE-19532:
--------------------------------
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 1m 28s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 7s | master passed |
| +1 | compile | 7m 52s | master passed |
| +1 | checkstyle | 4m 31s | master passed |
| 0 | findbugs | 0m 25s | storage-api in master has 48 extant Findbugs warnings. |
| 0 | findbugs | 2m 57s | standalone-metastore in master has 228 extant Findbugs warnings. |
| 0 | findbugs | 3m 47s | ql in master has 2287 extant Findbugs warnings. |
| 0 | findbugs | 0m 36s | itests/hive-unit in master has 2 extant Findbugs warnings. |
| +1 | javadoc | 8m 23s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 9m 18s | the patch passed |
| +1 | compile | 7m 51s | the patch passed |
| +1 | javac | 7m 51s | the patch passed |
| -1 | checkstyle | 0m 11s | storage-api: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) |
| -1 | checkstyle | 0m 36s | standalone-metastore: The patch generated 70 new + 1726 unchanged - 17 fixed = 1796 total (was 1743) |
| -1 | checkstyle | 0m 47s | ql: The patch generated 18 new + 1011 unchanged - 3 fixed = 1029 total (was 1014) |
| -1 | checkstyle | 2m 30s | root: The patch generated 92 new + 3295 unchanged - 20 fixed = 3387 total (was 3315) |
| -1 | checkstyle | 0m 11s | itests/hcatalog-unit: The patch generated 2 new + 28 unchanged - 0 fixed = 30 total (was 28) |
| -1 | whitespace | 0m 1s | The patch has 118 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| -1 | findbugs | 3m 19s | standalone-metastore generated 6 new + 227 unchanged - 1 fixed = 233 total (was 228) |
| -1 | findbugs | 3m 57s | ql generated 1 new + 2287 unchanged - 0 fixed = 2288 total (was 2287) |
| -1 | javadoc | 0m 57s | standalone-metastore generated 3 new + 54 unchanged - 0 fixed = 57 total (was 54) |
| -1 | javadoc | 5m 51s | root generated 3 new + 371 unchanged - 0 fixed = 374 total (was 371) |
|| Other Tests ||
| +1 | asflicense | 0m 11s | The patch does not generate ASF License warnings. |
| | | 77m 31s | |

|| Reason || Tests ||
[jira] [Updated] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19532: Attachment: HIVE-19532.15.patch > fix tests for master-txnstats branch > > > Key: HIVE-19532 > URL: https://issues.apache.org/jira/browse/HIVE-19532 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, > HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, > HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, > HIVE-19532.07.patch, HIVE-19532.08.patch, HIVE-19532.09.patch, > HIVE-19532.10.patch, HIVE-19532.11.patch, HIVE-19532.12.patch, > HIVE-19532.13.patch, HIVE-19532.14.patch, HIVE-19532.15.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19812: --- Attachment: HIVE-19812.11.patch

> Disable external table replication by default via a configuration property
>
> Key: HIVE-19812
> URL: https://issues.apache.org/jira/browse/HIVE-19812
> Project: Hive
> Issue Type: Task
> Components: repl
> Affects Versions: 3.1.0, 4.0.0
> Reporter: mahesh kumar behera
> Assignee: mahesh kumar behera
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, HIVE-19812.08.patch, HIVE-19812.09.patch, HIVE-19812.10.patch, HIVE-19812.11.patch
>
> Use a Hive config property to control external table replication, and set it by default to prevent external table replication.
> For metadata-only replication, Hive repl always exports metadata for external tables:
>
> REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false,
>     "Indicates if repl dump should include information about external tables. It should be \n"
>     + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if 'hive.repl.dump.metadata.only' \n"
>     + " is set to true then this config parameter has no effect as external table meta data is flushed \n"
>     + " always by default.")
>
> This should be done only for the replication dump, not for export.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
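The gating the description above asks for can be sketched as two small predicates. This is a minimal sketch under stated assumptions: the class and method names are hypothetical, and plain booleans stand in for the `hive.repl.dump.include.external.tables` and `hive.repl.dump.metadata.only` settings; it is not Hive's actual implementation.

```java
// Hypothetical sketch of the gating described in HIVE-19812.
// includeExternal mirrors hive.repl.dump.include.external.tables (default false),
// metadataOnly mirrors hive.repl.dump.metadata.only.
public class ReplDumpGateSketch {

    // Metadata for external tables is always exported when metadata-only
    // replication is on; otherwise the include-external-tables flag decides.
    public static boolean exportExternalTableMetadata(boolean includeExternal, boolean metadataOnly) {
        return metadataOnly || includeExternal;
    }

    // External table information beyond metadata is dumped only when the flag
    // is on and we are not in metadata-only mode.
    public static boolean exportExternalTableData(boolean includeExternal, boolean metadataOnly) {
        return includeExternal && !metadataOnly;
    }

    public static void main(String[] args) {
        // With the proposed default (include.external.tables=false):
        System.out.println(exportExternalTableMetadata(false, true));  // true: metadata always flushed
        System.out.println(exportExternalTableData(false, false));     // false: disabled by default
    }
}
```

Per the issue's last line, such a gate would apply to the replication dump path only, not to plain export.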
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529350#comment-16529350 ] Matt McCline commented on HIVE-19951: - Again. > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch, HIVE-19951.092.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
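The fix this issue calls for is essentially a gate that falls back from the encoded LLAP I/O path whenever schema evolution implies a data type conversion. A minimal sketch follows, with hypothetical class and method names and column types simplified to strings; Hive's real schema evolution check is considerably richer.

```java
// Hypothetical sketch of the gate HIVE-19951 describes: encoded LLAP I/O for
// ORC must be skipped when schema evolution requires a data type conversion.
public class LlapEncodedIoGateSketch {

    // A conversion is needed when any reader column type differs from the
    // type stored in the ORC file (simplified to string comparison here).
    public static boolean hasTypeConversion(String[] fileTypes, String[] readerTypes) {
        for (int i = 0; i < fileTypes.length; i++) {
            if (!fileTypes[i].equals(readerTypes[i])) {
                return true;
            }
        }
        return false;
    }

    // Encoded reading (and cache population) stays enabled only when no
    // column needs conversion.
    public static boolean useEncodedReader(boolean llapIoEnabled, String[] fileTypes, String[] readerTypes) {
        return llapIoEnabled && !hasTypeConversion(fileTypes, readerTypes);
    }

    public static void main(String[] args) {
        String[] file = {"int", "string"};
        String[] evolved = {"bigint", "string"};   // int -> bigint needs conversion
        System.out.println(useEncodedReader(true, file, file));     // true
        System.out.println(useEncodedReader(true, file, evolved));  // false
    }
}
```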
[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19812: --- Attachment: (was: HIVE-19812.11.patch)

> Disable external table replication by default via a configuration property
>
> Key: HIVE-19812
> URL: https://issues.apache.org/jira/browse/HIVE-19812
> Project: Hive
> Issue Type: Task
> Components: repl
> Affects Versions: 3.1.0, 4.0.0
> Reporter: mahesh kumar behera
> Assignee: mahesh kumar behera
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, HIVE-19812.08.patch, HIVE-19812.09.patch, HIVE-19812.10.patch, HIVE-19812.11.patch
>
> Use a Hive config property to control external table replication, and set it by default to prevent external table replication.
> For metadata-only replication, Hive repl always exports metadata for external tables:
>
> REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false,
>     "Indicates if repl dump should include information about external tables. It should be \n"
>     + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if 'hive.repl.dump.metadata.only' \n"
>     + " is set to true then this config parameter has no effect as external table meta data is flushed \n"
>     + " always by default.")
>
> This should be done only for the replication dump, not for export.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529351#comment-16529351 ] Hive QA commented on HIVE-17751: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929899/HIVE-17751.06.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12310/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12310/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12310/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-02 03:14:37.559 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12310/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-02 03:14:37.563 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 455b71e..b122aea master -> origin/master cd8f693..d7bbc20 master-txnstats -> origin/master-txnstats + git reset --hard HEAD HEAD is now at 455b71e HIVE-20034: Roll back MetaStore exception handling changes for backward compatibility (Peter Vary, reviewed by Sergey Shelukhin) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at b122aea HIVE-20045 : Update hidden config list + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-02 03:14:39.068 + rm -rf ../yetus_PreCommit-HIVE-Build-12310 + mkdir ../yetus_PreCommit-HIVE-Build-12310 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12310 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12310/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch fatal: git apply: bad git-diff - inconsistent old filename on line 2738 Going to apply patch with: git apply -p1 /data/hiveptest/working/scratch/build.patch:6844: new blank line at EOF. + warning: 1 line adds whitespace errors. 
+ [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven ANTLR Parser Generator Version 3.5.2 protoc-jar: executing: [/tmp/protoc5306571441180887017.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc5306571441180887017.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.Persistence). log4j:WARN Please initialize the log4j system properly. DataNucleus Enhancer (version 4.1.17) for API "JDO" DataNucleus Enhancer completed with success for 40 classes. ANTLR Parser Generator Version 3.5.2 Output file
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: Patch Available (was: In Progress) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch, HIVE-19951.092.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Attachment: HIVE-19951.092.patch > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch, HIVE-19951.092.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: In Progress (was: Patch Available) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch, HIVE-19951.092.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529349#comment-16529349 ] Hive QA commented on HIVE-19951: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929893/HIVE-19951.091.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14640 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12309/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12309/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12309/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This 
message is automatically generated. ATTACHMENT ID: 12929893 - PreCommit-HIVE-Build > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20025) Clean-up of event files created by HiveProtoLoggingHook.
[ https://issues.apache.org/jira/browse/HIVE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529341#comment-16529341 ] ASF GitHub Bot commented on HIVE-20025: --- GitHub user sankarh opened a pull request: https://github.com/apache/hive/pull/384 HIVE-20025: Clean-up of event files created by HiveProtoLoggingHook.

You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sankarh/hive HIVE-20025
Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/384.patch
To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #384

commit 52c24baa28ed305f3be2b47f6246ffede0f08e6e
Author: Sankar Hariappan
Date: 2018-07-01T17:18:06Z
HIVE-20025: Clean-up of event files created by HiveProtoLoggingHook.

> Clean-up of event files created by HiveProtoLoggingHook.
>
> Key: HIVE-20025
> URL: https://issues.apache.org/jira/browse/HIVE-20025
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 3.0.0
> Reporter: Sankar Hariappan
> Assignee: Sankar Hariappan
> Priority: Major
> Labels: Hive, hooks, pull-request-available
> Fix For: 4.0.0
>
> Currently, HiveProtoLoggingHook writes event data to HDFS, and the number of files can grow very large.
> Since the files are created under a folder with the date as part of the path, Hive should have a way to clean up data older than a configured time/date. This can be a job that runs as infrequently as once a day.
> The retention time should default to one week. There should also be a sane upper bound on the number of files, so that when a large cluster generates many files during a spike we don't make the cluster fall over.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
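The retention policy proposed in this issue (date-partitioned folders, configurable retention defaulting to one week) can be sketched as a pure selection function. The `date=YYYY-MM-DD` folder naming and all class/method names below are assumptions for illustration, not the hook's actual layout or API.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the clean-up proposed in HIVE-20025: pick the
// date-partitioned event folders that are older than the retention window.
public class ProtoEventCleanupSketch {

    // Return the directories (named like "date=2018-07-01") whose date falls
    // strictly before today minus ttlDays; non-matching names are skipped.
    public static List<String> dirsToDelete(List<String> dirs, LocalDate today, int ttlDays) {
        LocalDate cutoff = today.minusDays(ttlDays);
        List<String> expired = new ArrayList<>();
        for (String d : dirs) {
            if (!d.startsWith("date=")) {
                continue;
            }
            LocalDate dirDate = LocalDate.parse(d.substring("date=".length()));
            if (dirDate.isBefore(cutoff)) {
                expired.add(d);
            }
        }
        return expired;
    }

    public static void main(String[] args) {
        List<String> dirs = Arrays.asList("date=2018-06-20", "date=2018-06-30", "date=2018-07-01");
        // With the proposed one-week default, only the June 20 folder has expired.
        System.out.println(dirsToDelete(dirs, LocalDate.parse("2018-07-02"), 7));  // [date=2018-06-20]
    }
}
```

A daily job, as the description suggests, would run this selection once and delete the returned folders; the upper bound on file count would be an additional guard on top of the date cutoff.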
[jira] [Updated] (HIVE-20025) Clean-up of event files created by HiveProtoLoggingHook.
[ https://issues.apache.org/jira/browse/HIVE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-20025: -- Labels: Hive hooks pull-request-available (was: Hive hooks)

> Clean-up of event files created by HiveProtoLoggingHook.
>
> Key: HIVE-20025
> URL: https://issues.apache.org/jira/browse/HIVE-20025
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 3.0.0
> Reporter: Sankar Hariappan
> Assignee: Sankar Hariappan
> Priority: Major
> Labels: Hive, hooks, pull-request-available
> Fix For: 4.0.0
>
> Currently, HiveProtoLoggingHook writes event data to HDFS, and the number of files can grow very large.
> Since the files are created under a folder with the date as part of the path, Hive should have a way to clean up data older than a configured time/date. This can be a job that runs as infrequently as once a day.
> The retention time should default to one week. There should also be a sane upper bound on the number of files, so that when a large cluster generates many files during a spike we don't make the cluster fall over.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20045) Update hidden config list
[ https://issues.apache.org/jira/browse/HIVE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20045: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. > Update hidden config list > - > > Key: HIVE-20045 > URL: https://issues.apache.org/jira/browse/HIVE-20045 > Project: Hive > Issue Type: Task > Components: Configuration >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20045.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529328#comment-16529328 ] Hive QA commented on HIVE-19951:

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 1m 29s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 7s | master passed |
| +1 | compile | 1m 22s | master passed |
| +1 | checkstyle | 0m 49s | master passed |
| 0 | findbugs | 3m 55s | ql in master has 2287 extant Findbugs warnings. |
| 0 | findbugs | 0m 39s | llap-server in master has 84 extant Findbugs warnings. |
| +1 | javadoc | 1m 6s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 43s | the patch passed |
| +1 | compile | 1m 22s | the patch passed |
| +1 | javac | 1m 22s | the patch passed |
| -1 | checkstyle | 0m 11s | llap-server: The patch generated 12 new + 30 unchanged - 0 fixed = 42 total (was 30) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 44s | the patch passed |
| +1 | javadoc | 1m 8s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 26m 29s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12309/dev-support/hive-personality.sh |
| git revision | master / 455b71e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12309/yetus/diff-checkstyle-llap-server.txt |
| modules | C: ql llap-server itests U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12309/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, HIVE-19951.09.patch, HIVE-19951.091.patch
>
> Currently, reading encoded ORC data does not support data type conversion.
> So, encoded reading and cache populating needs to be disabled.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529326#comment-16529326 ] Sergey Shelukhin commented on HIVE-19975: - Branch patch excluding generated files. This patch fixes the partition case, and also:
1) Removes the writeID and txnId columns from tables.
2) Makes sure all (or at least more) calls propagate txnId and partId; to avoid breaking and adding APIs, this goes via EnvCtx for now. This is actually a bad method, but changing more APIs would be another disruptive change, so I can do it as a follow-up.
3) Removes a bunch of overloads that use default values, to avoid mistakes; the values are now passed explicitly in most places. Adds comments in many places as to why txnId/writeIdList are not being passed.
4) Adds some TODO## comments as review comments on the original patch (some in CachedStore, etc.).
5) Restores a removed metastore API for backward compatibility (hence lots of generated code changes).
6) Some other small changes.
I filed a follow-up to potentially also remove the txnId argument; for now I kept the check in place. Attached the full patch in addition to the branch-only patch, for HiveQA to compare with the branch HiveQA.

> Checking writeIdList per table may not check the commit level of a partition on a partitioned table
>
> Key: HIVE-19975
> URL: https://issues.apache.org/jira/browse/HIVE-19975
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 4.0.0
> Reporter: Steve Yeom
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19975.patch, branch-19975.nogen.patch
>
> writeIdList is a per-table entity, but stats for a partitioned table are per partition.
> I.e., each record in PARTITIONS has independent stats.
> So if we check the validity of a partition's stats, we need to check in the context of a partition.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529324#comment-16529324 ] Alexander Kolbasov commented on HIVE-17751: --- service module has a dependency on metastore-common > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, HIVE-17751.05.patch, > HIVE-17751.06-standalone-metastore.patch, HIVE-17751.06.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-17751: -- Attachment: HIVE-17751.06.patch > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, HIVE-17751.05.patch, > HIVE-17751.06-standalone-metastore.patch, HIVE-17751.06.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19975: Attachment: (was: HIVE-19975.01.patch)

> Checking writeIdList per table may not check the commit level of a partition on a partitioned table
>
> Key: HIVE-19975
> URL: https://issues.apache.org/jira/browse/HIVE-19975
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 4.0.0
> Reporter: Steve Yeom
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19975.patch, branch-19975.nogen.patch
>
> writeIdList is a per-table entity, but stats for a partitioned table are per partition.
> I.e., each record in PARTITIONS has independent stats.
> So if we check the validity of a partition's stats, we need to check in the context of a partition.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19975: Attachment: branch-19975.nogen.patch > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.patch, branch-19975.nogen.patch > > > writeIdList is a per-table entity, but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has its own independent stats. > So if we check the validity of a partition's stats, we need to check it in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19975: Attachment: HIVE-19975.patch > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.patch, branch-19975.nogen.patch > > > writeIdList is a per-table entity, but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has its own independent stats. > So if we check the validity of a partition's stats, we need to check it in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19975: Attachment: (was: branch-19975.patch) > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is a per-table entity, but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has its own independent stats. > So if we check the validity of a partition's stats, we need to check it in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19975: Attachment: branch-19975.patch > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is a per-table entity, but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has its own independent stats. > So if we check the validity of a partition's stats, we need to check it in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
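The HIVE-19975 description above boils down to this: stats validity must be decided per PARTITIONS record, each checked against its own write ID, rather than once per table. A minimal sketch of that idea in plain Java follows; every class and method name here is an illustrative stand-in, not Hive's actual metastore API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for a per-table valid-write-ID snapshot.
class ValidWriteIdList {
    private final Set<Long> validWriteIds = new HashSet<>();

    void markValid(long writeId) { validWriteIds.add(writeId); }

    boolean isValid(long writeId) { return validWriteIds.contains(writeId); }
}

// Each PARTITIONS record carries its own stats write ID.
class PartitionStats {
    final String partitionName;
    final long statsWriteId; // write ID recorded when these stats were computed

    PartitionStats(String partitionName, long statsWriteId) {
        this.partitionName = partitionName;
        this.statsWriteId = statsWriteId;
    }
}

public class StatsValiditySketch {
    // The point of the issue: validity is decided per partition record,
    // not once per table, because each partition's stats has its own write ID.
    static boolean partitionStatsValid(PartitionStats stats, ValidWriteIdList snapshot) {
        return snapshot.isValid(stats.statsWriteId);
    }

    public static void main(String[] args) {
        ValidWriteIdList snapshot = new ValidWriteIdList();
        snapshot.markValid(10L);
        PartitionStats fresh = new PartitionStats("ds=2018-07-01", 10L);
        PartitionStats stale = new PartitionStats("ds=2018-07-02", 11L); // writer not in snapshot
        System.out.println(partitionStatsValid(fresh, snapshot)); // true
        System.out.println(partitionStatsValid(stale, snapshot)); // false
    }
}
```

Under this model, a table-level check would answer the same question for every partition, which is exactly the bug the issue describes: one partition's stats can be stale while another's are current.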
[jira] [Commented] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529318#comment-16529318 ] Hive QA commented on HIVE-19711: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929887/HIVE-19711.08.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14639 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12308/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12308/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12308/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929887 - PreCommit-HIVE-Build > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch, HIVE-19711.02.patch, > HIVE-19711.03.patch, HIVE-19711.04.patch, HIVE-19711.05.patch, > HIVE-19711.06.patch, HIVE-19711.07.patch, HIVE-19711.08.patch > > > HiveSchemaTool is a 1500-line class trying to do everything. It should > be split into multiple classes, each handling a smaller component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater
[ https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529317#comment-16529317 ] Sergey Shelukhin commented on HIVE-19820: - Actually, committing this patch right now, before the patch that fixes the partition case, is a pain: it causes conflicts between the two patches. This will go in after the partitions patch, or later if the partitions patch takes too long. > add ACID stats support to background stats updater > -- > > Key: HIVE-19820 > URL: https://issues.apache.org/jira/browse/HIVE-19820 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19820.01-master-txnstats.patch, > HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch > > > Follow-up from HIVE-19418. > Right now it checks whether stats are valid in an old-fashioned way... and > also gets ACID state, and discards it without using it. > When ACID stats are implemented, ACID state needs to be used to do > version-aware valid stats checks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19532: Attachment: HIVE-19532.14.patch > fix tests for master-txnstats branch > > > Key: HIVE-19532 > URL: https://issues.apache.org/jira/browse/HIVE-19532 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, > HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, > HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, > HIVE-19532.07.patch, HIVE-19532.08.patch, HIVE-19532.09.patch, > HIVE-19532.10.patch, HIVE-19532.11.patch, HIVE-19532.12.patch, > HIVE-19532.13.patch, HIVE-19532.14.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529310#comment-16529310 ] Sergey Shelukhin commented on HIVE-19532: - Rebased again. This is before committing the stats updater patch. > fix tests for master-txnstats branch > > > Key: HIVE-19532 > URL: https://issues.apache.org/jira/browse/HIVE-19532 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, > HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, > HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, > HIVE-19532.07.patch, HIVE-19532.08.patch, HIVE-19532.09.patch, > HIVE-19532.10.patch, HIVE-19532.11.patch, HIVE-19532.12.patch, > HIVE-19532.13.patch, HIVE-19532.14.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529309#comment-16529309 ] Hive QA commented on HIVE-19711: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 26s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} beeline in master has 70 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 36s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} beeline: The patch generated 12 new + 72 unchanged - 63 fixed = 84 total (was 135) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 53s{color} | {color:red} root: The patch generated 13 new + 238 unchanged - 93 fixed = 251 total (was 331) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 166 unchanged - 30 fixed = 167 total (was 196) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} beeline generated 5 new + 51 unchanged - 19 fixed = 56 total (was 70) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:beeline | | | org.apache.hive.beeline.BeeLineOpts.getInitFiles() may expose internal representation by returning BeeLineOpts.initFiles At BeeLineOpts.java:by returning BeeLineOpts.initFiles At BeeLineOpts.java:[line 469] | | | org.apache.hive.beeline.BeeLineOpts.setInitFiles(String[]) may expose internal representation by storing an externally mutable object into BeeLineOpts.initFiles At BeeLineOpts.java:by storing an externally mutable object into BeeLineOpts.initFiles At BeeLineOpts.java:[line 473] | | | org.apache.hive.beeline.BeeLineOpts.env should be package protected At BeeLineOpts.java: At BeeLineOpts.java:[line 143] | | | org.apache.hive.beeline.schematool.HiveSchemaToolTaskAlterCatalog.execute() passes a nonconstant String to an execute or addBatch method on an SQL statement At HiveSchemaToolTaskAlterCatalog.java:to an execute or addBatch method on an SQL statement At HiveSchemaToolTaskAlterCatalog.java:[line 70] | | | Found reliance on default encoding in org.apache.hive.beeline.schematool.HiveSchemaToolTaskValidate.findCreateTable(String, List):in org.apache.hive.beeline.schematool.HiveSchemaToolTaskValidate.findCreateTable(String, List): new java.io.FileReader(String) At HiveSchemaToolTaskValidate.java:[line 283] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle
[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-17751: -- Attachment: HIVE-17751.05.patch > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, HIVE-17751.05.patch, > HIVE-17751.06-standalone-metastore.patch > > > External applications that interface with HMS should ideally only > include the HMS client library instead of one big library containing the server as > well. We should ideally have a thin client library so that cross-version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes, and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for the HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20047) consider removing txnID argument for txn stats methods
[ https://issues.apache.org/jira/browse/HIVE-20047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-20047: --- > consider removing txnID argument for txn stats methods > -- > > Key: HIVE-20047 > URL: https://issues.apache.org/jira/browse/HIVE-20047 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > Followup from HIVE-19975. > W.r.t. write IDs and txn IDs, stats validity check currently verifies one of > two things - that stats write ID is valid for query write ID list, or that > stats txn ID (derived from write ID) is the same as the query txn ID. > I'm not sure the latter check is needed; removing it would allow us to make a > bunch of APIs a little bit simpler. > [~ekoifman] do you have any feedback? Can any stats reader (e.g. compile) > observe stats written by the same txn; but in such manner that it doesn't > have the write ID of the same-txn stats writer, in its valid write ID list? > I'm assuming it's not possible, e.g. in multi statement txn each query would > have the previous same-txn writer for the same table in its valid write ID > list? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
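The two-pronged validity check that HIVE-20047 describes can be sketched as follows. The names and signatures below are hypothetical, not the actual metastore API; the txn-ID disjunct in `statsValidCurrent` is the part the comment proposes removing.

```java
import java.util.Set;

public class TxnStatsCheckSketch {
    // Prong 1: the write ID that produced the stats is visible in the
    // reading query's valid-write-ID snapshot.
    static boolean writeIdVisible(long statsWriteId, Set<Long> validWriteIds) {
        return validWriteIds.contains(statsWriteId);
    }

    // Behavior as described: stats are also accepted when they were written
    // by the very same transaction that is now reading them (prong 2).
    static boolean statsValidCurrent(long statsWriteId, long statsTxnId,
                                     long queryTxnId, Set<Long> validWriteIds) {
        return writeIdVisible(statsWriteId, validWriteIds) || statsTxnId == queryTxnId;
    }

    // Proposed simplification: drop the txn-ID argument entirely, assuming a
    // same-txn writer's write ID already appears in the reader's
    // valid-write-ID list (the open question posed to [~ekoifman]).
    static boolean statsValidProposed(long statsWriteId, Set<Long> validWriteIds) {
        return writeIdVisible(statsWriteId, validWriteIds);
    }
}
```

The two versions diverge only when a reader sees stats written by its own transaction whose write ID is somehow absent from its snapshot; if that situation is impossible, as the comment conjectures for multi-statement txns, the simpler signature suffices.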
[jira] [Assigned] (HIVE-20046) remove NUM_FILES check or add a negative test
[ https://issues.apache.org/jira/browse/HIVE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-20046: --- > remove NUM_FILES check or add a negative test > - > > Key: HIVE-20046 > URL: https://issues.apache.org/jira/browse/HIVE-20046 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > {noformat} > // Since newly initialized empty table has 0 for the parameter. > if (Long.parseLong(statsParams.get(StatsSetupConst.NUM_FILES)) == 0) { > return true; > } > {noformat} > This doesn't look safe; # of files could be set to 0 by an invalid update, or > potentially a parallel update that we cannot see (not sure if this is > possible; there's some code in metastore that updates basic stats outside of > the scope of the query). > It would be better to remove this, and see if it breaks some tests. If we do > need this, there should be a negative test at some point -- This message was sent by Atlassian JIRA (v7.6.3#76005)
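Beyond the semantic concern raised in HIVE-20046, the quoted snippet would also throw if the parameter is absent, since `Long.parseLong(null)` raises `NumberFormatException`. A defensive, hypothetical rewrite of the shortcut is sketched below (assuming `StatsSetupConst.NUM_FILES` resolves to the string "numFiles"; the class and method names are illustrative).

```java
import java.util.Map;

public class NumFilesCheckSketch {
    // Assumed value of StatsSetupConst.NUM_FILES.
    static final String NUM_FILES = "numFiles";

    // The quoted check, made defensive: treat a missing or unparsable value
    // as "shortcut does not apply" instead of throwing, and keep the
    // zero-files decision in one separately testable method.
    static boolean zeroFilesShortcut(Map<String, String> statsParams) {
        String raw = statsParams.get(NUM_FILES);
        if (raw == null) {
            return false; // Long.parseLong(null) in the original would throw
        }
        try {
            return Long.parseLong(raw) == 0;
        } catch (NumberFormatException e) {
            return false; // invalid update wrote garbage; do not trust it
        }
    }
}
```

Isolating the shortcut this way would also make the negative test the issue asks for straightforward to write: feed it a missing, zero, and nonzero value and assert on each.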
[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater
[ https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529292#comment-16529292 ] Jason Dere commented on HIVE-19820: --- +1 > add ACID stats support to background stats updater > -- > > Key: HIVE-19820 > URL: https://issues.apache.org/jira/browse/HIVE-19820 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19820.01-master-txnstats.patch, > HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch > > > Follow-up from HIVE-19418. > Right now it checks whether stats are valid in an old-fashioned way... and > also gets ACID state, and discards it without using it. > When ACID stats are implemented, ACID state needs to be used to do > version-aware valid stats checks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: Patch Available (was: In Progress) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Attachment: HIVE-19951.091.patch > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: In Progress (was: Patch Available) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch, HIVE-19951.091.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529286#comment-16529286 ] Hive QA commented on HIVE-17751: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929886/HIVE-17751.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12307/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12307/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12307/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-02 00:24:15.292 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12307/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-02 00:24:15.296 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 455b71e HIVE-20034: Roll back MetaStore exception handling changes for backward compatibility (Peter Vary, reviewed by Sergey Shelukhin) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 455b71e HIVE-20034: Roll back MetaStore exception handling changes for backward compatibility (Peter Vary, reviewed by Sergey Shelukhin) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-02 00:24:16.235 + rm -rf ../yetus_PreCommit-HIVE-Build-12307 + mkdir ../yetus_PreCommit-HIVE-Build-12307 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12307 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12307/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch fatal: git apply: bad git-diff - inconsistent old filename on line 2560 Going to apply patch with: git apply -p1 /data/hiveptest/working/scratch/build.patch:: new blank line at EOF. + warning: 1 line adds whitespace errors. 
+ [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven ANTLR Parser Generator Version 3.5.2 protoc-jar: executing: [/tmp/protoc3295323820695517351.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc3295323820695517351.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.Persistence). log4j:WARN Please initialize the log4j system properly. DataNucleus Enhancer (version 4.1.17) for API "JDO" DataNucleus Enhancer completed with success for 40 classes. ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java does not exist: must build
[jira] [Commented] (HIVE-20045) Update hidden config list
[ https://issues.apache.org/jira/browse/HIVE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529285#comment-16529285 ] Hive QA commented on HIVE-20045: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929885/HIVE-20045.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14639 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12306/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12306/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12306/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929885 - PreCommit-HIVE-Build > Update hidden config list > - > > Key: HIVE-20045 > URL: https://issues.apache.org/jira/browse/HIVE-20045 > Project: Hive > Issue Type: Task > Components: Configuration >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-20045.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20020) Hive contrib jar should not be in lib
[ https://issues.apache.org/jira/browse/HIVE-20020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529277#comment-16529277 ] Shawn Weeks commented on HIVE-20020: Yeah, and really what I was trying to say is that aside from the MultiDelimitSerDe, just about everything else in there is just examples. If we could move what's actually being used out of there, we could quit including hive-contrib altogether. I don't really see how compiled copies of the examples actually belong in the standard installation. > Hive contrib jar should not be in lib > - > > Key: HIVE-20020 > URL: https://issues.apache.org/jira/browse/HIVE-20020 > Project: Hive > Issue Type: Improvement > Components: Contrib >Reporter: Johndee Burks >Priority: Trivial > > Currently, the way Hive is packaged, it includes hive-contrib-.jar in > lib; we should not include it here because it is picked up by services like > HS2. This creates a situation in which experimental features such as the > [MultiDelimitSerDe|https://github.com/apache/hive/blob/master/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/MultiDelimitSerDe.java] > are accessible without understanding how to really install and use them. For > example, you can create a table using HS2 via beeline with the aforementioned > SerDe and it will work as long as you do not run M/R jobs. The M/R jobs do not > work because the SerDe is not in aux to get shipped into the distcache. I propose > we do not package it this way; if someone would like to leverage an > experimental feature they can add it manually to their environment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20045) Update hidden config list
[ https://issues.apache.org/jira/browse/HIVE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529274#comment-16529274 ] Hive QA commented on HIVE-20045: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12306/dev-support/hive-personality.sh | | git revision | master / 455b71e | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12306/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Update hidden config list > - > > Key: HIVE-20045 > URL: https://issues.apache.org/jira/browse/HIVE-20045 > Project: Hive > Issue Type: Task > Components: Configuration >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-20045.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20020) Hive contrib jar should not be in lib
[ https://issues.apache.org/jira/browse/HIVE-20020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529271#comment-16529271 ] BELUGA BEHR commented on HIVE-20020: Well, {{MultiDelimitSerDe}} is just one example. There may be other features facing the same issue. Perhaps {{MultiDelimitSerDe}} should be promoted to a first-class SerDe, but the larger issue of pre-installing the contrib package remains. I think the first step should be to remove hive-contrib from the install base. Discussions about moving {{MultiDelimitSerDe}} into Hive proper can commence after that. > Hive contrib jar should not be in lib > - > > Key: HIVE-20020 > URL: https://issues.apache.org/jira/browse/HIVE-20020 > Project: Hive > Issue Type: Improvement > Components: Contrib >Reporter: Johndee Burks >Priority: Trivial > > Currently the way Hive is packaged it includes hive-contrib-.jar in > lib; we should not include it there because it is picked up by services like > HS2. This creates a situation in which experimental features such as the > [MultiDelimitSerDe|https://github.com/apache/hive/blob/master/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/MultiDelimitSerDe.java] > are accessible without understanding how to properly install and use them. For > example, you can create a table using HS2 via beeline with the aforementioned > SerDe and it will work as long as you do not run M/R jobs. The M/R jobs do not > work because the SerDe is not on the aux path and so does not get shipped into the distcache. I propose > we do not package it this way, and if someone would like to leverage an > experimental feature they can add it manually to their environment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19926) Remove deprecated hcatalog streaming
[ https://issues.apache.org/jira/browse/HIVE-19926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529268#comment-16529268 ] Hive QA commented on HIVE-19926: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929884/HIVE-19926.4.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12305/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12305/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12305/ Messages: {noformat} This message was trimmed, see log for full details + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/hcatalog/pom.xml: does not exist in index error: a/hcatalog/streaming/pom.xml: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/AbstractRecordWriter.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/ConnectionError.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/DelimitedInputWriter.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HeartBeatFailure.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/HiveEndPoint.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/ImpersonationFailed.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/InvalidColumn.java: does not exist in index error: 
a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/InvalidPartition.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/InvalidTable.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/InvalidTrasactionState.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/PartitionCreationFailed.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/QueryFailedException.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/RecordWriter.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/SerializationError.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/StreamingConnection.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/StreamingException.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/StreamingIOFailure.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/StrictJsonWriter.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/StrictRegexWriter.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/TransactionBatch.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/TransactionBatchUnAvailable.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/TransactionError.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/HiveConfFactory.java: does not exist in index error: 
a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/UgiMetaStoreClientFactory.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/AcidTable.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/AcidTableSerializer.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/ClientException.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/ConnectionException.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/MutatorClient.java: does not exist in index error: a/hcatalog/streaming/src/java/org/apache/hive/hcatalog/streaming/mutate/client/MutatorClientBuilder.java: does not exist in index error:
[jira] [Commented] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529267#comment-16529267 ] Hive QA commented on HIVE-19792: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929883/HIVE-19792.3.patch {color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 14639 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_file_dump] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge11] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge12] (batchId=65) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge11] (batchId=165) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[orc_merge12] (batchId=105) org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=320) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12304/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12304/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12304/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929883 - PreCommit-HIVE-Build > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch, > HIVE-19792.3.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529263#comment-16529263 ] Hive QA commented on HIVE-19792: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12304/dev-support/hive-personality.sh | | git revision | master / 455b71e | | Default Java | 1.8.0_111 | | modules | C: ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12304/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch, > HIVE-19792.3.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-19711: -- Release Note: Uploaded the same patch to get it tested again, as it failed last time for unrelated reasons. Status: Patch Available (was: Open) > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch, HIVE-19711.02.patch, > HIVE-19711.03.patch, HIVE-19711.04.patch, HIVE-19711.05.patch, > HIVE-19711.06.patch, HIVE-19711.07.patch, HIVE-19711.08.patch > > > HiveSchemaTool is a 1500-line class that tries to do everything. It should > be split into multiple classes, each handling a smaller component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-19711: -- Status: Open (was: Patch Available) > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch, HIVE-19711.02.patch, > HIVE-19711.03.patch, HIVE-19711.04.patch, HIVE-19711.05.patch, > HIVE-19711.06.patch, HIVE-19711.07.patch, HIVE-19711.08.patch > > > HiveSchemaTool is a 1500-line class that tries to do everything. It should > be split into multiple classes, each handling a smaller component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-19711: -- Attachment: HIVE-19711.08.patch > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch, HIVE-19711.02.patch, > HIVE-19711.03.patch, HIVE-19711.04.patch, HIVE-19711.05.patch, > HIVE-19711.06.patch, HIVE-19711.07.patch, HIVE-19711.08.patch > > > HiveSchemaTool is a 1500-line class that tries to do everything. It should > be split into multiple classes, each handling a smaller component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529251#comment-16529251 ] Hive QA commented on HIVE-20038: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929882/HIVE-20038.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14641 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12303/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12303/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12303/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929882 - PreCommit-HIVE-Build > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch, HIVE-20038.2.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. 
> Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
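The failure mode described above — null "holes" among the per-bucket output paths that FSOp.commit() then dereferences — can be sketched in miniature. This is a hedged toy model, not Hive's actual FileSinkOperator code: the class and method names below are illustrative, and the "fix" (skipping null entries) is only one plausible shape for what a patch might do.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a commit loop over per-bucket out-paths. For non-bucketed
// ACID tables, delete deltas computed from ROW__ID can leave "holes"
// (null entries) for buckets that received no rows; dereferencing such an
// entry is the NPE site seen in the stack trace above.
class CommitSketch {
    static List<String> commit(String[] outPaths) {
        List<String> committed = new ArrayList<>();
        for (String path : outPaths) {
            if (path == null) {
                continue; // hole: this bucket produced no delete-delta rows
            }
            // In real Hive this step would rename the tmp path to its final
            // path; a null entry here would throw a NullPointerException.
            committed.add(path);
        }
        return committed;
    }
}
```

With the guard in place, a path array with a hole commits only the populated buckets instead of failing the whole task.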
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529249#comment-16529249 ] Alexander Kolbasov commented on HIVE-17751: --- We have a problem with this patch - it is huge and touches many files. By the time it reaches its slot in the test queue, someone commits conflicting code and the patch fails to merge. patch.04 is applied on top of {code} * commit 455b71e4a2a51838818b77786566297b1fabb233 (origin/master, origin/HEAD) | Author: Peter Vary | Date: Sun Jul 1 22:27:37 2018 +0200 | | HIVE-20034: Roll back MetaStore exception handling changes for backward compatibility (Peter Vary, reviewed by Sergey Shelukhin) | {code} > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, > HIVE-17751.06-standalone-metastore.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-17751: -- Attachment: HIVE-17751.04.patch > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, HIVE-17751.02.patch, > HIVE-17751.03.patch, HIVE-17751.04.patch, > HIVE-17751.06-standalone-metastore.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20045) Update hidden config list
[ https://issues.apache.org/jira/browse/HIVE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20045: Attachment: HIVE-20045.patch > Update hidden config list > - > > Key: HIVE-20045 > URL: https://issues.apache.org/jira/browse/HIVE-20045 > Project: Hive > Issue Type: Task > Components: Configuration >Reporter: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-20045.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20045) Update hidden config list
[ https://issues.apache.org/jira/browse/HIVE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20045: Assignee: Ashutosh Chauhan Status: Patch Available (was: Open) > Update hidden config list > - > > Key: HIVE-20045 > URL: https://issues.apache.org/jira/browse/HIVE-20045 > Project: Hive > Issue Type: Task > Components: Configuration >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-20045.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529236#comment-16529236 ] Hive QA commented on HIVE-20038: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 4s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 24 new + 215 unchanged - 0 fixed = 239 total (was 215) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12303/dev-support/hive-personality.sh | | git revision | master / 455b71e | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12303/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12303/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch, HIVE-20038.2.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. 
> Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529231#comment-16529231 ] Hive QA commented on HIVE-19267: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929881/HIVE-19267.21.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14633 tests executed *Failed tests:* {noformat} TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=190) [druidmini_dynamic_partition.q,druidmini_expressions.q,druidmini_test_alter.q,druidmini_test1.q,druidmini_test_insert.q] {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12302/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12302/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12302/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929881 - PreCommit-HIVE-Build > Create/Replicate ACID Write event > - > > Key: HIVE-19267 > URL: https://issues.apache.org/jira/browse/HIVE-19267 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, > HIVE-19267.02.patch, HIVE-19267.03.patch, HIVE-19267.04.patch, > HIVE-19267.05.patch, HIVE-19267.06.patch, HIVE-19267.07.patch, > HIVE-19267.08.patch, HIVE-19267.09.patch, HIVE-19267.10.patch, > HIVE-19267.11.patch, HIVE-19267.12.patch, HIVE-19267.13.patch, > HIVE-19267.14.patch, HIVE-19267.15.patch, HIVE-19267.16.patch, > HIVE-19267.17.patch, HIVE-19267.18.patch, HIVE-19267.19.patch, > HIVE-19267.20.patch, HIVE-19267.21.patch > > > > h1. Replicate ACID Write Events > * Create a new EVENT_WRITE event with a corresponding message format to log the write > operations within a txn, along with the associated data. > * Log this event when performing any writes (insert into, insert overwrite, > load table, delete, update, merge, truncate) on a table/partition. > * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple > partitions, then one event needs to be logged per partition. > * DbNotificationListener should log this type of event to a special metastore > table named "MTxnWriteNotificationLog". > * This table should maintain a map of txn ID to the list of > tables/partitions written by a given txn. > * The entry for a given txn should be removed by the cleaner thread that > removes the expired events from EventNotificationTable. > h1. Replicate Commit Txn operation (with writes) > Add a new EVENT_COMMIT_TXN event to log the metadata/data of all tables/partitions > modified within the txn. 
> *Source warehouse:* > * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" > metastore table to consolidate the list of tables/partitions modified within > this txn scope. > * Based on the list of tables/partitions modified and table Write ID, need > to compute the list of delta files added by this txn. > * Repl dump should read this message and dump the metadata and delta files > list. > *Target warehouse:* > * Ensure snapshot isolation at target for on-going read txns which shouldn't > view the data replicated from committed txn. (Ensured with open and allocate > write ID events). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
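The description above assigns MTxnWriteNotificationLog the role of mapping a txn ID to the tables/partitions it wrote, with a cleaner thread removing expired entries. A minimal, hypothetical Java sketch of that bookkeeping follows — the class and method names are invented for illustration and are not Hive's actual metastore code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the txn-to-writes map described for
// "MTxnWriteNotificationLog" (names invented; not Hive's actual code).
public class TxnWriteLogSketch {
    private final Map<Long, List<String>> writesByTxn = new HashMap<>();

    // Record one write event for a table/partition under the given txn.
    public void logWrite(long txnId, String tableOrPartition) {
        writesByTxn.computeIfAbsent(txnId, k -> new ArrayList<>())
                   .add(tableOrPartition);
    }

    // Consolidate the tables/partitions modified within the txn scope,
    // as the commit-txn event handling would need to do.
    public List<String> writesFor(long txnId) {
        return writesByTxn.getOrDefault(txnId, Collections.emptyList());
    }

    // What the cleaner thread would do once the txn's events expire.
    public void expire(long txnId) {
        writesByTxn.remove(txnId);
    }
}
```

The point of the structure is that a single commit can consolidate every partition-level EVENT_WRITE logged under its txn ID in one lookup.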
[jira] [Updated] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-20034: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks for the review [~sershe]! > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-20034.2.patch, HIVE-20034.patch > > > HIVE-19418 changed the exceptions thrown by the HiveMetaStoreClient.createTable and > alterTable methods. > For backward compatibility we should revert these changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
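The backward-compatibility concern above is that callers compiled against the old client expect the old exception types from createTable/alterTable, so a changed throws-contract breaks them. A small, hypothetical Java illustration of the general pattern — both exception types here are invented stand-ins, not Hive's actual MetaStore exceptions:

```java
// Hypothetical illustration of rolling back an exception-contract change
// (invented names; not the actual HiveMetaStoreClient code).
public class CompatSketch {
    // Old exception type that existing callers were compiled against.
    static class OldMetaException extends Exception {
        OldMetaException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for a newer exception type introduced by a refactor.
    static class NewValidationException extends Exception {
        NewValidationException(String msg) { super(msg); }
    }

    // Old-style API surface: translate the new exception back into the old
    // type so existing catch blocks keep working unchanged.
    public static void createTableCompat(boolean invalid) throws OldMetaException {
        try {
            if (invalid) {
                throw new NewValidationException("invalid table definition");
            }
        } catch (NewValidationException e) {
            throw new OldMetaException(e.getMessage(), e);
        }
    }
}
```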
[jira] [Updated] (HIVE-19926) Remove deprecated hcatalog streaming
[ https://issues.apache.org/jira/browse/HIVE-19926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19926: - Attachment: HIVE-19926.4.patch > Remove deprecated hcatalog streaming > > > Key: HIVE-19926 > URL: https://issues.apache.org/jira/browse/HIVE-19926 > Project: Hive > Issue Type: Improvement > Components: Streaming >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19926.1.patch, HIVE-19926.2.patch, > HIVE-19926.3.patch, HIVE-19926.4.patch > > > hcatalog streaming is deprecated in 3.0.0. We should remove it in 4.0.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20038) Update queries on non-bucketed + partitioned tables throws NPE
[ https://issues.apache.org/jira/browse/HIVE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-20038: - Attachment: HIVE-20038.2.patch > Update queries on non-bucketed + partitioned tables throws NPE > -- > > Key: HIVE-20038 > URL: https://issues.apache.org/jira/browse/HIVE-20038 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.2.0 >Reporter: Kavan Suresh >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-20038.1.patch, HIVE-20038.2.patch > > > With HIVE-19890 delete deltas of non-bucketed tables are computed from > ROW__ID. This can create holes in output paths (and final paths) in > FSOp.commit() resulting in NPE. > Following is the exception > {code:java} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commitOneOutPath(FileSinkOperator.java:246) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:235) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$400(FileSinkOperator.java:168) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1325) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:383){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
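The stack trace above fails inside FSPaths.commitOneOutPath, and the description attributes it to "holes" in the output-path arrays when delete deltas are computed from ROW__ID on non-bucketed tables. A minimal, hypothetical Java sketch of the failure mode and the obvious guard — this is not Hive's actual FileSinkOperator code, just an illustration of committing a sparse per-bucket array:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the actual FileSinkOperator): committing
// per-bucket output paths where non-bucketed tables can leave null "holes",
// because only some bucket slots received rows via ROW__ID.
public class CommitSketch {
    // Returns the paths actually committed; dereferencing a hole without
    // this null check is the kind of thing that produces the NPE above.
    public static List<String> commitAll(String[] outPaths) {
        List<String> committed = new ArrayList<>();
        for (String p : outPaths) {
            if (p == null) {
                continue; // a hole: nothing was written for this bucket slot
            }
            committed.add(p);
        }
        return committed;
    }
}
```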
[jira] [Updated] (HIVE-19792) Enable schema evolution tests for decimal 64
[ https://issues.apache.org/jira/browse/HIVE-19792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19792: - Attachment: HIVE-19792.3.patch > Enable schema evolution tests for decimal 64 > > > Key: HIVE-19792 > URL: https://issues.apache.org/jira/browse/HIVE-19792 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19792.1.patch, HIVE-19792.2.patch, > HIVE-19792.3.patch > > > Following tests are disabled in HIVE-19629 as orc ConvertTreeReaderFactory > does not handle Decimal64ColumnVectors. This jira is to re-enable those tests > after orc supports it. > 1) type_change_test_int_vectorized.q > 2) type_change_test_int.q > 3) orc_schema_evolution_float.q > 4) schema_evol_orc_nonvec_part_all_primitive.q > 5) schema_evol_orc_nonvec_part_all_primitive_llap_io.q > 6) schema_evol_orc_vec_part_all_primitive.q > 7) schema_evol_orc_vec_part_all_primitive_llap_io.q > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529207#comment-16529207 ] Hive QA commented on HIVE-19267: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 8s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 2s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} hcatalog/server-extensions in master has 4 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} The patch common passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} standalone-metastore: The patch generated 7 new + 2187 unchanged - 6 fixed = 2194 total (was 2193) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 52s{color} | {color:red} ql: The patch generated 6 new + 1252 unchanged - 19 fixed = 1258 total (was 1271) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} hcatalog/server-extensions: The patch generated 0 new + 4 unchanged - 3 fixed = 4 total (was 7) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch hcatalog-unit passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} itests/hive-unit: The patch generated 3 new + 654 unchanged - 5 fixed = 657 total (was 659) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 44 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 12s{color} | {color:red} ql generated 1 new + 2286 unchanged - 1 fixed = 2287 total (was 2287) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s{color} | {color:red} hcatalog/server-extensions generated 1 new + 2 unchanged - 2 fixed = 3 total (was 4) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Found reliance on default encoding in
[jira] [Commented] (HIVE-20020) Hive contrib jar should not be in lib
[ https://issues.apache.org/jira/browse/HIVE-20020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529203#comment-16529203 ] Shawn Weeks commented on HIVE-20020: MultiDelimitSerDe has been in contrib for several years now; what is needed to actually make it a proper SerDe so we can just eliminate the issue altogether? I know it still has a rather nasty bug in it and I've gone to using the Hive split() function instead, but I don't think it's that much work to fix that. > Hive contrib jar should not be in lib > - > > Key: HIVE-20020 > URL: https://issues.apache.org/jira/browse/HIVE-20020 > Project: Hive > Issue Type: Improvement > Components: Contrib >Reporter: Johndee Burks >Priority: Trivial > > Currently, the way Hive is packaged, it includes hive-contrib-.jar in > lib. We should not include it here because it is picked up by services like > HS2. This creates a situation in which experimental features such as the > [MultiDelimitSerDe|https://github.com/apache/hive/blob/master/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/MultiDelimitSerDe.java] > are accessible without understanding how to really install and use them. For > example, you can create a table using HS2 via beeline with the aforementioned > SerDe and it will work as long as you do not run M/R jobs. The M/R jobs do not > work because the SerDe is not in aux to get shipped into distcache. I propose > we do not package it this way; if someone would like to leverage an > experimental feature they can add it manually to their environment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529193#comment-16529193 ] Hive QA commented on HIVE-19812: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929878/HIVE-19812.11.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14643 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12301/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12301/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12301/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message 
is automatically generated. ATTACHMENT ID: 12929878 - PreCommit-HIVE-Build > Disable external table replication by default via a configuration property > -- > > Key: HIVE-19812 > URL: https://issues.apache.org/jira/browse/HIVE-19812 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, > HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, > HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, > HIVE-19812.08.patch, HIVE-19812.09.patch, HIVE-19812.10.patch, > HIVE-19812.11.patch > > > Use a Hive config property to allow external table replication; set this > property by default to prevent external table replication. > For metadata-only dumps, Hive repl always exports metadata for external tables. > > REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false, > "Indicates if repl dump should include information about external tables. It > should be \n" > + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if > 'hive.repl.dump.metadata.only' \n" > + " is set to true then this config parameter has no effect as external table > meta data is flushed \n" > + " always by default.") > This should be done only for the replication dump and not for export -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529186#comment-16529186 ] Hive QA commented on HIVE-19812: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 55s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 110 unchanged - 0 fixed = 112 total (was 110) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12301/dev-support/hive-personality.sh | | git revision | master / 08eba3e | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12301/yetus/diff-checkstyle-itests_hive-unit.txt | | modules | C: common ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12301/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Disable external table replication by default via a configuration property > -- > > Key: HIVE-19812 > URL: https://issues.apache.org/jira/browse/HIVE-19812 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, > HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, > HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, > HIVE-19812.08.patch, HIVE-19812.09.patch, HIVE-19812.10.patch, > HIVE-19812.11.patch > > > Use a Hive config property to allow external table replication; set this > property by default to prevent external table replication. > For metadata-only dumps, Hive repl always exports metadata for external tables. > > REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false, > "Indicates if repl dump should include information about external tables. It > should be \n" > + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if > 'hive.repl.dump.metadata.only' \n" > + " is set to true then this config parameter has no effect as external table > meta data is flushed \n" > + " always by default.") > This should be done only for the replication dump and not for export -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19267: --- Attachment: HIVE-19267.21.patch > Create/Replicate ACID Write event > - > > Key: HIVE-19267 > URL: https://issues.apache.org/jira/browse/HIVE-19267 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, > HIVE-19267.02.patch, HIVE-19267.03.patch, HIVE-19267.04.patch, > HIVE-19267.05.patch, HIVE-19267.06.patch, HIVE-19267.07.patch, > HIVE-19267.08.patch, HIVE-19267.09.patch, HIVE-19267.10.patch, > HIVE-19267.11.patch, HIVE-19267.12.patch, HIVE-19267.13.patch, > HIVE-19267.14.patch, HIVE-19267.15.patch, HIVE-19267.16.patch, > HIVE-19267.17.patch, HIVE-19267.18.patch, HIVE-19267.19.patch, > HIVE-19267.20.patch, HIVE-19267.21.patch > > > > h1. Replicate ACID write Events > * Create a new EVENT_WRITE event with a related message format to log the write > operations within a txn, along with the associated data. > * Log this event when performing any writes (insert into, insert overwrite, > load table, delete, update, merge, truncate) on a table/partition. > * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple > partitions, then one event needs to be logged per partition. > * DbNotificationListener should log this type of event to a special metastore > table named "MTxnWriteNotificationLog". > * This table should maintain a map of txn ID to the list of > tables/partitions written by the given txn. > * The entry for a given txn should be removed by the cleaner thread that > removes the expired events from EventNotificationTable. > h1. Replicate Commit Txn operation (with writes) > Add a new EVENT_COMMIT_TXN event to log the metadata/data of all tables/partitions > modified within the txn. 
> *Source warehouse:* > * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" > metastore table to consolidate the list of tables/partitions modified within > this txn scope. > * Based on the list of tables/partitions modified and table Write ID, need > to compute the list of delta files added by this txn. > * Repl dump should read this message and dump the metadata and delta files > list. > *Target warehouse:* > * Ensure snapshot isolation at target for on-going read txns which shouldn't > view the data replicated from committed txn. (Ensured with open and allocate > write ID events). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19812: --- Attachment: HIVE-19812.11.patch > Disable external table replication by default via a configuration property > -- > > Key: HIVE-19812 > URL: https://issues.apache.org/jira/browse/HIVE-19812 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, > HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, > HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, > HIVE-19812.08.patch, HIVE-19812.09.patch, HIVE-19812.10.patch, > HIVE-19812.11.patch > > > Use a Hive config property to allow external table replication; set this > property by default to prevent external table replication. > For metadata-only dumps, Hive repl always exports metadata for external tables. > > REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false, > "Indicates if repl dump should include information about external tables. It > should be \n" > + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if > 'hive.repl.dump.metadata.only' \n" > + " is set to true then this config parameter has no effect as external table > meta data is flushed \n" > + " always by default.") > This should be done only for the replication dump and not for export -- This message was sent by Atlassian JIRA (v7.6.3#76005)
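The interaction described in the HIVE-19812 config comment — the new flag defaults to false, but has no effect when hive.repl.dump.metadata.only is true because external table metadata is always flushed — reduces to a small predicate. A hypothetical Java sketch of that decision (the method name is invented; this is not Hive's actual repl dump code):

```java
// Hypothetical sketch of the external-table dump decision described in the
// HIVE-19812 config comment (invented names; not the actual repl dump code).
public class ReplDumpSketch {
    public static boolean shouldDumpExternalTable(boolean metadataOnly,
                                                  boolean includeExternalTables) {
        if (metadataOnly) {
            // hive.repl.dump.metadata.only=true: external table metadata is
            // always flushed, so the include flag has no effect.
            return true;
        }
        // hive.repl.dump.include.external.tables defaults to false,
        // disabling external table replication by default.
        return includeExternalTables;
    }
}
```

Note that, per the description, this gate would apply only to the replication dump path, not to plain export.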
[jira] [Updated] (HIVE-19995) Aggregate row traffic for acid tables
[ https://issues.apache.org/jira/browse/HIVE-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19995: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Zoltan! > Aggregate row traffic for acid tables > - > > Key: HIVE-19995 > URL: https://issues.apache.org/jira/browse/HIVE-19995 > Project: Hive > Issue Type: Sub-task > Components: Statistics, Transactions >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19995.01.patch, HIVE-19995.01wip01.patch, > HIVE-19995.01wip01.patch, HIVE-19995.01wip02.patch, HIVE-19995.02.patch, > HIVE-19995.02.patch, HIVE-19995.03.patch, HIVE-19995.03.patch, > HIVE-19995.03.patch, HIVE-19995.03.patch > > > For transactional tables we store basic stats in case of an explicit > analyze/rewrite, but don't do anything in other cases...which may even > lead to plans which OOM... > It would be better to aggregate the total row traffic...because that is > already available; so that operator tree estimations could work with a real > upper bound of the row numbers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: Patch Available (was: In Progress) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
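The HIVE-19951 description above states the rule plainly: encoded ORC reading does not support data type conversion, so both the encoded read and cache population must be skipped whenever schema evolution requires converting a file column to the reader schema. A hypothetical Java sketch of that gate — the class and parameters are invented for illustration and are not the actual LLAP I/O code:

```java
// Hypothetical sketch of the gating rule from HIVE-19951 (invented names;
// not the actual LLAP I/O or ConvertTreeReaderFactory code): encoded ORC
// reading is only safe when the file column type already matches the
// reader schema, i.e. no schema-evolution conversion is needed.
public class EncodedIoGateSketch {
    public static boolean canUseEncodedReader(String fileColumnType,
                                              String readerColumnType) {
        boolean needsConversion = !fileColumnType.equals(readerColumnType);
        // Conversion is unsupported on the encoded path, so fall back to the
        // non-encoded reader and do not populate the cache.
        return !needsConversion;
    }
}
```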
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19951: Status: In Progress (was: Patch Available) > Vectorization: Need to disable encoded LLAP I/O for ORC when there is data > type conversion (Schema Evolution) > -- > > Key: HIVE-19951 > URL: https://issues.apache.org/jira/browse/HIVE-19951 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, > HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, > HIVE-19951.06.patch, HIVE-19951.07.patch, HIVE-19951.08.patch, > HIVE-19951.09.patch > > > Currently, reading encoded ORC data does not support data type conversion. > So, encoded reading and cache populating needs to be disabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529152#comment-16529152 ] Hive QA commented on HIVE-19267: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929867/HIVE-19267.20.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14623 tests executed *Failed tests:* {noformat} TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) (batchId=240) TestStorageBasedMetastoreAuthorizationProvider - did not produce a TEST-*.xml file (likely timed out) (batchId=240) org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testDynamicPartitionsMerge (batchId=306) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12300/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12300/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12300/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This 
message is automatically generated. ATTACHMENT ID: 12929867 - PreCommit-HIVE-Build > Create/Replicate ACID Write event > - > > Key: HIVE-19267 > URL: https://issues.apache.org/jira/browse/HIVE-19267 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, > HIVE-19267.02.patch, HIVE-19267.03.patch, HIVE-19267.04.patch, > HIVE-19267.05.patch, HIVE-19267.06.patch, HIVE-19267.07.patch, > HIVE-19267.08.patch, HIVE-19267.09.patch, HIVE-19267.10.patch, > HIVE-19267.11.patch, HIVE-19267.12.patch, HIVE-19267.13.patch, > HIVE-19267.14.patch, HIVE-19267.15.patch, HIVE-19267.16.patch, > HIVE-19267.17.patch, HIVE-19267.18.patch, HIVE-19267.19.patch, > HIVE-19267.20.patch > > > > h1. Replicate ACID write Events > * Create a new EVENT_WRITE event with a related message format to log the write > operations within a txn, along with the associated data. > * Log this event when performing any writes (insert into, insert overwrite, > load table, delete, update, merge, truncate) on a table/partition. > * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple > partitions, then one event needs to be logged per partition. > * DbNotificationListener should log this type of event to a special metastore > table named "MTxnWriteNotificationLog". > * This table should maintain a map of txn ID to the list of > tables/partitions written by the given txn. > * The entry for a given txn should be removed by the cleaner thread that > removes the expired events from EventNotificationTable. > h1. Replicate Commit Txn operation (with writes) > Add a new EVENT_COMMIT_TXN event to log the metadata/data of all tables/partitions > modified within the txn. 
> *Source warehouse:* > * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" > metastore table to consolidate the list of tables/partitions modified within > this txn scope. > * Based on the list of tables/partitions modified and table Write ID, need > to compute the list of delta files added by this txn. > * Repl dump should read this message and dump the metadata and delta files > list. > *Target warehouse:* > * Ensure snapshot isolation at target for on-going read txns which shouldn't > view the data replicated from committed txn. (Ensured with open and allocate > write ID events). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
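The bookkeeping proposed above (a per-txn map of written tables/partitions in "MTxnWriteNotificationLog", consolidated at commit and purged by the same cleaner that expires notification events) can be sketched in a few lines. This is a conceptual illustration only; the class and method names below are invented for the sketch and are not Hive's actual API.

```python
import time


class TxnWriteNotificationLog:
    """Toy model of the proposed MTxnWriteNotificationLog table:
    maps a txn ID to the tables/partitions it wrote, so a later
    EVENT_COMMIT_TXN can consolidate them. Illustrative names only."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # txn_id -> (created_at, [(table, partition), ...])

    def log_write(self, txn_id, table, partition):
        # One event per (table, partition): a multi-partition
        # MERGE/UPDATE/INSERT/DELETE records one entry per partition touched.
        _, writes = self.entries.setdefault(txn_id, (time.time(), []))
        writes.append((table, partition))

    def writes_for_commit(self, txn_id):
        # At commit time, consolidate everything written within this txn.
        return self.entries.get(txn_id, (None, []))[1]

    def clean_expired(self, now=None):
        # Mirrors the cleaner thread that removes expired notification events.
        now = time.time() if now is None else now
        expired = [t for t, (ts, _) in self.entries.items()
                   if now - ts > self.ttl]
        for t in expired:
            del self.entries[t]
        return expired
```

A commit handler would call `writes_for_commit` to build the list of modified tables/partitions, then use the table write IDs to locate the delta files to dump.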
[jira] [Commented] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529147#comment-16529147 ] Hive QA commented on HIVE-19267: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 3s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 52s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} hcatalog/server-extensions in master has 4 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} The patch common passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} standalone-metastore: The patch generated 7 new + 2187 unchanged - 6 fixed = 2194 total (was 2193) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s{color} | {color:red} ql: The patch generated 6 new + 1252 unchanged - 19 fixed = 1258 total (was 1271) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} hcatalog/server-extensions: The patch generated 0 new + 4 unchanged - 3 fixed = 4 total (was 7) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch hcatalog-unit passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s{color} | {color:red} itests/hive-unit: The patch generated 34 new + 654 unchanged - 5 fixed = 688 total (was 659) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 44 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 6s{color} | {color:red} ql generated 1 new + 2286 unchanged - 1 fixed = 2287 total (was 2287) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} hcatalog/server-extensions generated 1 new + 2 unchanged - 2 fixed = 3 total (was 4) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Found reliance on default encoding in
[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19267: --- Attachment: HIVE-19267.20.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19267: --- Attachment: (was: HIVE-19267.20.patch) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529106#comment-16529106 ] Hive QA commented on HIVE-20034: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929860/HIVE-20034.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14638 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12299/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12299/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12299/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929860 - PreCommit-HIVE-Build > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Attachments: HIVE-20034.2.patch, HIVE-20034.patch > > > HIVE-19418 changed the exceptions thrown by the HiveMetaStoreClient.createTable > and alterTable methods. > For backward compatibility we should revert these changes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
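The compatibility concern behind this rollback is that existing callers are compiled against the old exception contract of the client methods. One general pattern for preserving such a contract (a sketch in Python, not Hive's actual code; the exception class names here stand in for Thrift-generated ones) is to translate newer exception types back into the legacy type at the client boundary:

```python
class MetaException(Exception):
    """Legacy exception type that existing callers already catch."""


class InvalidObjectException(Exception):
    """Newer, more specific exception introduced by a later change."""


def create_table_compat(create_table_impl, table):
    # Translate the newer exception back into the legacy one, so code
    # written against the old client contract keeps working unchanged.
    try:
        return create_table_impl(table)
    except InvalidObjectException as e:
        raise MetaException(str(e)) from e
```

Reverting the change outright, as this issue does, achieves the same goal without a translation layer.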
[jira] [Commented] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529093#comment-16529093 ] Hive QA commented on HIVE-20034: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 54s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12299/dev-support/hive-personality.sh | | git revision | master / 1c33fea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12299/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Attachments: HIVE-20034.2.patch, HIVE-20034.patch > > > HIVE-19418 changed thrown exceptions by HiveMetaStoreClient.createTable, > alterTable method. > For backward compatibility we should revert these changes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-20034: -- Attachment: HIVE-20034.2.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19995) Aggregate row traffic for acid tables
[ https://issues.apache.org/jira/browse/HIVE-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529072#comment-16529072 ] Hive QA commented on HIVE-19995: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929858/HIVE-19995.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14639 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12298/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12298/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12298/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12929858 - PreCommit-HIVE-Build > Aggregate row traffic for acid tables > - > > Key: HIVE-19995 > URL: https://issues.apache.org/jira/browse/HIVE-19995 > Project: Hive > Issue Type: Sub-task > Components: Statistics, Transactions >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19995.01.patch, HIVE-19995.01wip01.patch, > HIVE-19995.01wip01.patch, HIVE-19995.01wip02.patch, HIVE-19995.02.patch, > HIVE-19995.02.patch, HIVE-19995.03.patch, HIVE-19995.03.patch, > HIVE-19995.03.patch, HIVE-19995.03.patch > > > For transactional tables we store basic stats in case of an explicit > analyze/rewrite, but don't do anything in other cases... which may even > lead to plans which OOM... > It would be better to aggregate the total row traffic, because that is > already available, so that operator tree estimations could work with a real > upper bound of the row numbers. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
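The idea of using aggregated row traffic as a safe upper bound for a transactional table's row count (instead of stale or missing basic stats) can be illustrated with a small sketch. The delta representation below is deliberately simplified and is not Hive's actual data structure:

```python
def estimated_row_upper_bound(deltas):
    """Each delta here records the rows inserted and deleted by one txn.
    Summing inserted rows over all deltas ignores deletes, so it
    overcounts, but it is a true upper bound that the planner can use
    for operator tree estimations instead of having no estimate at all."""
    return sum(d["inserted"] for d in deltas)
```

Because the bound never undercounts, plans sized from it cannot underestimate memory needs the way a missing estimate can.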
[jira] [Commented] (HIVE-19995) Aggregate row traffic for acid tables
[ https://issues.apache.org/jira/browse/HIVE-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529053#comment-16529053 ] Hive QA commented on HIVE-19995: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 36s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12298/dev-support/hive-personality.sh | | git revision | master / 1c33fea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-12298/yetus/whitespace-eol.txt | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12298/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Aggregate row traffic for acid tables > - > > Key: HIVE-19995 > URL: https://issues.apache.org/jira/browse/HIVE-19995 > Project: Hive > Issue Type: Sub-task > Components: Statistics, Transactions >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19995.01.patch, HIVE-19995.01wip01.patch, > HIVE-19995.01wip01.patch, HIVE-19995.01wip02.patch, HIVE-19995.02.patch, > HIVE-19995.02.patch, HIVE-19995.03.patch, HIVE-19995.03.patch, > HIVE-19995.03.patch, HIVE-19995.03.patch > > > for transactional tables we store basic stats in case of explicit > analyze/rewrite; but doesn't do anything in other caseswhich may even > lead to plans which oom... > It would be better to aggregate the total row traffic...because that is > already available; so that operator tree estimations could work with a real > upper bound of the row numbers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19995) Aggregate row traffic for acid tables
[ https://issues.apache.org/jira/browse/HIVE-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529046#comment-16529046 ] Zoltan Haindrich commented on HIVE-19995: - these druid tests seem to be failing even without the patch... reattaching -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19995) Aggregate row traffic for acid tables
[ https://issues.apache.org/jira/browse/HIVE-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19995: Attachment: HIVE-19995.03.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529022#comment-16529022 ] Hive QA commented on HIVE-17593: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929711/HIVE-17593.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 14638 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_parquet_types] (batchId=69) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=247) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12297/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12297/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12297/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 11 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12929711 - PreCommit-HIVE-Build > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strips spaces for the CHAR type before writing, while the > predicate generator does NOT do the same stripping, which could cause missing > data! > In the current version it doesn't cause missing data, since the predicate is not > pushed down to Parquet due to HIVE-17261. > Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING as the > same, which will build a predicate with trailing spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
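The mismatch described in this issue can be reproduced conceptually: the writer strips trailing spaces from CHAR values before storing them, but a predicate built from a CHAR literal keeps its padding, so an exact-match filter evaluated against the stored (stripped) value silently misses rows. A Python sketch of the effect (Hive's actual code lives in its Java writer and search-argument builder; the function names here are invented for illustration):

```python
def write_char(value, length):
    # Writer side: apply CHAR(n) semantics (pad/truncate to n), then
    # strip trailing spaces before writing, as described for the writer.
    return value.ljust(length)[:length].rstrip()


def predicate_matches(stored, literal, length, strip_literal):
    # Predicate side: if the literal is not stripped the same way as the
    # stored value, an equality filter never matches.
    padded = literal.ljust(length)[:length]
    return stored == (padded.rstrip() if strip_literal else padded)
```

Stripping on both sides (or neither) restores consistent comparisons, which is the symmetry the issue asks for.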
[jira] [Updated] (HIVE-20044) Arrow Serde should pad char values and handle empty strings correctly
[ https://issues.apache.org/jira/browse/HIVE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-20044: -- Attachment: HIVE-20044.patch > Arrow Serde should pad char values and handle empty strings correctly > - > > Key: HIVE-20044 > URL: https://issues.apache.org/jira/browse/HIVE-20044 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Attachments: HIVE-20044.patch > > > When the Arrow Serde serializes char values, it loses padding. Also, when it > counts empty strings, it sometimes produces a smaller count. It should pad char > values and handle empty strings correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
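The two fixes this issue asks for (re-pad CHAR values to their declared length on serialization, and count empty strings explicitly rather than letting them vanish) can be sketched as follows. This is an illustration of the intended behavior, not the actual Arrow Serde implementation:

```python
def serialize_char_column(values, char_length):
    """Pad each CHAR value back to its declared length so padding is not
    lost, and count empty strings explicitly so they are not undercounted."""
    padded = [v.ljust(char_length)[:char_length] for v in values]
    empty_count = sum(1 for v in values if v == "")
    return padded, empty_count
```

Keeping the pad on the serialized side means a reader comparing against a padded CHAR literal sees consistent values.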
[jira] [Updated] (HIVE-20044) Arrow Serde should pad char values and handle empty strings correctly
[ https://issues.apache.org/jira/browse/HIVE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-20044: -- Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20044) Arrow Serde should pad char values and handle empty strings correctly
[ https://issues.apache.org/jira/browse/HIVE-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi reassigned HIVE-20044: - -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.
[ https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529016#comment-16529016 ] Hive QA commented on HIVE-17593: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 47s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12297/dev-support/hive-personality.sh | | git revision | master / 1c33fea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12297/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > DataWritableWriter strip spaces for CHAR type before writing, but predicate > generator doesn't do same thing. > > > Key: HIVE-17593 > URL: https://issues.apache.org/jira/browse/HIVE-17593 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.0, 3.0.0 >Reporter: Junjie Chen >Assignee: Junjie Chen >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-17593.patch > > > DataWritableWriter strip spaces for CHAR type before writing. While when > generating predicate, it does NOT do same striping which should cause data > missing! > In current version, it doesn't cause data missing since predicate is not well > push down to parquet due to HIVE-17261. 
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING as the same, which builds a predicate with trailing spaces. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
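The asymmetry described in this issue can be sketched in a few lines: if the writer strips trailing spaces from CHAR data before persisting it, but the predicate generator pushes down the padded literal unchanged, an equality predicate silently misses rows. This is a toy model of that mismatch, with illustrative names; it is not the actual DataWritableWriter or ConvertAstToSearchArg code.

```java
// Toy model of the writer/predicate asymmetry described above
// (illustrative names; not Hive's actual Parquet writer code).
public class CharPredicateMismatch {

    // The writer strips trailing spaces from CHAR values before writing.
    static String stripTrailingSpaces(String s) {
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') {
            end--;
        }
        return s.substring(0, end);
    }

    public static void main(String[] args) {
        String stored = stripTrailingSpaces("abc  ");  // persisted as "abc"
        String predicateLiteral = "abc  ";             // CHAR(5) literal, still padded

        // Pushed down unchanged, the padded literal never matches the stored value:
        System.out.println(stored.equals(predicateLiteral));                       // false
        // Stripping the literal the same way restores the match:
        System.out.println(stored.equals(stripTrailingSpaces(predicateLiteral)));  // true
    }
}
```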
[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event
[ https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19267: --- Attachment: HIVE-19267.20.patch > Create/Replicate ACID Write event > - > > Key: HIVE-19267 > URL: https://issues.apache.org/jira/browse/HIVE-19267 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, > HIVE-19267.02.patch, HIVE-19267.03.patch, HIVE-19267.04.patch, > HIVE-19267.05.patch, HIVE-19267.06.patch, HIVE-19267.07.patch, > HIVE-19267.08.patch, HIVE-19267.09.patch, HIVE-19267.10.patch, > HIVE-19267.11.patch, HIVE-19267.12.patch, HIVE-19267.13.patch, > HIVE-19267.14.patch, HIVE-19267.15.patch, HIVE-19267.16.patch, > HIVE-19267.17.patch, HIVE-19267.18.patch, HIVE-19267.19.patch, > HIVE-19267.20.patch > > > > h1. Replicate ACID write Events > * Create new EVENT_WRITE event with related message format to log the write > operations with in a txn along with data associated. > * Log this event when perform any writes (insert into, insert overwrite, > load table, delete, update, merge, truncate) on table/partition. > * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple > partitions, then need to log one event per partition. > * DbNotificationListener should log this type of event to special metastore > table named "MTxnWriteNotificationLog". > * This table should maintain a map of txn ID against list of > tables/partitions written by given txn. > * The entry for a given txn should be removed by the cleaner thread that > removes the expired events from EventNotificationTable. > h1. Replicate Commit Txn operation (with writes) > Add new EVENT_COMMIT_TXN to log the metadata/data of all tables/partitions > modified within the txn. 
> *Source warehouse:* > * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" > metastore table to consolidate the list of tables/partitions modified within > this txn scope. > * Based on the list of tables/partitions modified and table Write ID, need > to compute the list of delta files added by this txn. > * Repl dump should read this message and dump the metadata and delta files > list. > *Target warehouse:* > * Ensure snapshot isolation at target for on-going read txns which shouldn't > view the data replicated from committed txn. (Ensured with open and allocate > write ID events). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
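The "one event per partition" rule in the design above can be sketched as follows. The event format, class, and method names here are illustrative only; they are not Hive's actual DbNotificationListener or MTxnWriteNotificationLog code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the per-partition write-event rule described above
// (illustrative names; not Hive's actual notification-log code).
public class TxnWriteLog {
    static final List<String> events = new ArrayList<>();

    // A single INSERT/UPDATE/DELETE/MERGE touching several partitions logs
    // one EVENT_WRITE per partition, all under the same transaction id, so
    // the log maps txn id -> list of tables/partitions it wrote.
    static void logWrites(long txnId, String table, List<String> partitions) {
        for (String part : partitions) {
            events.add("EVENT_WRITE txn=" + txnId + " table=" + table + " part=" + part);
        }
    }

    public static void main(String[] args) {
        logWrites(42L, "sales", Arrays.asList("dt=2018-06-30", "dt=2018-07-01"));
        System.out.println(events.size());  // 2
    }
}
```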
[jira] [Commented] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529011#comment-16529011 ] Hive QA commented on HIVE-20034: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929703/HIVE-20034.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14633 tests executed *Failed tests:* {noformat} TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=190) [druidmini_dynamic_partition.q,druidmini_expressions.q,druidmini_test_alter.q,druidmini_test1.q,druidmini_test_insert.q] {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12296/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12296/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12296/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12929703 - PreCommit-HIVE-Build > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Attachments: HIVE-20034.patch > > > HIVE-19418 changed thrown exceptions by HiveMetaStoreClient.createTable, > alterTable method. > For backward compatibility we should revert these changes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20034) Roll back MetaStore exception handling changes for backward compatibility
[ https://issues.apache.org/jira/browse/HIVE-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529001#comment-16529001 ] Hive QA commented on HIVE-20034: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 52s{color} | {color:blue} standalone-metastore in master has 228 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12296/dev-support/hive-personality.sh | | git revision | master / 1c33fea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12296/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Roll back MetaStore exception handling changes for backward compatibility > - > > Key: HIVE-20034 > URL: https://issues.apache.org/jira/browse/HIVE-20034 > Project: Hive > Issue Type: Bug >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Minor > Attachments: HIVE-20034.patch > > > HIVE-19418 changed thrown exceptions by HiveMetaStoreClient.createTable, > alterTable method. > For backward compatibility we should revert these changes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20033) Backport HIVE-19432 to branch-2, branch-3
[ https://issues.apache.org/jira/browse/HIVE-20033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528995#comment-16528995 ] Hive QA commented on HIVE-20033: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929702/HIVE-20033.1.branch-2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12295/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12295/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12295/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-01 06:58:50.584 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12295/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-01 06:58:50.587 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 1c33fea HIVE-19970: Replication dump has a NPE when table is empty (Mahesh Kumar Behera, reviewed by Peter Vary, Sankar Hariappan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 1c33fea HIVE-19970: Replication dump has a NPE when table is empty (Mahesh Kumar Behera, reviewed by Peter Vary, Sankar Hariappan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-01 06:58:51.232 + rm -rf ../yetus_PreCommit-HIVE-Build-12295 + mkdir ../yetus_PreCommit-HIVE-Build-12295 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12295 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12295/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java:102 Falling back to three-way merge... Applied patch to 'service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java' with conflicts. Going to apply patch with: git apply -p0 error: patch failed: service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java:102 Falling back to three-way merge... Applied patch to 'service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java' with conflicts. 
U service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-12295 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12929702 - PreCommit-HIVE-Build > Backport HIVE-19432 to branch-2, branch-3 > - > > Key: HIVE-20033 > URL: https://issues.apache.org/jira/browse/HIVE-20033 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20033.1.branch-2.patch, HIVE-20033.1.branch-3.patch > > > Backport HIVE-19432 to branch-2, branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528994#comment-16528994 ] Hive QA commented on HIVE-19975: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929692/HIVE-19975.01.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12294/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12294/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12294/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-07-01 06:57:09.250 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12294/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-07-01 06:57:09.253 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 1c33fea HIVE-19970: Replication dump has a NPE when table is empty (Mahesh Kumar Behera, reviewed by Peter Vary, Sankar Hariappan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 1c33fea HIVE-19970: Replication dump has a NPE when table is empty (Mahesh Kumar Behera, reviewed by Peter Vary, Sankar Hariappan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-07-01 06:57:10.883 + rm -rf ../yetus_PreCommit-HIVE-Build-12294 + mkdir ../yetus_PreCommit-HIVE-Build-12294 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12294 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12294/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java:12347 Falling back to three-way merge... Applied patch to 'standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java' with conflicts. error: patch failed: storage-api/src/java/org/apache/hive/common/util/TxnIdUtils.java:73 Falling back to three-way merge... Applied patch to 'storage-api/src/java/org/apache/hive/common/util/TxnIdUtils.java' cleanly. Going to apply patch with: git apply -p0 error: patch failed: standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java:12347 Falling back to three-way merge... 
Applied patch to 'standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java' with conflicts. error: patch failed: storage-api/src/java/org/apache/hive/common/util/TxnIdUtils.java:73 Falling back to three-way merge... Applied patch to 'storage-api/src/java/org/apache/hive/common/util/TxnIdUtils.java' cleanly. U standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-12294 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12929692 - PreCommit-HIVE-Build > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table > --- > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19975.01.patch > > > writeIdList is per table entity but stats
[jira] [Commented] (HIVE-19733) RemoteSparkJobStatus#getSparkStageProgress inefficient implementation
[ https://issues.apache.org/jira/browse/HIVE-19733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528993#comment-16528993 ] Hive QA commented on HIVE-19733: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929689/HIVE-19733.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14608 tests executed *Failed tests:* {noformat} TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=155) [vector_udf_octet_length.q,schema_evol_orc_acidvec_table_update.q,vector_decimal_5.q,escape1.q,schema_evol_orc_acid_table_update_llap_io.q,cte_mat_5.q,smb_mapjoin_19.q,vector_string_decimal.q,results_cache_lifetime.q,cross_prod_3.q,join46.q,dynpart_sort_optimization2.q,tez_bmj_schema_evolution.q,insert_into_default_keyword.q,bucketmapjoin4.q,vector_include_no_sel.q,vector_orc_null_check.q,semijoin7.q,uber_reduce.q,schema_evol_orc_nonvec_part_all_complex.q,vector_interval_arithmetic.q,is_distinct_from.q,materialized_view_create_rewrite_5.q,schema_evol_text_vec_part_all_complex_llap_io.q,auto_sortmerge_join_3.q,vectorization_9.q,materialized_view_create_rewrite.q,merge2.q,join_nulls.q,bucketmapjoin2.q] {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12293/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12293/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12293/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929689 - PreCommit-HIVE-Build > RemoteSparkJobStatus#getSparkStageProgress inefficient implementation > - > > Key: HIVE-19733 > URL: https://issues.apache.org/jira/browse/HIVE-19733 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19733.1.patch > > > The implementation of {{RemoteSparkJobStatus#getSparkStageProgress}} is a bit > inefficient. There is one RPC call to get the {{SparkJobInfo}} and then for > every stage there is another RPC call to get each {{SparkStageInfo}}. This > could all be done in a single RPC call. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
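The round-trip cost difference described in this issue is easy to see with a toy model: fetching stage info one stage at a time costs N RPCs, while a batched call costs one. The class and method names below are illustrative, not the actual RemoteSparkJobStatus API; the counter stands in for real network round trips.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the RPC pattern discussed above (illustrative names; not the
// actual RemoteSparkJobStatus API): per-stage fetches cost one round trip
// each, while a batched fetch costs a single round trip for all stages.
public class StageProgressRpc {
    static int rpcCalls = 0;

    static String remoteStageInfo(int stageId) {
        rpcCalls++;                          // one simulated round trip
        return "stage-" + stageId;
    }

    static List<String> remoteAllStageInfos(List<Integer> stageIds) {
        rpcCalls++;                          // single simulated round trip
        List<String> infos = new ArrayList<>();
        for (int id : stageIds) {
            infos.add("stage-" + id);
        }
        return infos;
    }

    // The inefficient shape: one RPC for the job info, then one per stage.
    static List<String> fetchPerStage(List<Integer> stageIds) {
        List<String> infos = new ArrayList<>();
        for (int id : stageIds) {
            infos.add(remoteStageInfo(id));  // N round trips for N stages
        }
        return infos;
    }
}
```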
[jira] [Commented] (HIVE-20019) Remove commons-logging and move to slf4j
[ https://issues.apache.org/jira/browse/HIVE-20019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528991#comment-16528991 ] Hive QA commented on HIVE-20019: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 52s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} branch/shims/common cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s{color} | {color:red} branch/shims/0.23 cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} branch/shims/scheduler cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} branch/common cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} branch/serde cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 58s{color} | 
{color:red} branch/standalone-metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} branch/metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} branch/llap-common cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} branch/llap-client cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} branch/llap-tez cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} branch/spark-client cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 47s{color} | {color:red} branch/ql cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} branch/llap-server cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} branch/service cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} branch/accumulo-handler cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} branch/jdbc cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s{color} | {color:red} branch/beeline cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 
0m 46s{color} | {color:red} branch/cli cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} branch/contrib cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 44s{color} | {color:red} branch/druid-handler cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 47s{color} | {color:red} branch/hbase-handler cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s{color} | {color:red} branch/jdbc-handler cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 16s{color} | {color:red} branch/hcatalog no
[jira] [Commented] (HIVE-19733) RemoteSparkJobStatus#getSparkStageProgress inefficient implementation
[ https://issues.apache.org/jira/browse/HIVE-19733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528988#comment-16528988 ] Hive QA commented on HIVE-19733: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 47s{color} | {color:blue} ql in master has 2287 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12293/dev-support/hive-personality.sh | | git revision | master / 1c33fea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12293/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > RemoteSparkJobStatus#getSparkStageProgress inefficient implementation > - > > Key: HIVE-19733 > URL: https://issues.apache.org/jira/browse/HIVE-19733 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19733.1.patch > > > The implementation of {{RemoteSparkJobStatus#getSparkStageProgress}} is a bit > inefficient. There is one RPC call to get the {{SparkJobInfo}} and then for > every stage there is another RPC call to get each {{SparkStageInfo}}. This > could all be done in a single RPC call. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20042) HiveServer2: All operations lock on a Single HiveConf object
[ https://issues.apache.org/jira/browse/HIVE-20042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-20042: --- Labels: Concurrency (was: ) > HiveServer2: All operations lock on a Single HiveConf object > - > > Key: HIVE-20042 > URL: https://issues.apache.org/jira/browse/HIVE-20042 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Gopal V >Priority: Major > Labels: Concurrency > > With the 1000 user test, the session start/tear-down runs only at 100% CPU, > which is due to all threads locking on the same HiveConf object. > OpenSession locks on 0x0005c091a3a0 > {code} > "HiveServer2-HttpHandler-Pool: Thread-65084" #65084 prio=5 os_prio=0 > tid=0x103bb000 nid=0x4a09 waiting for monitor entry > [0x7fc1b0987000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.conf.Configuration.getOverlay(Configuration.java:1418) > - waiting to lock <0x0005c091a3a0> (a > org.apache.hadoop.hive.conf.HiveConf) > at > org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:711) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1437) > at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4996) > at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5069) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.getUserName(ThriftCLIService.java:424) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:467) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:325) > {code} > GetOperationStatus locks on the same > {code} > "HiveServer2-HttpHandler-Pool: Thread-65082" #65082 prio=5 os_prio=0 > tid=0x7fc2656be000 nid=0x4a06 waiting for monitor entry > [0x7fc3159db000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.conf.Configuration.getOverlay(Configuration.java:1418) > - waiting to lock <0x0005c091a3a0> (a > 
org.apache.hadoop.hive.conf.HiveConf) > at > org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:711) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1437) > at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4996) > at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5069) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.GetOperationStatus(ThriftCLIService.java:709) > {code} > Session clean up locks on the same > {code} > "8fd1db09-9f96-49dc-becf-5702826bd4f5 HiveServer2-HttpHandler-Pool: > Thread-64981" #64981 prio=5 os_prio=0 tid=0x1d1ab000 nid=0x23d5 > waiting for monitor entry [0x7fc1b65e3000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.conf.Configuration.getOverlay(Configuration.java:1418) > - waiting to lock <0x0005c091a3a0> (a > org.apache.hadoop.hive.conf.HiveConf) > at > org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:711) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1177) > at > org.apache.hadoop.conf.Configuration.getTrimmedStringCollection(Configuration.java:2122) > at > org.apache.hadoop.hdfs.DFSUtilClient.getNameServiceIds(DFSUtilClient.java:197) > at org.apache.hadoop.hdfs.HAUtilClient.isLogicalUri(HAUtilClient.java:53) > ... 
> at > org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:959) > at org.apache.hadoop.hive.ql.Context.clear(Context.java:724) > {code} > Hadoop RPC blocks on the same > {code} > "HiveServer2-HttpHandler-Pool: Thread-59227" #59227 prio=5 os_prio=0 > tid=0x7fc270aeb800 nid=0x129b waiting for monitor entry > [0x7fc28b7b5000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.conf.Configuration.getOverlay(Configuration.java:1418) > - waiting to lock <0x0005c091a3a0> (a > org.apache.hadoop.hive.conf.HiveConf) > at > org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:711) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1177) > at > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1234) > at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1459) > at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:451) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1532) > ... > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1580) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734) > at
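The thread dumps above all block in `Configuration.getOverlay`, because Hadoop `Configuration` getters synchronize on the instance, so every handler thread reading the one shared `HiveConf` queues on the same monitor (`0x0005c091a3a0`). A common mitigation is to give each session its own copy of the configuration, analogous to `new HiveConf(conf)`. The sketch below models this with a hypothetical `SharedConf` class rather than the real `Configuration`, purely to illustrate the copy-per-session idea.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the HIVE-20042 contention: a config object whose
// getters synchronize on the instance, and the copy-per-session
// mitigation. SharedConf is a hypothetical stand-in, not Hadoop's
// Configuration class.
public class ConfContentionSketch {

    static class SharedConf {
        private final Map<String, String> props = new HashMap<>();

        SharedConf() {}

        // Copy constructor, analogous in spirit to new HiveConf(conf):
        // snapshot the source once, then never touch its monitor again.
        SharedConf(SharedConf other) {
            synchronized (other) {
                props.putAll(other.props);
            }
        }

        // Mirrors the shape of Configuration.get(): synchronized on this,
        // so concurrent readers of one shared instance serialize here.
        synchronized String get(String key) { return props.get(key); }

        synchronized void set(String key, String val) { props.put(key, val); }
    }

    public static void main(String[] args) throws InterruptedException {
        SharedConf global = new SharedConf();
        global.set("hive.server2.authentication", "NONE");

        // Each session reads its own private copy, so the hot read path
        // no longer contends on the single global monitor.
        Runnable session = () -> {
            SharedConf local = new SharedConf(global); // one lock acquisition
            for (int i = 0; i < 1000; i++) {
                local.get("hive.server2.authentication"); // uncontended
            }
        };
        Thread a = new Thread(session);
        Thread b = new Thread(session);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("sessions ran without contending on the shared conf");
    }
}
```

The copy is a snapshot: later writes to the global object are not visible to existing sessions, which is usually the desired semantics for per-session config anyway.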