[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171135#comment-16171135 ]

Tao Li commented on HIVE-17496:
-------------------------------

Test results look good now.

> Bootstrap repl is not cleaning up staging dirs
> ----------------------------------------------
>
>          Key: HIVE-17496
>          URL: https://issues.apache.org/jira/browse/HIVE-17496
>      Project: Hive
>   Issue Type: Bug
>   Components: repl
>     Reporter: Tao Li
>     Assignee: Tao Li
>  Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch, HIVE-17496.7.patch
>
> This will put more pressure on the HDFS file limit.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Updated] (HIVE-17483) HS2 kill command to kill queries using query id
[ https://issues.apache.org/jira/browse/HIVE-17483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Teddy Choi updated HIVE-17483:
------------------------------
    Attachment: HIVE-17483.4.patch

> HS2 kill command to kill queries using query id
> -----------------------------------------------
>
>          Key: HIVE-17483
>          URL: https://issues.apache.org/jira/browse/HIVE-17483
>      Project: Hive
>   Issue Type: Bug
>   Components: HiveServer2
>     Reporter: Thejas M Nair
>     Assignee: Teddy Choi
>  Attachments: HIVE-17483.1.patch, HIVE-17483.2.patch, HIVE-17483.2.patch, HIVE-17483.3.patch, HIVE-17483.4.patch
>
> For administrators, it is important to be able to kill queries if required. Currently, there is no clean way to do it.
> It would help to have a "kill query <query id>" command that can be run over ODBC/JDBC against a HiveServer2 instance, to kill a query with that query id running in that instance.
> Authorization will have to be done to ensure that the user invoking the API is allowed to perform this action. In the case of SQL standard authorization, this would require the admin role.
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171110#comment-16171110 ]

Hive QA commented on HIVE-17496:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887752/HIVE-17496.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11043 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=170)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6876/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6876/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6876/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887752 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-17529) Bucket Map Join : Sets incorrect edge type causing execution failure
[ https://issues.apache.org/jira/browse/HIVE-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171057#comment-16171057 ]

Hive QA commented on HIVE-17529:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887746/HIVE-17529.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 11042 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=170)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=137)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testWriteChar (batchId=183)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6875/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6875/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6875/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887746 - PreCommit-HIVE-Build

> Bucket Map Join : Sets incorrect edge type causing execution failure
> --------------------------------------------------------------------
>
>          Key: HIVE-17529
>          URL: https://issues.apache.org/jira/browse/HIVE-17529
>      Project: Hive
>   Issue Type: Bug
>     Reporter: Deepak Jaiswal
>     Assignee: Deepak Jaiswal
>  Attachments: HIVE-17529.1.patch, HIVE-17529.2.patch, HIVE-17529.3.patch
>
> While traversing the tree to generate tasks, a bucket map join may set its edge type to CUSTOM_SIMPLE_EDGE instead of CUSTOM_EDGE if the big table has not yet been traversed, causing Tez to assert and fail the vertex.
[jira] [Commented] (HIVE-17549) Use SHA-256 for RowContainer to improve security
[ https://issues.apache.org/jira/browse/HIVE-17549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171051#comment-16171051 ]

Saijin Huang commented on HIVE-17549:
-------------------------------------

[~lirui], can you take a look?

> Use SHA-256 for RowContainer to improve security
> ------------------------------------------------
>
>          Key: HIVE-17549
>          URL: https://issues.apache.org/jira/browse/HIVE-17549
>      Project: Hive
>   Issue Type: Bug
>   Affects Versions: 3.0.0
>     Reporter: Saijin Huang
>     Assignee: Saijin Huang
>  Attachments: HIVE-17549.1.patch
>
> Use SHA-256 to replace MD5 for RowContainer to improve security.
[jira] [Commented] (HIVE-17542) Make HoS CombineEquivalentWorkResolver Configurable
[ https://issues.apache.org/jira/browse/HIVE-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171000#comment-16171000 ]

Rui Li commented on HIVE-17542:
-------------------------------

Thanks [~stakiar] for the work. +1

> Make HoS CombineEquivalentWorkResolver Configurable
> ---------------------------------------------------
>
>          Key: HIVE-17542
>          URL: https://issues.apache.org/jira/browse/HIVE-17542
>      Project: Hive
>   Issue Type: Improvement
>   Components: Physical Optimizer, Spark
>     Reporter: Sahil Takiar
>     Assignee: Sahil Takiar
>  Attachments: HIVE-17542.1.patch, HIVE-17542.2.patch
>
> The {{CombineEquivalentWorkResolver}} is run by default. We should make it configurable so that users can disable it in case there are any issues. We can enable it by default to preserve backwards compatibility.
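The on-by-default configuration gate described in HIVE-17542 can be sketched as follows. The property name and the {{shouldRun}} hook are illustrative only, not Hive's actual ConfVars or resolver API; the point is that an absent flag preserves today's behavior (resolver runs) while an explicit {{false}} disables it.

```java
import java.util.Properties;

// Illustrative sketch of gating an optimizer pass behind a config flag that
// defaults to enabled, preserving backwards compatibility. The property name
// is hypothetical, not an actual Hive ConfVar.
public class ConfigGatedResolver {
    static final String FLAG = "hive.combine.equivalent.work.optimization";

    // Run the resolver unless the flag is explicitly set to false.
    public static boolean shouldRun(Properties conf) {
        return Boolean.parseBoolean(conf.getProperty(FLAG, "true"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(shouldRun(conf));   // no flag set: resolver runs
        conf.setProperty(FLAG, "false");
        System.out.println(shouldRun(conf));   // explicitly disabled
    }
}
```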
[jira] [Commented] (HIVE-17508) Implement pool rules and triggers based on counters
[ https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170998#comment-16170998 ]

Prasanth Jayachandran commented on HIVE-17508:
----------------------------------------------

[~sershe] Made some minor changes so that WM can now set the rules using the SQLOperation object. The way I think about this is: WM -> SessionManager -> get all operations -> set rules. The Driver will always get a copy of the "current" rules for validation. When a rule is violated, a RuleViolationException will be thrown, which will trigger cancellation of the SQLOperation. Once HIVE-17386 and the SQL for rule creation are committed, I will update the patch with more integration changes + tests.

> Implement pool rules and triggers based on counters
> ---------------------------------------------------
>
>          Key: HIVE-17508
>          URL: https://issues.apache.org/jira/browse/HIVE-17508
>      Project: Hive
>   Issue Type: Sub-task
>   Affects Versions: 3.0.0
>     Reporter: Prasanth Jayachandran
>     Assignee: Prasanth Jayachandran
>  Attachments: HIVE-17508.1.patch, HIVE-17508.2.patch, HIVE-17508.WIP.2.patch, HIVE-17508.WIP.patch
>
> Workload management can define Rules that are bound to a resource plan. Each rule can have a trigger expression and an action associated with it. Trigger expressions are evaluated at runtime after a configurable check interval, based on which actions like killing a query, moving a query to a different pool, etc. will get invoked. A simple rule could be something like
> {code}
> CREATE RULE slow_query IN resource_plan_name
> WHEN execution_time_ms > 1
> MOVE TO slow_queue
> {code}
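The counter-based trigger idea in HIVE-17508 can be sketched as below. The {{Trigger}} class, field names, and threshold are all hypothetical stand-ins, not Hive's actual workload-management classes; the sketch only shows a rule that fires an action once a query counter exceeds a threshold, matching the shape of the {{CREATE RULE}} example in the issue.

```java
import java.util.Map;

// Minimal sketch of a counter-based trigger rule. Class and field names are
// illustrative, not Hive's actual WM API.
public class Trigger {
    final String counter;   // e.g. "execution_time_ms"
    final long threshold;   // rule fires when the counter exceeds this value
    final String action;    // e.g. "MOVE TO slow_queue" or "KILL"

    Trigger(String counter, long threshold, String action) {
        this.counter = counter;
        this.threshold = threshold;
        this.action = action;
    }

    // Evaluated periodically against the query's current counters; returns
    // the action when the rule is violated, null otherwise.
    String evaluate(Map<String, Long> counters) {
        Long value = counters.get(counter);
        return (value != null && value > threshold) ? action : null;
    }

    public static void main(String[] args) {
        Trigger slowQuery = new Trigger("execution_time_ms", 10_000, "MOVE TO slow_queue");
        // A query running for 60s violates the 10s rule.
        System.out.println(slowQuery.evaluate(Map.of("execution_time_ms", 60_000L)));
    }
}
```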
[jira] [Updated] (HIVE-17508) Implement pool rules and triggers based on counters
[ https://issues.apache.org/jira/browse/HIVE-17508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prasanth Jayachandran updated HIVE-17508:
-----------------------------------------
    Attachment: HIVE-17508.2.patch
[jira] [Commented] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170990#comment-16170990 ]

Hive QA commented on HIVE-17112:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887741/HIVE-17112.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 11042 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=170)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6874/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6874/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887741 - PreCommit-HIVE-Build

> Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
> ------------------------------------------------------------------
>
>          Key: HIVE-17112
>          URL: https://issues.apache.org/jira/browse/HIVE-17112
>      Project: Hive
>   Issue Type: Improvement
>   Components: Spark
>     Reporter: Sahil Takiar
>     Assignee: Sahil Takiar
>  Attachments: HIVE-17112.1.patch
>
> HiveSparkClientFactory has the following line that introduces excess logging:
> {code}
> LOG.info(String.format(
>     "load spark property from %s (%s -> %s).",
>     SPARK_DEFAULT_CONF_FILE, propertyName,
>     LogUtils.maskIfPassword(propertyName, value)));
> {code}
> It basically dumps the entire configuration object to the logs; we can probably change this from INFO to DEBUG.
> The same thing happens in {{RemoteHiveSparkClient#logConfigurations}}.
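The fix proposed in HIVE-17112 amounts to two things: demote the per-property dump from INFO to DEBUG (ideally behind an {{isDebugEnabled()}} guard), and keep masking password-like values. A standalone sketch of the masking half, assuming a simplified {{maskIfPassword}} that only mirrors the idea of Hive's {{LogUtils.maskIfPassword}} (the real implementation may match keys differently):

```java
import java.util.Locale;

// Sketch of password-masking for config dumps. In the real code the call
// would sit inside an `if (LOG.isDebugEnabled()) { LOG.debug(...); }` guard
// instead of an unconditional LOG.info. This mask heuristic is an assumption.
public class ConfigLogging {
    static String maskIfPassword(String key, String value) {
        String k = key.toLowerCase(Locale.ROOT);
        return (k.contains("password") || k.contains("secret")) ? "###" : value;
    }

    public static void main(String[] args) {
        System.out.println(maskIfPassword("spark.executor.memory", "4g"));
        System.out.println(maskIfPassword("spark.ssl.keyPassword", "hunter2"));
    }
}
```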
[jira] [Commented] (HIVE-17214) check/fix conversion of unbucketed non-acid to acid
[ https://issues.apache.org/jira/browse/HIVE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170954#comment-16170954 ]

Eugene Koifman commented on HIVE-17214:
---------------------------------------

Non-acid to acid conversion works, except for TestAcidOnTez.testNonStandardConversion02, which tests data files found at different levels (root and subdirs). For some reason it works locally but not in ptest.

> check/fix conversion of unbucketed non-acid to acid
> ---------------------------------------------------
>
>          Key: HIVE-17214
>          URL: https://issues.apache.org/jira/browse/HIVE-17214
>      Project: Hive
>   Issue Type: Sub-task
>   Components: Transactions
>     Reporter: Eugene Koifman
>     Assignee: Eugene Koifman
>
> Bucketed tables have stricter rules for file layout on disk - bucket files are direct children of a partition directory.
> For un-bucketed tables I'm not sure there are any rules. For example, CTAS with Tez + a Union operator creates one directory for each leg of the union.
> Supposedly Hive can read a table by picking up all files recursively. Can it also write (other than the CTAS example above) arbitrarily? Does that mean an acid write can also write anywhere?
> Figure out what can be supported and how the existing layout can be checked. Examining a full "ls -l -R" for a large table could be expensive.
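The read-side behavior discussed above ("picking up all files recursively") can be illustrated with a small local-filesystem sketch. It uses {{java.nio}} as a stand-in for an HDFS listing, and the subdirectory name is only an example of a union-leg directory, not a claim about Hive's naming:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: recursively collect every regular file under a table directory,
// so data files in union-leg subdirectories are found alongside root-level
// bucket files. Local java.nio stands in for an HDFS listing.
public class RecursiveListing {
    static List<Path> listDataFiles(Path tableDir) throws IOException {
        try (Stream<Path> paths = Files.walk(tableDir)) {
            return paths.filter(Files::isRegularFile)
                        .sorted()
                        .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path t = Files.createTempDirectory("t");
        Files.createDirectories(t.resolve("union_subdir_1")); // e.g. one leg of a union
        Files.writeString(t.resolve("000000_0"), "root-level data file");
        Files.writeString(t.resolve("union_subdir_1/000000_0"), "union-leg data file");
        // Both the root-level file and the nested one are picked up.
        System.out.println(listDataFiles(t).size());
    }
}
```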
[jira] [Updated] (HIVE-15899) Make CTAS with acid target table and insert into acid_tbl select ... union all ... work
[ https://issues.apache.org/jira/browse/HIVE-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-15899:
----------------------------------
    Summary: Make CTAS with acid target table and insert into acid_tbl select ... union all ... work  (was: check CTAS over acid table)

> Make CTAS with acid target table and insert into acid_tbl select ... union all ... work
> ---------------------------------------------------------------------------------------
>
>          Key: HIVE-15899
>          URL: https://issues.apache.org/jira/browse/HIVE-15899
>      Project: Hive
>   Issue Type: Sub-task
>     Reporter: Eugene Koifman
>     Assignee: Eugene Koifman
>  Attachments: HIVE-15899.01.patch, HIVE-15899.02.patch, HIVE-15899.03.patch, HIVE-15899.04.patch, HIVE-15899.05.patch, HIVE-15899.07.patch, HIVE-15899.08.patch, HIVE-15899.09.patch, HIVE-15899.10.patch, HIVE-15899.11.patch
>
> need to add a test to check if create table as works correctly with acid tables
[jira] [Updated] (HIVE-15899) Make CTAS with acid target table and insert into acid_tbl select ... union all ... work
[ https://issues.apache.org/jira/browse/HIVE-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-15899:
----------------------------------
    Attachment: HIVE-15899.11.patch
[jira] [Commented] (HIVE-17483) HS2 kill command to kill queries using query id
[ https://issues.apache.org/jira/browse/HIVE-17483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170948#comment-16170948 ]

Hive QA commented on HIVE-17483:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887698/HIVE-17483.3.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11049 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hive.jdbc.TestJdbcDriver2.testSelectExecAsync2 (batchId=225)
org.apache.hive.service.cli.session.TestHiveSessionImpl.testLeakOperationHandle (batchId=223)
org.apache.hive.service.cli.session.TestQueryDisplay.testQueryDisplay (batchId=223)
org.apache.hive.service.cli.session.TestQueryDisplay.testWebUI (batchId=223)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics (batchId=197)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionMetrics (batchId=197)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionTimeMetrics (batchId=197)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testOpenSessionMetrics (batchId=197)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testOpenSessionTimeMetrics (batchId=197)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6873/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6873/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887698 - PreCommit-HIVE-Build
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vineet Garg updated HIVE-17535:
-------------------------------
    Attachment: HIVE-17535.4.patch

> Select 1 EXCEPT Select 1 fails with NPE
> ---------------------------------------
>
>          Key: HIVE-17535
>          URL: https://issues.apache.org/jira/browse/HIVE-17535
>      Project: Hive
>   Issue Type: Bug
>   Components: Query Planning
>     Reporter: Vineet Garg
>     Assignee: Vineet Garg
>  Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, HIVE-17535.3.patch, HIVE-17535.4.patch
>
> Since Hive CBO isn't able to handle queries with no table, e.g. {{select 1}}, queries with SET operators fail (INTERSECT requires CBO).
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vineet Garg updated HIVE-17535:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vineet Garg updated HIVE-17535:
-------------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-16898) Validation of source file after distcp in repl load
[ https://issues.apache.org/jira/browse/HIVE-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai updated HIVE-16898:
------------------------------
    Attachment: HIVE-16898.3.patch

> Validation of source file after distcp in repl load
> ---------------------------------------------------
>
>          Key: HIVE-16898
>          URL: https://issues.apache.org/jira/browse/HIVE-16898
>      Project: Hive
>   Issue Type: Bug
>   Components: HiveServer2
>   Affects Versions: 3.0.0
>     Reporter: anishek
>     Assignee: Daniel Dai
>      Fix For: 3.0.0
>  Attachments: HIVE-16898.1.patch, HIVE-16898.2.patch, HIVE-16898.3.patch
>
> Between deciding the source and destination paths for distcp and actually invoking distcp, the source file can change, so distcp might copy the wrong file to the destination. We should therefore add a check on the checksum of the source file path after distcp finishes, to make sure the file did not change during the copy. If it did change, take additional steps: delete the previous file on the destination, copy the new source, and repeat the same process until the correct file is copied.
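The verify-after-copy loop described in HIVE-16898 can be sketched as below. The {{Fs}} interface is a hypothetical stand-in for {{FileSystem.getFileChecksum}} plus a DistCp invocation, not Hive's actual code; the sketch only shows the snapshot-copy-recheck-retry control flow.

```java
// Sketch of copy-with-validation: snapshot the source checksum, copy, then
// re-read the checksum. If the source changed mid-copy, delete the stale
// destination and retry. The Fs hooks are hypothetical stand-ins for
// FileSystem.getFileChecksum and DistCp.
public class VerifiedCopy {
    interface Fs {
        String checksum(String path);
        void copy(String src, String dst);
        void delete(String path);
    }

    static boolean copyWithValidation(Fs fs, String src, String dst, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String before = fs.checksum(src);
            fs.copy(src, dst);
            if (before.equals(fs.checksum(src))) {
                return true;      // source was stable for the whole copy
            }
            fs.delete(dst);       // source changed mid-copy; discard and retry
        }
        return false;             // gave up after maxAttempts
    }
}
```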
[jira] [Commented] (HIVE-17214) check/fix conversion of unbucketed non-acid to acid
[ https://issues.apache.org/jira/browse/HIVE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170929#comment-16170929 ]

Eugene Koifman commented on HIVE-17214:
---------------------------------------

insert into T select ... union all ... can also create subdirs. HIVE-15899 has various tests demonstrating this.
[jira] [Resolved] (HIVE-17505) hive.optimize.union.remove=true doesn't work with insert into
[ https://issues.apache.org/jira/browse/HIVE-17505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman resolved HIVE-17505.
-----------------------------------
    Resolution: Invalid

this is a red herring

> hive.optimize.union.remove=true doesn't work with insert into
> --------------------------------------------------------------
>
>          Key: HIVE-17505
>          URL: https://issues.apache.org/jira/browse/HIVE-17505
>      Project: Hive
>   Issue Type: Bug
>     Reporter: Eugene Koifman
>     Assignee: Eugene Koifman
>
> add this to TestTxnNoBuckets (not related to Acid - just a repro)
> {noformat}
> @Test
> public void testToAcidConversionMultiBucket() throws Exception {
>   hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_OPTIMIZE_UNION_REMOVE, true);
>   hiveConf.setVar(HiveConf.ConfVars.HIVEFETCHTASKCONVERSION, "none");
>   d.close();
>   d = new Driver(hiveConf);
>   int[][] values = {{1,2},{3,4},{5,6},{7,8},{9,10}};
>   runStatementOnDriver("insert into " + Table.ACIDTBL + makeValuesClause(values));
>   runStatementOnDriver("drop table if exists T");
>   runStatementOnDriver("create table T (a int, b int) stored as ORC TBLPROPERTIES ('transactional'='false')");//todo: try with T bucketed
>   //runStatementOnDriver("insert into T select a,b from (" + "select a, b from " + Table.ACIDTBL + " where a <= 5 union all select a, b from " + Table.ACIDTBL + " where a >= 5" + ") S order by a, b");
>   runStatementOnDriver("insert into T(a,b) select a, b from " + Table.ACIDTBL + " where a between 1 and 3 group by a, b union all select a, b from " + Table.ACIDTBL + " where a between 5 and 7 union all select a, b from " + Table.ACIDTBL + " where a >= 9");
>   List rs = runStatementOnDriver("select a, b, INPUT__FILE__NAME from T order by a, b, INPUT__FILE__NAME");
>   LOG.warn("before converting to acid");
>   for(String s : rs) {
>     LOG.warn(s);
>   }
> {noformat}
> this creates
> {noformat}
> ekoifman:apache-hive-3.0.0-SNAPSHOT-bin ekoifman$ tree ~/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/
> /Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/
> ├── -ext-10002
> │   ├── 19
> │   │   └── 00_0
> │   ├── 20
> │   │   └── 00_0
> │   └── 21
> │       └── 00_0
> └── _tmp.-ext-1
> 5 directories, 3 files
> {noformat}
> but _Hive.copyFiles(HiveConf conf, Path srcf, Path destf, FileSystem fs, boolean isSrcLocal, boolean isAcid, List newFiles)_ bails out at
> {noformat}
> if (srcs == null) {
>   LOG.info("No sources specified to move: " + srcf);
>   return;
>   // srcs = new FileStatus[0]; Why is this needed?
> }
> {noformat}
> and so the table T ends up empty (because srcs is file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505156503971/warehouse/t/.hive-staging_hive_2017-09-11_12-02-47_021_1458754468823875082-1/-ext-1, not -ext-10002)
> {noformat}
> ekoifman:apache-hive-3.0.0-SNAPSHOT-bin ekoifman$ ./bin/hive --orcfiledump -d -j ~/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/Users/ekoifman/dev/hiverwcommit/packaging/target/apache-hive-3.0.0-SNAPSHOT-bin/apache-hive-3.0.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/Users/ekoifman/dev/hwxhadoop/hadoop-dist/target/hadoop-2.7.3.2.6.0.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Processing data file file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/-ext-10002/19/00_0 [length: 242]
> {"a":1,"b":2}
> {"a":3,"b":4}
> Processing data file file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/-ext-10002/20/00_0 [length: 243]
> {"a":7,"b":8}
> {"a":5,"b":6}
[jira] [Assigned] (HIVE-17505) hive.optimize.union.remove=true doesn't work with insert into
[ https://issues.apache.org/jira/browse/HIVE-17505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-17505: - Assignee: Eugene Koifman > hive.optimize.union.remove=true doesn't work with insert into > - > > Key: HIVE-17505 > URL: https://issues.apache.org/jira/browse/HIVE-17505 > Project: Hive > Issue Type: Bug >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > add this to TestTxnNoBuckets (not related to Acid - just a repro) > {noformat} > @Test > public void testToAcidConversionMultiBucket() throws Exception { > hiveConf.setBoolVar(HiveConf.ConfVars.HIVE_OPTIMIZE_UNION_REMOVE, true); > hiveConf.setVar(HiveConf.ConfVars.HIVEFETCHTASKCONVERSION, "none"); > d.close(); > d = new Driver(hiveConf); > int[][] values = {{1,2},{3,4},{5,6},{7,8},{9,10}}; > runStatementOnDriver("insert into " + Table.ACIDTBL + > makeValuesClause(values)); > runStatementOnDriver("drop table if exists T"); > runStatementOnDriver("create table T (a int, b int) stored as ORC > TBLPROPERTIES ('transactional'='false')");//todo: try with T bucketd > //runStatementOnDriver("insert into T select a,b from (" + "select a, b > from " + Table.ACIDTBL + " where a <= 5 union all select a, b from " + > Table.ACIDTBL + " where a >= 5" + ") S order by a, b"); > runStatementOnDriver("insert into T(a,b) select a, b from " + > Table.ACIDTBL + " where a between 1 and 3 group by a, b union all select a, b > from " + Table.ACIDTBL + " where a between 5 and 7 union all select a, b from > " + Table.ACIDTBL + " where a >= 9"); > List rs = runStatementOnDriver("select a, b, INPUT__FILE__NAME > from T order by a, b, INPUT__FILE__NAME"); > LOG.warn("before converting to acid"); > for(String s : rs) { > LOG.warn(s); > } > {noformat} > this creates > {noformat} > ekoifman:apache-hive-3.0.0-SNAPSHOT-bin ekoifman$ tree > 
~/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/ > /Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/ > ├── -ext-10002 > │ ├── 19 > │ │ └── 00_0 > │ ├── 20 > │ │ └── 00_0 > │ └── 21 > │ └── 00_0 > └── _tmp.-ext-1 > 5 directories, 3 files > {noformat} > but > _Hive.copyFiles(HiveConf conf, Path srcf, Path destf, FileSystem fs, boolean > isSrcLocal, boolean isAcid, List newFiles)_ > bails out at > {noformat} > if (srcs == null) { > LOG.info("No sources specified to move: " + srcf); > return; > // srcs = new FileStatus[0]; Why is this needed? > } > {noformat} > and so the table T ends up empty. (because srcs is > file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505156503971/warehouse/t/.hive-staging_hive_2017-09-11_12-02-47_021_1458754468823875082-1/-ext-1 > (not -ext-10002)) > {noformat} > ekoifman:apache-hive-3.0.0-SNAPSHOT-bin ekoifman$ ./bin/hive --orcfiledump -d > -j > ~/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/ > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/Users/ekoifman/dev/hiverwcommit/packaging/target/apache-hive-3.0.0-SNAPSHOT-bin/apache-hive-3.0.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/Users/ekoifman/dev/hwxhadoop/hadoop-dist/target/hadoop-2.7.3.2.6.0.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. 
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Processing data file > file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/-ext-10002/19/00_0 > [length: 242] > {"a":1,"b":2} > {"a":3,"b":4} > > Processing data file > file:/Users/ekoifman/dev/hiverwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnNoBuckets-1505153866252/warehouse/t/.hive-staging_hive_2017-09-11_11-18-48_614_1924461543400304640-1/-ext-10002/20/00_0 > [length: 243] > {"a":7,"b":8} > {"a":5,"b":6} > > Processing
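The orcfiledump output above confirms the bug: union-remove writes the data files one level down, under numbered subdirectories of {{-ext-10002}}, so a source listing that inspects only the expected level finds nothing and the move bails out. A minimal sketch of that mismatch in plain java.nio (the real code path goes through Hadoop's FileSystem and Hive.copyFiles; the directory names mirror the bug report, everything else is illustrative):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnionRemoveLayout {
    // Flat listing: what a non-recursive scan of the staging dir sees.
    static List<Path> flatList(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }

    // Recursive walk: finds the data files union-remove wrote one level down.
    static List<Path> recursiveList(Path dir) throws IOException {
        try (Stream<Path> s = Files.walk(dir)) {
            return s.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Mimic the staging layout from the bug report:
        // -ext-10002/19/00_0, -ext-10002/20/00_0, -ext-10002/21/00_0
        Path ext = Files.createTempDirectory("ext-10002");
        for (String sub : new String[]{"19", "20", "21"}) {
            Path d = Files.createDirectory(ext.resolve(sub));
            Files.write(d.resolve("00_0"), "data".getBytes());
        }
        System.out.println("flat: " + flatList(ext).size());           // 0 files seen
        System.out.println("recursive: " + recursiveList(ext).size()); // 3 files seen
    }
}
```

The flat listing sees only directories, which is analogous to the empty-source condition that makes copyFiles return early and leave table T empty.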
[jira] [Commented] (HIVE-17422) Skip non-native/temporary tables for all major table/partition related scenarios
[ https://issues.apache.org/jira/browse/HIVE-17422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170877#comment-16170877 ] Hive QA commented on HIVE-17422: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887565/HIVE-17422.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 11042 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6872/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6872/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6872/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12887565 - PreCommit-HIVE-Build > Skip non-native/temporary tables for all major table/partition related > scenarios > > > Key: HIVE-17422 > URL: https://issues.apache.org/jira/browse/HIVE-17422 > Project: Hive > Issue Type: Improvement > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Fix For: 3.0.0 > > Attachments: HIVE-17422.1.patch, HIVE-17422.2.patch, > HIVE-17422.3.patch, HIVE-17422.4.patch > > > Currently during incremental dump, the non-native/temporary table info is > partially dumped in metadata file and will be ignored later by the repl load. > We can optimize it by moving the check (whether the table should be exported > or not) earlier so that we don't save any info to dump file for such types of > tables. CreateTableHandler already has this optimization, so we just need to > apply similar logic to other scenarios. > The change is to apply the EximUtil.shouldExportTable check to all scenarios > (e.g. alter table) that calls into the common dump method. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
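The optimization described in the issue, checking whether a table should be exported before any metadata is written, can be sketched as follows. This is a simplified model, not Hive's actual code; the Table class and its fields are hypothetical stand-ins for the metadata the real EximUtil.shouldExportTable check consults:

```java
import java.util.ArrayList;
import java.util.List;

public class DumpFilter {
    // Minimal stand-in for Hive's table descriptor; field names are illustrative.
    static class Table {
        final String name;
        final boolean isTemporary;
        final boolean isNonNative; // e.g. backed by an external storage handler
        Table(String name, boolean isTemporary, boolean isNonNative) {
            this.name = name;
            this.isTemporary = isTemporary;
            this.isNonNative = isNonNative;
        }
    }

    // The should-export check, applied up front.
    static boolean shouldExport(Table t) {
        return !t.isTemporary && !t.isNonNative;
    }

    static List<String> dump(List<Table> tables) {
        List<String> dumped = new ArrayList<>();
        for (Table t : tables) {
            if (!shouldExport(t)) {
                continue; // skipped tables never reach the metadata file
            }
            dumped.add(t.name);
        }
        return dumped;
    }

    public static void main(String[] args) {
        List<Table> tables = List.of(
            new Table("orders", false, false),
            new Table("tmp_scratch", true, false),
            new Table("hbase_backed", false, true));
        System.out.println(dump(tables)); // only "orders" survives the filter
    }
}
```

Moving the predicate ahead of the dump, rather than filtering at load time, is what avoids writing partial info for such tables in the first place.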
[jira] [Updated] (HIVE-16898) Validation of source file after distcp in repl load
[ https://issues.apache.org/jira/browse/HIVE-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-16898: -- Attachment: HIVE-16898.2.patch Addressing Anishek's review comments. > Validation of source file after distcp in repl load > > > Key: HIVE-16898 > URL: https://issues.apache.org/jira/browse/HIVE-16898 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: Daniel Dai > Fix For: 3.0.0 > > Attachments: HIVE-16898.1.patch, HIVE-16898.2.patch > > > The time between deciding the source and destination paths for distcp and invoking distcp leaves a window in which the source file can change, so distcp might copy the wrong file to the destination. Hence we should do an additional check on the checksum of the source file path after distcp finishes, to make sure the file did not change during the copy process. If it did change, take additional steps to delete the previous file on the destination and copy the new source, repeating the same process until we copy the correct file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
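The copy-and-verify loop proposed in the issue can be sketched locally. This is a simplified stand-in: it uses SHA-256 over java.nio files, whereas the actual patch compares Hadoop file checksums around a distcp run:

```java
import java.io.IOException;
import java.nio.file.*;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class VerifiedCopy {
    static byte[] checksum(Path p) throws IOException, NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
    }

    // Copy src to dst, then re-checksum src; if src changed mid-copy,
    // discard the stale dst and copy again, up to maxAttempts.
    static void copyVerified(Path src, Path dst, int maxAttempts) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            byte[] before = checksum(src);
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            byte[] after = checksum(src);
            if (Arrays.equals(before, after)) {
                return; // source did not change during the copy
            }
            Files.deleteIfExists(dst); // stale copy: delete and retry
        }
        throw new IOException("source kept changing; giving up on " + src);
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempFile("src", ".txt");
        Path dst = Files.createTempFile("dst", ".txt");
        Files.write(src, "stable contents".getBytes());
        copyVerified(src, dst, 3);
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```

The retry bound matters: without it, a source that is rewritten continuously would keep the loop spinning forever.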
[jira] [Updated] (HIVE-17422) Skip non-native/temporary tables for all major table/partition related scenarios
[ https://issues.apache.org/jira/browse/HIVE-17422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-17422: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) +1. Patch pushed to master. > Skip non-native/temporary tables for all major table/partition related > scenarios > > > Key: HIVE-17422 > URL: https://issues.apache.org/jira/browse/HIVE-17422 > Project: Hive > Issue Type: Improvement > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Fix For: 3.0.0 > > Attachments: HIVE-17422.1.patch, HIVE-17422.2.patch, > HIVE-17422.3.patch, HIVE-17422.4.patch > > > Currently during incremental dump, the non-native/temporary table info is > partially dumped in metadata file and will be ignored later by the repl load. > We can optimize it by moving the check (whether the table should be exported > or not) earlier so that we don't save any info to dump file for such types of > tables. CreateTableHandler already has this optimization, so we just need to > apply similar logic to other scenarios. > The change is to apply the EximUtil.shouldExportTable check to all scenarios > (e.g. alter table) that calls into the common dump method. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17541) Move testing related methods from MetaStoreUtils to some testing related utility
[ https://issues.apache.org/jira/browse/HIVE-17541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170777#comment-16170777 ] Hive QA commented on HIVE-17541: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887682/HIVE-17541.02.patch {color:green}SUCCESS:{color} +1 due to 38 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 11042 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=143) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6871/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6871/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6871/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12887682 - PreCommit-HIVE-Build > Move testing related methods from MetaStoreUtils to some testing related > utility > > > Key: HIVE-17541 > URL: https://issues.apache.org/jira/browse/HIVE-17541 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-17541.01.patch, HIVE-17541.02.patch > > > MetaStoreUtils has a very wide range of methods...when the last time tried to > do some modularization related with it - it always came back problematic :) > The most usefull observation I made that it doesn't neccessarily needs the > {{HMSHandler}} import. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-15373) DbNotificationListener should use thread-local RawStore
[ https://issues.apache.org/jira/browse/HIVE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov reassigned HIVE-15373: - Assignee: Alexander Kolbasov > DbNotificationListener should use thread-local RawStore > --- > > Key: HIVE-15373 > URL: https://issues.apache.org/jira/browse/HIVE-15373 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov > > ObjectStore.java has several important calls which are not thread-safe: > * openTransaction() > * commitTransaction() > * rollbackTransaction() > These should be made thread-safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-16511) CBO looses inner casts on constants of complex type
[ https://issues.apache.org/jira/browse/HIVE-16511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170770#comment-16170770 ] Vineet Garg commented on HIVE-16511: Test {{min_structvalue.q}} needs to be enabled once this issue is fixed. > CBO looses inner casts on constants of complex type > --- > > Key: HIVE-16511 > URL: https://issues.apache.org/jira/browse/HIVE-16511 > Project: Hive > Issue Type: Bug > Components: CBO, Query Planning >Reporter: Ashutosh Chauhan > > type for map <10, cast(null as int)> becomes map -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Status: Open (was: Patch Available) > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Status: Patch Available (was: Open) > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Attachment: HIVE-17535.3.patch > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Attachment: (was: HIVE-17535.3.patch) > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170764#comment-16170764 ] Vineet Garg commented on HIVE-17535: Good to know. I'll disable the test for now and will update HIVE-16511. > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170758#comment-16170758 ] Ashutosh Chauhan commented on HIVE-17535: - That is actually a known issue: HIVE-16511 > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170749#comment-16170749 ] Vineet Garg commented on HIVE-17535: The latest patch (3) has a known failure, {{min_structvalue}}, which is a bug exposed by the patch. Queries such as {code:sql} select max(a), min(a) FROM (select named_struct("field",1) as a union all select named_struct("field",2) as a union all select named_struct("field",cast(null as int)) as a) tmp{code} fail with CBO because CBO ends up losing the {{CAST}} operation, reducing {{named_struct("field",cast(null as int))}} to just {{named_struct("field",null)}}. This results in different schema structures between the union branches, which is semantically incorrect. This can be reproduced using a simple {code:sql}select named_struct("field",cast(null as int)) as a{code}. If we dump the new AST after CBO, we will notice the missing CAST operation. > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Attachment: HIVE-17535.3.patch > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Status: Patch Available (was: Open) > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch, > HIVE-17535.3.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17535) Select 1 EXCEPT Select 1 fails with NPE
[ https://issues.apache.org/jira/browse/HIVE-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17535: --- Status: Open (was: Patch Available) > Select 1 EXCEPT Select 1 fails with NPE > --- > > Key: HIVE-17535 > URL: https://issues.apache.org/jira/browse/HIVE-17535 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17535.1.patch, HIVE-17535.2.patch > > > Since Hive CBO isn't able to handle queries with no table e.g. {{select 1}} > queries with SET operators fail (intersect requires CBO). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Attachment: HIVE-17496.7.patch > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, > HIVE-17496.6.patch, HIVE-17496.7.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Patch Available (was: Open) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, > HIVE-17496.6.patch, HIVE-17496.7.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170718#comment-16170718 ] Tao Li commented on HIVE-17496: --- The deletion failure happened because the table dir was deleted on the target db for some reason. We did not try to drop the table, so it is not clear why the table dir was deleted. See the log below. This issue cannot be reproduced locally. {noformat} 2017-09-18T12:49:20,642 ERROR [main] parse.TestReplicationScenarios: Error verifying the staging dir deletion java.io.FileNotFoundException: File file:/home/hiveptest/130.211.162.127-hiveptest-0/apache-github-source-source/itests/hive-unit/target/warehouse/deleteStagingDir_org_apache_hadoop_hive_ql_parse_testreplicationscenarios_1505763609715_dupe.db/unptned does not exist {noformat} So for now the fix is to add an existence check. > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
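The fix described, tolerating a directory that has already disappeared, amounts to an existence check before the delete. A minimal sketch of the logic with java.nio (the real code uses Hadoop's FileSystem API, so this is only an illustration):

```java
import java.io.IOException;
import java.nio.file.*;

public class SafeCleanup {
    // Tolerate a staging dir that was already removed by someone else:
    // check existence first instead of letting the delete throw.
    static boolean deleteIfPresent(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return false; // already gone: not an error, nothing to do
        }
        Files.delete(dir);
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path staging = Files.createTempDirectory("hive-staging");
        System.out.println(deleteIfPresent(staging)); // true: we removed it
        System.out.println(deleteIfPresent(staging)); // false: no FileNotFoundException
    }
}
```

The second call returning false instead of throwing is exactly the behavior the FileNotFoundException in the log above called for.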
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Attachment: (was: HIVE-17496.7.patch) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Open (was: Patch Available) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Patch Available (was: Open) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, > HIVE-17496.6.patch, HIVE-17496.7.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Attachment: HIVE-17496.7.patch > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, > HIVE-17496.6.patch, HIVE-17496.7.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Open (was: Patch Available) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17529) Bucket Map Join : Sets incorrect edge type causing execution failure
[ https://issues.apache.org/jira/browse/HIVE-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-17529: -- Attachment: HIVE-17529.3.patch Re-applying the same patch to see if bucket_map_join_tez1 fails with SparkCliDriver > Bucket Map Join : Sets incorrect edge type causing execution failure > > > Key: HIVE-17529 > URL: https://issues.apache.org/jira/browse/HIVE-17529 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17529.1.patch, HIVE-17529.2.patch, > HIVE-17529.3.patch > > > While traversing the tree to generate tasks, a bucket mapjoin may set its > edge as CUSTOM_SIMPLE_EDGE instead of CUSTOM_EDGE if the big table has not > yet been traversed, causing Tez to assert and fail the vertex. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17529) Bucket Map Join : Sets incorrect edge type causing execution failure
[ https://issues.apache.org/jira/browse/HIVE-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170637#comment-16170637 ] Jason Dere commented on HIVE-17529: --- Can you try re-posting the patch so we can see another pre-commit run? > Bucket Map Join : Sets incorrect edge type causing execution failure > > > Key: HIVE-17529 > URL: https://issues.apache.org/jira/browse/HIVE-17529 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17529.1.patch, HIVE-17529.2.patch > > > While traversing the tree to generate tasks, a bucket mapjoin may set its > edge as CUSTOM_SIMPLE_EDGE instead of CUSTOM_EDGE if the big table has not > yet been traversed, causing Tez to assert and fail the vertex. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17465) Statistics: Drill-down filters don't reduce row-counts progressively
[ https://issues.apache.org/jira/browse/HIVE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17465: --- Fix Version/s: 3.0.0 > Statistics: Drill-down filters don't reduce row-counts progressively > > > Key: HIVE-17465 > URL: https://issues.apache.org/jira/browse/HIVE-17465 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer, Statistics >Reporter: Gopal V >Assignee: Vineet Garg > Fix For: 3.0.0 > > Attachments: HIVE-17465.1.patch, HIVE-17465.2.patch, > HIVE-17465.3.patch, HIVE-17465.4.patch, HIVE-17465.5.patch, > HIVE-17465.6.patch, HIVE-17465.7.patch > > > {code} > explain select count(d_date_sk) from date_dim where d_year=2001 ; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = > 9; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = 9 > and d_dom = 21; > {code} > All 3 queries end up with the same row-count estimates after the filter. > {code} > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: (d_year = 2001) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (d_year = 2001) (type: boolean) > Statistics: Num rows: 363 Data size: 4356 Basic stats: > COMPLETE Column stats: COMPLETE > > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 363 Data size: 5808 Basic stats: > COMPLETE Column stats: COMPLETE > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: 
((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 363 Data size: 7260 Basic stats: > COMPLETE Column stats: COMPLETE > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
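The expected (non-buggy) behavior can be sketched with a toy NDV-based estimator. This is a hedged illustration, not Hive's actual optimizer code, and the NDV values 201, 12, and 31 for d_year, d_moy, and d_dom are assumptions: each additional equality predicate should multiply selectivity by roughly 1/NDV, so the row estimate should drop with each drill-down filter instead of staying at 363.

```java
// Toy cardinality estimator: conjunctive equality predicates on
// independent columns each contribute ~1/NDV selectivity, so the
// estimate shrinks progressively as predicates are added.
public class ProgressiveSelectivity {
    // rows: table cardinality; ndvs: distinct-value counts of the
    // columns appearing in the equality predicates (assumed values).
    static long estimate(long rows, long... ndvs) {
        double selectivity = 1.0;
        for (long ndv : ndvs) {
            selectivity *= 1.0 / ndv;
        }
        return Math.max(1, Math.round(rows * selectivity));
    }

    public static void main(String[] args) {
        long rows = 73049; // date_dim row count from the plans above
        System.out.println(estimate(rows, 201));         // d_year=2001      -> 363
        System.out.println(estimate(rows, 201, 12));     // + d_moy=9        -> 30
        System.out.println(estimate(rows, 201, 12, 31)); // + d_dom=21       -> 1
    }
}
```

With independent-column assumptions, the three drill-down queries should estimate roughly 363, 30, and 1 rows; the bug is that Hive reports 363 for all three.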
[jira] [Updated] (HIVE-17465) Statistics: Drill-down filters don't reduce row-counts progressively
[ https://issues.apache.org/jira/browse/HIVE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17465: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master. Thanks for reviewing [~ashutoshc] > Statistics: Drill-down filters don't reduce row-counts progressively > > > Key: HIVE-17465 > URL: https://issues.apache.org/jira/browse/HIVE-17465 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer, Statistics >Reporter: Gopal V >Assignee: Vineet Garg > Attachments: HIVE-17465.1.patch, HIVE-17465.2.patch, > HIVE-17465.3.patch, HIVE-17465.4.patch, HIVE-17465.5.patch, > HIVE-17465.6.patch, HIVE-17465.7.patch > > > {code} > explain select count(d_date_sk) from date_dim where d_year=2001 ; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = > 9; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = 9 > and d_dom = 21; > {code} > All 3 queries end up with the same row-count estimates after the filter. 
> {code} > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: (d_year = 2001) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (d_year = 2001) (type: boolean) > Statistics: Num rows: 363 Data size: 4356 Basic stats: > COMPLETE Column stats: COMPLETE > > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 363 Data size: 5808 Basic stats: > COMPLETE Column stats: COMPLETE > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 363 Data size: 7260 Basic stats: > COMPLETE Column stats: COMPLETE > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170613#comment-16170613 ] Hive QA commented on HIVE-17496: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887679/HIVE-17496.6.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 11042 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_createas1] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=143) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning] (batchId=169) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDeleteStagingDir (batchId=218) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6870/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6870/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6870/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing 
org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 13 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12887679 - PreCommit-HIVE-Build > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170591#comment-16170591 ] Sahil Takiar commented on HIVE-17112: - [~xuefuz] could you review this? What do you think about making this change? I don't think it's necessary to dump the configuration at {{INFO}} level; {{DEBUG}} level seems more appropriate. > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17112.1.patch > > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. > Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
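A minimal sketch of the suggested fix, using only what the quoted snippet shows. It is written against java.util.logging so it is self-contained (Hive itself uses SLF4J), and formatPropertyMessage is a hypothetical helper: demote the per-property message to debug level and guard it so the string is built only when that level is enabled.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigDumpLogging {
    private static final Logger LOG =
        Logger.getLogger(ConfigDumpLogging.class.getName());

    // Builds the same message the Hive snippet formats at INFO level.
    static String formatPropertyMessage(String file, String name, String value) {
        return String.format("load spark property from %s (%s -> %s).",
            file, name, value);
    }

    // Debug-level equivalent (FINE is j.u.l's analogue of SLF4J DEBUG):
    // the guard skips the String.format call when debug logging is off.
    static void logProperty(String file, String name, String value) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(formatPropertyMessage(file, name, value));
        }
    }
}
```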
[jira] [Updated] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17112: Status: Patch Available (was: Open) > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17112.1.patch > > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. > Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17112: Issue Type: Improvement (was: Bug) > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17112.1.patch > > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. > Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17112: Attachment: HIVE-17112.1.patch > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17112.1.patch > > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. > Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17112: Description: HiveSparkClientFactory has the following line that introduces excess logging: {code} LOG.info(String.format( "load spark property from %s (%s -> %s).", SPARK_DEFAULT_CONF_FILE, propertyName, LogUtils.maskIfPassword(propertyName,value))); {code} It basically dumps the entire configuration object to the logs, we can probably change this from INFO to DEBUG. Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} was: HiveSparkClientFactory has the following line that introduces excess logging: {code} LOG.info(String.format( "load spark property from %s (%s -> %s).", SPARK_DEFAULT_CONF_FILE, propertyName, LogUtils.maskIfPassword(propertyName,value))); {code} It basically dumps the entire configuration object to the logs, we can probably change this from INFO to DEBUG. > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. > Same thing happens in {{RemoteHiveSparkClient#logConfigurations}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17112) Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient
[ https://issues.apache.org/jira/browse/HIVE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17112: Summary: Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient (was: Reduce logging in HiveSparkClientFactory) > Reduce logging in HiveSparkClientFactory and RemoteHiveSparkClient > -- > > Key: HIVE-17112 > URL: https://issues.apache.org/jira/browse/HIVE-17112 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > > HiveSparkClientFactory has the following line that introduces excess logging: > {code} > LOG.info(String.format( > "load spark property from %s (%s -> %s).", > SPARK_DEFAULT_CONF_FILE, propertyName, > LogUtils.maskIfPassword(propertyName,value))); > {code} > It basically dumps the entire configuration object to the logs, we can > probably change this from INFO to DEBUG. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17530) ClassCastException when converting uniontype
[ https://issues.apache.org/jira/browse/HIVE-17530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-17530: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master. Thanks Anthony! > ClassCastException when converting uniontype > > > Key: HIVE-17530 > URL: https://issues.apache.org/jira/browse/HIVE-17530 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0, 3.0.0 >Reporter: Anthony Hsu >Assignee: Anthony Hsu > Fix For: 3.0.0 > > Attachments: HIVE-17530.1.patch, HIVE-17530.2.patch > > > To repro: > {noformat} > SET hive.exec.schema.evolution = false; > CREATE TABLE avro_orc_partitioned_uniontype (a uniontype) > PARTITIONED BY (b int) STORED AS ORC; > INSERT INTO avro_orc_partitioned_uniontype PARTITION (b=1) SELECT > create_union(1, true, value) FROM src LIMIT 5; > ALTER TABLE avro_orc_partitioned_uniontype SET FILEFORMAT AVRO; > SELECT * FROM avro_orc_partitioned_uniontype; > {noformat} > The exception you get is: > {code} > java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ClassCastException: java.util.ArrayList cannot be cast to > org.apache.hadoop.hive.serde2.objectinspector.UnionObject > {code} > The issue is that StandardUnionObjectInspector was creating and returning an > ArrayList rather than a UnionObject. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
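The failure mode described above can be reproduced outside Hive with a stand-in interface. UnionObject below is a hypothetical stand-in, not the real org.apache.hadoop.hive.serde2 class: materializing the union value as a plain ArrayList compiles fine, but the downstream cast fails at runtime exactly as in the stack trace.

```java
import java.util.ArrayList;

public class CastSketch {
    // Stand-in for the interface the union converter expects.
    interface UnionObject { }

    public static void main(String[] args) {
        // The buggy inspector returned the union as a List instead of
        // a UnionObject implementation; the static type hides the bug.
        Object value = new ArrayList<Object>();
        try {
            UnionObject u = (UnionObject) value; // compiles, fails at runtime
            System.out.println("cast succeeded: " + u);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report");
        }
    }
}
```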
[jira] [Commented] (HIVE-17529) Bucket Map Join : Sets incorrect edge type causing execution failure
[ https://issues.apache.org/jira/browse/HIVE-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170571#comment-16170571 ] Deepak Jaiswal commented on HIVE-17529: --- I ran the test several times locally and it passed all the time for me. > Bucket Map Join : Sets incorrect edge type causing execution failure > > > Key: HIVE-17529 > URL: https://issues.apache.org/jira/browse/HIVE-17529 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17529.1.patch, HIVE-17529.2.patch > > > If while traversing the tree to generate tasks, a bucket mapjoin may set its > edge as CUSTOM_SIMPLE_EDGE against CUSTOM_EDGE if the bigtable is already not > traversed causing Tez to assert and fail the vertex. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17378) CBO: HiveReduceExpressionsWithStatsRule can operate on IS_NULL and IS_NOT_NULL
[ https://issues.apache.org/jira/browse/HIVE-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-17378: --- Assignee: Zoltan Haindrich > CBO: HiveReduceExpressionsWithStatsRule can operate on IS_NULL and IS_NOT_NULL > -- > > Key: HIVE-17378 > URL: https://issues.apache.org/jira/browse/HIVE-17378 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Zoltan Haindrich > > {code} > * Currently we support the simplification of =, >=, <=, >, <, and > * IN operations. > */ > {code} > IS_NULL and IS_NOT_NULL are closely related and can be processed by this rule. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17550) Remove unreferenced q.out-s
[ https://issues.apache.org/jira/browse/HIVE-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170466#comment-16170466 ] Hive QA commented on HIVE-17550: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887676/HIVE-17550.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 11042 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=231) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=231) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=235) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=235) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=216) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=216) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks (batchId=228) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation (batchId=228) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6869/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6869/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6869/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 13 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12887676 - PreCommit-HIVE-Build > Remove unreferenced q.out-s > --- > > Key: HIVE-17550 > URL: https://issues.apache.org/jira/browse/HIVE-17550 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-17550.01.patch > > > it's sometimes a bit misleading to see q.out-s which are never even used.. > I'll also add a small utility which is able to remove them - and add a test > which will help to avoid them in the future -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17529) Bucket Map Join : Sets incorrect edge type causing execution failure
[ https://issues.apache.org/jira/browse/HIVE-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170421#comment-16170421 ] Jason Dere commented on HIVE-17529: --- I think the code changes look alright, but the latest test result has a failure in TestSparkCliDriver.testCliDriver[bucket_map_join_tez1] .. can you take a look at that? > Bucket Map Join : Sets incorrect edge type causing execution failure > > > Key: HIVE-17529 > URL: https://issues.apache.org/jira/browse/HIVE-17529 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > Attachments: HIVE-17529.1.patch, HIVE-17529.2.patch > > > If while traversing the tree to generate tasks, a bucket mapjoin may set its > edge as CUSTOM_SIMPLE_EDGE against CUSTOM_EDGE if the bigtable is already not > traversed causing Tez to assert and fail the vertex. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17552) Enable bucket map join by default
[ https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-17552: - > Enable bucket map join by default > - > > Key: HIVE-17552 > URL: https://issues.apache.org/jira/browse/HIVE-17552 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal > > Currently bucket map join is disabled by default, however, it is potentially > most optimal join we have. Need to enable it by default. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17542) Make HoS CombineEquivalentWorkResolver Configurable
[ https://issues.apache.org/jira/browse/HIVE-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170364#comment-16170364 ] Sahil Takiar commented on HIVE-17542: - [~lirui], [~pvary] could you review? > Make HoS CombineEquivalentWorkResolver Configurable > --- > > Key: HIVE-17542 > URL: https://issues.apache.org/jira/browse/HIVE-17542 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer, Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17542.1.patch, HIVE-17542.2.patch > > > The {{CombineEquivalentWorkResolver}} is run by default. We should make it > configurable so that users can disable it in case there are any issues. We > can enable it by default to preserve backwards compatibility. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17483) HS2 kill command to kill queries using query id
[ https://issues.apache.org/jira/browse/HIVE-17483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-17483: -- Attachment: HIVE-17483.3.patch Fixed wrong grammar and query ID issues. > HS2 kill command to kill queries using query id > --- > > Key: HIVE-17483 > URL: https://issues.apache.org/jira/browse/HIVE-17483 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Teddy Choi > Attachments: HIVE-17483.1.patch, HIVE-17483.2.patch, > HIVE-17483.2.patch, HIVE-17483.3.patch > > > For administrators, it is important to be able to kill queries if required. > Currently, there is no clean way to do it. > It would help to have a "kill query " command that can be run using > odbc/jdbc against a HiveServer2 instance, to kill a query with that queryid > running in that instance. > Authorization will have to be done to ensure that the user that is invoking > the API is allowed to perform this action. > In case of SQL std authorization, this would require admin role. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170289#comment-16170289 ] Hive QA commented on HIVE-17317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887653/HIVE-17317.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11046 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) org.apache.hive.jdbc.TestJdbcDriver2.testSelectExecAsync2 (batchId=225) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6868/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6868/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6868/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 11 tests failed {noformat} This message is automatically 
generated. ATTACHMENT ID: 12887653 - PreCommit-HIVE-Build > Make Dbcp configurable using hive properties in hive-site.xml > - > > Key: HIVE-17317 > URL: https://issues.apache.org/jira/browse/HIVE-17317 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, > HIVE-17317.03.patch, HIVE-17317.04.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez
[ https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170272#comment-16170272 ] Thai Bui commented on HIVE-17502: - [~thejas] I'm not sure if we are on the same page, but my intention was to not share the same session concurrently (meaning a session being used by 2 distinct queries at the same time). My intention was to allow skipping of a default session and return an unused default session when a client has made a request that contains a currently used session. For my use case, it is Hue4 that is making the concurrent requests from the same session. In Hue4, each user can work in a notebook-like environment. Each notebook could have multiple snippets (text or Hive query); with this model, a user could execute multiple Hive snippets in the same notebook. Thus, there's a need for this patch. Without the patch, HS2 will complain because the currently used session should have been returned to the pool before the second request was made. > Reuse of default session should not throw an exception in LLAP w/ Tez > - > > Key: HIVE-17502 > URL: https://issues.apache.org/jira/browse/HIVE-17502 > Project: Hive > Issue Type: Bug > Components: llap, Tez >Affects Versions: 2.1.1, 2.2.0 > Environment: HDP 2.6.1.0-129, Hue 4 >Reporter: Thai Bui >Assignee: Thai Bui > > Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be > skipped mostly because of this line > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365. > However, some clients such as Hue 4, allow multiple sessions to be used per > user. Under this configuration, a Thrift client will send a request to either > reuse or open a new session. 
The reuse request could include the session id > of a currently used snippet being executed in Hue, this causes HS2 to throw > an exception: > {noformat} > 2017-09-10T17:51:36,548 INFO [Thread-89]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: > hive, session user: hive > 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task > (TezTask.java:execute(232)) - Failed to execute tez graph. > org.apache.hadoop.hive.ql.metadata.HiveException: The pool session > sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, > doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have > been returned to the pool > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534) > ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544) > ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) > [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129] > {noformat} > Note that every query is issued as a single 'hive' user to share the LLAP > daemon pool, a set of pre-determined number of AMs is initialized at setup > time. Thus, HS2 should allow new sessions from a Thrift client to be used out > of the pool, or an existing session to be skipped and an unused session from > the pool to be returned. The logic to throw an exception in the > `canWorkWithSameSession` doesn't make sense to me. 
> I have a solution to fix this issue in my local branch at > https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70. > When applied, the log will become like so > {noformat} > 2017-09-10T09:15:33,578 INFO [Thread-239]: tez.TezSessionPoolManager > (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default > session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, > user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms > since it is being used. > {noformat} > A test case is provided in my branch to demonstrate how it works. If possible > I would like this patch to be applied to version 2.1, 2.2 and master. Since > we are using 2.1 LLAP in production with Hue 4, this patch is critical to our > success. > Alternatively, if this patch is too broad in scope, I propose adding an > option to allow "skipping of currently used default sessions". With this new >
[jira] [Updated] (HIVE-17344) LocalCache element memory usage is not calculated properly.
[ https://issues.apache.org/jira/browse/HIVE-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17344: Fix Version/s: 3.0.0 Thank you [~leftylev], I've just corrected it :) > LocalCache element memory usage is not calculated properly. > --- > > Key: HIVE-17344 > URL: https://issues.apache.org/jira/browse/HIVE-17344 > Project: Hive > Issue Type: Bug >Reporter: Janos Gub >Assignee: Janos Gub > Fix For: 3.0.0 > > Attachments: HIVE-17344.2.patch, HIVE-17344.patch > > > Orc footer cache has a calculation of memory usage: > {code:java} > public int getMemoryUsage() { > return bb.remaining() + 100; // 100 is for 2 longs, BB and java overheads > (semi-arbitrary). > } > {code} > ByteBuffer.remaining returns the remaining space in the bytebuffer, thus > allowing this cache have elements MAXWEIGHT/100 of arbitrary size. I think > the correct solution would be bb.capacity. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
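A self-contained sketch of the ByteBuffer semantics behind the fix (buggyWeight and fixedWeight are hypothetical names, not the Hive methods): remaining() measures only the space between position and limit, so a fully written buffer reports just the fixed overhead, while capacity() reflects the memory the buffer actually holds.

```java
import java.nio.ByteBuffer;

public class CacheWeightSketch {
    // Mirrors the buggy weigher: remaining() is limit - position, which
    // can be ~0 for a large buffer whose position has advanced.
    static int buggyWeight(ByteBuffer bb) {
        return bb.remaining() + 100; // 100 for bookkeeping overhead
    }

    // Proposed fix: capacity() is independent of the read/write position.
    static int fixedWeight(ByteBuffer bb) {
        return bb.capacity() + 100;
    }

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(1024);
        bb.position(bb.limit()); // simulate a fully written buffer
        System.out.println(buggyWeight(bb)); // 100 -- size-independent
        System.out.println(fixedWeight(bb)); // 1124 -- tracks real usage
    }
}
```

Under the buggy weigher a cache bounded by MAXWEIGHT could hold MAXWEIGHT/100 entries of arbitrary size, which is exactly the over-admission the report describes.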
[jira] [Updated] (HIVE-17541) Move testing related methods from MetaStoreUtils to some testing related utility
[ https://issues.apache.org/jira/browse/HIVE-17541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17541: Attachment: HIVE-17541.02.patch It seems my IDE has polluted the workdirs while I was checking the maven build... there were 3 missing pom.xml references to metastore/test-jar. Thank you Alan for taking a look! #2) correct pom.xml > Move testing related methods from MetaStoreUtils to some testing related > utility > > > Key: HIVE-17541 > URL: https://issues.apache.org/jira/browse/HIVE-17541 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-17541.01.patch, HIVE-17541.02.patch > > > MetaStoreUtils has a very wide range of methods... the last time I tried to do some modularization related to it, it always came back problematic :) The most useful observation I made is that it doesn't necessarily need the {{HMSHandler}} import. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17549) Use SHA-256 for RowContainer to improve security
[ https://issues.apache.org/jira/browse/HIVE-17549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170150#comment-16170150 ] Hive QA commented on HIVE-17549: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887649/HIVE-17549.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 11041 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergeUnpartitioned01 (batchId=282) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6867/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6867/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6867/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12887649 - PreCommit-HIVE-Build > Use SHA-256 for RowContainer to improve security > > > Key: HIVE-17549 > URL: https://issues.apache.org/jira/browse/HIVE-17549 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang > Attachments: HIVE-17549.1.patch > > > Use SHA-256 to replace md5 for RowContainer to improve security -- This message was sent by Atlassian JIRA (v6.4.14#64029)
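[Editorial note] A hedged sketch of the kind of swap the HIVE-17549 summary describes, using the JDK's standard MessageDigest API; the class and method names are illustrative, not Hive's actual RowContainer code. SHA-256 (like MD5) is one of the algorithms every Java platform is required to support:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestSketch {
    // Request SHA-256 where the old code would have requested "MD5".
    static byte[] digest(byte[] input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return md.digest(input);
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 support is mandated by the Java platform spec.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] d = digest("row".getBytes(StandardCharsets.UTF_8));
        System.out.println(d.length); // SHA-256 yields 32 bytes; MD5 yields only 16
    }
}
```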
[jira] [Comment Edited] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170144#comment-16170144 ] Tao Li edited comment on HIVE-17496 at 9/18/17 3:28 PM: Based on the logs below, I did not see any error from the HDFS deletion for the testDeleteStagingDir test, per the change in HIVE-17496.5.patch. It looks like the file was deleted successfully, so I don't know why the test fails. {noformat} 2017-09-18T02:25:52,399 INFO [main] ql.Context: Deleting scratch dir: pfile:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/warehouse/deletestagingdir_org_apache_hadoop_hive_ql_parse_testreplicationscenarios_1505726246743_dupe.db/unptned/.hive-staging_hive_2017-09-18_02-25-52_081_7805993483018006783-1 {noformat} But in some other repl test logs, I do see some deletions that failed, which suggests a possible cause. {noformat} 2017-09-18T02:25:49,746 INFO [main] ql.Context: Deleting scratch dir: file:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/tmp/localscratchdir/6b15a44f-aec0-43f8-851d-81db07e4e329/hive_2017-09-18_02-25-49_687_551878613091072616-1/-mr-10001/.hive-staging_hive_2017-09-18_02-25-49_687_551878613091072616-1 2017-09-18T02:25:49,746 ERROR [main] ql.Context: File failes to be deleted when removing Scratch (HIVE-17496) {noformat} was (Author: taoli-hwx): According to the logs below, I did not see any error from HDFS deletion for the testDeleteStagingDir test, according to change in HIVE-17496.5.patch. Looks like the file should have been deleted successfully. So I don't know why the test fails. 
{noformat} 2017-09-18T02:25:52,399 INFO [main] ql.Context: Deleting scratch dir: pfile:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/warehouse/deletestagingdir_org_apache_hadoop_hive_ql_parse_testreplicationscenarios_1505726246743_dupe.db/unptned/.hive-staging_hive_2017-09-18_02-25-52_081_7805993483018006783-1 {noformat} But in some other repl test logs, I do see some deletion that failed: {noformat} 2017-09-18T02:25:49,746 INFO [main] ql.Context: Deleting scratch dir: file:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/tmp/localscratchdir/6b15a44f-aec0-43f8-851d-81db07e4e329/hive_2017-09-18_02-25-49_687_551878613091072616-1/-mr-10001/.hive-staging_hive_2017-09-18_02-25-49_687_551878613091072616-1 2017-09-18T02:25:49,746 ERROR [main] ql.Context: File failes to be deleted when removing Scratch (HIVE-17496) {noformat} > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Attachment: HIVE-17496.6.patch > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Patch Available (was: Open) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch, HIVE-17496.6.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17496: -- Status: Open (was: Patch Available) > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170144#comment-16170144 ] Tao Li commented on HIVE-17496: --- Based on the logs below, I did not see any error from the HDFS deletion for the testDeleteStagingDir test, per the change in HIVE-17496.5.patch. It looks like the file was deleted successfully, so I don't know why the test fails. {noformat} 2017-09-18T02:25:52,399 INFO [main] ql.Context: Deleting scratch dir: pfile:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/warehouse/deletestagingdir_org_apache_hadoop_hive_ql_parse_testreplicationscenarios_1505726246743_dupe.db/unptned/.hive-staging_hive_2017-09-18_02-25-52_081_7805993483018006783-1 {noformat} But in some other repl test logs, I do see some deletions that failed: {noformat} 2017-09-18T02:25:49,746 INFO [main] ql.Context: Deleting scratch dir: file:/home/hiveptest/104.198.168.233-hiveptest-0/apache-github-source-source/itests/hive-unit/target/tmp/localscratchdir/6b15a44f-aec0-43f8-851d-81db07e4e329/hive_2017-09-18_02-25-49_687_551878613091072616-1/-mr-10001/.hive-staging_hive_2017-09-18_02-25-49_687_551878613091072616-1 2017-09-18T02:25:49,746 ERROR [main] ql.Context: File failes to be deleted when removing Scratch (HIVE-17496) {noformat} > Bootstrap repl is not cleaning up staging dirs > -- > > Key: HIVE-17496 > URL: https://issues.apache.org/jira/browse/HIVE-17496 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Tao Li >Assignee: Tao Li > Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, > HIVE-17496.3.patch, HIVE-17496.4.patch, HIVE-17496.5.patch > > > This will put more pressure on the HDFS file limit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17550) Remove unreferenced q.out-s
[ https://issues.apache.org/jira/browse/HIVE-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17550: Status: Patch Available (was: Open) > Remove unreferenced q.out-s > --- > > Key: HIVE-17550 > URL: https://issues.apache.org/jira/browse/HIVE-17550 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-17550.01.patch > > > it's sometimes a bit misleading to see q.out-s which are never even used.. > I'll also add a small utility which is able to remove them - and add a test > which will help to avoid them in the future -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17550) Remove unreferenced q.out-s
[ https://issues.apache.org/jira/browse/HIVE-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17550: Attachment: HIVE-17550.01.patch #1) remove unused q.out-s; add TestDanglingQOuts > Remove unreferenced q.out-s > --- > > Key: HIVE-17550 > URL: https://issues.apache.org/jira/browse/HIVE-17550 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > Attachments: HIVE-17550.01.patch > > > it's sometimes a bit misleading to see q.out-s which are never even used.. > I'll also add a small utility which is able to remove them - and add a test > which will help to avoid them in the future -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17550) Remove unreferenced q.out-s
[ https://issues.apache.org/jira/browse/HIVE-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-17550: --- > Remove unreferenced q.out-s > --- > > Key: HIVE-17550 > URL: https://issues.apache.org/jira/browse/HIVE-17550 > Project: Hive > Issue Type: Improvement > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich > > it's sometimes a bit misleading to see q.out-s which are never even used.. > I'll also add a small utility which is able to remove them - and add a test > which will help to avoid them in the future -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17465) Statistics: Drill-down filters don't reduce row-counts progressively
[ https://issues.apache.org/jira/browse/HIVE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170136#comment-16170136 ] Ashutosh Chauhan commented on HIVE-17465: - +1 > Statistics: Drill-down filters don't reduce row-counts progressively > > > Key: HIVE-17465 > URL: https://issues.apache.org/jira/browse/HIVE-17465 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer, Statistics >Reporter: Gopal V >Assignee: Vineet Garg > Attachments: HIVE-17465.1.patch, HIVE-17465.2.patch, > HIVE-17465.3.patch, HIVE-17465.4.patch, HIVE-17465.5.patch, > HIVE-17465.6.patch, HIVE-17465.7.patch > > > {code} > explain select count(d_date_sk) from date_dim where d_year=2001 ; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = > 9; > explain select count(d_date_sk) from date_dim where d_year=2001 and d_moy = 9 > and d_dom = 21; > {code} > All 3 queries end up with the same row-count estimates after the filter. > {code} > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: (d_year = 2001) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: (d_year = 2001) (type: boolean) > Statistics: Num rows: 363 Data size: 4356 Basic stats: > COMPLETE Column stats: COMPLETE > > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: ((d_year = 2001) and (d_moy = 9)) (type: > boolean) > Statistics: Num rows: 363 Data size: 5808 Basic stats: > COMPLETE Column stats: COMPLETE > Map 1 > Map Operator Tree: > TableScan > alias: date_dim > filterExpr: ((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 73049 Data size: 82034027 Basic > stats: COMPLETE Column stats: COMPLETE > Filter Operator > predicate: 
((d_year = 2001) and (d_moy = 9) and (d_dom = > 21)) (type: boolean) > Statistics: Num rows: 363 Data size: 7260 Basic stats: > COMPLETE Column stats: COMPLETE > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
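[Editorial note] A hypothetical model of what progressive estimation would produce for the three plans quoted in HIVE-17465 above. The class, the method, and the 1/12 and 1/31 NDV-style selectivities for d_moy and d_dom are my assumptions for illustration, not Hive's actual StatsRulesProcFactory code; only the 73049 scan rows and the observed 363-row estimate for d_year = 2001 come from the plans:

```java
public class SelectivitySketch {
    // Multiply per-predicate selectivities so each extra filter
    // shrinks the estimate instead of reusing the first one.
    static long estimate(long rows, double... selectivities) {
        double r = rows;
        for (double s : selectivities) {
            r *= s;
        }
        return Math.max(1L, Math.round(r)); // never estimate below one row
    }

    public static void main(String[] args) {
        long dateDimRows = 73049L;          // scan-side row count from the plans
        double selYear = 363.0 / 73049.0;   // observed selectivity of d_year = 2001
        double selMoy = 1.0 / 12.0;         // assumed selectivity for d_moy = 9
        double selDom = 1.0 / 31.0;         // assumed selectivity for d_dom = 21
        System.out.println(estimate(dateDimRows, selYear));                 // 363
        System.out.println(estimate(dateDimRows, selYear, selMoy));         // 30
        System.out.println(estimate(dateDimRows, selYear, selMoy, selDom)); // 1
    }
}
```

Under this model the three drill-down queries get 363, ~30, and ~1 rows rather than 363 for all three, which is the behavior the issue asks for.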
[jira] [Updated] (HIVE-17493) Improve PKFK cardinality estimation in Physical planning
[ https://issues.apache.org/jira/browse/HIVE-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-17493: Fix Version/s: 3.0.0 > Improve PKFK cardinality estimation in Physical planning > > > Key: HIVE-17493 > URL: https://issues.apache.org/jira/browse/HIVE-17493 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Fix For: 3.0.0 > > Attachments: HIVE-17493.1.patch, HIVE-17493.2.patch, > HIVE-17493.3.patch, HIVE-17493.4.patch > > > Cardinality estimation of a join, after PK-FK relation has been ascertained, > could be improved if parent of the join operator is LEFT outer or RIGHT outer > join. > Currently estimation is done by estimating reduction of rows occurred on PK > side, then multiplying the reduction to FK side row count. This estimation of > reduction currently doesn't distinguish b/w INNER or OUTER joins. This could > be improved to handle outer joins better. > TPC-DS query45 is impacted by this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
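[Editorial note] A hedged sketch of the estimation idea the HIVE-17493 description outlines; the class, method names, and the lower-bound clamp for the preserved side are my illustration of the proposal, not Hive's planner code. The current scheme scales the FK-side row count by the reduction observed on the PK side; for a LEFT (or RIGHT) outer join, every row on the preserved FK side survives, so the estimate should arguably never drop below the FK row count:

```java
public class PkFkJoinEstimateSketch {
    // Current-style estimate: apply the PK-side filter reduction to the FK side.
    static long innerEstimate(long fkRows, long pkRowsBeforeFilter, long pkRowsAfterFilter) {
        double pkReduction = (double) pkRowsAfterFilter / pkRowsBeforeFilter;
        return Math.round(fkRows * pkReduction);
    }

    // Outer-join-aware refinement: the preserved FK side cannot shrink.
    static long leftOuterEstimate(long fkRows, long pkRowsBeforeFilter, long pkRowsAfterFilter) {
        return Math.max(fkRows, innerEstimate(fkRows, pkRowsBeforeFilter, pkRowsAfterFilter));
    }

    public static void main(String[] args) {
        // 1M FK rows joining a PK side filtered from 10k down to 100 rows.
        System.out.println(innerEstimate(1_000_000L, 10_000L, 100L));     // 10000
        System.out.println(leftOuterEstimate(1_000_000L, 10_000L, 100L)); // 1000000
    }
}
```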
[jira] [Commented] (HIVE-17454) Hive - MAX Window function does not accept more than 1 sort key and does not work as expected with rows window clause
[ https://issues.apache.org/jira/browse/HIVE-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170092#comment-16170092 ] Zoltan Haindrich commented on HIVE-17454: - Hello, I was planning to look into this... but I don't yet understand the goal of the query; could you possibly give a smaller example? By using "unbounded preceding" you would like to get the maximum so far - am I correct? Hmm... you've written "rows unbounded preceding and unbounded following" but your query only contains unbounded preceding... Could you give a sample which can be executed? {code} create table t (a int,b int); insert into t values (1,1),(1,2),(1,3),(2,4),(2,5),(2,6),(3,7),(3,8),(3,9); select a,b,sum(b) over (partition by a order by b rows unbounded preceding) from t; {code} > Hive - MAX Window function does not accept more than 1 sort key and does not > work as expected with rows window clause > - > > Key: HIVE-17454 > URL: https://issues.apache.org/jira/browse/HIVE-17454 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.2.0 >Reporter: Krish B >Priority: Minor > > Hi, > I see MAX window function is throwing error if I use more than one order by > clause. But when I use window clause (rows), it works fine but the results > are not as expected as shown below. > From the data shown below, I was expecting max_individual_id should be 42562 > for all the individuals under same HICN_NBR but I am getting different > results. I tried "rows unbounded preceding and unbounded following" but > results are still not as expected. Please let me know if this is an issue? 
> Query: > select individual_id, MAX(individual_id) over (partition by HICN_NBR order by > eff_dt desc, cncl_dt desc, individual_id desc rows unbounded preceding) as > max_individual_id, >hicn_nbr, eff_dt, cncl_dt > from MFW_MeasureMembership_BOTH MM > where Remove_ASH <> 1 and Remove_AGB <> 1 and Remove_FARM_151 <> 1 and > Remove_JCA <> 1 and Remove_CVTY <> 1 > and trim(HICN_NBR)='15248461314T'; -- This message was sent by Atlassian JIRA (v6.4.14#64029)
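[Editorial note] To illustrate the two frames being discussed in this HIVE-17454 thread, here is a small self-contained sketch (not Hive's windowing code; names and data are mine) of MAX over "rows unbounded preceding" (frame ends at the current row, giving a running max) versus "rows between unbounded preceding and unbounded following" (frame covers the whole partition, giving one max repeated for every row), for a single partition already sorted by the ORDER BY keys:

```java
import java.util.Arrays;

public class WindowFrameSketch {
    // MAX(...) over rows unbounded preceding: running max up to the current row.
    static int[] runningMax(int[] partitionInOrderByOrder) {
        int[] out = new int[partitionInOrderByOrder.length];
        int max = Integer.MIN_VALUE;
        for (int i = 0; i < partitionInOrderByOrder.length; i++) {
            max = Math.max(max, partitionInOrderByOrder[i]);
            out[i] = max;
        }
        return out;
    }

    // MAX(...) over rows between unbounded preceding and unbounded following:
    // every row sees the partition-wide maximum.
    static int[] wholePartitionMax(int[] partition) {
        int max = Integer.MIN_VALUE;
        for (int v : partition) max = Math.max(max, v);
        int[] out = new int[partition.length];
        Arrays.fill(out, max);
        return out;
    }

    public static void main(String[] args) {
        int[] ids = {3, 7, 5}; // one partition, in ORDER BY order
        System.out.println(Arrays.toString(runningMax(ids)));        // [3, 7, 7]
        System.out.println(Arrays.toString(wholePartitionMax(ids))); // [7, 7, 7]
    }
}
```

Only the whole-partition frame guarantees the same max on every row, which matches what the reporter says they expected.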
[jira] [Commented] (HIVE-17512) Not use doAs if distcp privileged user same as user running hive
[ https://issues.apache.org/jira/browse/HIVE-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170061#comment-16170061 ] Hive QA commented on HIVE-17512: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887617/HIVE-17512.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 11042 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234) org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6866/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6866/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6866/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12887617 - PreCommit-HIVE-Build > Not use doAs if distcp privileged user same as user running hive > > > Key: HIVE-17512 > URL: https://issues.apache.org/jira/browse/HIVE-17512 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-17512.1.patch, HIVE-17512.2.patch, > HIVE-17512.2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17454) Hive - MAX Window function does not accept more than 1 sort key and does not work as expected with rows window clause
[ https://issues.apache.org/jira/browse/HIVE-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170049#comment-16170049 ] Krish B commented on HIVE-17454: Hi, can anyone please have a look at this issue and help me with this! Much appreciated! Thank you, Krish > Hive - MAX Window function does not accept more than 1 sort key and does not > work as expected with rows window clause > - > > Key: HIVE-17454 > URL: https://issues.apache.org/jira/browse/HIVE-17454 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.2.0 >Reporter: Krish B >Priority: Minor > > Hi, > I see MAX window function is throwing error if I use more than one order by > clause. But when I use window clause (rows), it works fine but the results > are not as expected as shown below. > From the data shown below, I was expecting max_individual_id should be 42562 > for all the individuals under same HICN_NBR but I am getting different > results. I tried "rows unbounded preceding and unbounded following" but > results are still not as expected. Please let me know if this is an issue? > Query: > select individual_id, MAX(individual_id) over (partition by HICN_NBR order by > eff_dt desc, cncl_dt desc, individual_id desc rows unbounded preceding) as > max_individual_id, >hicn_nbr, eff_dt, cncl_dt > from MFW_MeasureMembership_BOTH MM > where Remove_ASH <> 1 and Remove_AGB <> 1 and Remove_FARM_151 <> 1 and > Remove_JCA <> 1 and Remove_CVTY <> 1 > and trim(HICN_NBR)='15248461314T'; -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Barna Zsombor Klara updated HIVE-17317: --- Attachment: HIVE-17317.04.patch > Make Dbcp configurable using hive properties in hive-site.xml > - > > Key: HIVE-17317 > URL: https://issues.apache.org/jira/browse/HIVE-17317 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, > HIVE-17317.03.patch, HIVE-17317.04.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Barna Zsombor Klara updated HIVE-17317: --- Attachment: (was: HIVE-17317.03.patch) > Make Dbcp configurable using hive properties in hive-site.xml > - > > Key: HIVE-17317 > URL: https://issues.apache.org/jira/browse/HIVE-17317 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, > HIVE-17317.03.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Barna Zsombor Klara updated HIVE-17317: --- Attachment: HIVE-17317.03.patch Rebased the patch. > Make Dbcp configurable using hive properties in hive-site.xml > - > > Key: HIVE-17317 > URL: https://issues.apache.org/jira/browse/HIVE-17317 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, > HIVE-17317.03.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169955#comment-16169955 ] Hive QA commented on HIVE-17317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12887618/HIVE-17317.03.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6865/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6865/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6865/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-09-18 12:42:12.178 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-6865/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-09-18 12:42:12.180 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at a51ae9c HIVE-17203: Add InterfaceAudience and InterfaceStability annotations for HCat APIs (Sahil Takiar, reviewed by Aihua Xu) + git clean -f -d Removing ql/src/java/org/apache/hadoop/hive/ql/plan/KillQueryDesc.java Removing ql/src/java/org/apache/hadoop/hive/ql/session/KillQuery.java Removing ql/src/java/org/apache/hadoop/hive/ql/session/NullKillQuery.java Removing ql/src/test/queries/clientnegative/authorization_kill_query.q Removing ql/src/test/queries/clientpositive/kill_query.q Removing ql/src/test/results/clientnegative/authorization_kill_query.q.out Removing ql/src/test/results/clientpositive/llap/kill_query.q.out Removing service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TGetQueryIdReq.java Removing service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TGetQueryIdResp.java Removing service/src/java/org/apache/hive/service/server/KillQueryImpl.java Removing standalone-metastore/src/gen/org/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at a51ae9c HIVE-17203: Add InterfaceAudience and InterfaceStability annotations for HCat APIs (Sahil Takiar, reviewed by Aihua Xu) + git merge --ff-only origin/master Already up-to-date. 
+ date '+%Y-%m-%d %T.%3N' 2017-09-18 12:42:16.467 + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:30 error: metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java: patch does not apply The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12887618 - PreCommit-HIVE-Build > Make Dbcp configurable using hive properties in hive-site.xml > - > > Key: HIVE-17317 > URL: https://issues.apache.org/jira/browse/HIVE-17317 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, > HIVE-17317.03.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17549) Use SHA-256 for RowContainer to improve security
[ https://issues.apache.org/jira/browse/HIVE-17549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang updated HIVE-17549: Status: Patch Available (was: Open) > Use SHA-256 for RowContainer to improve security > > > Key: HIVE-17549 > URL: https://issues.apache.org/jira/browse/HIVE-17549 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang > Attachments: HIVE-17549.1.patch > > > Use SHA-256 to replace md5 for RowContainer to improve security -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17549) Use SHA-256 for RowContainer to improve security
[ https://issues.apache.org/jira/browse/HIVE-17549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang updated HIVE-17549: Attachment: HIVE-17549.1.patch > Use SHA-256 for RowContainer to improve security > > > Key: HIVE-17549 > URL: https://issues.apache.org/jira/browse/HIVE-17549 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang > Attachments: HIVE-17549.1.patch > > > Use SHA-256 to replace md5 for RowContainer to improve security -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17549) Use SHA-256 for RowContainer to improve security
[ https://issues.apache.org/jira/browse/HIVE-17549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saijin Huang reassigned HIVE-17549: --- > Use SHA-256 for RowContainer to improve security > > > Key: HIVE-17549 > URL: https://issues.apache.org/jira/browse/HIVE-17549 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Saijin Huang >Assignee: Saijin Huang > > Use SHA-256 to replace md5 for RowContainer to improve security -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17139) Conditional expressions optimization: skip the expression evaluation if the condition is not satisfied for vectorization engine.
[ https://issues.apache.org/jira/browse/HIVE-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169894#comment-16169894 ]

Hive QA commented on HIVE-17139:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887603/HIVE-17139.10.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11041 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vecrow_table] (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union3] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_join_filters] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_number_compare_projection] (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf1] (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_case] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation (batchId=227)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6863/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6863/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6863/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887603 - PreCommit-HIVE-Build

> Conditional expressions optimization: skip the expression evaluation if the
> condition is not satisfied for vectorization engine.
>
> Key: HIVE-17139
> URL: https://issues.apache.org/jira/browse/HIVE-17139
> Project: Hive
> Issue Type: Improvement
> Reporter: Ke Jia
> Assignee: Ke Jia
> Attachments: HIVE-17139.10.patch, HIVE-17139.1.patch, HIVE-17139.2.patch,
> HIVE-17139.3.patch, HIVE-17139.4.patch, HIVE-17139.5.patch, HIVE-17139.6.patch,
> HIVE-17139.7.patch, HIVE-17139.8.patch, HIVE-17139.9.patch
>
> Execution of CASE WHEN and IF expressions in Hive's vectorization engine is
> not optimal: the current implementation evaluates all the conditional and
> else expressions for every row. The optimized approach is to update the
> selected array of the batch parameter after the conditional expression is
> executed, so the else expression processes only the remaining selected rows
> instead of all of them.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
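The selected-array idea in the HIVE-17139 description can be sketched in plain Java. This is an illustrative stand-in, not Hive's actual API: the `Batch` class and `evaluateCase` method are hypothetical simplifications of Hive's `VectorizedRowBatch` and its conditional vector expressions. The point it shows is that the THEN branch runs only on rows matching the condition and the ELSE branch only on the complement, rather than both branches running over every row.

```java
// Minimal sketch of selected-array conditional evaluation, assuming a
// simplified batch model. Not Hive code; names are illustrative.
public class VectorizedCaseSketch {

    // Stand-in for Hive's VectorizedRowBatch: one long column plus a
    // "selected" array listing the row indices currently in play.
    static class Batch {
        long[] col;
        int[] selected;
        int size; // number of valid entries in selected
        Batch(long[] col) {
            this.col = col;
            this.selected = new int[col.length];
            for (int i = 0; i < col.length; i++) selected[i] = i;
            this.size = col.length;
        }
    }

    // CASE WHEN col > threshold THEN col * 2 ELSE col END, evaluated the
    // optimized way: partition the selected rows by the condition first,
    // then run each branch only over its own partition.
    static long[] evaluateCase(Batch batch, long threshold) {
        long[] out = new long[batch.col.length];
        int[] thenRows = new int[batch.size];
        int[] elseRows = new int[batch.size];
        int thenCount = 0, elseCount = 0;
        for (int j = 0; j < batch.size; j++) {
            int row = batch.selected[j];
            if (batch.col[row] > threshold) thenRows[thenCount++] = row;
            else elseRows[elseCount++] = row;
        }
        // THEN branch touches only matching rows...
        for (int j = 0; j < thenCount; j++) out[thenRows[j]] = batch.col[thenRows[j]] * 2;
        // ...and the ELSE branch touches only the rest, instead of all rows.
        for (int j = 0; j < elseCount; j++) out[elseRows[j]] = batch.col[elseRows[j]];
        return out;
    }

    public static void main(String[] args) {
        Batch b = new Batch(new long[]{1, 5, 10, 2});
        System.out.println(java.util.Arrays.toString(evaluateCase(b, 4))); // [1, 10, 20, 2]
    }
}
```

In real Hive, the narrowing is done by mutating the batch's selected array in place between child expression evaluations; the two scratch index arrays above are just the simplest way to show the same partitioning.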
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169824#comment-16169824 ]

Hive QA commented on HIVE-17496:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887600/HIVE-17496.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11042 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDeleteStagingDir (batchId=218)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6862/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6862/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6862/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887600 - PreCommit-HIVE-Build

> Bootstrap repl is not cleaning up staging dirs
> --
>
> Key: HIVE-17496
> URL: https://issues.apache.org/jira/browse/HIVE-17496
> Project: Hive
> Issue Type: Bug
> Components: repl
> Reporter: Tao Li
> Assignee: Tao Li
> Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, HIVE-17496.3.patch,
> HIVE-17496.4.patch, HIVE-17496.5.patch
>
> This will put more pressure on the HDFS file limit.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169742#comment-16169742 ]

Hive QA commented on HIVE-17496:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887600/HIVE-17496.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11042 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDeleteStagingDir (batchId=218)
org.apache.hive.jdbc.TestJdbcDriver2.testSelectExecAsync2 (batchId=225)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6861/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6861/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6861/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887600 - PreCommit-HIVE-Build

> Bootstrap repl is not cleaning up staging dirs
> --
>
> Key: HIVE-17496
> URL: https://issues.apache.org/jira/browse/HIVE-17496
> Project: Hive
> Issue Type: Bug
> Components: repl
> Reporter: Tao Li
> Assignee: Tao Li
> Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, HIVE-17496.3.patch,
> HIVE-17496.4.patch, HIVE-17496.5.patch
>
> This will put more pressure on the HDFS file limit.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
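The cleanup HIVE-17496 asks for amounts to recursively deleting the bootstrap staging directory once replication is done. As a self-contained illustration only: real Hive repl code would go through Hadoop's `FileSystem.delete(stagingPath, true)`; the sketch below substitutes `java.nio.file` so it runs without a Hadoop dependency, and the class and method names are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative sketch, not Hive code: stands in for a recursive
// FileSystem.delete(stagingPath, true) on HDFS.
public class StagingDirCleanup {

    // Delete a staging directory and everything under it, deepest-first,
    // so each directory is empty by the time it is removed.
    static void deleteRecursively(Path stagingDir) throws IOException {
        if (!Files.exists(stagingDir)) return; // nothing to clean up
        try (Stream<Path> walk = Files.walk(stagingDir)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("staging");
        Files.createFile(dir.resolve("part-0"));
        deleteRecursively(dir);
        System.out.println(Files.exists(dir)); // false
    }
}
```

Leaving such directories behind is what the issue's "pressure on the HDFS file limit" refers to: each leaked staging file counts against the NameNode's file/object capacity.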
[jira] [Updated] (HIVE-17317) Make Dbcp configurable using hive properties in hive-site.xml
[ https://issues.apache.org/jira/browse/HIVE-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Barna Zsombor Klara updated HIVE-17317:
---
Attachment: HIVE-17317.03.patch

Reuploading the same patch.

> Make Dbcp configurable using hive properties in hive-site.xml
> -
>
> Key: HIVE-17317
> URL: https://issues.apache.org/jira/browse/HIVE-17317
> Project: Hive
> Issue Type: Sub-task
> Reporter: Barna Zsombor Klara
> Assignee: Barna Zsombor Klara
> Attachments: HIVE-17317.01.patch, HIVE-17317.02.patch, HIVE-17317.03.patch

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17512) Not use doAs if distcp privileged user same as user running hive
[ https://issues.apache.org/jira/browse/HIVE-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-17512:
---
Attachment: HIVE-17512.2.patch

Reattaching the patch so the build kicks off.

> Not use doAs if distcp privileged user same as user running hive
>
> Key: HIVE-17512
> URL: https://issues.apache.org/jira/browse/HIVE-17512
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Affects Versions: 3.0.0
> Reporter: anishek
> Assignee: anishek
> Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-17512.1.patch, HIVE-17512.2.patch, HIVE-17512.2.patch

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
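The decision HIVE-17512 describes boils down to a user comparison: wrap the distcp call in a proxy-user `doAs` only when the configured privileged user differs from the user Hive is already running as. A minimal sketch of that check, with a hypothetical `needsDoAs` helper (real code would compare against `UserGroupInformation.getCurrentUser()` and, when the check passes, run distcp inside `ugi.doAs(...)` on a proxy UGI):

```java
// Illustrative check only; method and parameter names are not Hive's.
public class DistcpUserCheck {

    // True when distcp must run under a proxy-user doAs; false when the
    // privileged user is unset or already the current user, in which case
    // an extra doAs wrapper would be redundant.
    static boolean needsDoAs(String privilegedUser, String currentUser) {
        return privilegedUser != null
                && !privilegedUser.isEmpty()
                && !privilegedUser.equals(currentUser);
    }

    public static void main(String[] args) {
        System.out.println(needsDoAs("hdfs", "hive")); // true  -> run inside doAs
        System.out.println(needsDoAs("hive", "hive")); // false -> call distcp directly
    }
}
```

Skipping the redundant `doAs` avoids creating a proxy UGI for a user identical to the login user, which is exactly the case the issue title calls out.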
[jira] [Commented] (HIVE-17496) Bootstrap repl is not cleaning up staging dirs
[ https://issues.apache.org/jira/browse/HIVE-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16169669#comment-16169669 ]

Hive QA commented on HIVE-17496:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12887600/HIVE-17496.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11042 tests executed

*Failed tests:*
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=230)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_mask_hash] (batchId=28)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=156)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_table_failure2] (batchId=89)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] (batchId=234)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] (batchId=234)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=215)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=215)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDeleteStagingDir (batchId=218)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6860/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6860/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6860/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12887600 - PreCommit-HIVE-Build

> Bootstrap repl is not cleaning up staging dirs
> --
>
> Key: HIVE-17496
> URL: https://issues.apache.org/jira/browse/HIVE-17496
> Project: Hive
> Issue Type: Bug
> Components: repl
> Reporter: Tao Li
> Assignee: Tao Li
> Attachments: HIVE-17496.1.patch, HIVE-17496.2.patch, HIVE-17496.3.patch,
> HIVE-17496.4.patch, HIVE-17496.5.patch
>
> This will put more pressure on the HDFS file limit.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)