[jira] [Updated] (HIVE-22934) Hive server interactive log counters to error stream

2020-04-23 Thread Antal Sinkovits (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-22934:
---
Attachment: HIVE-22934.02.patch

> Hive server interactive log counters to error stream
> 
>
> Key: HIVE-22934
> URL: https://issues.apache.org/jira/browse/HIVE-22934
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-22934.01.patch, HIVE-22934.02.patch, 
> HIVE-22934.patch
>
>
> Hive server is logging the console output to the system error stream.
> This needs to be fixed because:
> First, we do not roll the file.
> Second, writing to such a file is done sequentially and can lead to throttling/poor
> performance.
> {code}
> -rw-r--r--  1 hive hadoop 9.5G Feb 26 17:22 hive-server2-interactive.err
> {code}
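
A minimal illustration of the general fix direction (editor's sketch, not the actual HIVE-22934 patch): if the counters are currently printed straight to System.err, routing them through the logging framework lets the configured (rolling) appenders manage the output. The class and method names below are illustrative assumptions.

{code}
// Hedged sketch only: route counter output through SLF4J instead of System.err,
// so it is handled by the configured (rolling) appenders rather than an ever-growing .err file.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CounterReporter {  // illustrative name, not an actual Hive class
  private static final Logger LOG = LoggerFactory.getLogger(CounterReporter.class);

  public void report(String counterName, long value) {
    LOG.info("Counter {} = {}", counterName, value);
  }
}
{code}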



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091215#comment-17091215
 ] 

Hive QA commented on HIVE-23040:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000995/HIVE-23040.05.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17139 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcGenericUDTFGetSplits2.testGenericUDTFOrderBySplitCount1
 (batchId=213)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21908/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21908/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21908/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000995 - PreCommit-HIVE-Build

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-23 Thread Adesh Kumar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adesh Kumar Rao updated HIVE-23230:
---
Status: Patch Available  (was: Open)

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from  limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" UDF creates splits while ignoring the limit 
> constraint. When these splits are submitted to multiple LLAP daemons, each daemon 
> returns up to "n" rows.
> How to reproduce: needs spark-shell, hive-warehouse-connector and Hive on 
> LLAP with more than 1 LLAP daemon running.
> Run the commands below via beeline to create and populate the table
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now run the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> and it will return more than 1 row.
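
A toy illustration of the failure mode (editor's sketch, not HWC or Hive code): each split honours the limit independently, so the union of per-daemon results can exceed the requested limit.

{code}
import java.util.Arrays;
import java.util.List;

public final class LimitPerSplitDemo {
  public static void main(String[] args) {
    int limit = 1;
    // Pretend three LLAP daemons each executed one split and each applied the limit locally.
    List<List<Integer>> perDaemonRows = Arrays.asList(
        Arrays.asList(1), Arrays.asList(2), Arrays.asList(3));
    int total = perDaemonRows.stream().mapToInt(List::size).sum();
    System.out.println("requested limit = " + limit + ", rows returned = " + total); // 3 > 1
  }
}
{code}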



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-23 Thread Adesh Kumar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adesh Kumar Rao updated HIVE-23230:
---
Status: Open  (was: Patch Available)

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from  limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" UDF creates splits while ignoring the limit 
> constraint. When these splits are submitted to multiple LLAP daemons, each daemon 
> returns up to "n" rows.
> How to reproduce: needs spark-shell, hive-warehouse-connector and Hive on 
> LLAP with more than 1 LLAP daemon running.
> Run the commands below via beeline to create and populate the table
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now run the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> and it will return more than 1 row.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-23 Thread Shubham Chaurasia (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091203#comment-17091203
 ] 

Shubham Chaurasia commented on HIVE-23230:
--

[~adeshrao] 

HIVE-23230.2.patch looks good to me for fixing the limit issue; however, these test 
failures seem related, as all of them use get_splits(). I cannot access the test 
report links above. Could you please check these locally, and also reattach 
the same patch?


cc [~sankarh]

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from  limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" UDF creates splits while ignoring the limit 
> constraint. When these splits are submitted to multiple LLAP daemons, each daemon 
> returns up to "n" rows.
> How to reproduce: needs spark-shell, hive-warehouse-connector and Hive on 
> LLAP with more than 1 LLAP daemon running.
> Run the commands below via beeline to create and populate the table
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now run the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> and it will return more than 1 row.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091202#comment-17091202
 ] 

Hive QA commented on HIVE-23040:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 1 new + 26 unchanged - 0 fixed 
= 27 total (was 26) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} ql generated 0 new + 1528 unchanged - 2 fixed = 1528 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21908/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21908/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21908/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21908/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?focusedWorklogId=426857=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426857
 ]

ASF GitHub Bot logged work on HIVE-23031:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 04:58
Start Date: 24/Apr/20 04:58
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #988:
URL: https://github.com/apache/hive/pull/988#discussion_r414266001



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -2465,6 +2465,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "If the number of references to a CTE clause exceeds this threshold, 
Hive will materialize it\n" +
 "before executing the main query block. -1 will disable this 
feature."),
 
+
HIVE_OPTIMIZE_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.sketches.rewrite.countdistintct.enabled",
 false,

Review comment:
   Let's prefix all of them with `hive.optimize.bi`.
   
   Additionally, let's create a general toggle for all of them 
(`hive.optimize.bi.sketches.rewrite.enabled`?) that is `false` by default. Then 
individual ones such as 
`hive.optimize.bi.sketches.rewrite.countdistintct.enabled` are by default 
`true`.
   The idea is that users can enable the feature with a single change to their 
property values, and can selectively disable some of the transformations in 
case there are bugs, they want to test something else, etc.
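
   A minimal sketch of how the proposed two-level toggle could be consulted at rewrite time; the property names are the ones suggested in this review (they are not existing HiveConf entries), and the helper class is purely illustrative.

{code}
// Hedged sketch only: combines the proposed master switch with the per-rewrite toggle.
// Neither property exists in HiveConf yet; both names come from the review suggestion above.
import org.apache.hadoop.hive.conf.HiveConf;

final class SketchRewriteToggles {
  private SketchRewriteToggles() {}

  /** True only when the master switch (default false) and the individual toggle (default true) are both on. */
  static boolean countDistinctRewriteEnabled(HiveConf conf) {
    boolean master = conf.getBoolean("hive.optimize.bi.sketches.rewrite.enabled", false);
    boolean individual = conf.getBoolean("hive.optimize.bi.sketches.rewrite.countdistintct.enabled", true);
    return master && individual;
  }
}
{code}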

##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -2465,6 +2465,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "If the number of references to a CTE clause exceeds this threshold, 
Hive will materialize it\n" +
 "before executing the main query block. -1 will disable this 
feature."),
 
+
HIVE_OPTIMIZE_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.sketches.rewrite.countdistintct.enabled",
 false,
+"Enables to rewrite COUNT(DISTINCT(X)) queries to be rewritten to use 
sketch functions."),
+
+
HIVE_OPTIMIZE_REWRITE_COUNT_DISTINCT_SKETCHCLASS("hive.optimize.sketches.rewrite.countdistintct.sketchclass",
 "hll",

Review comment:
   Let's limit the sketch class options with a `StringSet` containing the values that 
are valid.
   
   Additionally, can we add a comment in the description about what a 'sketch 
class' means?
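
   A hedged sketch of what that constraint could look like, following the ConfVars format shown in the diff above; the property name combines the two review suggestions, and the allowed value list ("hll", "theta") is an illustrative assumption, not a confirmed set.

{code}
// Hedged sketch only (a fragment in the style of the diff above, not the actual patch):
// attach a StringSet validator so only known sketch classes are accepted, and explain
// in the description what a 'sketch class' means.
HIVE_OPTIMIZE_BI_REWRITE_COUNT_DISTINCT_SKETCHCLASS(
    "hive.optimize.bi.sketches.rewrite.countdistintct.sketchclass", "hll",
    new StringSet("hll", "theta"),
    "Sketch family ('sketch class') used to implement the rewrite, e.g. HLL sketches."),
{code}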

##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
##
@@ -128,14 +141,26 @@ private void buildCalciteFns() {
   OperandTypes.family(),
   unionFn);
 
+
   unionSFD.setCalciteFunction(unionFn);
   sketchSFD.setCalciteFunction(sketchFn);
+  if (estimateSFD != null) {
+SqlFunction estimateFn = new HiveSqlFunction(estimateSFD.name,
+SqlKind.OTHER_FUNCTION,
+ReturnTypes.explicit(SqlTypeName.DOUBLE),

Review comment:
   If this is a UDF, we should probably dynamically generate the return 
type from it as we do for other UDFs?

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteCountDistinctToDataSketches.java
##
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.AggregateCall;
+import org.apache.calcite.rel.core.RelFactories.AggregateFactory;
+import org.apache.calcite.rel.core.RelFactories.ProjectFactory;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.hadoop.hive.conf.HiveConf;
+import 

[jira] [Updated] (HIVE-23235) Checkpointing in repl dump failing for orc format

2020-04-23 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23235:
---
Attachment: HIVE-23235.10.patch
Status: Patch Available  (was: In Progress)

> Checkpointing in repl dump failing for orc format
> -
>
> Key: HIVE-23235
> URL: https://issues.apache.org/jira/browse/HIVE-23235
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: HIVE-23235.01.patch, HIVE-23235.02.patch, 
> HIVE-23235.03.patch, HIVE-23235.04.patch, HIVE-23235.05.patch, 
> HIVE-23235.06.patch, HIVE-23235.07.patch, HIVE-23235.08.patch, 
> HIVE-23235.09.patch, HIVE-23235.10.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23235) Checkpointing in repl dump failing for orc format

2020-04-23 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23235:
---
Status: In Progress  (was: Patch Available)

> Checkpointing in repl dump failing for orc format
> -
>
> Key: HIVE-23235
> URL: https://issues.apache.org/jira/browse/HIVE-23235
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: HIVE-23235.01.patch, HIVE-23235.02.patch, 
> HIVE-23235.03.patch, HIVE-23235.04.patch, HIVE-23235.05.patch, 
> HIVE-23235.06.patch, HIVE-23235.07.patch, HIVE-23235.08.patch, 
> HIVE-23235.09.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091167#comment-17091167
 ] 

Hive QA commented on HIVE-19369:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000993/HIVE-19369.9.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17137 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21907/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21907/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21907/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000993 - PreCommit-HIVE-Build

> Locks: Add new lock implementations for always zero-wait readers
> 
>
> Key: HIVE-19369
> URL: https://issues.apache.org/jira/browse/HIVE-19369
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-19369.1.patch, HIVE-19369.2.patch, 
> HIVE-19369.3.patch, HIVE-19369.4.patch, HIVE-19369.5.patch, 
> HIVE-19369.6.patch, HIVE-19369.7.patch, HIVE-19369.8.patch, 
> HIVE-19369.9.patch, HIVE-19369.9.patch
>
>
> Hive locking with micro-managed and full-ACID tables needs a better locking 
> implementation which always allows no-wait readers.
> EXCL_DROP
> EXCL_WRITE
> SHARED_WRITE
> SHARED_READ
> Short write-up
> EXCL_DROP is a "drop partition" or "drop table" and waits for all others to 
> exit
> EXCL_WRITE excludes all writes and will wait for all existing SHARED_WRITE to 
> exit.
> SHARED_WRITE allows all SHARED_WRITES to go through, but will wait for an 
> EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different 
> threads).
> SHARED_READ does not wait for any lock - it fails fast for a pending 
> EXCL_DROP, because even if there is an EXCL_WRITE or SHARED_WRITE pending, 
> there's no semantic reason to wait for them to succeed before going ahead 
> with a SHARED_READ.
> a select * => SHARED_READ
> an insert into => SHARED_WRITE
> an insert overwrite or MERGE => EXCL_WRITE
> a drop table => EXCL_DROP
> TODO:
> The fate of the compactor needs to be added to this before it is a complete 
> description.
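
A compact restatement of the wait rules above in code, purely as an editor's illustration; this is not Hive's actual lock manager implementation.

{code}
// Hedged sketch only: restates the wait/fail-fast rules from the description above.
enum LockType { SHARED_READ, SHARED_WRITE, EXCL_WRITE, EXCL_DROP }

final class ZeroWaitReadRules {
  private ZeroWaitReadRules() {}

  /** Must a requested lock wait behind an already-held or pending lock? */
  static boolean mustWait(LockType requested, LockType existing) {
    switch (requested) {
      case SHARED_READ:
        return false;                              // never waits; see failsFast below
      case SHARED_WRITE:
        return existing == LockType.EXCL_WRITE     // waits for exclusive writers
            || existing == LockType.EXCL_DROP;     // ... and for drops
      case EXCL_WRITE:
        return existing != LockType.SHARED_READ;   // excludes all other writers and drops
      case EXCL_DROP:
        return true;                               // waits for all others to exit
      default:
        throw new IllegalArgumentException("unknown lock type: " + requested);
    }
  }

  /** SHARED_READ fails fast (instead of waiting) only when a drop is pending. */
  static boolean failsFast(LockType requested, LockType pending) {
    return requested == LockType.SHARED_READ && pending == LockType.EXCL_DROP;
  }
}
{code}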



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091164#comment-17091164
 ] 

Zoltan Haindrich commented on HIVE-23272:
-

+1 for patch#03
pending tests

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch, 
> HIVE-23272.03.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091162#comment-17091162
 ] 

Zoltan Haindrich commented on HIVE-23031:
-

[~jcamachorodriguez] Could you please take a look?

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23031:

Attachment: HIVE-23031.03.patch

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Make bucketing version usage more robust

2020-04-23 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091161#comment-17091161
 ] 

Zoltan Haindrich commented on HIVE-21304:
-

[~jcamachorodriguez], [~vgarg] Could you please take a look?
https://github.com/apache/hive/pull/994

> Make bucketing version usage more robust
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Show Bucketing version for ReduceSinkOp in explain extended plan - this 
> helps identify what hashing algorithm is being used by ReduceSinkOp.
> * move the actually selected version to the "conf" so that it doesn't get lost
> * replace trait-related logic with a separate optimizer rule
> * do version selection based on a group of operators - this is more reliable
> * skip bucketing version selection for tables with 1 bucket
> * prefer to use version 2 if possible
> * fix operator creations which didn't set a new conf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21304) Make bucketing version usage more robust

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21304:

Description: 
* Show Bucketing version for ReduceSinkOp in explain extended plan - this helps 
identify what hashing algorithm is being used by ReduceSinkOp.
* move the actually selected version to the "conf" so that it doesn't get lost
* replace trait-related logic with a separate optimizer rule
* do version selection based on a group of operators - this is more reliable
* skip bucketing version selection for tables with 1 bucket
* prefer to use version 2 if possible
* fix operator creations which didn't set a new conf


  was:
Show Bucketing version for ReduceSinkOp in explain extended plan.

This helps identify what hashing algorithm is being used by ReduceSinkOp.

 

cc [~vgarg]


> Make bucketing version usage more robust
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Show Bucketing version for ReduceSinkOp in explain extended plan - this 
> helps identify what hashing algorithm is being used by ReduceSinkOp.
> * move the actually selected version to the "conf" so that it doesn't get lost
> * replace trait-related logic with a separate optimizer rule
> * do version selection based on a group of operators - this is more reliable
> * skip bucketing version selection for tables with 1 bucket
> * prefer to use version 2 if possible
> * fix operator creations which didn't set a new conf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091160#comment-17091160
 ] 

Hive QA commented on HIVE-19369:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
42s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} standalone-metastore/metastore-common: The patch 
generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 5 new + 504 unchanged - 33 fixed = 509 total (was 537) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 21 new + 231 unchanged - 122 
fixed = 252 total (was 353) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} metastore-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} standalone-metastore/metastore-server generated 0 
new + 189 unchanged - 1 fixed = 189 total (was 190) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21907/dev-support/hive-personality.sh
 |
| git 

[jira] [Updated] (HIVE-21304) Make bucketing version usage more robust

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21304:

Summary: Make bucketing version usage more robust  (was: Show Bucketing 
version for ReduceSinkOp in explain extended plan)

> Make bucketing version usage more robust
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21304?focusedWorklogId=426853=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426853
 ]

ASF GitHub Bot logged work on HIVE-21304:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 04:15
Start Date: 24/Apr/20 04:15
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk opened a new pull request #994:
URL: https://github.com/apache/hive/pull/994


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426853)
Remaining Estimate: 0h
Time Spent: 10m

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21304:

Attachment: HIVE-21304.32.patch

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23252) Change spark related tests to be optional

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23252:

Attachment: HIVE-23252.01.patch

> Change spark related tests to be optional
> -
>
> Key: HIVE-23252
> URL: https://issues.apache.org/jira/browse/HIVE-23252
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23252.01.patch, HIVE-23252.01.patch, 
> HIVE-23252.01.patch, HIVE-23252.01.patch, HIVE-23252.01.patch
>
>
> HIVE-23137 has disabled the execution of some Spark-related tests; but they 
> would still be considered by a plain Maven command - and the Spark artifacts 
> are (unnecessarily) still downloaded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23031:

Attachment: HIVE-23031.03.patch

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23287) Reduce dependency on icu4j

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091130#comment-17091130
 ] 

Hive QA commented on HIVE-23287:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000988/HIVE-23287.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17110 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithMiniLlapArrow - did not produce a TEST-*.xml file (likely timed 
out) (batchId=215)
org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow.testLlapInputFormatEndToEnd
 (batchId=219)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21906/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21906/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21906/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000988 - PreCommit-HIVE-Build

> Reduce dependency on icu4j
> --
>
> Key: HIVE-23287
> URL: https://issues.apache.org/jira/browse/HIVE-23287
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-23287.patch
>
>
> Brought in transitively via druid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?focusedWorklogId=426839=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426839
 ]

ASF GitHub Bot logged work on HIVE-23216:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 03:17
Start Date: 24/Apr/20 03:17
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #990:
URL: https://github.com/apache/hive/pull/990#discussion_r414240046



##
File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
##
@@ -3835,6 +3837,62 @@ public boolean dropPartition(String dbName, String 
tableName, List parti
 return results;
   }
 
+  private List convertFromPartSpec(Iterator 
iterator, Table tbl)

Review comment:
   Can this be made `static`?

##
File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
##
@@ -3835,6 +3837,62 @@ public boolean dropPartition(String dbName, String 
tableName, List parti
 return results;
   }
 
+  private List convertFromPartSpec(Iterator 
iterator, Table tbl)
+  throws HiveException, TException {
+if(!iterator.hasNext()) {
+  return Collections.emptyList();
+}
+List results = new ArrayList<>();
+
+while (iterator.hasNext()) {
+  PartitionSpec partitionSpec = iterator.next();
+  if (partitionSpec.getPartitionList() != null) {
+// partitions outside table location
+Iterator 
externalPartItr =
+partitionSpec.getPartitionList().getPartitions().iterator();
+while(externalPartItr.hasNext()) {
+  org.apache.hadoop.hive.metastore.api.Partition msPart =
+  externalPartItr.next();
+  results.add(new Partition(tbl, msPart));
+}
+  } else {
+// partitions within table location
+for(PartitionWithoutSD 
partitionWithoutSD:partitionSpec.getSharedSDPartitionSpec().getPartitions()) {
+  org.apache.hadoop.hive.metastore.api.Partition part = new 
org.apache.hadoop.hive.metastore.api.Partition();
+  part.setTableName(partitionSpec.getTableName());
+  part.setDbName(partitionSpec.getDbName());
+  part.setCatName(partitionSpec.getCatName());
+  part.setCreateTime(partitionWithoutSD.getCreateTime());
+  part.setLastAccessTime(partitionWithoutSD.getLastAccessTime());
+  part.setParameters(partitionWithoutSD.getParameters());
+  part.setPrivileges(partitionWithoutSD.getPrivileges());
+  
part.setSd(partitionSpec.getSharedSDPartitionSpec().getSd().deepCopy());
+  String partitionLocation = null;
+  if(partitionWithoutSD.getRelativePath() == null
+  || partitionWithoutSD.getRelativePath().isEmpty()) {
+if (tbl.getDataLocation() != null) {
+  Path partPath = new Path(tbl.getDataLocation(),
+  Warehouse.makePartName(tbl.getPartCols(),
+  partitionWithoutSD.getValues()));
+  partitionLocation = partPath.toString();
+}
+  } else {
+partitionLocation = tbl.getSd().getLocation();
+partitionLocation += partitionWithoutSD.getRelativePath();
+  }
+  part.getSd().setLocation(partitionLocation);
+  part.setValues(partitionWithoutSD.getValues());
+  part.setWriteId(partitionSpec.getWriteId());
+  Partition hivePart = new Partition(tbl, part);
+  //assert(partitionWithoutSD.getRelativePath() != null);

Review comment:
   Remove the commented out code.

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java
##
@@ -1178,6 +1179,22 @@ public boolean listPartitionsByExpr(String catName, 
String dbName, String tblNam
 return result.isEmpty();
   }
 
+  @Override
+  public boolean listPartitionsSpecByExpr(String catName, String dbName, 
String tblName, byte[] expr,
+  String defaultPartitionName, short maxParts, List result) 
throws TException {
+org.apache.hadoop.hive.metastore.api.Table table = getTempTable(dbName, 
tblName);
+if (table == null) {
+  return super.listPartitionsSpecByExpr(catName, dbName, tblName, expr, 
defaultPartitionName, maxParts, result);
+}
+assert result != null;
+
+result.addAll(
+MetaStoreServerUtils.getPartitionspecsGroupedByStorageDescriptor(table,
+  getPartitionsForMaxParts(tblName, 
getPartitionedTempTable(table).listPartitionsByFilter(
+generateJDOFilter(table, expr, defaultPartitionName)), maxParts)));

Review comment:
   Indentation seems off here.

##
File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
##
@@ -1941,6 +1941,45 @@ public boolean listPartitionsByExpr(String catName, 
String db_name, String tbl_n
 return 

[jira] [Commented] (HIVE-23134) Hive & Kudu interaction not available on ARM

2020-04-23 Thread RuiChen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091112#comment-17091112
 ] 

RuiChen commented on HIVE-23134:


[All of the Kudu-related test 
cases|https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk-pipeline/4/testReport/]
 will pass once the patch is merged. There is no ARM64-supported Kudu 
release available, so we should skip these tests on ARM64.

About Kudu ARM64 support, see https://issues.apache.org/jira/browse/KUDU-3007

> Hive & Kudu interaction not available on ARM
> 
>
> Key: HIVE-23134
> URL: https://issues.apache.org/jira/browse/HIVE-23134
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Zhenyu Zheng
>Priority: Major
> Attachments: HIVE-23134.1.patch, HIVE-23134.2.patch
>
>
> Currently, we have set up an ARM CI to test how Hive works on the ARM 
> platform:
> https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/
> According to the results, Hive & Kudu interaction is not available on the ARM 
> platform:
> https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.kudu/
> This is because we use Kudu version 1.10, and that version does not come 
> with ARM-workable packages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-23134) Hive & Kudu interaction not available on ARM

2020-04-23 Thread RuiChen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091112#comment-17091112
 ] 

RuiChen edited comment on HIVE-23134 at 4/24/20, 2:52 AM:
--

All of the Kudu-related test cases will pass once the patch is merged. There is no 
ARM64-supported Kudu release available, so we should skip these tests on ARM64.

[https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk-pipeline/4/testReport/]

About Kudu ARM64 support, see https://issues.apache.org/jira/browse/KUDU-3007


was (Author: ruichen):
[All of Kudu related test 
cases|[https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk-pipeline/4/testReport/]]
 will be passed when the patch is merged, no available ARM64 supported Kudu 
release, so we should skip these tests on ARM64.

About Kudu ARM64 support, see https://issues.apache.org/jira/browse/KUDU-3007

> Hive & Kudu interaction not available on ARM
> 
>
> Key: HIVE-23134
> URL: https://issues.apache.org/jira/browse/HIVE-23134
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Zhenyu Zheng
>Priority: Major
> Attachments: HIVE-23134.1.patch, HIVE-23134.2.patch
>
>
> Currently, we have set up an ARM CI to test how Hive works on the ARM 
> platform:
> https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/
> According to the results, Hive & Kudu interaction is not available on the ARM 
> platform:
> https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.kudu/
> This is because we use Kudu version 1.10, and that version does not come 
> with ARM-workable packages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23287) Reduce dependency on icu4j

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091109#comment-17091109
 ] 

Hive QA commented on HIVE-23287:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21906/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21906/yetus/patch-asflicense-problems.txt
 |
| modules | C: druid-handler U: druid-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21906/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Reduce dependency on icu4j
> --
>
> Key: HIVE-23287
> URL: https://issues.apache.org/jira/browse/HIVE-23287
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-23287.patch
>
>
> Brought in transitively via druid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091102#comment-17091102
 ] 

Hive QA commented on HIVE-23286:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000978/HIVE-23286.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17124 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_insert_partitioned]
 (batchId=98)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testShowLocksFilterOptions 
(batchId=300)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21905/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21905/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21905/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000978 - PreCommit-HIVE-Build

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table if an insert failure aborts the 
> FileSinkOperator and ACID direct insert is turned on.
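
A minimal sketch of the direction the description points at, assuming FileSinkDesc exposes an isDirectInsert() accessor for the flag (the actual fix may differ):

{code:java}
} else {
  // Hadoop always calls close() even if an exception was thrown in map()/reduce().
  for (FSPaths fsp : valToPaths.values()) {
    // Only allow file deletion when neither MM nor (assumed) ACID direct insert
    // is in play, so an aborted insert cannot remove already committed data.
    boolean removeFiles = !autoDelete && isNativeTable()
        && !conf.isMmTable() && !conf.isDirectInsert();
    fsp.abortWritersAndUpdaters(fs, abort, removeFiles);
  }
}
{code}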



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23207) Create integration tests for TxnManager for different rdbms metastores

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091097#comment-17091097
 ] 

Hive QA commented on HIVE-23207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} contrib in master has 11 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} itests/qtest-druid in master has 7 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
49s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
18s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 18s{color} 
| {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 513 unchanged - 12 fixed = 513 total (was 525) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} ql: The patch generated 0 new + 139 unchanged - 80 
fixed = 139 total (was 219) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} The patch contrib passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} root: The patch generated 0 new + 679 unchanged - 92 
fixed = 679 total (was 771) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} The patch hive-blobstore passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} The patch qtest-accumulo passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch qtest-druid passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} The patch qtest-kudu passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} The patch util passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
8s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} patch/standalone-metastore/metastore-server cannot run 
setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  6m 
18s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | 

[jira] [Updated] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23244:
--
Attachment: HIVE-23244.02.patch

> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23244.01.patch, HIVE-23244.02.patch
>
>
> Create View commands are not queries, but commands which have queries as a 
> part of them. Therefore a separate CreateViewAnalyzer is needed which uses 
> SemanticAnalyzer to analyze its query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091093#comment-17091093
 ] 

Hive QA commented on HIVE-23286:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  5m 
32s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21905/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21905/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21905/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table if an insert failure aborts the 
> FileSinkOperator and ACID direct insert is turned on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23219) Add test cleanup for TestHCatLoaderEncryption and TestSessionManagerMetrics

2020-04-23 Thread RuiChen (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RuiChen updated HIVE-23219:
---
Status: Patch Available  (was: Open)

Retest

> Add test cleanup for TestHCatLoaderEncryption and TestSessionManagerMetrics
> ---
>
> Key: HIVE-23219
> URL: https://issues.apache.org/jira/browse/HIVE-23219
> Project: Hive
>  Issue Type: Bug
>Reporter: RuiChen
>Assignee: RuiChen
>Priority: Minor
> Attachments: HIVE-23219.2.patch, HIVE-23219.patch
>
>
> 1. Test cases in TestHCatLoaderEncryption pick up the wrong test.jks from the
>  hive/ql/target path, which causes failures when some ql tests are run first
>  and TestHCatLoaderEncryption is then run locally; fix it by using TEST_DATA_DIR.
>  2. Add a tearDown method that cleans up the static metrics instance, so that
>  test cases within one class do not impact each other.
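
A minimal sketch of the second item, assuming the static instance in question is the one managed by MetricsFactory in hive-common (class and method names here are an assumption, not the exact patch):

{code:java}
import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
import org.junit.After;

public class TestSessionManagerMetricsCleanupSketch {

  @After
  public void tearDown() throws Exception {
    // Close the singleton metrics instance after each test so that counters
    // registered by one test class cannot leak into the next one.
    MetricsFactory.close();
  }
}
{code}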



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23219) Add test cleanup for TestHCatLoaderEncryption and TestSessionManagerMetrics

2020-04-23 Thread RuiChen (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RuiChen updated HIVE-23219:
---
Status: Open  (was: Patch Available)

> Add test cleanup for TestHCatLoaderEncryption and TestSessionManagerMetrics
> ---
>
> Key: HIVE-23219
> URL: https://issues.apache.org/jira/browse/HIVE-23219
> Project: Hive
>  Issue Type: Bug
>Reporter: RuiChen
>Assignee: RuiChen
>Priority: Minor
> Attachments: HIVE-23219.2.patch, HIVE-23219.patch
>
>
> 1. Test cases in TestHCatLoaderEncryption pick up the wrong test.jks from the
>  hive/ql/target path, which causes failures when some ql tests are run first
>  and TestHCatLoaderEncryption is then run locally; fix it by using TEST_DATA_DIR.
>  2. Add a tearDown method that cleans up the static metrics instance, so that
>  test cases within one class do not impact each other.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23207) Create integration tests for TxnManager for different rdbms metastores

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091073#comment-17091073
 ] 

Hive QA commented on HIVE-23207:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000974/HIVE-23207.9.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17124 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21904/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21904/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21904/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000974 - PreCommit-HIVE-Build

> Create integration tests for TxnManager for different rdbms metastores
> --
>
> Key: HIVE-23207
> URL: https://issues.apache.org/jira/browse/HIVE-23207
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Minor
> Attachments: HIVE-23207.1.patch, HIVE-23207.2.patch, 
> HIVE-23207.3.patch, HIVE-23207.4.patch, HIVE-23207.5.patch, 
> HIVE-23207.6.patch, HIVE-23207.7.patch, HIVE-23207.8.patch, HIVE-23207.9.patch
>
>
> Create an integration test suite that runs tests for TxnManager with the 
> metastore configured to use different kinds of RDBMSs. Use the different 
> DatabaseRule-s defined in the standalone-metastore for docker environments, 
> and use the real init schema for every database type instead of the hardwired 
> TxnDbUtil.prepDb.
> This test will be useful for easy manual validation of schema changes.
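
A rough JUnit sketch of how such a suite could be parameterized over the database types; the mapping from each name to the corresponding dockerized DatabaseRule and schema installation is omitted, and the names are illustrative only:

{code:java}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestTxnHandlerOnRdbmsSketch {

  @Parameters(name = "{0}")
  public static Collection<String> databases() {
    // One entry per docker-backed metastore database to run the suite against.
    return Arrays.asList("derby", "mysql", "postgres", "oracle", "mssql");
  }

  private final String dbType;

  public TestTxnHandlerOnRdbmsSketch(String dbType) {
    this.dbType = dbType;
  }

  @Test
  public void openTxnAndAcquireLock() {
    // Placeholder: start the dockerized database for dbType, install the real
    // init schema, then exercise TxnManager/TxnHandler operations against it.
  }
}
{code}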



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23270) Optimize isValidTxnListState to reduce the numbers of HMS calls

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091047#comment-17091047
 ] 

Hive QA commented on HIVE-23270:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000977/HIVE-23270.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17124 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21903/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21903/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21903/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000977 - PreCommit-HIVE-Build

> Optimize isValidTxnListState to reduce the numbers of HMS calls
> ---
>
> Key: HIVE-23270
> URL: https://issues.apache.org/jira/browse/HIVE-23270
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23270.02.patch, HIVE-23270.03.patch, 
> HIVE-23270.patch
>
>
> There are several checks which do not need an HMS call and can already 
> determine the return value. Moving them earlier potentially prevents an extra 
> HMS call.
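
An illustration of the short-circuit pattern the description refers to, with hypothetical names standing in for the real Driver/HMS calls:

{code:java}
import java.util.function.Supplier;

public class SnapshotCheckSketch {
  // Cheap, purely local checks run first; the expensive metastore round trip
  // is only made when none of them can already decide the outcome.
  static boolean isValidSnapshot(boolean usesTransactionalTables,
                                 boolean snapshotTakenAfterLastWrite,
                                 Supplier<Boolean> hmsRecheck) {
    if (!usesTransactionalTables) {
      return true;   // no ACID tables involved, nothing to re-validate
    }
    if (snapshotTakenAfterLastWrite) {
      return true;   // already known to be valid without asking HMS
    }
    return hmsRecheck.get();   // only now pay for the HMS call
  }
}
{code}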



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23270) Optimize isValidTxnListState to reduce the numbers of HMS calls

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091025#comment-17091025
 ] 

Hive QA commented on HIVE-23270:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21903/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21903/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21903/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Optimize isValidTxnListState to reduce the numbers of HMS calls
> ---
>
> Key: HIVE-23270
> URL: https://issues.apache.org/jira/browse/HIVE-23270
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23270.02.patch, HIVE-23270.03.patch, 
> HIVE-23270.patch
>
>
> There are several checks which do not need an HMS call and can already 
> determine the return value. Moving them earlier potentially prevents an extra 
> HMS call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23274:
--
Attachment: HIVE-23274.01.patch

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23274:
--
Attachment: (was: HIVE-23274.01.patch)

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23274:
--
Status: Patch Available  (was: Open)

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23274:
--
Attachment: (was: HIVE-23274.01.patch)

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-23 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Status: Patch Available  (was: Open)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, HIVE-23216.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-23 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Status: Open  (was: Patch Available)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, HIVE-23216.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-23 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Attachment: HIVE-23216.6.patch

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, HIVE-23216.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23201) Improve logging in locking

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091003#comment-17091003
 ] 

Hive QA commented on HIVE-23201:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000975/HIVE-23201.8.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17124 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockTimeout 
(batchId=245)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21902/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21902/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21902/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000975 - PreCommit-HIVE-Build

> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23201.1.patch, HIVE-23201.1.patch, 
> HIVE-23201.2.patch, HIVE-23201.2.patch, HIVE-23201.3.patch, 
> HIVE-23201.4.patch, HIVE-23201.5.patch, HIVE-23201.5.patch, 
> HIVE-23201.5.patch, HIVE-23201.5.patch, HIVE-23201.6.patch, 
> HIVE-23201.6.patch, HIVE-23201.7.patch, HIVE-23201.8.patch, HIVE-23201.8.patch
>
>
> Currently it can be quite difficult to troubleshoot issues related to 
> locking. To understand why a particular txn gave up after a while on 
> acquiring a lock, you have to connect directly to the backend DB, since we 
> currently do not log which exact locks the txn is waiting for.
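
A sketch of the kind of log line the improvement is after, with made-up names (the real change sits in TxnHandler and its lock-conflict handling):

{code:java}
// Hypothetical illustration only: log which locks a request is blocked on,
// so the information is available without querying the backend DB directly.
private void logBlockedLock(long extLockId, long txnId, java.util.List<String> blockers) {
  LOG.info("Lock request lockid:{} (txnid:{}) is waiting for {} conflicting lock(s): {}",
      extLockId, txnId, blockers.size(), blockers);
}
{code}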



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23275?focusedWorklogId=426768=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426768
 ]

ASF GitHub Bot logged work on HIVE-23275:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 22:48
Start Date: 23/Apr/20 22:48
Worklog Time Spent: 10m 
  Work Description: jcamachor opened a new pull request #993:
URL: https://github.com/apache/hive/pull/993


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426768)
Remaining Estimate: 0h
Time Spent: 10m

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-23 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23275:
---
Attachment: HIVE-23275.01.patch

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23201) Improve logging in locking

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090982#comment-17090982
 ] 

Hive QA commented on HIVE-23201:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
20s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 510 unchanged - 13 fixed = 511 total (was 523) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 189 unchanged - 1 fixed = 190 total (was 190) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.timeOutLocks(Connection) 
may fail to clean up java.sql.ResultSet  Obligation to clean up resource 
created at TxnHandler.java:up java.sql.ResultSet  Obligation to clean up 
resource created at TxnHandler.java:[line 4763] is not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21902/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21902/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21902/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21902/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21902/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>

[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426759=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426759
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 22:14
Start Date: 23/Apr/20 22:14
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r414159331



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
##
@@ -789,12 +791,25 @@ protected void validateUDF(ASTNode expr, boolean 
isFunction, TypeCheckCtx ctx, F
 
 LogHelper console = new LogHelper(LOG);
 
+Set<PrimitiveObjectInspector.PrimitiveCategory> unsafeConventionTyps = 
Sets.newHashSet(
+PrimitiveObjectInspector.PrimitiveCategory.STRING,
+PrimitiveObjectInspector.PrimitiveCategory.VARCHAR,
+PrimitiveObjectInspector.PrimitiveCategory.CHAR);
 // For now, if a bigint is going to be cast to a double throw an error 
or warning
-if ((oiTypeInfo0.equals(TypeInfoFactory.stringTypeInfo) && 
oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) ||
-(oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo) && 
oiTypeInfo1.equals(TypeInfoFactory.stringTypeInfo))) {
+if ((oiTypeInfo0 instanceof PrimitiveTypeInfo &&

Review comment:
   Move the conditions to method unSafeCompareWithBigInt.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426759)
Time Spent: 1.5h  (was: 1h 20m)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Comparing bigints and varchars or chars may produce a wrong result, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')
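
The unexpected match comes from the comparison being evaluated as double once a numeric constant meets a char/varchar column; at this magnitude the two distinct bigints collapse to the same double value. A small, self-contained check of that rounding (plain Java, not Hive code):

{code:java}
public class BigintDoubleCollision {
  public static void main(String[] args) {
    long a = 2882303761517473127L;
    long b = 2882303761517473276L;
    // A double has a 53-bit significand; around 2.9e18 adjacent doubles are
    // 512 apart, so both longs round to the same double and compare equal.
    System.out.println((double) a == (double) b);   // prints: true
  }
}
{code}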



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090967#comment-17090967
 ] 

Hive QA commented on HIVE-21304:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000969/HIVE-21304.31.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 17125 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.exec.TestUtilities.testRemoveTempOrDuplicateFilesOnMrNoDp
 (batchId=292)
org.apache.hadoop.hive.ql.exec.TestUtilities.testRemoveTempOrDuplicateFilesOnMrWithDp
 (batchId=292)
org.apache.hadoop.hive.ql.exec.TestUtilities.testRemoveTempOrDuplicateFilesOnTezNoDp
 (batchId=292)
org.apache.hadoop.hive.ql.exec.TestUtilities.testRemoveTempOrDuplicateFilesOnTezWithDp
 (batchId=292)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21901/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21901/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21901/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000969 - PreCommit-HIVE-Build

> Show Bucketing version for ReduceSinkOp in explain extended plan
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch
>
>
> Show Bucketing version for ReduceSinkOp in explain extended plan.
> This helps identify what hashing algorithm is being used by ReduceSinkOp.
>  
> cc [~vgarg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-23 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23291:
---
Status: Patch Available  (was: In Progress)

> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23291.patch
>
>
> Inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-23 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23291 started by Jesus Camacho Rodriguez.
--
> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23291.patch
>
>
> Inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-23 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23291:
---
Attachment: HIVE-23291.patch

> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23291.patch
>
>
> Inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090953#comment-17090953
 ] 

Hive QA commented on HIVE-21304:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
0s{color} | {color:red} ql: The patch generated 9 new + 1324 unchanged - 10 
fixed = 1333 total (was 1334) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
16s{color} | {color:red} ql generated 3 new + 1530 unchanged - 0 fixed = 1533 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Suspicious comparison of Integer references in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:[line 65] |
|  |  Suspicious comparison of Integer references in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:[line 75] |
|  |  Nullcheck of table_desc at line 8232 of value previously dereferenced in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, 
TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, 
SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, 
RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean)  At 
SemanticAnalyzer.java:8232 of value previously dereferenced in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, 
TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, 
SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, 
RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean)  At 
SemanticAnalyzer.java:[line 8225] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21901/dev-support/hive-personality.sh
 |
| git revision | master / 

[jira] [Assigned] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-23 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-23291:
--


> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> Inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23048) Use sequences for TXN_ID generation

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090935#comment-17090935
 ] 

Hive QA commented on HIVE-23048:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000971/HIVE-23048.12.patch

{color:green}SUCCESS:{color} +1 due to 11 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17133 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21900/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21900/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21900/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000971 - PreCommit-HIVE-Build

> Use sequences for TXN_ID generation
> ---
>
> Key: HIVE-23048
> URL: https://issues.apache.org/jira/browse/HIVE-23048
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23048.1.patch, HIVE-23048.10.patch, 
> HIVE-23048.11.patch, HIVE-23048.12.patch, HIVE-23048.2.patch, 
> HIVE-23048.3.patch, HIVE-23048.4.patch, HIVE-23048.5.patch, 
> HIVE-23048.6.patch, HIVE-23048.7.patch, HIVE-23048.8.patch, HIVE-23048.9.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23048) Use sequences for TXN_ID generation

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090930#comment-17090930
 ] 

Hive QA commented on HIVE-23048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
50s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
17s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 7 new + 594 unchanged - 26 fixed = 601 total (was 620) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
53s{color} | {color:red} ql: The patch generated 11 new + 886 unchanged - 4 
fixed = 897 total (was 890) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
33s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 189 unchanged - 1 fixed = 190 total (was 190) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} standalone-metastore_metastore-server generated 1 new 
+ 24 unchanged - 0 fixed = 25 total (was 24) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  Write to static field 
org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxnTimeOutMillis from 
instance method 
org.apache.hadoop.hive.metastore.txn.TxnHandler.setOpenTxnTimeOutMillis(long)  
At TxnHandler.java:from instance method 
org.apache.hadoop.hive.metastore.txn.TxnHandler.setOpenTxnTimeOutMillis(long)  
At TxnHandler.java:[line 885] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21900/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21900/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21900/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21900/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| javadoc | 

[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: HIVE-23272.03.patch

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch, 
> HIVE-23272.03.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090875#comment-17090875
 ] 

Hive QA commented on HIVE-23272:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001001/HIVE-23272.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17126 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=37)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_insert_partitioned]
 (batchId=98)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21898/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21898/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21898/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001001 - PreCommit-HIVE-Build

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23290) Remove plexus-utils transitive dependency

2020-04-23 Thread Roohi Syeda (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roohi Syeda reassigned HIVE-23290:
--


> Remove plexus-utils transitive dependency
> -
>
> Key: HIVE-23290
> URL: https://issues.apache.org/jira/browse/HIVE-23290
> Project: Hive
>  Issue Type: Bug
>Reporter: Roohi Syeda
>Assignee: Roohi Syeda
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090866#comment-17090866
 ] 

Hive QA commented on HIVE-23272:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} serde in master has 198 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
55s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21898/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21898/yetus/patch-asflicense-problems.txt
 |
| modules | C: serde ql itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21898/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23274:
--
Attachment: HIVE-23274.01.patch

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch, HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: HIVE-23272.02.patch

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: (was: HIVE-23272.02.patch)

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23266) Remove QueryWrapper from ObjectStore

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090822#comment-17090822
 ] 

Hive QA commented on HIVE-23266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
25s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
22s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 9 new + 268 unchanged - 28 fixed = 277 total (was 296) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} standalone-metastore/metastore-server generated 3 new 
+ 189 unchanged - 1 fixed = 192 total (was 190) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  Impossible cast from String to 
org.apache.hadoop.hive.metastore.model.MPartition in 
org.apache.hadoop.hive.metastore.ObjectStore.listPartitionsPsWithAuth(String, 
String, String, List, short, String, List)  At 
ObjectStore.java:org.apache.hadoop.hive.metastore.model.MPartition in 
org.apache.hadoop.hive.metastore.ObjectStore.listPartitionsPsWithAuth(String, 
String, String, List, short, String, List)  At ObjectStore.java:[line 3160] |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.hive.metastore.ObjectStore.getMPartitionColumnStatistics(Table,
 List, List, String)  At ObjectStore.java:is not thrown in 
org.apache.hadoop.hive.metastore.ObjectStore.getMPartitionColumnStatistics(Table,
 List, List, String)  At ObjectStore.java:[line 9118] |
|  |  
org.apache.hadoop.hive.metastore.ObjectStore.getMPartitionColumnStatistics(Table,
 List, List, String) concatenates strings using + in a loop  At 
ObjectStore.java:strings using + in a loop  At ObjectStore.java:[line 9099] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21897/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21897/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21897/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21897/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21897/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090801#comment-17090801
 ] 

Hive QA commented on HIVE-23031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000967/HIVE-23031.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 17125 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.catalogPatternsDontWork[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.noSuchCatalog[Remote] 
(batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.tablesInDifferentCatalog[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaCaseSensitive[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaNullNoDbNoTbl[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaNullOrEmptyDb[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaNullOrEmptyTbl[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaNullOrEmptyTypes[Remote]
 (batchId=153)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMeta[Remote]
 (batchId=153)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21896/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21896/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21896/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000967 - PreCommit-HIVE-Build

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090789#comment-17090789
 ] 

Hive QA commented on HIVE-23031:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 3 new + 103 unchanged - 0 
fixed = 106 total (was 103) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
12s{color} | {color:red} ql generated 3 new + 1530 unchanged - 0 fixed = 1533 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Format string "%s" needs argument 2 but only 1 are provided in 
org.apache.hadoop.hive.ql.exec.DataSketchesFunctions.getSketchFunction(String, 
String)  At DataSketchesFunctions.java:2 but only 1 are provided in 
org.apache.hadoop.hive.ql.exec.DataSketchesFunctions.getSketchFunction(String, 
String)  At DataSketchesFunctions.java:[line 106] |
|  |  Dead store to f in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At 
HiveRewriteCountDistinctToDataSketches.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At HiveRewriteCountDistinctToDataSketches.java:[line 70] |
|  |  Dead store to newAggCalls in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At 
HiveRewriteCountDistinctToDataSketches.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At HiveRewriteCountDistinctToDataSketches.java:[line 68] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21896/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21896/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 

[jira] [Updated] (HIVE-23252) Change spark related tests to be optional

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23252:

Attachment: HIVE-23252.01.patch

> Change spark related tests to be optional
> -
>
> Key: HIVE-23252
> URL: https://issues.apache.org/jira/browse/HIVE-23252
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23252.01.patch, HIVE-23252.01.patch, 
> HIVE-23252.01.patch, HIVE-23252.01.patch
>
>
> HIVE-23137 has disabled the execution of some Spark-related tests; but they 
> would still be considered by a plain Maven command - and the Spark artifacts 
> are (unnecessarily) still downloaded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23235) Checkpointing in repl dump failing for orc format

2020-04-23 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23235:
---
Attachment: HIVE-23235.09.patch
Status: Patch Available  (was: In Progress)

> Checkpointing in repl dump failing for orc format
> -
>
> Key: HIVE-23235
> URL: https://issues.apache.org/jira/browse/HIVE-23235
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: HIVE-23235.01.patch, HIVE-23235.02.patch, 
> HIVE-23235.03.patch, HIVE-23235.04.patch, HIVE-23235.05.patch, 
> HIVE-23235.06.patch, HIVE-23235.07.patch, HIVE-23235.08.patch, 
> HIVE-23235.09.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23235) Checkpointing in repl dump failing for orc format

2020-04-23 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23235:
---
Status: In Progress  (was: Patch Available)

> Checkpointing in repl dump failing for orc format
> -
>
> Key: HIVE-23235
> URL: https://issues.apache.org/jira/browse/HIVE-23235
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: HIVE-23235.01.patch, HIVE-23235.02.patch, 
> HIVE-23235.03.patch, HIVE-23235.04.patch, HIVE-23235.05.patch, 
> HIVE-23235.06.patch, HIVE-23235.07.patch, HIVE-23235.08.patch, 
> HIVE-23235.09.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-23 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-23040:
--
Attachment: HIVE-23040.05.patch

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-23 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-23040:
--
Attachment: (was: HIVE-23040.05.patch)

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-19369) Locks: Add new lock implementations for always zero-wait readers

2020-04-23 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-19369:
--
Attachment: HIVE-19369.9.patch

> Locks: Add new lock implementations for always zero-wait readers
> 
>
> Key: HIVE-19369
> URL: https://issues.apache.org/jira/browse/HIVE-19369
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-19369.1.patch, HIVE-19369.2.patch, 
> HIVE-19369.3.patch, HIVE-19369.4.patch, HIVE-19369.5.patch, 
> HIVE-19369.6.patch, HIVE-19369.7.patch, HIVE-19369.8.patch, 
> HIVE-19369.9.patch, HIVE-19369.9.patch
>
>
> Hive Locking with Micro-managed and full-ACID tables needs a better locking 
> implementation which allows for no-wait readers always.
> EXCL_DROP
> EXCL_WRITE
> SHARED_WRITE
> SHARED_READ
> Short write-up
> EXCL_DROP is a "drop partition" or "drop table" and waits for all others to 
> exit
> EXCL_WRITE excludes all writes and will wait for all existing SHARED_WRITE to 
> exit.
> SHARED_WRITE allows all SHARED_WRITES to go through, but will wait for an 
> EXCL_WRITE & EXCL_DROP (waiting so that you can do drop + insert in different 
> threads).
> SHARED_READ does not wait for any lock - it fails fast for a pending 
> EXCL_DROP, because even if there is an EXCL_WRITE or SHARED_WRITE pending, 
> there's no semantic reason to wait for them to succeed before going ahead 
> with a SHARED_READ.
> a select * => SHARED_READ
> an insert into => SHARED_WRITE
> an insert overwrite or MERGE => EXCL_WRITE
> a drop table => EXCL_DROP
> TODO:
> The fate of the compactor needs to be added to this before it is a complete 
> description.
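> A minimal sketch of the wait semantics described above (not part of the 
> proposal itself; the enum and method names are hypothetical illustrations, 
> not the actual Hive lock manager classes):
> {code}
> enum LockType { SHARED_READ, SHARED_WRITE, EXCL_WRITE, EXCL_DROP }
>
> class LockCompatibility {
>   /** True if a new request can proceed without waiting for an existing holder. */
>   static boolean compatible(LockType requested, LockType held) {
>     switch (requested) {
>       case SHARED_READ:
>         // readers never wait; they only fail fast on a pending EXCL_DROP
>         return held != LockType.EXCL_DROP;
>       case SHARED_WRITE:
>         // concurrent inserts go through; writers wait for exclusive locks
>         return held == LockType.SHARED_READ || held == LockType.SHARED_WRITE;
>       case EXCL_WRITE:
>         // insert overwrite / MERGE: excludes all other writes
>         return held == LockType.SHARED_READ;
>       case EXCL_DROP:
>         // drop table / drop partition waits for everyone
>         return false;
>     }
>     return false;
>   }
> }
> {code}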



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090757#comment-17090757
 ] 

Hive QA commented on HIVE-23275:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000965/HIVE-23275.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 17124 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join46] 
(batchId=62)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=97)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=90)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_window]
 (batchId=111)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=87)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[topnkey_windowing]
 (batchId=78)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query12] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query20] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query36] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query47] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query49] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query51] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query53] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query57] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query63] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query89] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query98] 
(batchId=229)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query12]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query20]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query36]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query47]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query49]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query51]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query53]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query57]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query63]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query89]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query98]
 (batchId=228)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[mv_query67]
 (batchId=228)
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
 (batchId=240)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21895/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21895/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21895/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 30 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000965 - PreCommit-HIVE-Build

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.patch, HIVE-23275.patch
>
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090740#comment-17090740
 ] 

Hive QA commented on HIVE-23275:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21895/dev-support/hive-personality.sh
 |
| git revision | master / 014dafc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21895/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21895/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.patch, HIVE-23275.patch
>
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23289) Rows are not removed from the skewed_string_list_values table

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23289:

Parent: HIVE-22942
Issue Type: Sub-task  (was: Bug)

> Rows are not removed from the skewed_string_list_values table
> -
>
> Key: HIVE-23289
> URL: https://issues.apache.org/jira/browse/HIVE-23289
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Priority: Major
>
> initialize sysdb as well
> {code}
> select * from sys.skewed_string_list_values;
> -- empty (no skewed stuff)
> create external table smt_sysdb_src_skew (key int) skewed by (key) on (1,2,3);
> select * from sys.skewed_string_list_values;
> -- 3 rows
> drop table smt_sysdb_src_skew;
> select * from sys.skewed_string_list_values;
> -- still..3 rows
> create external table smt_sysdb_src_skew (key int) skewed by (key) on (1,2,3);
> -- now we have 6 rows; the biggest issue is that the list_id is the same
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23088) Using Strings from log4j breaks non-log4j users

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23088:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

pushed to branch-3 and branch-3.1 as well
thank you [~dlavati]!

> Using Strings from log4j breaks non-log4j users
> ---
>
> Key: HIVE-23088
> URL: https://issues.apache.org/jira/browse/HIVE-23088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Vova Vysotskyi
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0, 3.1.3
>
> Attachments: HIVE-23088.01.branch-3.patch, 
> HIVE-23088.01.branch-3.patch, HIVE-23088.01.branch-3.patch, 
> HIVE-23088.01.patch, HIVE-23088.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HookUtils}} explicitly uses the {{org.apache.logging.log4j.util.Strings}} class 
> from log4j, which may break clients that use other loggers and need to exclude 
> log4j from the classpath.
> {{commons-lang}} has a {{StringUtils}} class which may be used as a replacement 
> for this one:
>  {{Strings.isBlank}} -> {{StringUtils.isBlank}}.
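> A minimal sketch of the suggested replacement (assuming commons-lang3; the demo 
> class below is purely illustrative and not part of the actual HookUtils code):
> {code}
> import org.apache.commons.lang3.StringUtils;
>
> public class BlankCheckDemo {
>   public static void main(String[] args) {
>     // StringUtils.isBlank behaves like log4j's Strings.isBlank for these inputs
>     System.out.println(StringUtils.isBlank(null));    // true
>     System.out.println(StringUtils.isBlank("   "));   // true
>     System.out.println(StringUtils.isBlank("hooks")); // false
>   }
> }
> {code}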



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23288) Sysdb initialization fails with {LIMIT 1} error

2020-04-23 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090730#comment-17090730
 ] 

Zoltan Haindrich commented on HIVE-23288:
-

I suspect that the issue is rooted somewhere around detecting the metastore 
database type - it seems like the "Generic" type is used all the time

> Sysdb initialization fails with {LIMIT 1} error
> ---
>
> Key: HIVE-23288
> URL: https://issues.apache.org/jira/browse/HIVE-23288
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Priority: Major
>
> this was fixed some time ago; now it's broken on mysql and postgres as well



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23287) Reduce dependency on icu4j

2020-04-23 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-23287:

Status: Patch Available  (was: Open)

> Reduce dependency on icu4j
> --
>
> Key: HIVE-23287
> URL: https://issues.apache.org/jira/browse/HIVE-23287
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-23287.patch
>
>
> Brought in transitively via druid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23287) Reduce dependency on icu4j

2020-04-23 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-23287:

Attachment: HIVE-23287.patch

> Reduce dependency on icu4j
> --
>
> Key: HIVE-23287
> URL: https://issues.apache.org/jira/browse/HIVE-23287
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-23287.patch
>
>
> Brought in transitively via druid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23287) Reduce dependency on icu4j

2020-04-23 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-23287:
---


> Reduce dependency on icu4j
> --
>
> Key: HIVE-23287
> URL: https://issues.apache.org/jira/browse/HIVE-23287
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
>
> Brought in transitively via druid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426586
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 15:59
Start Date: 23/Apr/20 15:59
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r413921201



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/parse/type/TestTypeCheckProcFactory.java
##
@@ -140,4 +147,37 @@ public void testWithNonZeroFraction() throws Exception {
 }
   }
 
+  @Test
+  public void testValidateUDFOnTypeCheck() throws Exception {

Review comment:
   Thank you for mentioning this; I moved the test to 
TestTypeCompareValidation.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426586)
Time Spent: 1h 20m  (was: 1h 10m)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Comparing bigints with varchars or chars may produce wrong results, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;​
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')
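> A minimal sketch of why both queries above match the wrong row (assuming, as the 
> warning in the patch suggests, that the char/varchar vs. bigint comparison is 
> evaluated as double; the class below is only an illustration):
> {code}
> public class PrecisionLossDemo {
>   public static void main(String[] args) {
>     long a = 2882303761517473127L;
>     long b = 2882303761517473276L;
>     // at this magnitude both longs round to the same double, so the comparison collapses
>     System.out.println((double) a == (double) b); // true
>     System.out.println(a == b);                   // false
>   }
> }
> {code}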



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426585=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426585
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 15:57
Start Date: 23/Apr/20 15:57
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r413913032



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
##
@@ -789,12 +791,25 @@ protected void validateUDF(ASTNode expr, boolean 
isFunction, TypeCheckCtx ctx, F
 
 LogHelper console = new LogHelper(LOG);
 
+Set unsafeConventionTyps = 
Sets.newHashSet(
+PrimitiveObjectInspector.PrimitiveCategory.STRING,
+PrimitiveObjectInspector.PrimitiveCategory.VARCHAR,
+PrimitiveObjectInspector.PrimitiveCategory.CHAR);
 // For now, if a bigint is going to be cast to a double throw an error 
or warning
-if ((oiTypeInfo0.equals(TypeInfoFactory.stringTypeInfo) && 
oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) ||
-(oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo) && 
oiTypeInfo1.equals(TypeInfoFactory.stringTypeInfo))) {
+if ((oiTypeInfo0 instanceof PrimitiveTypeInfo &&
+
unsafeConventionTyps.contains(((PrimitiveTypeInfo)oiTypeInfo0).getPrimitiveCategory())
 &&
+oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) || (oiTypeInfo1 
instanceof PrimitiveTypeInfo &&
+
unsafeConventionTyps.contains(((PrimitiveTypeInfo)oiTypeInfo1).getPrimitiveCategory())
 &&
+oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo))) {
   String error = StrictChecks.checkTypeSafety(conf);
-  if (error != null) throw new UDFArgumentException(error);
-  console.printError("WARNING: Comparing a bigint and a string may 
result in a loss of precision.");
+  if (error != null) {
+throw new UDFArgumentException(error);
+  }
+  String type = oiTypeInfo0.getTypeName();
+  if (oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo)) {
+type = oiTypeInfo1.getTypeName();
+  }
+  console.printError("WARNING: Comparing a bigint and a " + type + " 
may result in a loss of precision.");

Review comment:
   Yes, the variable can be removed this way, thank you.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426585)
Time Spent: 1h 10m  (was: 1h)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Comparing bigints with varchars or chars may produce wrong results, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;​
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426584
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 15:56
Start Date: 23/Apr/20 15:56
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r413918717



##
File path: ql/src/test/results/clientpositive/llap/unsafe_compare.q.out
##
@@ -0,0 +1,40 @@
+PREHOOK: query: CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20))
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@test_a
+POSTHOOK: query: CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20))
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@test_a
+PREHOOK: query: INSERT INTO  test_a VALUES ('2882303761517473127', 
'2882303761517473127'), ('2882303761517473276','2882303761517473276')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@test_a
+POSTHOOK: query: INSERT INTO  test_a VALUES ('2882303761517473127', 
'2882303761517473127'), ('2882303761517473276','2882303761517473276')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@test_a
+POSTHOOK: Lineage: test_a.appid1 SCRIPT []
+POSTHOOK: Lineage: test_a.appid2 SCRIPT []
+WARNING: Comparing a bigint and a varchar(256) may result in a loss of 
precision.
+PREHOOK: query: SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127
+PREHOOK: type: QUERY
+PREHOOK: Input: default@test_a
+ A masked pattern was here 
+POSTHOOK: query: SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@test_a
+ A masked pattern was here 
+2882303761517473127
+2882303761517473276

Review comment:
   This test is now removed. The test was meant to show that comparing a 
bigint and a (var)char may result in a loss of precision, which produces a 
confusing result.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426584)
Time Spent: 1h  (was: 50m)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Comparing bigints with varchars or chars may produce wrong results, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;​
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426583=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426583
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 15:56
Start Date: 23/Apr/20 15:56
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r413918717



##
File path: ql/src/test/results/clientpositive/llap/unsafe_compare.q.out
##
@@ -0,0 +1,40 @@
+PREHOOK: query: CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20))
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@test_a
+POSTHOOK: query: CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20))
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@test_a
+PREHOOK: query: INSERT INTO  test_a VALUES ('2882303761517473127', 
'2882303761517473127'), ('2882303761517473276','2882303761517473276')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@test_a
+POSTHOOK: query: INSERT INTO  test_a VALUES ('2882303761517473127', 
'2882303761517473127'), ('2882303761517473276','2882303761517473276')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@test_a
+POSTHOOK: Lineage: test_a.appid1 SCRIPT []
+POSTHOOK: Lineage: test_a.appid2 SCRIPT []
+WARNING: Comparing a bigint and a varchar(256) may result in a loss of 
precision.
+PREHOOK: query: SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127
+PREHOOK: type: QUERY
+PREHOOK: Input: default@test_a
+ A masked pattern was here 
+POSTHOOK: query: SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@test_a
+ A masked pattern was here 
+2882303761517473127
+2882303761517473276

Review comment:
   This test is now removed. The test was meant to show that comparing a 
bigint and a (var)char may result in a loss of precision, which produces a 
confusing result.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426583)
Time Spent: 50m  (was: 40m)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Comparing bigints with varchars or chars may produce wrong results, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;​
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23184) Upgrade druid to 0.17.1

2020-04-23 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090710#comment-17090710
 ] 

Ashutosh Chauhan commented on HIVE-23184:
-

[~nishantbangarwa] are failures related?

> Upgrade druid to 0.17.1
> ---
>
> Key: HIVE-23184
> URL: https://issues.apache.org/jira/browse/HIVE-23184
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23184.1.patch, HIVE-23184.2.patch, 
> HIVE-23184.3.patch, HIVE-23184.4.patch, HIVE-23184.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Upgrade to druid latest release 0.17.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23252) Change spark related tests to be optional

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090709#comment-17090709
 ] 

Hive QA commented on HIVE-23252:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13000922/HIVE-23252.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17124 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestBeeLineWithArgs.testRowsAffected (batchId=209)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21894/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21894/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21894/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13000922 - PreCommit-HIVE-Build

> Change spark related tests to be optional
> -
>
> Key: HIVE-23252
> URL: https://issues.apache.org/jira/browse/HIVE-23252
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23252.01.patch, HIVE-23252.01.patch, 
> HIVE-23252.01.patch
>
>
> HIVE-23137 has disabled the execution of some spark related tests; but they 
> would still be considered by a plain maven command - and the spark artifacts 
> are (unnecessarily) still downloaded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23269) Unsafe comparing bigints and chars

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23269?focusedWorklogId=426580=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-426580
 ]

ASF GitHub Bot logged work on HIVE-23269:
-

Author: ASF GitHub Bot
Created on: 23/Apr/20 15:49
Start Date: 23/Apr/20 15:49
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #992:
URL: https://github.com/apache/hive/pull/992#discussion_r413913032



##
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/type/TypeCheckProcFactory.java
##
@@ -789,12 +791,25 @@ protected void validateUDF(ASTNode expr, boolean isFunction, TypeCheckCtx ctx, F
 
 LogHelper console = new LogHelper(LOG);
 
+Set unsafeConventionTyps = Sets.newHashSet(
+PrimitiveObjectInspector.PrimitiveCategory.STRING,
+PrimitiveObjectInspector.PrimitiveCategory.VARCHAR,
+PrimitiveObjectInspector.PrimitiveCategory.CHAR);
 // For now, if a bigint is going to be cast to a double throw an error or warning
-if ((oiTypeInfo0.equals(TypeInfoFactory.stringTypeInfo) && oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) ||
-(oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo) && oiTypeInfo1.equals(TypeInfoFactory.stringTypeInfo))) {
+if ((oiTypeInfo0 instanceof PrimitiveTypeInfo &&
+unsafeConventionTyps.contains(((PrimitiveTypeInfo)oiTypeInfo0).getPrimitiveCategory()) &&
+oiTypeInfo1.equals(TypeInfoFactory.longTypeInfo)) || (oiTypeInfo1 instanceof PrimitiveTypeInfo &&
+unsafeConventionTyps.contains(((PrimitiveTypeInfo)oiTypeInfo1).getPrimitiveCategory()) &&
+oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo))) {
   String error = StrictChecks.checkTypeSafety(conf);
-  if (error != null) throw new UDFArgumentException(error);
-  console.printError("WARNING: Comparing a bigint and a string may result in a loss of precision.");
+  if (error != null) {
+throw new UDFArgumentException(error);
+  }
+  String type = oiTypeInfo0.getTypeName();
+  if (oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo)) {
+type = oiTypeInfo1.getTypeName();
+  }
+  console.printError("WARNING: Comparing a bigint and a " + type + " may result in a loss of precision.");

Review comment:
   Yes, the variable can be removed that way, thank you.
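
For illustration only (the concrete suggestion is in the review thread above): the temporary variable could be folded into the message with a conditional expression, along these lines.

{code:java}
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

public class PrecisionWarning {
    // Hypothetical helper: builds the warning text without a separate local
    // variable by picking the non-bigint side inline.
    public static String of(TypeInfo oiTypeInfo0, TypeInfo oiTypeInfo1) {
        return "WARNING: Comparing a bigint and a "
            + (oiTypeInfo0.equals(TypeInfoFactory.longTypeInfo)
                ? oiTypeInfo1.getTypeName() : oiTypeInfo0.getTypeName())
            + " may result in a loss of precision.";
    }
}
{code}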





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 426580)
Time Spent: 40m  (was: 0.5h)

> Unsafe comparing bigints and chars
> --
>
> Key: HIVE-23269
> URL: https://issues.apache.org/jira/browse/HIVE-23269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-23269.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Comparing bigints and varchars or chars may result in wrong results, for 
> example:
> CREATE TABLE test_a (appid1 varchar(256),  appid2 char(20));
> INSERT INTO  test_a VALUES ('2882303761517473127', '2882303761517473127'), 
> ('2882303761517473276','2882303761517473276');
> SET hive.strict.checks.type.safety=false;
> SELECT appid1 FROM test_a WHERE appid1 = 2882303761517473127;
> SELECT appid2 FROM test_a WHERE appid2 = 2882303761517473127;
> Both queries will output the row: 
> ('2882303761517473276','2882303761517473276')



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23252) Change spark related tests to be optional

2020-04-23 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090706#comment-17090706
 ] 

Hive QA commented on HIVE-23252:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21894/dev-support/hive-personality.sh
 |
| git revision | master / 9299512 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21894/yetus/patch-asflicense-problems.txt
 |
| modules | C: . itests itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21894/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Change spark related tests to be optional
> -
>
> Key: HIVE-23252
> URL: https://issues.apache.org/jira/browse/HIVE-23252
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23252.01.patch, HIVE-23252.01.patch, 
> HIVE-23252.01.patch
>
>
> HIVE-23137 has disabled the execution of some spark related tests; but they 
> would still be considered by a plain maven command - and the spark artifacts 
> are (unnecessarily) still downloaded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23088) Using Strings from log4j breaks non-log4j users

2020-04-23 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090695#comment-17090695
 ] 

Zoltan Haindrich commented on HIVE-23088:
-

pushed  [^HIVE-23088.01.patch] to master

> Using Strings from log4j breaks non-log4j users
> ---
>
> Key: HIVE-23088
> URL: https://issues.apache.org/jira/browse/HIVE-23088
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Vova Vysotskyi
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0, 3.1.3
>
> Attachments: HIVE-23088.01.branch-3.patch, 
> HIVE-23088.01.branch-3.patch, HIVE-23088.01.branch-3.patch, 
> HIVE-23088.01.patch, HIVE-23088.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HookUtils}} explicitly uses the {{org.apache.logging.log4j.util.Strings}} class 
> from log4j, but this may break clients who use other loggers and need to exclude 
> log4j from the classpath.
> {{commons-lang}} has the class {{StringUtils}}, which may be used as a replacement 
> for this one:
>  {{Strings.isBlank}} -> {{StringUtils.isBlank}}.
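
A minimal sketch of the swap described above (illustrative class, assuming commons-lang3 is on the classpath; the real change touches {{HookUtils}}):

{code:java}
import org.apache.commons.lang3.StringUtils;

public class BlankCheck {
    // Replacement for org.apache.logging.log4j.util.Strings.isBlank(value):
    // returns true for null, empty, or whitespace-only strings.
    public static boolean isBlank(String value) {
        return StringUtils.isBlank(value);
    }
}
{code}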



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23220) PostExecOrcFileDump listing order may depend on the underlying filesystem

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23220:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

pushed to master. Thank you Miklos for reviewing the changes!

> PostExecOrcFileDump listing order may depend on the underlying filesystem
> -
>
> Key: HIVE-23220
> URL: https://issues.apache.org/jira/browse/HIVE-23220
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23220.01.patch, HIVE-23220.02.patch, 
> HIVE-23220.02.patch
>
>
> In case there are multiple files, the listing order might not be stable, which 
> may cause unstable q.outs
> https://github.com/apache/hive/blob/83f917c787d60543f171b23d28ceda44d69c235d/ql/src/java/org/apache/hadoop/hive/ql/hooks/PostExecOrcFileDump.java#L104
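
A minimal sketch of the usual remedy (not necessarily the committed fix): sort the listing by path before dumping, so the output does not depend on the filesystem's iteration order.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.fs.FileStatus;

public class StableListing {
    // Sorts a FileSystem listing by path so downstream output (e.g. q.out files)
    // is deterministic regardless of the underlying filesystem.
    public static FileStatus[] sortByPath(FileStatus[] statuses) {
        Arrays.sort(statuses, Comparator.comparing((FileStatus s) -> s.getPath().toString()));
        return statuses;
    }
}
{code}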



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23164) Server is not properly terminated because of non-daemon threads

2020-04-23 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-23164:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

pushed to master. Thank you [~euigeun_chung] for fixing this!

> Server is not properly terminated because of non-daemon threads
> ---
>
> Key: HIVE-23164
> URL: https://issues.apache.org/jira/browse/HIVE-23164
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23164.01.patch, HIVE-23164.02.patch, 
> HIVE-23164.03.patch, HIVE-23164.04.patch, 
> thread_dump_hiveserver2_is_not_terminated.txt
>
>
> HiveServer2, which receives the deregister command, at first prepares for 
> shutdown. If there's no remaining session, HiveServer2.stop() is called to 
> shut down. But I found a case where the HiveServer2 JVM is not terminated 
> even if HiveServer2.stop() has been called and processed. The case always 
> occurs when the local (embedded) metastore is used.
> I've attached the full thread dump describing the situation.
> [^thread_dump_hiveserver2_is_not_terminated.txt]
> In this thread dump, you can see a bunch of 'daemon' threads, NO main 
> thread, and some 'non-daemon' (or user) threads. As specified by 
> [https://www.baeldung.com/java-daemon-thread], if at least one user 
> thread exists, the JVM does not terminate. (Note that the DestroyJavaVM thread is 
> non-daemon but it's special.)
>  
> {code:java}
> "pool-8-thread-1" #24 prio=5 os_prio=0 tid=0x7f52ad1fc000 nid=0x821c 
> waiting on condition [0x7f525c50]
>  java.lang.Thread.State: TIMED_WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  - parking to wait for <0x0003cfa057c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
>  - None
> {code}
> The thread above is created by the ScheduledThreadPoolExecutor(int coreSize) 
> constructor with the default ThreadFactory, which always makes threads 
> non-daemon. If such a thread pool is not destroyed by calling the 
> ScheduledThreadPoolExecutor.shutdown() method, the JVM cannot terminate! The only 
> way to kill it is a TERM signal. If the JVM receives a TERM signal, it ignores 
> non-daemon threads and terminates.
> So I have been digging into the modules which create a ScheduledThreadPoolExecutor with 
> non-daemon threads and I found it. As you may guess, it's the local (embedded) 
> metastore. The ScheduledThreadPoolExecutor is created by 
> org.apache.hadoop.hive.metastore.HiveMetaStore.HMSHandler#startAlwaysTaskThreads()
>  and ScheduledThreadPoolExecutor.shutdown() is never called.
> Plus, I found another usage that creates such a ScheduledThreadPoolExecutor and 
> never calls its shutdown. 
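
A small, self-contained sketch of the behaviour described above (plain Java, not Hive code): with the default ThreadFactory the pool's worker thread is non-daemon and keeps the JVM alive until shutdown() is called, while a daemon ThreadFactory does not block termination.

{code:java}
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;

public class DaemonSchedulerDemo {
    public static void main(String[] args) {
        // Default factory: the worker thread is non-daemon, so without shutdown()
        // the JVM would not exit after main() returns.
        ScheduledThreadPoolExecutor nonDaemonPool = new ScheduledThreadPoolExecutor(1);
        nonDaemonPool.prestartCoreThread();

        // Daemon factory: these worker threads never keep the JVM alive.
        ThreadFactory daemonFactory = r -> {
            Thread t = new Thread(r, "demo-scheduler");
            t.setDaemon(true);
            return t;
        };
        ScheduledThreadPoolExecutor daemonPool = new ScheduledThreadPoolExecutor(1, daemonFactory);
        daemonPool.prestartCoreThread();

        // Comment this line out and the process hangs on the non-daemon pool.
        nonDaemonPool.shutdown();
    }
}
{code}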



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-23286:
-
Description: 
In FileSinkOperator there is a code path when the operator is aborted:
{noformat}
} else {
  // Will come here if an Exception was thrown in map() or reduce().
  // Hadoop always call close() even if an Exception was thrown in map() or
  // reduce().
  for (FSPaths fsp : valToPaths.values()) {
fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
&& !conf.isMmTable());
  }
{noformat}
In this part, the fsp.abortWritersAndUpdaters method call should consider the 
conf.isDirectInsert parameter as well. Since this parameter is missing, this 
method can delete the content of the table if an insert failure aborts the 
FileSinkOperator and the ACID direct insert it turned on.

  was:
In FileSinkOperator there is a code path when the operator is aborted:
{noformat}
} else {
  // Will come here if an Exception was thrown in map() or reduce().
  // Hadoop always call close() even if an Exception was thrown in map() or
  // reduce().
  for (FSPaths fsp : valToPaths.values()) {
fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
&& !conf.isMmTable());
  }
{noformat}
In this part, the fsp.abortWritersAndUpdaters method call should consider the 
conf.isDirectInsert parameter as well. Since this parameter is missing, this 
method can delete the content of the table if an insert failure abort the 
FileSinkOperator and the ACID direct insert it turned on.


> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table if an insert failure aborts the 
> FileSinkOperator and the ACID direct insert it turned on.
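
An illustrative sketch only (not the attached patch): the abort call would likely need to take the direct-insert case into account the same way it already does for MM tables, assuming a conf.isDirectInsert() accessor analogous to conf.isMmTable().

{code:java}
// Hypothetical adjustment, for illustration; the actual fix is in HIVE-23286.1.patch.
for (FSPaths fsp : valToPaths.values()) {
  fsp.abortWritersAndUpdaters(fs, abort,
      !autoDelete && isNativeTable() && !conf.isMmTable() && !conf.isDirectInsert());
}
{code}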



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-23 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: HIVE-23272.02.patch

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23266) Remove QueryWrapper from ObjectStore

2020-04-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23266:
--
Attachment: HIVE-23266.3.patch

> Remove QueryWrapper from ObjectStore
> 
>
> Key: HIVE-23266
> URL: https://issues.apache.org/jira/browse/HIVE-23266
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23266.1.patch, HIVE-23266.2.patch, 
> HIVE-23266.2.patch, HIVE-23266.3.patch
>
>
> There is currently a utility called {{QueryWrapper}} that makes a normal 
> {{Query}} auto-closable.  However, {{Query}} is now in fact already 
> auto-closing, so there is no need for this class.  In trying to remove it, I 
> realized that this wrapper was being passed around in pretty convoluted ways 
> and also it was sometimes being created in a {{try-with-resources}} block but 
> then never actually used in any way.
> Remove the {{QueryWrapper}} from the class and simplify some of the DB 
> interactions.
> https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L178
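
A minimal sketch of the pattern that makes the wrapper unnecessary, assuming the javax.jdo.Query in use implements AutoCloseable (as the issue states); the query string and entity below are illustrative, not taken from ObjectStore.

{code:java}
import java.util.ArrayList;
import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class QueryAutoCloseSketch {
    // Query is managed directly by try-with-resources, so no QueryWrapper is needed.
    public static List<Object> findByName(PersistenceManager pm, String name) {
        try (Query query = pm.newQuery("SELECT FROM org.example.Item WHERE name == :n")) {
            List<?> results = (List<?>) query.execute(name);
            // Copy the results before the query (and its result set) is closed.
            return new ArrayList<>(results);
        }
    }
}
{code}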



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-23286:
-
Description: 
In FileSinkOperator there is a code path when the operator is aborted:
{noformat}
} else {
  // Will come here if an Exception was thrown in map() or reduce().
  // Hadoop always call close() even if an Exception was thrown in map() or
  // reduce().
  for (FSPaths fsp : valToPaths.values()) {
fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
&& !conf.isMmTable());
  }
{noformat}
In this part, the fsp.abortWritersAndUpdaters method call should consider the 
conf.isDirectInsert parameter as well. Since this parameter is missing, this 
method can delete the content of the table if an insert failure abort the 
FileSinkOperator and the ACID direct insert it turned on.

  was:
In FileSinkOperator there is a code path when the operator is aborted:
{noformat}
} else {
  // Will come here if an Exception was thrown in map() or reduce().
  // Hadoop always call close() even if an Exception was thrown in map() or
  // reduce().
  for (FSPaths fsp : valToPaths.values()) {
fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
&& !conf.isMmTable());
  }
{noformat}
In this part, the fsp.abortWritersAndUpdaters method call should consider the 
conf.isDirectInsert parameter as well. Since this parameter is missing, this 
method can delete the content of the table.


> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table if an insert failure abort the 
> FileSinkOperator and the ACID direct insert it turned on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090683#comment-17090683
 ] 

Peter Vary commented on HIVE-23286:
---

+1 pending tests

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-23286:
-
Status: Patch Available  (was: Open)

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-23286:
-
Attachment: HIVE-23286.1.patch

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-23286:
-
Description: 
In FileSinkOperator there is a code path when the operator is aborted:
{noformat}
} else {
  // Will come here if an Exception was thrown in map() or reduce().
  // Hadoop always call close() even if an Exception was thrown in map() or
  // reduce().
  for (FSPaths fsp : valToPaths.values()) {
fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
&& !conf.isMmTable());
  }
{noformat}
In this part, the fsp.abortWritersAndUpdaters method call should consider the 
conf.isDirectInsert parameter as well. Since this parameter is missing, this 
method can delete the content of the table.

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-23 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora reassigned HIVE-23286:



> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23270) Optimize isValidTxnListState to reduce the numbers of HMS calls

2020-04-23 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-23270:
--
Attachment: HIVE-23270.03.patch

> Optimize isValidTxnListState to reduce the numbers of HMS calls
> ---
>
> Key: HIVE-23270
> URL: https://issues.apache.org/jira/browse/HIVE-23270
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23270.02.patch, HIVE-23270.03.patch, 
> HIVE-23270.patch
>
>
> There are several checks which do not need an HMS call and can already 
> determine the return value. Move them forward, potentially preventing an extra 
> HMS call
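
An illustrative sketch of the reordering idea only (not the Hive code; the checks and names below are made up): evaluate the cheap local checks first and return early, so the metastore RPC is paid only when the answer still depends on it.

{code:java}
public class EarlyExitSketch {
    interface MetastoreClient {
        String fetchCurrentValidTxnList();   // stands in for the HMS call
    }

    public static boolean isValidTxnListState(String cachedTxnList, boolean hasTransactionalTables,
                                              MetastoreClient hms) {
        if (cachedTxnList == null) {
            return true;                     // nothing cached to invalidate, no HMS call
        }
        if (!hasTransactionalTables) {
            return true;                     // result cannot change, no HMS call
        }
        return cachedTxnList.equals(hms.fetchCurrentValidTxnList());  // RPC only as a last step
    }
}
{code}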



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

