[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323877
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 05/Oct/19 05:38
Start Date: 05/Oct/19 05:38
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331733839
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -1039,8 +1136,27 @@ public static ConfVars getMetaConf(String name) {
 "More users can be added in ADMIN role later."),
 USE_SSL("metastore.use.SSL", "hive.metastore.use.SSL", false,
 "Set this to true for using SSL encryption in HMS server."),
+// We should somehow unify next two options.
 USE_THRIFT_SASL("metastore.sasl.enabled", "hive.metastore.sasl.enabled", 
false,
 "If true, the metastore Thrift interface will be secured with SASL. 
Clients must authenticate with Kerberos."),
+METASTORE_CLIENT_USE_PLAIN_AUTH("metastore.client.use.plain.auth",
 
 Review comment:
  Done. But for now, only "PLAIN" will be considered; any other value is 
ignored. As for deprecation, we will revisit it when we rethink the configs, 
in a short while.
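
As a rough sketch of the behavior described in the comment above (only "PLAIN" is honored, anything else ignored), using java.util.Properties as a stand-in for Hive's MetastoreConf; the key name comes from the diff, while the fallback-to-SASL behavior is an assumption:

```java
import java.util.Properties;

// Hypothetical sketch of the "only PLAIN is considered" rule; Properties
// stands in for MetastoreConf, and the SASL fallback is an assumption.
public class AuthModeCheck {
    static String resolveAuthMode(Properties conf) {
        String v = conf.getProperty("metastore.client.use.plain.auth", "");
        // Any value other than "PLAIN" is ignored.
        return "PLAIN".equalsIgnoreCase(v) ? "PLAIN" : "SASL";
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("metastore.client.use.plain.auth", "PLAIN");
        System.out.println(resolveAuthMode(conf)); // PLAIN
        conf.setProperty("metastore.client.use.plain.auth", "DIGEST");
        System.out.println(resolveAuthMode(conf)); // SASL
    }
}
```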
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323877)
Time Spent: 3h 20m  (was: 3h 10m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22294) ConditionalWork cannot be cast to MapredWork When both skew.join and auto.convert is on.

2019-10-04 Thread Qiang.Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang.Kang updated HIVE-22294:
--
Description: 
Our hive version is 1.2.1 which has merged some patches (including patches 
mentioned  in https://issues.apache.org/jira/browse/HIVE-14557, 
https://issues.apache.org/jira/browse/HIVE-16155 ) .

 

My SQL query is like this:
{code:sql}
set hive.auto.convert.join=true;
set hive.optimize.skewjoin=true;

SELECT a.*
FROM a
JOIN b
  ON a.id = b.id AND a.uid = b.uid
LEFT JOIN c
  ON b.id = c.id AND b.uid = c.uid;
{code}
 

And we hit this error:

FAILED: ClassCastException org.apache.hadoop.hive.ql.plan.ConditionalWork 
cannot be cast to org.apache.hadoop.hive.ql.plan.MapredWork

The root cause is that a conditional task (*MapJoin*) appears in the task list 
of another conditional task (*SkewJoin*). Here is the code snippet in 
`org.apache.hadoop.hive.ql.optimizer.physical.MapJoinResolver` that throws 
the exception:

 
{code:java}
public Object dispatch(Node nd, Stack<Node> stack, Object... nodeOutputs)
    throws SemanticException {
  Task<? extends Serializable> currTask = (Task<? extends Serializable>) nd;
  // not map reduce task or not conditional task, just skip
  if (currTask.isMapRedTask()) {
    if (currTask instanceof ConditionalTask) {
      // get the list of tasks
      List<Task<? extends Serializable>> taskList =
          ((ConditionalTask) currTask).getListTasks();
      for (Task<? extends Serializable> tsk : taskList) {
        if (tsk.isMapRedTask()) {
          // ATTENTION: tsk may itself be a ConditionalTask!
          this.processCurrentTask(tsk, (ConditionalTask) currTask);
        }
      }
    } else {
      this.processCurrentTask(currTask, null);
    }
  }
  return null;
}

private void processCurrentTask(Task<? extends Serializable> currTask,
    ConditionalTask conditionalTask) throws SemanticException {
  // get current mapred work and its local work
  MapredWork mapredWork = (MapredWork) currTask.getWork(); // WRONG!!
  MapredLocalWork localwork = mapredWork.getMapWork().getMapRedLocalWork();
{code}
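
To make the failure mode concrete, here is a small self-contained model. The class names are simplified stand-ins for Hive's Task/ConditionalTask/MapredWork, not the real classes: the unguarded cast succeeds for a plain map-reduce task but throws for a nested conditional task, exactly as in the report.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for Hive's work/task classes; this models the bug,
// it is not Hive code.
class Work {}
class MapredWork extends Work {}
class ConditionalWork extends Work {}

class Task {
    private final Work work;
    Task(Work work) { this.work = work; }
    boolean isMapRedTask() { return true; }
    Work getWork() { return work; }
}

class ConditionalTask extends Task {
    final List<Task> listTasks = new ArrayList<>();
    ConditionalTask() { super(new ConditionalWork()); }
}

public class NestedConditionalDemo {
    // Mirrors processCurrentTask's unguarded cast.
    static MapredWork process(Task t) {
        return (MapredWork) t.getWork();
    }

    public static void main(String[] args) {
        ConditionalTask skewJoin = new ConditionalTask();
        skewJoin.listTasks.add(new Task(new MapredWork())); // plain MR task: fine
        skewJoin.listTasks.add(new ConditionalTask());      // nested map-join conditional

        for (Task tsk : skewJoin.listTasks) {
            if (tsk.isMapRedTask()) {
                try {
                    process(tsk);
                    System.out.println("processed MapredWork");
                } catch (ClassCastException e) {
                    System.out.println("ClassCastException, as in the report");
                }
            }
        }
    }
}
```

One possible guard (a sketch, not necessarily the committed fix) is to check `tsk instanceof ConditionalTask` before casting and either skip or recurse into the nested task list.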
 

Here are some details about the query plans:

*set hive.auto.convert.join=true; set hive.optimize.skewjoin=false;*

{code}
Stage-1 is a root stage [a join b]
 Stage-12 [map join]depends on stages: Stage-1 , consists of Stage-13, Stage-2
 Stage-13 has a backup stage: Stage-2
 Stage-11 depends on stages: Stage-13
 Stage-8 depends on stages: Stage-2, Stage-11 , consists of Stage-5, Stage-4, 
Stage-6
 Stage-5
 Stage-0 depends on stages: Stage-5, Stage-4, Stage-7
 Stage-14 depends on stages: Stage-0
 Stage-3 depends on stages: Stage-14
 Stage-4
 Stage-6
 Stage-7 depends on stages: Stage-6
 Stage-2
 
{code}
*set hive.auto.convert.join=false; set hive.optimize.skewjoin=true;*

{code}
STAGE DEPENDENCIES:
 Stage-1 is a root stage
 Stage-12 depends on stages: Stage-1 , consists of Stage-13, Stage-2
 Stage-13 [skew Join map local task]
 Stage-11 depends on stages: Stage-13
 Stage-2 depends on stages: Stage-11
 Stage-8 depends on stages: Stage-2 , consists of Stage-5, Stage-4, Stage-6
 Stage-5
 Stage-0 depends on stages: Stage-5, Stage-4, Stage-7
 Stage-14 depends on stages: Stage-0
 Stage-3 depends on stages: Stage-14
 Stage-4
 Stage-6
 Stage-7 depends on stages: Stage-6
{code}
 

[jira] [Assigned] (HIVE-22294) ConditionalWork cannot be cast to MapredWork When both skew.join and auto.convert is on.

2019-10-04 Thread Qiang.Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang.Kang reassigned HIVE-22294:
-

Assignee: Rui Li

> ConditionalWork cannot be cast to MapredWork  When both skew.join and 
> auto.convert is on.  
> ---
>
> Key: HIVE-22294
> URL: https://issues.apache.org/jira/browse/HIVE-22294
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 2.3.0, 3.1.1, 2.3.4
>Reporter: Qiang.Kang
>Assignee: Rui Li
>Priority: Critical
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22274) Upgrade Calcite version to 1.21.0

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944963#comment-16944963
 ] 

Hive QA commented on HIVE-22274:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982254/HIVE-22274.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 81 failed/errored test(s), 17234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_limit] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_SortUnionTransposeRule]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[concat_op] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[inputwherefalse] 
(batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[limit0] (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_3] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_limit]
 (batchId=27)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[plan_json] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamp] (batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join3] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join4] 
(batchId=95)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join6] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_limit] 
(batchId=40)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions]
 (batchId=198)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_extractTime]
 (batchId=198)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_floorTime]
 (batchId=198)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_semijoin]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_semijoin]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[constprog_semijoin]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[external_jdbc_table_perf]
 (batchId=185)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_only_empty_query]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_join_transpose]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_pushdown3]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_pushdown]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit_ppd_optimizer]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin] 
(batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_ANY]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_nested_subquery]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_views]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_limit]
 (batchId=171)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_semijoin] 
(batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[limit_pushdown] 
(batchId=145)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_limit]
 (batchId=124)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[semijoin] 
(batchId=123)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_in] 
(batchId=142)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=121)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_notin] 
(batchId=145)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=119)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query10] 
(batchId=300)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] 
(batchId=300)

[jira] [Commented] (HIVE-22274) Upgrade Calcite version to 1.21.0

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944959#comment-16944959
 ] 

Hive QA commented on HIVE-22274:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
14s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 12 new + 368 unchanged - 10 
fixed = 380 total (was 378) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
5s{color} | {color:red} root: The patch generated 12 new + 368 unchanged - 10 
fixed = 380 total (was 378) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
23s{color} | {color:red} ql generated 6 new + 1549 unchanged - 2 fixed = 1555 
total (was 1551) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to joinInfo in 
org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories$HiveJoinFactoryImpl.createJoin(RelNode,
 RelNode, RexNode, Set, JoinRelType, boolean)  At 
HiveRelFactories.java:org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories$HiveJoinFactoryImpl.createJoin(RelNode,
 RelNode, RexNode, Set, JoinRelType, boolean)  At HiveRelFactories.java:[line 
161] |
|  |  Dead store to rightKeys in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(LogicalCorrelate)
  At 
HiveRelDecorrelator.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(LogicalCorrelate)
  At HiveRelDecorrelator.java:[line 1465] |
|  |  Dead store to leftKeys in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(LogicalCorrelate)
  At 
HiveRelDecorrelator.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(LogicalCorrelate)
  At HiveRelDecorrelator.java:[line 1464] |
|  |  instanceof will always return true for all non-null values in new 
org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdPredicates$JoinConditionBasedPredicateInference(Join,
 RexNode, RexNode), since all org.apache.calcite.rel.core.Join are instances of 
org.apache.calcite.rel.core.Join  At HiveRelMdPredicates.java:for all non-null 
values in new 
org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdPredicates$JoinConditionBasedPredicateInference(Join,
 RexNode, RexNode), since all 

[jira] [Commented] (HIVE-22248) Min value for column in stats is not set correctly for some data types

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944948#comment-16944948
 ] 

Hive QA commented on HIVE-22248:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982253/HIVE-22248.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17234 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18874/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18874/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982253 - PreCommit-HIVE-Build

> Min value for column in stats is not set correctly for some data types
> --
>
> Key: HIVE-22248
> URL: https://issues.apache.org/jira/browse/HIVE-22248
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22248.01.patch, HIVE-22248.03.patch, 
> HIVE-22248.04.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am not sure whether the problem is in printing the value or in the value 
> stored in the metastore itself, but for some types (e.g. tinyint, smallint, 
> int, bigint, double or float), the min value does not seem to be set 
> correctly (it is set to 0).
> https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/alter_table_update_status.q.out#L342
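
For reference, a sketch of the invariant the column stats should satisfy. This is an illustrative min/max scan, not the metastore's actual stats-update path: the reported min must track the smallest observed value, so a min of 0 is only correct when 0 really is the smallest value in the column.

```java
// Illustrative min/max scan over a numeric column (not metastore code):
// min/max must reflect the extremes of the observed data.
public class MinMaxStats {
    static long[] minMax(long[] values) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long v : values) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return new long[] { min, max };
    }

    public static void main(String[] args) {
        long[] mm = minMax(new long[] { -128, 5, 127 });
        System.out.println(mm[0] + " " + mm[1]); // -128 127
    }
}
```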



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-10-04 Thread Dmitry Romanenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944940#comment-16944940
 ] 

Dmitry Romanenko edited comment on HIVE-21987 at 10/5/19 12:51 AM:
---

Any chance this will be backported to the 3.x tree? This seems like quite a 
major problem affecting multiple branches. The fact that it's not available in 
the current release, and may not be available anytime soon (since 4.x is a 
question of roadmap), is not that satisfying.


was (Author: dimon222):
Any chance this will be backported to 3.x tree? This seems like quite major 
problem affecting multiple trees.

> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Improvement
>Reporter: Nándor Kollár
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21987.1.patch, HIVE-21987.2.patch, 
> HIVE-21987.3.patch, HIVE-21987.4.patch, HIVE-21987.5.patch, 
> part-0-e5287735-8dcf-4dda-9c6e-4d5c98dc15f2-c000.snappy.parquet
>
>
> When I tried to read a Parquet file from a Hive (with Tez execution engine) 
> table with a small decimal column, I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with int32 column annotated with decimal(4, 2) 
> logical type, put it into the previously created table location (or use the 
> attached parquet file, in this case the column should be named as 'd', to 
> match the Hive schema with the Parquet schema in the file)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.
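
Per the Parquet specification referenced above, an int32-annotated decimal stores the unscaled value, so reading decimal(4,2) means reinterpreting the stored int with the declared scale. A minimal illustration of that decoding (a sketch, not Hive's ETypeConverter implementation):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// The stored int32 is the unscaled value: 1234 with scale 2 decodes to
// 12.34. This sketches the conversion, not Hive's converter code.
public class Int32DecimalDemo {
    static BigDecimal fromInt32(int unscaled, int scale) {
        return new BigDecimal(BigInteger.valueOf(unscaled), scale);
    }

    public static void main(String[] args) {
        System.out.println(fromInt32(1234, 2)); // 12.34
        System.out.println(fromInt32(-507, 2)); // -5.07
    }
}
```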



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22248) Min value for column in stats is not set correctly for some data types

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944941#comment-16944941
 ] 

Hive QA commented on HIVE-22248:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
170 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18874/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18874/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18874/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Min value for column in stats is not set correctly for some data types
> --
>
> Key: HIVE-22248
> URL: https://issues.apache.org/jira/browse/HIVE-22248
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22248.01.patch, HIVE-22248.03.patch, 
> HIVE-22248.04.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>

[jira] [Commented] (HIVE-21987) Hive is unable to read Parquet int32 annotated with decimal

2019-10-04 Thread Dmitry Romanenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944940#comment-16944940
 ] 

Dmitry Romanenko commented on HIVE-21987:
-

Any chance this will be backported to the 3.x tree? This seems like quite a 
major problem affecting multiple branches.

> Hive is unable to read Parquet int32 annotated with decimal
> ---
>
> Key: HIVE-21987
> URL: https://issues.apache.org/jira/browse/HIVE-21987
> Project: Hive
>  Issue Type: Improvement
>Reporter: Nándor Kollár
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21987.1.patch, HIVE-21987.2.patch, 
> HIVE-21987.3.patch, HIVE-21987.4.patch, HIVE-21987.5.patch, 
> part-0-e5287735-8dcf-4dda-9c6e-4d5c98dc15f2-c000.snappy.parquet
>
>
> When I tried to read a Parquet file from a Hive (with Tez execution engine) 
> table with a small decimal column, I got the following exception:
> {code}
> Caused by: java.lang.UnsupportedOperationException: 
> org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$1
>   at 
> org.apache.parquet.io.api.PrimitiveConverter.addInt(PrimitiveConverter.java:98)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:248)
>   at 
> org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
>   at 
> org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>   at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
>   ... 28 more
> {code}
> Steps to reproduce:
> - Create a Hive table with a single decimal(4, 2) column
> - Create a Parquet file with int32 column annotated with decimal(4, 2) 
> logical type, put it into the previously created table location (or use the 
> attached parquet file, in this case the column should be named as 'd', to 
> match the Hive schema with the Parquet schema in the file)
> - Execute a {{select *}} on this table
> Also, I'm afraid that similar problems can happen with int64 decimals too. 
> [Parquet specification | 
> https://github.com/apache/parquet-format/blob/master/LogicalTypes.md] allows 
> both of these cases.
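A Parquet int32 column annotated with a decimal logical type stores the unscaled value; conceptually, reading it back just means scaling that int by the declared scale. A minimal sketch of that conversion (illustrative only, not the actual Hive patch; the class and method names are assumptions):

```java
import java.math.BigDecimal;

public class Int32DecimalSketch {
    // A Parquet int32 annotated as decimal(p, s) carries the unscaled value;
    // e.g. the stored int 1234 with scale 2 represents 12.34.
    static BigDecimal fromInt32(int unscaled, int scale) {
        return BigDecimal.valueOf(unscaled, scale);
    }

    public static void main(String[] args) {
        System.out.println(fromInt32(1234, 2)); // prints 12.34
    }
}
```

The same scaling applies to int64-backed decimals via `BigDecimal.valueOf(long, int)`, which is why the int64 case mentioned above is a plausible follow-up concern.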



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944925#comment-16944925
 ] 

Hive QA commented on HIVE-21924:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982251/HIVE-21924.5.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17237 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=274)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=274)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18873/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18873/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982251 - PreCommit-HIVE-Build

> Split text files even if header/footer exists
> -
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21924.2.patch, HIVE-21924.3.patch, 
> HIVE-21924.4.patch, HIVE-21924.5.patch, HIVE-21924.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable whenever a header or footer is present. 
> If only a header is present, we can find the offset after the first line break 
> and use that to split. Similarly for a footer, we can read a few KBs of data at 
> the end and find the last line break offset, then use that to determine the 
> data range usable for splitting. A few extra reads during split generation are 
> cheaper than not splitting the file at all.  
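The proposed boundary search can be sketched as follows (method names are illustrative, not Hive APIs): scan forward past the first line break to find where splittable data starts, and scan the tail backward for the last line break to bound it.

```java
public class SplitBoundarySketch {

    // Offset just past the first '\n', i.e. where data after a one-line
    // header begins; returns the file length if no newline is found.
    static long dataStartAfterHeader(byte[] file) {
        for (int i = 0; i < file.length; i++) {
            if (file[i] == '\n') {
                return i + 1;
            }
        }
        return file.length;
    }

    // Offset of the '\n' terminating the last data row before a one-line
    // footer, scanning only from the end (skipping the trailing newline).
    static long dataEndBeforeFooter(byte[] file) {
        for (int i = file.length - 2; i >= 0; i--) {
            if (file[i] == '\n') {
                return i + 1;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        byte[] csv = "col_a,col_b\n1,2\n3,4\nTOTAL,2\n".getBytes();
        long start = dataStartAfterHeader(csv); // 12: just past the header
        long end = dataEndBeforeFooter(csv);    // 20: just before the footer
        // Bytes in [start, end) hold only data rows and are safe to split.
        System.out.println(start + " " + end);  // prints 12 20
    }
}
```

In practice only the first and last few KBs of the file need to be read, which is the cost trade-off the comment argues is worthwhile.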



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944920#comment-16944920
 ] 

Hive QA commented on HIVE-21924:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
25s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18873/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql . itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18873/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Split text files even if header/footer exists
> -
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21924.2.patch, HIVE-21924.3.patch, 
> HIVE-21924.4.patch, HIVE-21924.5.patch, HIVE-21924.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable whenever a header or footer is present. 
> If only header 

[jira] [Commented] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944885#comment-16944885
 ] 

Hive QA commented on HIVE-22250:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982234/HIVE-22250.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[desc_function] 
(batchId=14)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18872/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18872/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18872/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982234 - PreCommit-HIVE-Build

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-14302) Tez: Optimized Hashtable can support DECIMAL keys of same precision

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-14302?focusedWorklogId=323762=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323762
 ]

ASF GitHub Bot logged work on HIVE-14302:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 22:16
Start Date: 04/Oct/19 22:16
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #803: HIVE-14302
URL: https://github.com/apache/hive/pull/803#discussion_r331704100
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/llap/mapjoin_decimal_vectorized.q.out
 ##
 @@ -0,0 +1,686 @@
+PREHOOK: query: CREATE TABLE over1k_n5(t tinyint,
+   si smallint,
+   i int,
+   b bigint,
+   f float,
+   d double,
+   bo boolean,
+   s string,
+   ts timestamp,
+   `dec` decimal(4,2),
+   bin binary)
+ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
+STORED AS TEXTFILE
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@over1k_n5
+POSTHOOK: query: CREATE TABLE over1k_n5(t tinyint,
+   si smallint,
+   i int,
+   b bigint,
+   f float,
+   d double,
+   bo boolean,
+   s string,
+   ts timestamp,
+   `dec` decimal(4,2),
+   bin binary)
+ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
+STORED AS TEXTFILE
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@over1k_n5
+PREHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/over1k' OVERWRITE 
INTO TABLE over1k_n5
+PREHOOK: type: LOAD
+#### A masked pattern was here ####
+PREHOOK: Output: default@over1k_n5
+POSTHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/over1k' OVERWRITE 
INTO TABLE over1k_n5
+POSTHOOK: type: LOAD
+#### A masked pattern was here ####
+POSTHOOK: Output: default@over1k_n5
+PREHOOK: query: CREATE TABLE t1_n95(`dec` decimal(4,2)) STORED AS ORC
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@t1_n95
+POSTHOOK: query: CREATE TABLE t1_n95(`dec` decimal(4,2)) STORED AS ORC
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@t1_n95
+PREHOOK: query: INSERT INTO TABLE t1_n95 select `dec` from over1k_n5
+PREHOOK: type: QUERY
+PREHOOK: Input: default@over1k_n5
+PREHOOK: Output: default@t1_n95
+POSTHOOK: query: INSERT INTO TABLE t1_n95 select `dec` from over1k_n5
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@over1k_n5
+POSTHOOK: Output: default@t1_n95
+POSTHOOK: Lineage: t1_n95.dec SIMPLE 
[(over1k_n5)over1k_n5.FieldSchema(name:dec, type:decimal(4,2), comment:null), ]
+PREHOOK: query: CREATE TABLE t2_n59(`dec` decimal(4,0)) STORED AS ORC
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@t2_n59
+POSTHOOK: query: CREATE TABLE t2_n59(`dec` decimal(4,0)) STORED AS ORC
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@t2_n59
+PREHOOK: query: INSERT INTO TABLE t2_n59 select `dec` from over1k_n5
+PREHOOK: type: QUERY
+PREHOOK: Input: default@over1k_n5
+PREHOOK: Output: default@t2_n59
+POSTHOOK: query: INSERT INTO TABLE t2_n59 select `dec` from over1k_n5
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@over1k_n5
+POSTHOOK: Output: default@t2_n59
+POSTHOOK: Lineage: t2_n59.dec EXPRESSION 
[(over1k_n5)over1k_n5.FieldSchema(name:dec, type:decimal(4,2), comment:null), ]
+PREHOOK: query: explain vectorization detail select t1_n95.`dec`, t2_n59.`dec` 
from t1_n95 join t2_n59 on (t1_n95.`dec`=t2_n59.`dec`) order by t1_n95.`dec`
+PREHOOK: type: QUERY
+PREHOOK: Input: default@t1_n95
+PREHOOK: Input: default@t2_n59
+#### A masked pattern was here ####
+POSTHOOK: query: explain vectorization detail select t1_n95.`dec`, 
t2_n59.`dec` from t1_n95 join t2_n59 on (t1_n95.`dec`=t2_n59.`dec`) order by 
t1_n95.`dec`
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@t1_n95
+POSTHOOK: Input: default@t2_n59
+#### A masked pattern was here ####
+PLAN VECTORIZATION:
+  enabled: true
+  enabledConditionsMet: [hive.vectorized.execution.enabled IS true]
+
+STAGE DEPENDENCIES:
+  Stage-1 is a root stage
+  Stage-0 depends on stages: Stage-1
+
+STAGE PLANS:
+  Stage: Stage-1
+Tez
+#### A masked pattern was here ####
+  Edges:
+Map 1 <- Map 3 (BROADCAST_EDGE)
+Reducer 2 <- Map 1 (SIMPLE_EDGE)
+#### A masked pattern was here ####
+  Vertices:
+Map 1 
+Map Operator Tree:
+TableScan
+  alias: t1_n95
+  filterExpr: dec is not null (type: boolean)
+  Statistics: Num rows: 1049 Data size: 117488 Basic stats: 
COMPLETE Column stats: COMPLETE
+  TableScan Vectorization:
+  native: true
+  vectorizationSchemaColumns: 
[0:dec:decimal(4,2)/DECIMAL_64, 

[jira] [Commented] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944856#comment-16944856
 ] 

Hive QA commented on HIVE-22250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 10 new + 265 unchanged - 47 
fixed = 275 total (was 312) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18872/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18872/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18872/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira

[jira] [Commented] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944834#comment-16944834
 ] 

Hive QA commented on HIVE-22284:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982214/HIVE-22284.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 17236 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_dp] 
(batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_struct_type_vectorization]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_complex_types_vectorization]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_map_type_vectorization]
 (batchId=159)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18871/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18871/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18871/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982214 - PreCommit-HIVE-Build

> Improve LLAP CacheContentsTracker to collect and display correct statistics
> ---
>
> Key: HIVE-22284
> URL: https://issues.apache.org/jira/browse/HIVE-22284
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22284.0.patch, HIVE-22284.1.patch, 
> HIVE-22284.2.patch
>
>
> When keeping track of which buffers correspond to which Hive objects, 
> CacheContentsTracker relies on cache tags.
> Currently a tag is a simple String that ideally holds the DB and table name, 
> and a partition spec, concatenated by . and / . This information is derived 
> from the Path of the file being cached. Needless to say, this sometimes 
> produces a wrong tag, especially for external tables.
> Also, there's a bug when calculating aggregated stats for a 'parent' tag 
> (corresponding to the partition's table), because the overall maxCount 
> and maxSize do not add up to the sum of those in the partitions. This happens 
> when buffers get removed from the cache.
>  
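As a hedged illustration of the tagging heuristic described above (the method name and path layout are assumptions, not the actual CacheContentsTracker code): the tag joins DB and table with `.` and appends the partition spec with `/`, which is exactly why paths that do not follow the warehouse layout, such as external tables, can yield wrong tags.

```java
public class CacheTagSketch {
    // Derives a tag like "db.table/part=spec" from warehouse-style path
    // components. External-table paths that do not follow this layout
    // would produce a misleading tag, which is the bug being described.
    static String tagFor(String dbDir, String tableDir, String partitionDir) {
        String db = dbDir.endsWith(".db")
                ? dbDir.substring(0, dbDir.length() - 3)
                : dbDir;
        String tag = db + "." + tableDir;
        return partitionDir == null ? tag : tag + "/" + partitionDir;
    }

    public static void main(String[] args) {
        // prints default.over1k_n5/ds=2019-10-04
        System.out.println(tagFor("default.db", "over1k_n5", "ds=2019-10-04"));
    }
}
```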



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944811#comment-16944811
 ] 

Hive QA commented on HIVE-22284:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} llap-common in master has 90 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
33s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} storage-api: The patch generated 2 new + 4 unchanged - 
0 fixed = 6 total (was 4) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 3 new + 165 unchanged - 2 
fixed = 168 total (was 167) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} llap-server: The patch generated 1 new + 252 unchanged 
- 13 fixed = 253 total (was 265) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18871/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18871/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18871/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18871/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: storage-api llap-common ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18871/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Improve LLAP CacheContentsTracker to collect and display correct statistics
> 

[jira] [Updated] (HIVE-22274) Upgrade Calcite version to 1.21.0

2019-10-04 Thread Steve Carlin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Carlin updated HIVE-22274:

Attachment: HIVE-22274.1.patch

> Upgrade Calcite version to 1.21.0
> -
>
> Key: HIVE-22274
> URL: https://issues.apache.org/jira/browse/HIVE-22274
> Project: Hive
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Steve Carlin
>Priority: Major
> Attachments: HIVE-22274.1.patch, HIVE-22274.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944794#comment-16944794
 ] 

Hive QA commented on HIVE-22230:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982218/HIVE-22230.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18870/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18870/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18870/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12982218/HIVE-22230.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982218 - PreCommit-HIVE-Build

> Add support for filtering partitions on temporary tables
> --------------------------------------------------------
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch, HIVE-22230.02.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}
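As a rough illustration of what such a call amounts to for a temporary table whose partitions live in session memory (the class and method below are hypothetical sketches, not Hive's actual code), filtering reduces to applying the parsed filter as a predicate and honouring maxParts, where a negative maxParts conventionally means "no limit" in the metastore API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/** Hypothetical sketch: filter an in-memory partition list with a cap. */
public class PartitionFilterSketch {

  static <T> List<T> listByFilter(List<T> partitions, Predicate<T> filter, int maxParts) {
    List<T> result = new ArrayList<>();
    for (T p : partitions) {
      if (filter.test(p)) {
        result.add(p);
        // maxParts < 0 conventionally means "return everything"
        if (maxParts >= 0 && result.size() >= maxParts) {
          break;
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<String> parts = List.of("ds=2019-10-01", "ds=2019-10-02", "ds=2019-10-03");
    // Keep at most one partition newer than 2019-10-01.
    System.out.println(listByFilter(parts, p -> p.compareTo("ds=2019-10-01") > 0, 1));
  }
}
```

The real implementation additionally has to parse the filter string into an expression tree; this sketch only shows the predicate-plus-limit shape of the call.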



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22270) Upgrade commons-io to 2.6

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944793#comment-16944793
 ] 

Hive QA commented on HIVE-22270:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982202/HIVE-22270.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=111)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18869/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18869/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18869/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982202 - PreCommit-HIVE-Build

> Upgrade commons-io to 2.6
> -------------------------
>
> Key: HIVE-22270
> URL: https://issues.apache.org/jira/browse/HIVE-22270
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22270.01.patch, HIVE-22270.01.patch, 
> HIVE-22270.01.patch, HIVE-22270.patch, HIVE-22270.patch, HIVE-22270.patch, 
> HIVE-22270.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive's currently using commons-io 2.4 and according to HIVE-21273, a number 
> of issues are present in it, which can be resolved by upgrading to 2.6:
> IOUtils copyLarge() and skip() methods are performance hogs
>  affectsVersions:2.3;2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-355?filter=allopenissues]
>  CharSequenceInputStream#reset() behaves incorrectly in case when buffer size 
> is not dividable by data size
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-356?filter=allopenissues]
>  [Tailer] InterruptedException while the thread is sleeping is silently ignored
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-357?filter=allopenissues]
>  IOUtils.contentEquals* methods returns false if input1 == input2; should 
> return true
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-362?filter=allopenissues]
>  Apache Commons - standard links for documents are failing
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-369?filter=allopenissues]
>  FileUtils.sizeOfDirectoryAsBigInteger can overflow
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-390?filter=allopenissues]
>  Regression in FileUtils.readFileToString from 2.0.1
>  affectsVersions:2.1;2.2;2.3;2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-453?filter=allopenissues]
>  Correct exception message in FileUtils.getFile(File; String...)
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-479?filter=allopenissues]
>  org.apache.commons.io.FileUtils#waitFor waits too long
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-481?filter=allopenissues]
>  FilenameUtils should handle embedded null bytes
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-484?filter=allopenissues]
>  Exceptions are suppressed incorrectly when copying files.
>  affectsVersions:2.4;2.5
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-502?filter=allopenissues]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22248) Min value for column in stats is not set correctly for some data types

2019-10-04 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944792#comment-16944792
 ] 

Jesus Camacho Rodriguez commented on HIVE-22248:


+1 (pending tests)

> Min value for column in stats is not set correctly for some data types
> ----------------------------------------------------------------------
>
> Key: HIVE-22248
> URL: https://issues.apache.org/jira/browse/HIVE-22248
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22248.01.patch, HIVE-22248.03.patch, 
> HIVE-22248.04.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am not sure whether the problem is in printing the value or in the value 
> stored in the metastore itself, but for some types (e.g. tinyint, smallint, 
> int, bigint, double or float), the min value does not seem to be set 
> correctly (it is set to 0).
> https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/alter_table_update_status.q.out#L342



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22248) Min value for column in stats is not set correctly for some data types

2019-10-04 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22248:
----------------------------------
Attachment: HIVE-22248.04.patch

> Min value for column in stats is not set correctly for some data types
> ----------------------------------------------------------------------
>
> Key: HIVE-22248
> URL: https://issues.apache.org/jira/browse/HIVE-22248
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22248.01.patch, HIVE-22248.03.patch, 
> HIVE-22248.04.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am not sure whether the problem is in printing the value or in the value 
> stored in the metastore itself, but for some types (e.g. tinyint, smallint, 
> int, bigint, double or float), the min value does not seem to be set 
> correctly (it is set to 0).
> https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/alter_table_update_status.q.out#L342



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-14302) Tez: Optimized Hashtable can support DECIMAL keys of same precision

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-14302?focusedWorklogId=323665&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323665
 ]

ASF GitHub Bot logged work on HIVE-14302:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 04/Oct/19 19:37
Start Date: 04/Oct/19 19:37
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #803: HIVE-14302
URL: https://github.com/apache/hive/pull/803#discussion_r331656250
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinKey.java
 ##########
 @@ -85,6 +84,7 @@ public static MapJoinKey read(Output output, 
MapJoinObjectSerDeContext context,
     SUPPORTED_PRIMITIVES.add(PrimitiveCategory.BINARY);
     SUPPORTED_PRIMITIVES.add(PrimitiveCategory.VARCHAR);
     SUPPORTED_PRIMITIVES.add(PrimitiveCategory.CHAR);
+    SUPPORTED_PRIMITIVES.add(PrimitiveCategory.DECIMAL);
 
 Review comment:
   @mustafaiman , do we need to include the check on the scale/precision for 
decimal type?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323665)
Time Spent: 20m  (was: 10m)

> Tez: Optimized Hashtable can support DECIMAL keys of same precision
> -------------------------------------------------------------------
>
> Key: HIVE-14302
> URL: https://issues.apache.org/jira/browse/HIVE-14302
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Affects Versions: 2.2.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-14302.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Decimal support in the optimized hashtable was decided on the basis of the 
> fact that Decimal(10,1) == Decimal(10, 2) when both contain "1.0" and "1.00".
> However, the joins no longer have any issues with decimal precision because 
> both sides are cast to a common type.
> {code}
> create temporary table x (a decimal(10,2), b decimal(10,1)) stored as orc;
> insert into x values (1.0, 1.0);
> > explain logical select count(1) from x, x x1 where x.a = x1.b;
> OK  
> LOGICAL PLAN:
> $hdt$_0:$hdt$_0:x
>   TableScan (TS_0)
> alias: x
> filterExpr: (a is not null and true) (type: boolean)
> Filter Operator (FIL_18)
>   predicate: (a is not null and true) (type: boolean)
>   Select Operator (SEL_2)
> expressions: a (type: decimal(10,2))
> outputColumnNames: _col0
> Reduce Output Operator (RS_6)
>   key expressions: _col0 (type: decimal(11,2))
>   sort order: +
>   Map-reduce partition columns: _col0 (type: decimal(11,2))
>   Join Operator (JOIN_8)
> condition map:
>  Inner Join 0 to 1
> keys:
>   0 _col0 (type: decimal(11,2))
>   1 _col0 (type: decimal(11,2))
> Group By Operator (GBY_11)
>   aggregations: count(1)
>   mode: hash
>   outputColumnNames: _col0
> {code}
> See the cast up to Decimal(11,2) in the plan, which normalizes both sides of 
> the join so that HiveDecimal values can be compared as-is.
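The underlying subtlety is that the same numeric value can carry different scales, so a byte-wise key comparison fails unless both join sides are normalized to a common type. Plain java.math.BigDecimal (used here for illustration only; Hive uses its own HiveDecimal) shows the distinction:

```java
import java.math.BigDecimal;

public class DecimalScaleDemo {
  public static void main(String[] args) {
    BigDecimal a = new BigDecimal("1.0");    // scale 1
    BigDecimal b = new BigDecimal("1.00");   // scale 2
    System.out.println(a.equals(b));         // false: equals() also compares scale
    System.out.println(a.compareTo(b) == 0); // true: same numeric value
    // Normalizing both sides to a common scale makes representations identical,
    // which is what the decimal(11,2) cast in the plan above achieves.
    System.out.println(a.setScale(2).equals(b)); // true
  }
}
```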



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22270) Upgrade commons-io to 2.6

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944772#comment-16944772
 ] 

Hive QA commented on HIVE-22270:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18869/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18869/yetus/patch-asflicense-problems.txt
 |
| modules | C: . testutils/ptest2 U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18869/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade commons-io to 2.6
> -------------------------
>
> Key: HIVE-22270
> URL: https://issues.apache.org/jira/browse/HIVE-22270
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22270.01.patch, HIVE-22270.01.patch, 
> HIVE-22270.01.patch, HIVE-22270.patch, HIVE-22270.patch, HIVE-22270.patch, 
> HIVE-22270.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive's currently using commons-io 2.4 and according to HIVE-21273, a number 
> of issues are present in it, which can be resolved by upgrading to 2.6:
> IOUtils copyLarge() and skip() methods are performance hogs
>  affectsVersions:2.3;2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-355?filter=allopenissues]
>  CharSequenceInputStream#reset() behaves incorrectly in case when buffer size 
> is not dividable by data size
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-356?filter=allopenissues]
>  [Tailer] InterruptedException while the thread is sleeping is silently ignored
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-357?filter=allopenissues]
>  IOUtils.contentEquals* methods returns false if input1 == input2; should 
> return true
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-362?filter=allopenissues]
>  Apache Commons - standard links for documents are failing
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-369?filter=allopenissues]
>  FileUtils.sizeOfDirectoryAsBigInteger can overflow
>  

[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323656&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323656
 ]

ASF GitHub Bot logged work on HIVE-21924:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:46
Start Date: 04/Oct/19 18:46
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331638453
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##########
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+    this.conf = conf;
+    super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+    configure(conf);
+    this.headerCount = headerCount;
+    this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] hosts) {
+    return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] hosts, String[] inMemoryHosts) {
+    return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, String[] hosts, String[] inMemoryHosts) {
+    long cachedStart;
+    long cachedEnd;
+    try {
+      cachedStart = getCachedStartIndex(file);
+      cachedEnd = getCachedEndIndex(file);
+    } catch (IOException e) {
+      LOG.warn("Could not detect header/footer", e);
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedStart > start + length) {
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedStart > start) {
+      length = length - (cachedStart - start);
+      start = cachedStart;
+    }
+    if (cachedEnd < start) {
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedEnd < start + length) {
+      length = cachedEnd - start;
+    }
+    if (inMemoryHosts == null) {
+      return super.makeSplit(file, start, length, hosts);
+    } else {
+      return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+    }
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+    if (headerCount == 0) {
+      return 0;
+    }
+    Long startIndexForFile = startIndexMap.get(path);
+    if (startIndexForFile == null) {
+      FileSystem fileSystem;
+      FSDataInputStream fis = null;
+      fileSystem = path.getFileSystem(conf);
+      try {
+        fis = fileSystem.open(path);
+        for (int j = 0; j < headerCount; j++) {
+          if (fis.readLine() == null) {
+            startIndexMap.put(path, Long.MAX_VALUE);
+            return Long.MAX_VALUE;
+          }
+        }
+        // back 1 byte because readers skip the entire first row if split start is not 0
+        startIndexForFile = fis.getPos() - 1;
+      } finally {
+        if (fis != null) {
+          fis.close();
+        }
+      }
+      startIndexMap.put(path, startIndexForFile);
+    }
+    return startIndexForFile;

[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323657
 ]

ASF GitHub Bot logged work on HIVE-21924:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:46
Start Date: 04/Oct/19 18:46
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331632718
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##########
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+    this.conf = conf;
+    super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+    configure(conf);
+    this.headerCount = headerCount;
+    this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] hosts) {
+    return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] hosts, String[] inMemoryHosts) {
+    return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, String[] hosts, String[] inMemoryHosts) {
+    long cachedStart;
+    long cachedEnd;
+    try {
+      cachedStart = getCachedStartIndex(file);
+      cachedEnd = getCachedEndIndex(file);
+    } catch (IOException e) {
+      LOG.warn("Could not detect header/footer", e);
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedStart > start + length) {
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedStart > start) {
+      length = length - (cachedStart - start);
+      start = cachedStart;
+    }
+    if (cachedEnd < start) {
+      return new NullRowsInputFormat.DummyInputSplit(file);
+    }
+    if (cachedEnd < start + length) {
+      length = cachedEnd - start;
+    }
+    if (inMemoryHosts == null) {
+      return super.makeSplit(file, start, length, hosts);
+    } else {
+      return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+    }
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+    if (headerCount == 0) {
+      return 0;
+    }
+    Long startIndexForFile = startIndexMap.get(path);
+    if (startIndexForFile == null) {
+      FileSystem fileSystem;
+      FSDataInputStream fis = null;
+      fileSystem = path.getFileSystem(conf);
+      try {
+        fis = fileSystem.open(path);
+        for (int j = 0; j < headerCount; j++) {
+          if (fis.readLine() == null) {
+            startIndexMap.put(path, Long.MAX_VALUE);
+            return Long.MAX_VALUE;
+          }
+        }
+        // back 1 byte because readers skip the entire first row if split start is not 0
+        startIndexForFile = fis.getPos() - 1;
+      } finally {
+        if (fis != null) {
+          fis.close();
+        }
+      }
+      startIndexMap.put(path, startIndexForFile);
+    }
+    return startIndexForFile;

[jira] [Updated] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-21924:

Attachment: HIVE-21924.5.patch
Status: Patch Available  (was: In Progress)

> Split text files even if header/footer exists
> ---------------------------------------------
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21924.2.patch, HIVE-21924.3.patch, 
> HIVE-21924.4.patch, HIVE-21924.5.patch, HIVE-21924.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable whenever a header or footer is present. 
> If only a header is present, we can find the offset after the first line break 
> and use that to split. Similarly for a footer, we can read a few KBs of data at 
> the end and find the last line break offset, and use that to determine the data 
> range which can be used for splitting. A few reads during split generation are 
> cheaper than not splitting the file at all.  
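The header half of that idea can be sketched in a few lines of plain Java (illustrative only; the actual patch reads from the Hadoop FileSystem and caches per-file offsets): scan forward until the first headerCount line breaks have been seen, and treat the position just past them as the earliest byte at which a split may start.

```java
/**
 * Illustrative sketch, not Hive's actual code: compute the byte offset
 * just past the first headerCount line breaks, i.e. the earliest
 * position a split may start when header rows must be skipped.
 */
public class HeaderOffset {

  static long offsetAfterHeader(byte[] data, int headerCount) {
    int newlines = 0;
    for (int pos = 0; pos < data.length; pos++) {
      if (data[pos] == '\n' && ++newlines == headerCount) {
        return pos + 1; // first byte after the last header line
      }
    }
    return Long.MAX_VALUE; // the whole file is header: no splittable data
  }

  public static void main(String[] args) {
    byte[] csv = "col1,col2\n1,2\n3,4\n".getBytes();
    System.out.println(offsetAfterHeader(csv, 1)); // 10
  }
}
```

The footer case is symmetric: read a small tail of the file, find the offset of the line break preceding the last footerCount lines, and cap split ends there.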



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-21924:

Status: In Progress  (was: Patch Available)

> Split text files even if header/footer exists
> ---------------------------------------------
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21924.2.patch, HIVE-21924.3.patch, 
> HIVE-21924.4.patch, HIVE-21924.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable whenever a header or footer is present. 
> If only a header is present, we can find the offset after the first line break 
> and use that to split. Similarly for a footer, we can read a few KBs of data at 
> the end and find the last line break offset, and use that to determine the data 
> range which can be used for splitting. A few reads during split generation are 
> cheaper than not splitting the file at all.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22278) Upgrade log4j to 2.12.1

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944748#comment-16944748
 ] 

Hive QA commented on HIVE-22278:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
34s{color} | {color:blue} standalone-metastore/metastore-common in master has 
32 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
22s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} testutils/ptest2 in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
51s{color} | {color:red} patch/standalone-metastore/metastore-common cannot run 
setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
39s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} patch/itests/hive-unit cannot run setBugDatabaseInfo 
from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} patch/testutils/ptest2 cannot run setBugDatabaseInfo 
from findbugs {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  6m 
49s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18868/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18868/yetus/patch-findbugs-standalone-metastore_metastore-common.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18868/yetus/patch-findbugs-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18868/yetus/patch-findbugs-itests_hive-unit.txt
 |
| findbugs | 

[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323650&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323650
 ]

ASF GitHub Bot logged work on HIVE-21924:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:31
Start Date: 04/Oct/19 18:31
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331632718
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+this.conf = conf;
+super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+configure(conf);
+this.headerCount = headerCount;
+this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts) {
+return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts, String[] inMemoryHosts) {
+return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, 
String[] hosts, String[] inMemoryHosts) {
+long cachedStart;
+long cachedEnd;
+try {
+  cachedStart = getCachedStartIndex(file);
+  cachedEnd = getCachedEndIndex(file);
+} catch (IOException e) {
+  LOG.warn("Could not detect header/footer", e);
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start + length) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start) {
+  length = length - (cachedStart - start);
+  start = cachedStart;
+}
+if (cachedEnd < start) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedEnd < start + length) {
+  length = cachedEnd - start;
+}
+if (inMemoryHosts == null) {
+  return super.makeSplit(file, start, length, hosts);
+} else {
+  return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+}
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+if (headerCount == 0) {
+  return 0;
+}
+Long startIndexForFile = startIndexMap.get(path);
+if (startIndexForFile == null) {
+  FileSystem fileSystem;
+  FSDataInputStream fis = null;
+  fileSystem = path.getFileSystem(conf);
+  try {
+fis = fileSystem.open(path);
+for (int j = 0; j < headerCount; j++) {
+  if (fis.readLine() == null) {
+startIndexMap.put(path, Long.MAX_VALUE);
+return Long.MAX_VALUE;
+  }
+}
+// back 1 byte because readers skip the entire first row if split 
start is not 0
+startIndexForFile = fis.getPos() - 1;
+  } finally {
+if (fis != null) {
+  fis.close();
+}
+  }
+  startIndexMap.put(path, startIndexForFile);
+}
+return startIndexForFile;
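The clamping in `makeSplitInternal` above (shrinking a split to the cached header end and footer start, or discarding it entirely) can be mirrored in a standalone sketch. `truncate` is a hypothetical helper written for illustration, not part of the patch:

```java
/** Hypothetical mirror of the quoted split-truncation logic. */
public class SplitTruncationSketch {

    /**
     * Clamps a split [start, start+length) to the data region
     * [cachedStart, cachedEnd). Returns {start, length}, or null when
     * the split falls entirely inside the header or footer (the patch
     * returns a DummyInputSplit in that case).
     */
    static long[] truncate(long start, long length, long cachedStart, long cachedEnd) {
        if (cachedStart > start + length) {
            return null;                      // split lies entirely in the header
        }
        if (cachedStart > start) {            // split overlaps the header: shrink from the left
            length = length - (cachedStart - start);
            start = cachedStart;
        }
        if (cachedEnd < start) {
            return null;                      // split lies entirely in the footer
        }
        if (cachedEnd < start + length) {     // split overlaps the footer: shrink from the right
            length = cachedEnd - start;
        }
        return new long[] { start, length };
    }

    public static void main(String[] args) {
        // Data bytes live in [10, 90): a split [0, 50) loses its first 10 bytes.
        long[] s = truncate(0, 50, 10, 90);
        System.out.println(s[0] + "," + s[1]); // prints 10,40
    }
}
```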

[jira] [Commented] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944744#comment-16944744
 ] 

Jesus Camacho Rodriguez commented on HIVE-22250:


+1 (pending tests)

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323646&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323646
 ]

ASF GitHub Bot logged work on HIVE-21924:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:21
Start Date: 04/Oct/19 18:21
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331628387
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+this.conf = conf;
+super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+configure(conf);
+this.headerCount = headerCount;
+this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts) {
+return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts, String[] inMemoryHosts) {
+return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, 
String[] hosts, String[] inMemoryHosts) {
+long cachedStart;
+long cachedEnd;
+try {
+  cachedStart = getCachedStartIndex(file);
+  cachedEnd = getCachedEndIndex(file);
+} catch (IOException e) {
+  LOG.warn("Could not detect header/footer", e);
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start + length) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start) {
+  length = length - (cachedStart - start);
+  start = cachedStart;
+}
+if (cachedEnd < start) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedEnd < start + length) {
+  length = cachedEnd - start;
+}
+if (inMemoryHosts == null) {
+  return super.makeSplit(file, start, length, hosts);
+} else {
+  return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+}
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+if (headerCount == 0) {
+  return 0;
+}
+Long startIndexForFile = startIndexMap.get(path);
+if (startIndexForFile == null) {
+  FileSystem fileSystem;
+  FSDataInputStream fis = null;
+  fileSystem = path.getFileSystem(conf);
+  try {
+fis = fileSystem.open(path);
+for (int j = 0; j < headerCount; j++) {
+  if (fis.readLine() == null) {
+startIndexMap.put(path, Long.MAX_VALUE);
+return Long.MAX_VALUE;
+  }
+}
+// back 1 byte because readers skip the entire first row if split 
start is not 0
+startIndexForFile = fis.getPos() - 1;
+  } finally {
+if (fis != null) {
+  fis.close();
+}
+  }
+  startIndexMap.put(path, startIndexForFile);
+}
+return startIndexForFile;

[jira] [Commented] (HIVE-22278) Upgrade log4j to 2.12.1

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944741#comment-16944741
 ] 

Hive QA commented on HIVE-22278:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982203/HIVE-22278.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning]
 (batchId=193)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18868/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18868/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18868/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982203 - PreCommit-HIVE-Build

> Upgrade log4j to 2.12.1
> ---
>
> Key: HIVE-22278
> URL: https://issues.apache.org/jira/browse/HIVE-22278
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22278.02.patch, HIVE-22278.02.patch, 
> HIVE-22278.02.patch, HIVE-22278.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive's currently using log4j 2.10.0 and according to HIVE-21273, a number of 
> issues are present in it, which can be resolved by upgrading to 2.12.1:
> Curly braces in parameters are treated as placeholders
>  affectsVersions:2.8.2;2.9.0;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2032?filter=allopenissues]
>  Remove Log4J API dependency on Management APIs
>  affectsVersions:2.9.1;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2126?filter=allopenissues]
>  Log4j2 throws NoClassDefFoundError in Java 9
>  affectsVersions:2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2129?filter=allopenissues]
>  ThreadContext map is cleared => entries are only available for one log event
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2158?filter=allopenissues]
>  Objects held in SortedArrayStringMap cannot be filtered during serialization
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2163?filter=allopenissues]
>  NullPointerException at 
> org.apache.logging.log4j.util.Activator.loadProvider(Activator.java:81) in 
> log4j 2.10.0
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2182?filter=allopenissues]
>  MarkerFilter onMismatch invalid attribute in .properties
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2202?filter=allopenissues]
>  Configuration builder classes should look for "onMismatch"; not "onMisMatch".
>  
> affectsVersions:2.4;2.4.1;2.5;2.6;2.6.1;2.6.2;2.7;2.8;2.8.1;2.8.2;2.9.0;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2219?filter=allopenissues]
>  Empty Automatic-Module-Name Header
>  affectsVersions:2.10.0;2.11.0;3.0.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2254?filter=allopenissues]
>  ConcurrentModificationException from 
> org.apache.logging.log4j.status.StatusLogger.(StatusLogger.java:71)
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2276?filter=allopenissues]
>  Allow SystemPropertiesPropertySource to run with a SecurityManager that 
> rejects system property access
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2279?filter=allopenissues]
>  ParserConfigurationException when using Log4j with 
> oracle.xml.jaxp.JXDocumentBuilderFactory
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2283?filter=allopenissues]
>  Log4j 2.10+not working with SLF4J 1.8 in OSGI environment
>  affectsVersions:2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2305?filter=allopenissues]
>  fix the CacheEntry map in ThrowableProxy#toExtendedStackTrace to be put and 
> gotten with same key
>  affectsVersions:2.6.2;2.7;2.8;2.8.1;2.8.2;2.9.0;2.9.1;2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2389?filter=allopenissues]
>  

[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323645
 ]

ASF GitHub Bot logged work on HIVE-21924:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:19
Start Date: 04/Oct/19 18:19
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331627809
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+this.conf = conf;
+super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+configure(conf);
+this.headerCount = headerCount;
+this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts) {
+return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts, String[] inMemoryHosts) {
+return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, 
String[] hosts, String[] inMemoryHosts) {
+long cachedStart;
+long cachedEnd;
+try {
+  cachedStart = getCachedStartIndex(file);
+  cachedEnd = getCachedEndIndex(file);
+} catch (IOException e) {
+  LOG.warn("Could not detect header/footer", e);
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start + length) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start) {
+  length = length - (cachedStart - start);
+  start = cachedStart;
+}
+if (cachedEnd < start) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedEnd < start + length) {
+  length = cachedEnd - start;
+}
+if (inMemoryHosts == null) {
+  return super.makeSplit(file, start, length, hosts);
+} else {
+  return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+}
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+if (headerCount == 0) {
+  return 0;
+}
+Long startIndexForFile = startIndexMap.get(path);
+if (startIndexForFile == null) {
+  FileSystem fileSystem;
+  FSDataInputStream fis = null;
+  fileSystem = path.getFileSystem(conf);
+  try {
+fis = fileSystem.open(path);
+for (int j = 0; j < headerCount; j++) {
+  if (fis.readLine() == null) {
+startIndexMap.put(path, Long.MAX_VALUE);
+return Long.MAX_VALUE;
+  }
+}
+// back 1 byte because readers skip the entire first row if split 
start is not 0
+startIndexForFile = fis.getPos() - 1;
+  } finally {
+if (fis != null) {
+  fis.close();
+}
+  }
+  startIndexMap.put(path, startIndexForFile);
+}
+return startIndexForFile;

[jira] [Work logged] (HIVE-21924) Split text files even if header/footer exists

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?focusedWorklogId=323641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323641
 ]

ASF GitHub Bot logged work on HIVE-21924:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 18:15
Start Date: 04/Oct/19 18:15
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #791: HIVE-21924
URL: https://github.com/apache/hive/pull/791#discussion_r331626102
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/io/SkippingTextInputFormat.java
 ##
 @@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.io;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.TextInputFormat;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+import java.util.Map;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * SkippingInputFormat is a header/footer aware input format. It truncates
+ * splits identified by TextInputFormat. Header and footers are removed
+ * from the splits.
+ */
+public class SkippingTextInputFormat extends TextInputFormat {
+
+  private final Map<Path, Long> startIndexMap = new ConcurrentHashMap<>();
+  private final Map<Path, Long> endIndexMap = new ConcurrentHashMap<>();
+  private JobConf conf;
+  private int headerCount;
+  private int footerCount;
+
+  @Override
+  public void configure(JobConf conf) {
+this.conf = conf;
+super.configure(conf);
+  }
+
+  public void configure(JobConf conf, int headerCount, int footerCount) {
+configure(conf);
+this.headerCount = headerCount;
+this.footerCount = footerCount;
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts) {
+return makeSplitInternal(file, start, length, hosts, null);
+  }
+
+  @Override
+  protected FileSplit makeSplit(Path file, long start, long length, String[] 
hosts, String[] inMemoryHosts) {
+return makeSplitInternal(file, start, length, hosts, inMemoryHosts);
+  }
+
+  private FileSplit makeSplitInternal(Path file, long start, long length, 
String[] hosts, String[] inMemoryHosts) {
+long cachedStart;
+long cachedEnd;
+try {
+  cachedStart = getCachedStartIndex(file);
+  cachedEnd = getCachedEndIndex(file);
+} catch (IOException e) {
+  LOG.warn("Could not detect header/footer", e);
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start + length) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedStart > start) {
+  length = length - (cachedStart - start);
+  start = cachedStart;
+}
+if (cachedEnd < start) {
+  return new NullRowsInputFormat.DummyInputSplit(file);
+}
+if (cachedEnd < start + length) {
+  length = cachedEnd - start;
+}
+if (inMemoryHosts == null) {
+  return super.makeSplit(file, start, length, hosts);
+} else {
+  return super.makeSplit(file, start, length, hosts, inMemoryHosts);
+}
+  }
+
+  private long getCachedStartIndex(Path path) throws IOException {
+if (headerCount == 0) {
+  return 0;
+}
+Long startIndexForFile = startIndexMap.get(path);
+if (startIndexForFile == null) {
+  FileSystem fileSystem;
+  FSDataInputStream fis = null;
+  fileSystem = path.getFileSystem(conf);
+  try {
+fis = fileSystem.open(path);
+for (int j = 0; j < headerCount; j++) {
+  if (fis.readLine() == null) {
+startIndexMap.put(path, Long.MAX_VALUE);
+return Long.MAX_VALUE;
+  }
+}
+// back 1 byte because readers skip the entire first row if split 
start is not 0
+startIndexForFile = fis.getPos() - 1;
 
 Review comment:
   I tried to explain this in the comment above. LineRecordReader from Hadoop 
always skips the first line if the start index of the split is not 0. Here, we 
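The one-byte back-off being discussed (`fis.getPos() - 1` in the quoted code) can be illustrated with a standalone sketch. `startIndex` is a hypothetical helper operating on an in-memory string with `\n` line endings, not the actual HDFS-backed code:

```java
/** Hypothetical sketch of the header-skip start-index computation. */
public class HeaderSkipSketch {

    /**
     * Returns the byte offset where data rows begin, minus one byte, so that
     * a record reader that skips the entire first line of any split starting
     * at a nonzero offset lands exactly on the first data row.
     * Returns Long.MAX_VALUE when the file has fewer lines than headers.
     */
    static long startIndex(String fileContents, int headerCount) {
        if (headerCount == 0) {
            return 0;                              // nothing to skip
        }
        int pos = 0;
        for (int i = 0; i < headerCount; i++) {
            int nl = fileContents.indexOf('\n', pos);
            if (nl < 0) {
                return Long.MAX_VALUE;             // whole file is header
            }
            pos = nl + 1;                          // move past this header line
        }
        // back 1 byte: the reader skips the whole first row when start != 0
        return pos - 1;
    }

    public static void main(String[] args) {
        // "header1\nheader2\n" is 16 bytes, so the adjusted start is 15:
        // a split beginning at 15 skips the partial line and reads "row1" first.
        System.out.println(startIndex("header1\nheader2\nrow1\nrow2\n", 2)); // prints 15
    }
}
```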

[jira] [Commented] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944683#comment-16944683
 ] 

Naveen Gangam commented on HIVE-22291:
--

[~samuelan] Could you review this? I added you to the RB as well at 
https://reviews.apache.org/r/71582/. Thanks

> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22291.patch
>
>
> HMS Translation should only be limited to a single catalog.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944667#comment-16944667
 ] 

Jesus Camacho Rodriguez commented on HIVE-22281:


[~kkasa], can we change the error message? It seems it refers incorrectly to 
ORDER BY.

{{create/alter table: not supported NULLS LAST for ORDER BY in ASC order}} -> 
{{create/alter bucketed table: not supported NULLS LAST for SORTED BY in ASC 
order}} (and the same for the other error message).

> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944666#comment-16944666
 ] 

Gopal Vijayaraghavan commented on HIVE-22284:
-

Strictly from a memory use perspective, the CacheTag is better served as an 
abstract class with 3 impls - TableCacheTag, PartitionCacheTag and 
DeepPartitionsCacheTag (for no partition, 1 partition and >1 partitions).

{code}
+  part.getPartSpec().entrySet().stream()
+  .map(e -> e.getKey() + "=" + 
e.getValue()).collect(toCollection(LinkedList::new))
{code}

is where the other allocation is hidden, both the String concat and the new 
LinkedList.
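
The three-implementation split suggested above can be sketched as follows. All class, field, and factory names here are hypothetical illustrations of the proposal, not the actual Hive API:

```java
import java.util.Map;

/** Hypothetical sketch of the proposed three-implementation CacheTag design. */
abstract class CacheTag {
    protected final String tableName;
    protected CacheTag(String tableName) { this.tableName = tableName; }

    /** Unpartitioned table: carries only the table name. */
    static final class TableCacheTag extends CacheTag {
        TableCacheTag(String tableName) { super(tableName); }
    }

    /** Exactly one partition: a single key/value pair, no list allocation. */
    static final class PartitionCacheTag extends CacheTag {
        final String partKey, partValue;
        PartitionCacheTag(String tableName, String partKey, String partValue) {
            super(tableName);
            this.partKey = partKey;
            this.partValue = partValue;
        }
    }

    /** More than one partition: only this impl pays for arrays. */
    static final class DeepPartitionsCacheTag extends CacheTag {
        final String[] partKeys, partValues;
        DeepPartitionsCacheTag(String tableName, String[] keys, String[] values) {
            super(tableName);
            this.partKeys = keys;
            this.partValues = values;
        }
    }

    /** Factory that picks the cheapest representation for the partition spec. */
    static CacheTag of(String tableName, Map<String, String> partSpec) {
        if (partSpec == null || partSpec.isEmpty()) {
            return new TableCacheTag(tableName);
        }
        if (partSpec.size() == 1) {
            Map.Entry<String, String> e = partSpec.entrySet().iterator().next();
            return new PartitionCacheTag(tableName, e.getKey(), e.getValue());
        }
        String[] keys = new String[partSpec.size()];
        String[] values = new String[partSpec.size()];
        int i = 0;
        for (Map.Entry<String, String> e : partSpec.entrySet()) {
            keys[i] = e.getKey();
            values[i++] = e.getValue();
        }
        return new DeepPartitionsCacheTag(tableName, keys, values);
    }

    public static void main(String[] args) {
        CacheTag t = CacheTag.of("db.tbl", Map.of("ds", "2019-10-04"));
        System.out.println(t.getClass().getSimpleName()); // prints PartitionCacheTag
    }
}
```

Keeping the keys and values as plain fields or arrays avoids both the per-entry `key=value` concatenation and the `LinkedList` allocation from the quoted snippet.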

> Improve LLAP CacheContentsTracker to collect and display correct statistics
> ---
>
> Key: HIVE-22284
> URL: https://issues.apache.org/jira/browse/HIVE-22284
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22284.0.patch, HIVE-22284.1.patch, 
> HIVE-22284.2.patch
>
>
> When keeping track of which buffers correspond to what Hive objects, 
> CacheContentsTracker relies on cache tags.
> Currently a tag is a simple String that ideally holds DB and table name, and 
> a partition spec concatenated by . and / . The information here is derived 
> from the Path of the file that is getting cached. Needless to say sometimes 
> this produces a wrong tag especially for external tables.
> Also there's a bug when calculating aggregated stats for a 'parent' tag 
> (corresponding to the table of the partition) because the overall maxCount 
> and maxSize do not add up to the sum of those in the partitions. This happens 
> when buffers get removed from the cache.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944664#comment-16944664
 ] 

Hive QA commented on HIVE-22281:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982195/HIVE-22281.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17235 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18867/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18867/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18867/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982195 - PreCommit-HIVE-Build

> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944625#comment-16944625
 ] 

Hive QA commented on HIVE-22281:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18867/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18867/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944596#comment-16944596
 ] 

Hive QA commented on HIVE-22291:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982198/HIVE-22291.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17235 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18866/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18866/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18866/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982198 - PreCommit-HIVE-Build

> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22291.patch
>
>
> HMS Translation should only be limited to a single catalog.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-14302) Tez: Optimized Hashtable can support DECIMAL keys of same precision

2019-10-04 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-14302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-14302:

Status: In Progress  (was: Patch Available)

> Tez: Optimized Hashtable can support DECIMAL keys of same precision
> ---
>
> Key: HIVE-14302
> URL: https://issues.apache.org/jira/browse/HIVE-14302
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Affects Versions: 2.2.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-14302.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Decimal support in the optimized hashtable was decided on the basis of the 
> fact that Decimal(10,1) == Decimal(10, 2) when both contain "1.0" and "1.00".
> However, the joins now don't have any issues with decimal precision because 
> they cast to common.
> {code}
> create temporary table x (a decimal(10,2), b decimal(10,1)) stored as orc;
> insert into x values (1.0, 1.0);
> > explain logical select count(1) from x, x x1 where x.a = x1.b;
> OK  
> LOGICAL PLAN:
> $hdt$_0:$hdt$_0:x
>   TableScan (TS_0)
> alias: x
> filterExpr: (a is not null and true) (type: boolean)
> Filter Operator (FIL_18)
>   predicate: (a is not null and true) (type: boolean)
>   Select Operator (SEL_2)
> expressions: a (type: decimal(10,2))
> outputColumnNames: _col0
> Reduce Output Operator (RS_6)
>   key expressions: _col0 (type: decimal(11,2))
>   sort order: +
>   Map-reduce partition columns: _col0 (type: decimal(11,2))
>   Join Operator (JOIN_8)
> condition map:
>  Inner Join 0 to 1
> keys:
>   0 _col0 (type: decimal(11,2))
>   1 _col0 (type: decimal(11,2))
> Group By Operator (GBY_11)
>   aggregations: count(1)
>   mode: hash
>   outputColumnNames: _col0
> {code}
> See cast up to Decimal(11, 2) in the plan, which normalizes both sides of the 
> join to be able to compare HiveDecimal as-is.
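
The equality pitfall described above can be reproduced with plain java.math.BigDecimal (a standalone illustration, not Hive's HiveDecimal):

```java
import java.math.BigDecimal;

public class DecimalKeyDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");   // scale 1, like decimal(10,1)
        BigDecimal b = new BigDecimal("1.00");  // scale 2, like decimal(10,2)

        // equals() compares value AND scale, so these differ...
        System.out.println(a.equals(b));         // false
        // ...while compareTo() compares numeric value only.
        System.out.println(a.compareTo(b) == 0); // true

        // Normalizing both sides to a common scale -- which is what the cast
        // up to decimal(11,2) in the plan achieves -- makes representation-based
        // key comparison safe:
        System.out.println(a.setScale(2).equals(b)); // true
    }
}
```

This is why a hashtable keyed on the raw decimal representation is only safe once the planner has cast both join sides to a common type.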



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944550#comment-16944550
 ] 

Hive QA commented on HIVE-22291:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
14s{color} | {color:blue} standalone-metastore/metastore-server in master has 
170 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 407 unchanged - 1 fixed = 407 total (was 408) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 37 new + 152 
unchanged - 5 fixed = 189 total (was 157) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18866/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18866/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: standalone-metastore/metastore-server itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18866/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22291.patch
>
>
> HMS Translation should only be limited to a single catalog.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944524#comment-16944524
 ] 

Hive QA commented on HIVE-22230:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982218/HIVE-22230.02.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17330 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18865/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18865/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18865/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982218 - PreCommit-HIVE-Build

> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch, HIVE-22230.02.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}
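
Since a temporary table's partitions live in session memory rather than in the metastore database, one plausible shape for these methods is to evaluate the filter client-side over the in-memory partition list. The sketch below uses made-up helper names and supports only simple "key=value" filters (the real HMS filter grammar is richer, with AND/OR, comparisons, and LIKE), so it is an assumption-laden illustration, not the actual SessionHiveMetastoreClient code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class TempTablePartitionFilter {
  // Sketch: each partition is represented by its partition-spec map
  // (column name -> value). Only "key=value" filters are handled here.
  static List<Map<String, String>> listPartitionsByFilter(
      List<Map<String, String>> partitions, String filter, int maxParts) {
    String[] kv = filter.split("=", 2);  // ["run_id", "1"] for "run_id=1"
    List<Map<String, String>> result = new ArrayList<>();
    for (Map<String, String> partSpec : partitions) {
      if (kv[1].equals(partSpec.get(kv[0]))) {
        result.add(partSpec);
        if (maxParts >= 0 && result.size() == maxParts) {
          break;  // honor the maxParts cap; a negative value means "no limit"
        }
      }
    }
    return result;
  }
}
```

getNumPartitionsByFilter would then simply be the size of this result with no cap applied.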



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323417&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323417
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 13:19
Start Date: 04/Oct/19 13:19
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331495984
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreConfigAuthenticationProviderImpl.java
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import javax.security.sasl.AuthenticationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This authentication provider implements the {@code CONFIG} authentication. 
It allows a {@link
+ * MetaStorePasswdAuthenticationProvider} to be specified at configuration 
time which may
+ * additionally
+ * implement {@link org.apache.hadoop.conf.Configurable Configurable} to grab 
HMS's {@link
+ * org.apache.hadoop.conf.Configuration Configuration}.
+ */
+public class MetaStoreConfigAuthenticationProviderImpl implements 
MetaStorePasswdAuthenticationProvider {
+  private final String userName;
+  private final String password;
+  protected static final Logger LOG = 
LoggerFactory.getLogger(MetaStoreConfigAuthenticationProviderImpl.class);
+
+  @SuppressWarnings("unchecked")
+  MetaStoreConfigAuthenticationProviderImpl(Configuration conf) throws 
AuthenticationException {
+userName = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+password = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_PASSWORD);
+
+if (null == userName || userName.isEmpty()) {
+  throw new AuthenticationException("No username specified in " + 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+}
+
+if (null == password || password.isEmpty()) {
 
 Review comment:
   Fixed. Allows empty password.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323417)
Time Spent: 3h 10m  (was: 3h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323416&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323416
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 13:18
Start Date: 04/Oct/19 13:18
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331495474
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreConfigAuthenticationProviderImpl.java
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import javax.security.sasl.AuthenticationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This authentication provider implements the {@code CONFIG} authentication. 
It allows a {@link
+ * MetaStorePasswdAuthenticationProvider} to be specified at configuration 
time which may
+ * additionally
+ * implement {@link org.apache.hadoop.conf.Configurable Configurable} to grab 
HMS's {@link
+ * org.apache.hadoop.conf.Configuration Configuration}.
+ */
+public class MetaStoreConfigAuthenticationProviderImpl implements 
MetaStorePasswdAuthenticationProvider {
+  private final String userName;
+  private final String password;
+  protected static final Logger LOG = 
LoggerFactory.getLogger(MetaStoreConfigAuthenticationProviderImpl.class);
+
+  @SuppressWarnings("unchecked")
+  MetaStoreConfigAuthenticationProviderImpl(Configuration conf) throws 
AuthenticationException {
+userName = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+password = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_PASSWORD);
+
+if (null == userName || userName.isEmpty()) {
+  throw new AuthenticationException("No username specified in " + 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+}
+
+if (null == password || password.isEmpty()) {
+  throw new AuthenticationException("No password specified in " + 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+}
+  }
+
+  @Override
+  public void Authenticate(String authUser, String authPassword) throws 
AuthenticationException {
+if (!userName.equals(authUser)) {
+  throw new AuthenticationException("Invalid user " + authUser);
 
 Review comment:
   Done. Added debug logs and "invalid credentials" as exception.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323416)
Time Spent: 3h  (was: 2h 50m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323415
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 13:18
Start Date: 04/Oct/19 13:18
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331495474
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreConfigAuthenticationProviderImpl.java
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import javax.security.sasl.AuthenticationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This authentication provider implements the {@code CONFIG} authentication. 
It allows a {@link
+ * MetaStorePasswdAuthenticationProvider} to be specified at configuration 
time which may
+ * additionally
+ * implement {@link org.apache.hadoop.conf.Configurable Configurable} to grab 
HMS's {@link
+ * org.apache.hadoop.conf.Configuration Configuration}.
+ */
+public class MetaStoreConfigAuthenticationProviderImpl implements 
MetaStorePasswdAuthenticationProvider {
+  private final String userName;
+  private final String password;
+  protected static final Logger LOG = 
LoggerFactory.getLogger(MetaStoreConfigAuthenticationProviderImpl.class);
+
+  @SuppressWarnings("unchecked")
+  MetaStoreConfigAuthenticationProviderImpl(Configuration conf) throws 
AuthenticationException {
+userName = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+password = MetastoreConf.getVar(conf, 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_PASSWORD);
+
+if (null == userName || userName.isEmpty()) {
+  throw new AuthenticationException("No username specified in " + 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+}
+
+if (null == password || password.isEmpty()) {
+  throw new AuthenticationException("No password specified in " + 
MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+}
+  }
+
+  @Override
+  public void Authenticate(String authUser, String authPassword) throws 
AuthenticationException {
+if (!userName.equals(authUser)) {
+  throw new AuthenticationException("Invalid user " + authUser);
 
 Review comment:
   Done. Added debug logs and invalid credentials exception.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323415)
Time Spent: 2h 50m  (was: 2h 40m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944486#comment-16944486
 ] 

Hive QA commented on HIVE-22230:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
7s{color} | {color:blue} standalone-metastore/metastore-server in master has 
170 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 19 unchanged - 1 fixed 
= 20 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18865/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18865/yetus/diff-checkstyle-ql.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18865/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch, HIVE-22230.02.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}

[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323408&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323408
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 12:59
Start Date: 04/Oct/19 12:59
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331487455
 
 

 ##
 File path: 
itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/metastore/security/TestHadoopAuthBridge23.java
 ##
 @@ -94,16 +94,15 @@ public Server() throws TTransportException {
 super();
   }
   @Override
-  public TTransportFactory createTransportFactory(Map<String, String> 
saslProps)
-  throws TTransportException {
+  public TSaslServerTransport.Factory 
createSaslServerTransportFactory(Map<String, String> saslProps) {
 TSaslServerTransport.Factory transFactory =
   new TSaslServerTransport.Factory();
 transFactory.addServerDefinition(AuthMethod.DIGEST.getMechanismName(),
 null, SaslRpcServer.SASL_DEFAULT_REALM,
 saslProps,
 new SaslDigestCallbackHandler(secretManager));
 
-return new TUGIAssumingTransportFactory(transFactory, realUgi);
 
 Review comment:
  Earlier, HiveMetaStore used createTransportFactory to create a transport 
factory. Now it uses createSaslServerTransportFactory instead, which should 
not create a wrapped transport.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323408)
Time Spent: 2.5h  (was: 2h 20m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323409
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 13:00
Start Date: 04/Oct/19 13:00
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331487594
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreAuthenticationProviderFactory.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import org.apache.hadoop.conf.Configuration;
+
+import javax.security.sasl.AuthenticationException;
+
+// This file is copied from 
org.apache.hive.service.auth.AuthenticationProviderFactory. Need to
+// deduplicate this code.
+/**
+ * This class helps select a {@link MetaStorePasswdAuthenticationProvider} for 
a given {@code
+ * AuthMethod}.
+ */
+public final class MetaStoreAuthenticationProviderFactory {
+
+  public enum AuthMethods {
+LDAP("LDAP"),
+PAM("PAM"),
+CUSTOM("CUSTOM"),
+NONE("NONE"),
+CONFIG("CONFIG");
+
+private final String authMethod;
+
+AuthMethods(String authMethod) {
+  this.authMethod = authMethod;
+}
+
+public String getAuthMethod() {
+  return authMethod;
+}
+
+public static AuthMethods getValidAuthMethod(String authMethodStr)
+  throws AuthenticationException {
+  for (AuthMethods auth : AuthMethods.values()) {
+if (authMethodStr.equals(auth.getAuthMethod())) {
+  return auth;
+}
+  }
+  throw new AuthenticationException("Not a valid authentication method");
+}
+  }
+
+  private MetaStoreAuthenticationProviderFactory() {
+  }
+
+  public static MetaStorePasswdAuthenticationProvider 
getAuthenticationProvider(AuthMethods authMethod)
+throws AuthenticationException {
+return getAuthenticationProvider(new Configuration(), authMethod);
+  }
+
+  public static MetaStorePasswdAuthenticationProvider 
getAuthenticationProvider(Configuration conf, AuthMethods authMethod)
+throws AuthenticationException {
+if (authMethod == AuthMethods.LDAP) {
+  return new MetaStoreLdapAuthenticationProviderImpl(conf);
+} else if (authMethod == AuthMethods.CUSTOM) {
+  return new MetaStoreCustomAuthenticationProviderImpl(conf);
 
 Review comment:
   No. No PAM support.
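The getValidAuthMethod lookup in the diff above is a plain linear scan over the enum values. A minimal, standalone sketch of that pattern (the enum constants mirror the patch, but this class itself is hypothetical and simplified; the real code throws javax.security.sasl.AuthenticationException):

```java
// Standalone sketch of the string-to-enum lookup used by
// AuthMethods.getValidAuthMethod in the patch above.
public class AuthMethodLookup {

    enum AuthMethods {
        LDAP, PAM, CUSTOM, NONE, CONFIG;

        // Return the matching enum constant, or fail for unknown names.
        static AuthMethods getValidAuthMethod(String authMethodStr) {
            for (AuthMethods auth : values()) {
                if (auth.name().equals(authMethodStr)) {
                    return auth;
                }
            }
            throw new IllegalArgumentException(
                "Not a valid authentication method: " + authMethodStr);
        }
    }

    public static void main(String[] args) {
        System.out.println(AuthMethods.getValidAuthMethod("LDAP"));
    }
}
```

Note that only LDAP, CUSTOM, and (per the review thread) CONFIG map to concrete providers; PAM is listed in the enum but, as stated above, is not supported.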
 



Issue Time Tracking
---

Worklog Id: (was: 323409)
Time Spent: 2h 40m  (was: 2.5h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.





[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Attachment: HIVE-22250.4.patch

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}





[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Status: Patch Available  (was: Open)

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}





[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Status: Open  (was: Patch Available)

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, 
> HIVE-22250.3.patch, HIVE-22250.4.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323399&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323399
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 12:43
Start Date: 04/Oct/19 12:43
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331481151
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -629,6 +635,97 @@ public static ConfVars getMetaConf(String name) {
 "hive-metastore/_h...@example.com",
 "The service principal for the metastore Thrift server. \n" +
 "The special string _HOST will be replaced automatically with the 
correct host name."),
+THRIFT_METASTORE_AUTHENTICATION("metastore.authentication", 
"hive.metastore.authentication",
+"NOSASL",
+  new StringSetValidator("NOSASL", "NONE", "LDAP", "KERBEROS", "CUSTOM"),
+"Client authentication types.\n" +
+"  NONE: no authentication check\n" +
+"  LDAP: LDAP/AD based authentication\n" +
+"  KERBEROS: Kerberos/GSSAPI authentication\n" +
+"  CUSTOM: Custom authentication provider\n" +
+"  (Use with property 
metastore.custom.authentication.class)\n" +
+"  CONFIG: username and password is specified in the config" +
+"  NOSASL:  Raw transport"),
+THRIFT_CUSTOM_AUTHENTICATION_CLASS("metastore.custom.authentication.class",
 
 Review comment:
   Done.
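For reference, the authentication mode added by this diff would be selected through a Hadoop-style site-file property. An illustrative fragment (file placement and the chosen value are assumptions; only the property name metastore.authentication and the value list come from the diff above):

```xml
<!-- Hypothetical metastore-site.xml fragment; metastore.authentication and
     its allowed values (NOSASL, NONE, LDAP, KERBEROS, CUSTOM, CONFIG) are
     taken from the patch above. -->
<property>
  <name>metastore.authentication</name>
  <value>LDAP</value>
</property>
```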
 



Issue Time Tracking
---

Worklog Id: (was: 323399)
Time Spent: 2h 20m  (was: 2h 10m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323398&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323398
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 12:41
Start Date: 04/Oct/19 12:41
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331480455
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -590,7 +592,30 @@ private void open() throws MetaException {
 transport = new TSocket(store.getHost(), store.getPort(), 
clientSocketTimeout);
   }
 
-  if (useSasl) {
+  if (usePasswordAuth) {
+// we are using PLAIN Sasl connection with user/password
+LOG.debug("HMSC::open(): Creating plain authentication thrift 
connection.");
+String userName = MetastoreConf.getVar(conf, 
ConfVars.METASTORE_CLIENT_PLAIN_USERNAME);
+// The password is not directly provided. It should be obtained 
from a keystore pointed
+// by configuration "hadoop.security.credential.provider.path".
+try {
+  String passwd = null;
+  char[] pwdCharArray = conf.getPassword(userName);
+  if (null != pwdCharArray) {
+   passwd = new String(pwdCharArray);
+  }
 
 Review comment:
   Done.
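The snippet under review reads the password as a char[] and converts it to a String only when non-null. A standalone sketch of that null-safe pattern (FakeConf is a hypothetical stand-in for Hadoop's Configuration, whose getPassword(alias) returns a char[] resolved via the credential provider named by hadoop.security.credential.provider.path, or null if the alias is absent):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the null-safe password retrieval from the patch above.
public class PasswordLookup {

    // Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
    static class FakeConf {
        private final Map<String, char[]> store = new HashMap<>();

        void setPassword(String alias, String pw) {
            store.put(alias, pw.toCharArray());
        }

        char[] getPassword(String alias) {
            return store.get(alias);
        }
    }

    // Mirrors the diff: convert the char[] to a String only when non-null.
    static String resolvePassword(FakeConf conf, String userName) {
        String passwd = null;
        char[] pwdCharArray = conf.getPassword(userName);
        if (pwdCharArray != null) {
            passwd = new String(pwdCharArray);
        }
        return passwd;
    }
}
```

Keeping the credential out of the config file and in a keystore is the point of the indirection: the client config only names the user, and the password is resolved at runtime.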
 



Issue Time Tracking
---

Worklog Id: (was: 323398)
Time Spent: 2h 10m  (was: 2h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.





[jira] [Commented] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944463#comment-16944463
 ] 

Hive QA commented on HIVE-22250:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982194/HIVE-22250.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 17234 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udaf_percentile_cont] 
(batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udaf_percentile_disc] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_bigint] (batchId=93)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_boolean] (batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_double] (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_float] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_int] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_smallint] 
(batchId=99)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_tinyint] (batchId=17)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18864/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18864/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18864/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982194 - PreCommit-HIVE-Build

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, HIVE-22250.3.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323379
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 12:20
Start Date: 04/Oct/19 12:20
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331472872
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -590,7 +592,30 @@ private void open() throws MetaException {
 transport = new TSocket(store.getHost(), store.getPort(), 
clientSocketTimeout);
   }
 
-  if (useSasl) {
+  if (usePasswordAuth) {
+// we are using PLAIN Sasl connection with user/password
+LOG.debug("HMSC::open(): Creating plain authentication thrift 
connection.");
+String userName = MetastoreConf.getVar(conf, 
ConfVars.METASTORE_CLIENT_PLAIN_USERNAME);
 
 Review comment:
   Done.
 



Issue Time Tracking
---

Worklog Id: (was: 323379)
Time Spent: 2h  (was: 1h 50m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in the config and is used only for testing.





[jira] [Commented] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944431#comment-16944431
 ] 

Hive QA commented on HIVE-22250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
10s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 26 new + 289 unchanged - 23 
fixed = 315 total (was 312) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18864/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18864/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18864/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, HIVE-22250.3.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-22230:
-
Attachment: HIVE-22230.02.patch

> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch, HIVE-22230.02.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}
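A minimal sketch of the client-side filtering these overrides imply, assuming the session keeps temporary-table partitions in memory as key/value specs. The class name and the `Predicate`-based filter are hypothetical stand-ins; the real SessionHiveMetastoreClient would evaluate the Metastore filter expression string instead.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical in-memory partition filtering for a session-local (temporary)
// table. A Predicate stands in for the parsed Metastore filter expression.
public class TempPartitionFilter {

    // Mirrors listPartitionsByFilter: return at most maxParts matches
    // (a negative maxParts means "no limit").
    public static List<Map<String, String>> listPartitionsByFilter(
            List<Map<String, String>> partitions,
            Predicate<Map<String, String>> filter,
            int maxParts) {
        return partitions.stream()
                .filter(filter)
                .limit(maxParts < 0 ? Long.MAX_VALUE : maxParts)
                .collect(Collectors.toList());
    }

    // Mirrors getNumPartitionsByFilter: count matches without materializing
    // the partition objects.
    public static int getNumPartitionsByFilter(
            List<Map<String, String>> partitions,
            Predicate<Map<String, String>> filter) {
        return (int) partitions.stream().filter(filter).count();
    }
}
```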



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944408#comment-16944408
 ] 

Hive QA commented on HIVE-22230:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12982209/HIVE-22230.01.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17330 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18863/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18863/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18863/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12982209 - PreCommit-HIVE-Build

> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944382#comment-16944382
 ] 

Hive QA commented on HIVE-22230:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} standalone-metastore/metastore-server in master has 
170 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
15s{color} | {color:blue} ql in master has 1551 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 11 new + 19 unchanged - 1 
fixed = 30 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18863/dev-support/hive-personality.sh
 |
| git revision | master / 9524a0b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18863/yetus/diff-checkstyle-ql.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18863/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, 
> String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String 
> tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String 
> dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest 
> request)
> {code}

[jira] [Comment Edited] (HIVE-21436) "Malformed ORC file. Invalid postscript length 17" when only one data-file in external table directory

2019-10-04 Thread Piotr Findeisen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944357#comment-16944357
 ] 

Piotr Findeisen edited comment on HIVE-21436 at 10/4/19 10:14 AM:
--

First-time select works:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
...
+--+
| 42   |
+--+ {code}
 

But all subsequent queries fail:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
going to print operations logs
printed operations logs
Getting log thread is interrupted, since query is done!
INFO  : Compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t.a, 
type:bigint, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.24 seconds
INFO  : Executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Completed executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.0 seconds
INFO  : OK
Error: java.io.IOException: java.lang.RuntimeException: ORC split generation 
failed with exception: Malformed ORC file. Invalid postscript length 17 
(state=,code=0)
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:300)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:286)
at 
org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:379)
at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56)
at 
org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2305)
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026)
at org.apache.hive.beeline.Commands.execute(Commands.java:1201)
at org.apache.hive.beeline.Commands.sql(Commands.java:1130)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1480)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1342)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1126)
at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:546)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:528)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at 
org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:478)
at 
org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy50.fetchResults(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:792)
at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)

[jira] [Commented] (HIVE-21436) "Malformed ORC file. Invalid postscript length 17" when only one data-file in external table directory

2019-10-04 Thread Piotr Findeisen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944379#comment-16944379
 ] 

Piotr Findeisen commented on HIVE-21436:


The {{org.apache.hadoop.hive.ql.io.orc.LocalCache#cache}} gets populated from 
{{OrcTail}} objects in 
{{org.apache.hadoop.hive.ql.io.orc.LocalCache#put(org.apache.hadoop.fs.Path, 
org.apache.orc.impl.OrcTail)}}.

I don't yet see why the cache entries are re-validated (checked for a valid 
ORC footer/tail), but this is the place where the exception is thrown.
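For illustration, a stripped-down sketch of such a footer cache with validation on lookup. The class name and validation rule here are assumptions for the example, not the actual `LocalCache` code; the one piece taken from the ORC format is that the file's last byte records the postscript length, so a cached tail whose claimed postscript does not fit the buffer fails exactly this kind of check.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a footer cache keyed by file path, re-validated on read.
// Illustrative only; not the actual org.apache.hadoop.hive.ql.io.orc.LocalCache.
public class FooterCache {
    private final Map<String, byte[]> tails = new HashMap<>();

    public void put(String path, byte[] serializedTail) {
        tails.put(path, serializedTail);
    }

    // The last byte of an ORC file records the postscript length; a cached
    // tail whose claimed postscript exceeds the buffer is malformed.
    public byte[] get(String path) {
        byte[] tail = tails.get(path);
        if (tail == null) {
            return null;
        }
        int psLen = tail[tail.length - 1] & 0xff;
        if (psLen >= tail.length) {
            throw new IllegalStateException(
                "Malformed ORC file. Invalid postscript length " + psLen);
        }
        return tail;
    }
}
```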

> "Malformed ORC file. Invalid postscript length 17" when only one data-file in 
> external table directory
> --
>
> Key: HIVE-21436
> URL: https://issues.apache.org/jira/browse/HIVE-21436
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: archon gum
>Priority: Blocker
> Attachments: 1.jpg, 2.jpg, hive-insert-into.orc, 
> org-apache-orc-java-code.orc, presto-insert-into.orc
>
>
> h1. env
>  * Presto 305
>  * Hive 3.1.0
>  
> h1. step
>  
> {code:java}
> -- create external table using hiveserver2
> CREATE EXTERNAL TABLE `dw.dim_date2`(
>   `d` date
> )
> STORED AS ORC
> LOCATION
>   'hdfs://datacenter1:8020/user/hive/warehouse/dw.db/dim_date2'
> ;
> -- upload the 'presto-insert-into.orc' file from attachments
> -- OR
> -- insert one row using presto
> insert into dim_date2 values (current_date);
> {code}
>  
>  
> when querying through `hiveserver2`, only the first query works; every 
> subsequent query fails
> !1.jpg!
>  
> If I insert another row, it works
> {code:java}
> -- upload the 'presto-insert-into.orc' file from attachments
> -- OR
> -- insert one row using presto
> insert into dim_date2 values (current_date);
> {code}
> !2.jpg!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22284:
--
Attachment: HIVE-22284.2.patch

> Improve LLAP CacheContentsTracker to collect and display correct statistics
> ---
>
> Key: HIVE-22284
> URL: https://issues.apache.org/jira/browse/HIVE-22284
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22284.0.patch, HIVE-22284.1.patch, 
> HIVE-22284.2.patch
>
>
> When keeping track of which buffers correspond to what Hive objects, 
> CacheContentsTracker relies on cache tags.
> Currently a tag is a simple String that ideally holds DB and table name, and 
> a partition spec concatenated by . and / . The information here is derived 
> from the Path of the file that is getting cached. Needless to say, this 
> sometimes produces a wrong tag, especially for external tables.
> Also there's a bug when calculating aggregated stats for a 'parent' tag 
> (corresponding to the table of the partition) because the overall maxCount 
> and maxSize do not add up to the sum of those in the partitions. This happens 
> when buffers get removed from the cache.
>  
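As a rough illustration of the path-derived tagging described above (the parsing below is a hedged simplification, not Hive's actual tag code): a managed-table path such as `/user/hive/warehouse/dw.db/dim_date2/ds=2019-10-04/000000_0` yields the tag `dw.dim_date2/ds=2019-10-04`, while an external table whose path does not follow the `<db>.db/<table>` layout yields no usable tag.

```java
// Illustrative derivation of a cache tag ("db.table/partitionSpec") from a
// warehouse file path. Hive's real tag logic differs; this only shows why
// paths outside the managed-table layout produce bad tags.
public class CacheTagSketch {
    public static String fromPath(String path) {
        String[] parts = path.split("/");
        // Expect .../<db>.db/<table>/<partSpec...>/<file>
        for (int i = 0; i + 1 < parts.length; i++) {
            if (parts[i].endsWith(".db")) {
                String db = parts[i].substring(0, parts[i].length() - 3);
                StringBuilder tag =
                    new StringBuilder(db).append('.').append(parts[i + 1]);
                for (int j = i + 2; j < parts.length - 1; j++) {
                    tag.append('/').append(parts[j]);  // partition directories
                }
                return tag.toString();
            }
        }
        return null;  // e.g. an external table outside the warehouse layout
    }
}
```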



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22284:
--
Status: Patch Available  (was: In Progress)

> Improve LLAP CacheContentsTracker to collect and display correct statistics
> ---
>
> Key: HIVE-22284
> URL: https://issues.apache.org/jira/browse/HIVE-22284
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22284.0.patch, HIVE-22284.1.patch, 
> HIVE-22284.2.patch
>
>
> When keeping track of which buffers correspond to what Hive objects, 
> CacheContentsTracker relies on cache tags.
> Currently a tag is a simple String that ideally holds DB and table name, and 
> a partition spec concatenated by . and / . The information here is derived 
> from the Path of the file that is getting cached. Needless to say, this 
> sometimes produces a wrong tag, especially for external tables.
> Also there's a bug when calculating aggregated stats for a 'parent' tag 
> (corresponding to the table of the partition) because the overall maxCount 
> and maxSize do not add up to the sum of those in the partitions. This happens 
> when buffers get removed from the cache.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22284) Improve LLAP CacheContentsTracker to collect and display correct statistics

2019-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22284:
--
Status: In Progress  (was: Patch Available)

> Improve LLAP CacheContentsTracker to collect and display correct statistics
> ---
>
> Key: HIVE-22284
> URL: https://issues.apache.org/jira/browse/HIVE-22284
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22284.0.patch, HIVE-22284.1.patch, 
> HIVE-22284.2.patch
>
>
> When keeping track of which buffers correspond to what Hive objects, 
> CacheContentsTracker relies on cache tags.
> Currently a tag is a simple String that ideally holds DB and table name, and 
> a partition spec concatenated by . and / . The information here is derived 
> from the Path of the file that is getting cached. Needless to say, this 
> sometimes produces a wrong tag, especially for external tables.
> Also there's a bug when calculating aggregated stats for a 'parent' tag 
> (corresponding to the table of the partition) because the overall maxCount 
> and maxSize do not add up to the sum of those in the partitions. This happens 
> when buffers get removed from the cache.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-21436) "Malformed ORC file. Invalid postscript length 17" when only one data-file in external table directory

2019-10-04 Thread Piotr Findeisen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944357#comment-16944357
 ] 

Piotr Findeisen edited comment on HIVE-21436 at 10/4/19 9:42 AM:
-

First-time select works:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
...
+--+
| 42   |
+--+ {code}
 

But all subsequent queries fail:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
going to print operations logs
printed operations logs
Getting log thread is interrupted, since query is done!
INFO  : Compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t.a, 
type:bigint, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.24 seconds
INFO  : Executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Completed executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.0 seconds
INFO  : OK
Error: java.io.IOException: java.lang.RuntimeException: ORC split generation 
failed with exception: Malformed ORC file. Invalid postscript length 17 
(state=,code=0)
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:300)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:286)
at 
org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:379)
at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56)
at 
org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2305)
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026)
at org.apache.hive.beeline.Commands.execute(Commands.java:1201)
at org.apache.hive.beeline.Commands.sql(Commands.java:1130)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1480)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1342)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1126)
at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:546)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:528)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at 
org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:478)
at 
org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy50.fetchResults(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:792)
at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)

[jira] [Commented] (HIVE-21436) "Malformed ORC file. Invalid postscript length 17" when only one data-file in external table directory

2019-10-04 Thread Piotr Findeisen (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944357#comment-16944357
 ] 

Piotr Findeisen commented on HIVE-21436:


First-time select works:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
...
+--+
| 42   |
+--+ {code}
 

But all subsequent queries fail:
{code:java}
jdbc:hive2://localhost:1/default> SELECT * FROM t;
going to print operations logs
printed operations logs
Getting log thread is interrupted, since query is done!
INFO  : Compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t.a, 
type:bigint, comment:null)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.24 seconds
INFO  : Executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768): 
SELECT * FROM t
INFO  : Completed executing 
command(queryId=hive_20191004151730_e7c48562-51c8-4d39-9622-62231a499768); Time 
taken: 0.0 seconds
INFO  : OK
Error: java.io.IOException: java.lang.RuntimeException: ORC split generation 
failed with exception: Malformed ORC file. Invalid postscript length 17 
(state=,code=0)
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:300)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:286)
at 
org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:379)
at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56)
at 
org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2305)
at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026)
at org.apache.hive.beeline.Commands.execute(Commands.java:1201)
at org.apache.hive.beeline.Commands.sql(Commands.java:1130)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1480)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1342)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1126)
at 
org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:546)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:528)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
java.lang.RuntimeException: ORC split generation failed with exception: 
Malformed ORC file. Invalid postscript length 17
at 
org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:478)
at 
org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:328)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy50.fetchResults(Unknown Source)
at 
org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:564)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:792)
at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1837)
at 

[jira] [Updated] (HIVE-21436) "Malformed ORC file. Invalid postscript length 17" when only one data-file in external table directory

2019-10-04 Thread Piotr Findeisen (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Findeisen updated HIVE-21436:
---
Summary: "Malformed ORC file. Invalid postscript length 17" when only one 
data-file in external table directory  (was: "Malformed ORC file" when only one 
data-file in external table directory)

> "Malformed ORC file. Invalid postscript length 17" when only one data-file in 
> external table directory
> --
>
> Key: HIVE-21436
> URL: https://issues.apache.org/jira/browse/HIVE-21436
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: archon gum
>Priority: Blocker
> Attachments: 1.jpg, 2.jpg, hive-insert-into.orc, 
> org-apache-orc-java-code.orc, presto-insert-into.orc
>
>
> h1. env
>  * Presto 305
>  * Hive 3.1.0
>  
> h1. step
>  
> {code:java}
> -- create external table using hiveserver2
> CREATE EXTERNAL TABLE `dw.dim_date2`(
>   `d` date
> )
> STORED AS ORC
> LOCATION
>   'hdfs://datacenter1:8020/user/hive/warehouse/dw.db/dim_date2'
> ;
> -- upload the 'presto-insert-into.orc' file from attachments
> -- OR
> -- insert one row using presto
> insert into dim_date2 values (current_date);
> {code}
>  
>  
> when querying through `hiveserver2`, only the first query works; every 
> subsequent query fails
> !1.jpg!
>  
> If I insert another row, it works
> {code:java}
> -- upload the 'presto-insert-into.orc' file from attachments
> -- OR
> -- insert one row using presto
> insert into dim_date2 values (current_date);
> {code}
> !2.jpg!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-22230:
-
Status: Patch Available  (was: Open)

> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest request)
> {code}
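For temporary tables these calls must be served from session-local state rather than the remote HMS. The core of that logic can be sketched as an in-memory filter over the session's partitions. This is a minimal illustrative sketch, not Hive's actual implementation: the class name, the string-based partition stand-in, and the predicate-based filter are all assumptions for demonstration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch only: a session-local client filters partitions held in
// memory instead of delegating to the remote metastore. A partition is
// represented here by its spec string, e.g. "ds=2019-10-04".
class TempPartitionFilterSketch {

    static List<String> listPartitionsByFilter(List<String> partitions,
                                               Predicate<String> filter,
                                               int maxParts) {
        List<String> result = new ArrayList<>();
        for (String p : partitions) {
            if (maxParts >= 0 && result.size() >= maxParts) {
                break;  // honor the maxParts limit, as the HMS API does
            }
            if (filter.test(p)) {
                result.add(p);
            }
        }
        return result;
    }

    static int getNumPartitionsByFilter(List<String> partitions, Predicate<String> filter) {
        // Counting reuses the same filtering path; -1 means "no limit".
        return listPartitionsByFilter(partitions, filter, -1).size();
    }
}
```

The real implementation parses the HMS filter string into an expression tree; a plain `Predicate` stands in for that here.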





[jira] [Updated] (HIVE-22230) Add support for filtering partitions on temporary tables

2019-10-04 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-22230:
-
Attachment: HIVE-22230.01.patch

> Add support for filtering partitions on temporary tables
> 
>
> Key: HIVE-22230
> URL: https://issues.apache.org/jira/browse/HIVE-22230
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-22230.01.patch
>
>
> We need support for filtering partitions on temporary tables. In order to 
> achieve this, SessionHiveMetastoreClient must implement the following methods:
> {code:java}
> public List<Partition> listPartitionsByFilter(String catName, String dbName, String tableName, String filter, int maxParts)
> public int getNumPartitionsByFilter(String catName, String dbName, String tableName, String filter)
> public PartitionSpecProxy listPartitionSpecsByFilter(String catName, String dbName, String tblName, String filter, int maxParts)
> public PartitionValuesResponse listPartitionValues(PartitionValuesRequest request)
> {code}





[jira] [Commented] (HIVE-22282) Obtain LLAP delegation token only when LLAP is configured for Kerberos authentication

2019-10-04 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-22282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944324#comment-16944324
 ] 

Ádám Szita commented on HIVE-22282:
---

Looking good, Denys

+1

> Obtain LLAP delegation token only when LLAP is configured for Kerberos 
> authentication
> -
>
> Key: HIVE-22282
> URL: https://issues.apache.org/jira/browse/HIVE-22282
> Project: Hive
>  Issue Type: Improvement
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22282.1.patch, HIVE-22282.2.patch, 
> HIVE-22282.3.patch
>
>
> Contains also Kerberos related Zookeeper configuration changes after refactor.





[jira] [Updated] (HIVE-22212) Implement append partition related methods on temporary tables

2019-10-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22212:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to master (thanks for the +1 from Peter Vary). Thanks Laszlo for this 
change.

> Implement append partition related methods on temporary tables
> --
>
> Key: HIVE-22212
> URL: https://issues.apache.org/jira/browse/HIVE-22212
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22212.01.patch, HIVE-22212.02.patch, 
> HIVE-22212.03.patch
>
>
> The following methods must be implemented in SessionHiveMetastoreClient, in 
> order to support partition append on temporary tables:
> {code:java}
>   Partition appendPartition(String dbName, String tableName, List<String> partVals)
>       throws InvalidObjectException, AlreadyExistsException, MetaException, TException;
>   Partition appendPartition(String catName, String dbName, String tableName, List<String> partVals)
>       throws InvalidObjectException, AlreadyExistsException, MetaException, TException;
>   Partition appendPartition(String dbName, String tableName, String name)
>       throws InvalidObjectException, AlreadyExistsException, MetaException, TException;
>   Partition appendPartition(String catName, String dbName, String tableName, String name)
>       throws InvalidObjectException, AlreadyExistsException, MetaException, TException;
> {code}
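The essential behavior of `appendPartition` is to create a new, empty partition for the given values and fail with `AlreadyExistsException` if that partition already exists; for a temporary table this happens against session-local state. A minimal sketch of that contract, with illustrative names and `IllegalStateException` standing in for the metastore's `AlreadyExistsException`:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only, not Hive's SessionHiveMetastoreClient: append adds
// an empty partition for the given values, or fails if it is already present.
class AppendPartitionSketch {
    // Keyed by the joined partition values, e.g. "2019/10".
    private final Map<String, List<String>> partitions = new HashMap<>();

    List<String> appendPartition(List<String> partVals) {
        String key = String.join("/", partVals);
        if (partitions.containsKey(key)) {
            // Mirrors AlreadyExistsException in the real metastore API.
            throw new IllegalStateException("Partition already exists: " + key);
        }
        partitions.put(key, partVals);
        return partVals;
    }
}
```

The `catName`/`dbName`/`tableName` overloads above differ only in how the target table is resolved; the duplicate check is the common core.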





[jira] [Updated] (HIVE-22270) Upgrade commons-io to 2.6

2019-10-04 Thread David Lavati (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-22270:

Attachment: HIVE-22270.01.patch

> Upgrade commons-io to 2.6
> -
>
> Key: HIVE-22270
> URL: https://issues.apache.org/jira/browse/HIVE-22270
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22270.01.patch, HIVE-22270.01.patch, 
> HIVE-22270.01.patch, HIVE-22270.patch, HIVE-22270.patch, HIVE-22270.patch, 
> HIVE-22270.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently uses commons-io 2.4; according to HIVE-21273, a number of 
> issues present in it can be resolved by upgrading to 2.6:
> IOUtils copyLarge() and skip() methods are performance hogs
>  affectsVersions:2.3;2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-355?filter=allopenissues]
>  CharSequenceInputStream#reset() behaves incorrectly in case when buffer size 
> is not dividable by data size
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-356?filter=allopenissues]
>  [Tailer] InterruptedException while the thread is sleeping is silently ignored
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-357?filter=allopenissues]
>  IOUtils.contentEquals* methods returns false if input1 == input2; should 
> return true
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-362?filter=allopenissues]
>  Apache Commons - standard links for documents are failing
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-369?filter=allopenissues]
>  FileUtils.sizeOfDirectoryAsBigInteger can overflow
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-390?filter=allopenissues]
>  Regression in FileUtils.readFileToString from 2.0.1
>  affectsVersions:2.1;2.2;2.3;2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-453?filter=allopenissues]
>  Correct exception message in FileUtils.getFile(File; String...)
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-479?filter=allopenissues]
>  org.apache.commons.io.FileUtils#waitFor waits too long
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-481?filter=allopenissues]
>  FilenameUtils should handle embedded null bytes
>  affectsVersions:2.4
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-484?filter=allopenissues]
>  Exceptions are suppressed incorrectly when copying files.
>  affectsVersions:2.4;2.5
>  
> [https://issues.apache.org/jira/projects/IO/issues/IO-502?filter=allopenissues]
>  





[jira] [Updated] (HIVE-22278) Upgrade log4j to 2.12.1

2019-10-04 Thread David Lavati (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-22278:

Attachment: HIVE-22278.02.patch

> Upgrade log4j to 2.12.1
> ---
>
> Key: HIVE-22278
> URL: https://issues.apache.org/jira/browse/HIVE-22278
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22278.02.patch, HIVE-22278.02.patch, 
> HIVE-22278.02.patch, HIVE-22278.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently uses log4j 2.10.0; according to HIVE-21273, a number of issues 
> present in it can be resolved by upgrading to 2.12.1:
> Curly braces in parameters are treated as placeholders
>  affectsVersions:2.8.2;2.9.0;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2032?filter=allopenissues]
>  Remove Log4J API dependency on Management APIs
>  affectsVersions:2.9.1;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2126?filter=allopenissues]
>  Log4j2 throws NoClassDefFoundError in Java 9
>  affectsVersions:2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2129?filter=allopenissues]
>  ThreadContext map is cleared => entries are only available for one log event
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2158?filter=allopenissues]
>  Objects held in SortedArrayStringMap cannot be filtered during serialization
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2163?filter=allopenissues]
>  NullPointerException at 
> org.apache.logging.log4j.util.Activator.loadProvider(Activator.java:81) in 
> log4j 2.10.0
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2182?filter=allopenissues]
>  MarkerFilter onMismatch invalid attribute in .properties
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2202?filter=allopenissues]
>  Configuration builder classes should look for "onMismatch"; not "onMisMatch".
>  
> affectsVersions:2.4;2.4.1;2.5;2.6;2.6.1;2.6.2;2.7;2.8;2.8.1;2.8.2;2.9.0;2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2219?filter=allopenissues]
>  Empty Automatic-Module-Name Header
>  affectsVersions:2.10.0;2.11.0;3.0.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2254?filter=allopenissues]
>  ConcurrentModificationException from 
> org.apache.logging.log4j.status.StatusLogger.(StatusLogger.java:71)
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2276?filter=allopenissues]
>  Allow SystemPropertiesPropertySource to run with a SecurityManager that 
> rejects system property access
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2279?filter=allopenissues]
>  ParserConfigurationException when using Log4j with 
> oracle.xml.jaxp.JXDocumentBuilderFactory
>  affectsVersions:2.10.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2283?filter=allopenissues]
>  Log4j 2.10+not working with SLF4J 1.8 in OSGI environment
>  affectsVersions:2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2305?filter=allopenissues]
>  fix the CacheEntry map in ThrowableProxy#toExtendedStackTrace to be put and 
> gotten with same key
>  affectsVersions:2.6.2;2.7;2.8;2.8.1;2.8.2;2.9.0;2.9.1;2.10.0;2.11.0
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2389?filter=allopenissues]
>  NullPointerException when closing never used RollingRandomAccessFileAppender
>  affectsVersions:2.10.0;2.11.1
>  
> [https://issues.apache.org/jira/projects/LOG4J2/issues/LOG4J2-2418?filter=allopenissues]





[jira] [Assigned] (HIVE-22292) Implement Hypothetical-Set Aggregate Functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa reassigned HIVE-22292:
-


> Implement Hypothetical-Set Aggregate Functions
> --
>
> Key: HIVE-22292
> URL: https://issues.apache.org/jira/browse/HIVE-22292
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
>
> {code}
> <hypothetical set function> ::=
>   <rank function type> <left paren>
>   <hypothetical set function value expression list> <right paren>
>   <within group specification>
>
> <rank function type> ::=
>     RANK
>   | DENSE_RANK
>   | PERCENT_RANK
>   | CUME_DIST
> {code}
> Example:
> {code}
> CREATE TABLE table1 (column1 int);
> INSERT INTO table1 VALUES (NULL), (3), (8), (13), (7), (6), (20), (NULL), 
> (NULL), (10), (7), (15), (16), (8), (7), (8), (NULL);
> {code}
> {code}
> SELECT rank(6) WITHIN GROUP (ORDER BY column1) FROM table1;
> {code}
> {code}
> 2
> {code}
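The semantics behind the example: a hypothetical-set `RANK(v) WITHIN GROUP (ORDER BY c)` returns the rank the value `v` would receive if it were inserted into the ordered group, which for ascending order is 1 plus the number of non-NULL values strictly less than `v` (so `rank(6)` over the data above is 1 + |{3}| = 2). A minimal sketch of that computation, with illustrative names and not the UDAF interface Hive would actually use:

```java
import java.util.List;

// Sketch of hypothetical-set RANK semantics for ascending order:
// 1 + (count of non-NULL column values strictly less than the candidate).
class HypotheticalRankSketch {
    static long rank(long candidate, List<Long> column) {
        long smaller = column.stream()
                .filter(v -> v != null && v < candidate)  // NULLs are ignored, as in the SQL example
                .count();
        return smaller + 1;
    }
}
```

DENSE_RANK, PERCENT_RANK, and CUME_DIST follow the same pattern with different counting rules (distinct values, normalization by group size, less-than-or-equal).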





[jira] [Updated] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22291:
-
Attachment: HIVE-22291.patch

> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22291.patch
>
>
> HMS Translation should only be limited to a single catalog.





[jira] [Updated] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22291:
-
Status: Patch Available  (was: Open)

> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>
> HMS Translation should only be limited to a single catalog.





[jira] [Assigned] (HIVE-22291) HMS Translation: Limit translation to hive default catalog only

2019-10-04 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-22291:



> HMS Translation: Limit translation to hive default catalog only
> ---
>
> Key: HIVE-22291
> URL: https://issues.apache.org/jira/browse/HIVE-22291
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>
> HMS Translation should only be limited to a single catalog.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323237&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323237
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:46
Start Date: 04/Oct/19 06:46
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331355698
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreConfigAuthenticationProviderImpl.java
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import javax.security.sasl.AuthenticationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This authentication provider implements the {@code CONFIG} authentication. It allows a {@link
+ * MetaStorePasswdAuthenticationProvider} to be specified at configuration time which may additionally
+ * implement {@link org.apache.hadoop.conf.Configurable Configurable} to grab HMS's {@link
+ * org.apache.hadoop.conf.Configuration Configuration}.
+ */
+public class MetaStoreConfigAuthenticationProviderImpl implements MetaStorePasswdAuthenticationProvider {
+  private final String userName;
+  private final String password;
+  protected static final Logger LOG = LoggerFactory.getLogger(MetaStoreConfigAuthenticationProviderImpl.class);
+
+  @SuppressWarnings("unchecked")
+  MetaStoreConfigAuthenticationProviderImpl(Configuration conf) throws AuthenticationException {
+    userName = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+    password = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_PASSWORD);
+
+    if (null == userName || userName.isEmpty()) {
+      throw new AuthenticationException("No username specified in " + MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+    }
+
+    if (null == password || password.isEmpty()) {
+      throw new AuthenticationException("No password specified in " + MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+    }
+  }
+
+  @Override
+  public void Authenticate(String authUser, String authPassword) throws AuthenticationException {
+    if (!userName.equals(authUser)) {
+      throw new AuthenticationException("Invalid user " + authUser);
 
 Review comment:
   don't put the username and password in the exception; add them to a debug log instead
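The reviewer's point can be illustrated with a small sketch: keep the thrown exception message generic and emit the specific username only at debug level. This is a hedged illustration, not the patch's actual code; it uses plain `java.util.logging` (Hive uses slf4j) and `SecurityException` standing in for `AuthenticationException`.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of the review suggestion: the exception stays generic,
// while the offending username goes only to a debug-level log line.
class ConfigAuthSketch {
    private static final Logger LOG = Logger.getLogger(ConfigAuthSketch.class.getName());
    private final String userName;
    private final String password;

    ConfigAuthSketch(String userName, String password) {
        this.userName = userName;
        this.password = password;
    }

    void authenticate(String authUser, String authPassword) {
        if (!userName.equals(authUser) || !password.equals(authPassword)) {
            LOG.log(Level.FINE, "Authentication failed for user {0}", authUser);
            throw new SecurityException("Authentication failed"); // no credentials leaked here
        }
    }
}
```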
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323237)
Time Spent: 1h 50m  (was: 1h 40m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323236
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:46
Start Date: 04/Oct/19 06:46
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331355159
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreAuthenticationProviderFactory.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import org.apache.hadoop.conf.Configuration;
+
+import javax.security.sasl.AuthenticationException;
+
+// This file is copied from org.apache.hive.service.auth.AuthenticationProviderFactory.
+// Need to deduplicate this code.
+/**
+ * This class helps select a {@link MetaStorePasswdAuthenticationProvider} for a given
+ * {@code AuthMethod}.
+ */
+public final class MetaStoreAuthenticationProviderFactory {
+
+  public enum AuthMethods {
+    LDAP("LDAP"),
+    PAM("PAM"),
+    CUSTOM("CUSTOM"),
+    NONE("NONE"),
+    CONFIG("CONFIG");
+
+    private final String authMethod;
+
+    AuthMethods(String authMethod) {
+      this.authMethod = authMethod;
+    }
+
+    public String getAuthMethod() {
+      return authMethod;
+    }
+
+    public static AuthMethods getValidAuthMethod(String authMethodStr)
+        throws AuthenticationException {
+      for (AuthMethods auth : AuthMethods.values()) {
+        if (authMethodStr.equals(auth.getAuthMethod())) {
+          return auth;
+        }
+      }
+      throw new AuthenticationException("Not a valid authentication method");
+    }
+  }
+
+  private MetaStoreAuthenticationProviderFactory() {
+  }
+
+  public static MetaStorePasswdAuthenticationProvider getAuthenticationProvider(AuthMethods authMethod)
+      throws AuthenticationException {
+    return getAuthenticationProvider(new Configuration(), authMethod);
+  }
+
+  public static MetaStorePasswdAuthenticationProvider getAuthenticationProvider(Configuration conf, AuthMethods authMethod)
+      throws AuthenticationException {
+    if (authMethod == AuthMethods.LDAP) {
+      return new MetaStoreLdapAuthenticationProviderImpl(conf);
+    } else if (authMethod == AuthMethods.CUSTOM) {
+      return new MetaStoreCustomAuthenticationProviderImpl(conf);
 
 Review comment:
   PAM is not supported ?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323236)
Time Spent: 1h 50m  (was: 1h 40m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323238&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323238
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:46
Start Date: 04/Oct/19 06:46
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331355497
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreConfigAuthenticationProviderImpl.java
 ##
 @@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.metastore;
+
+import javax.security.sasl.AuthenticationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This authentication provider implements the {@code CONFIG} authentication. It allows a {@link
+ * MetaStorePasswdAuthenticationProvider} to be specified at configuration time which may additionally
+ * implement {@link org.apache.hadoop.conf.Configurable Configurable} to grab HMS's {@link
+ * org.apache.hadoop.conf.Configuration Configuration}.
+ */
+public class MetaStoreConfigAuthenticationProviderImpl implements MetaStorePasswdAuthenticationProvider {
+  private final String userName;
+  private final String password;
+  protected static final Logger LOG = LoggerFactory.getLogger(MetaStoreConfigAuthenticationProviderImpl.class);
+
+  @SuppressWarnings("unchecked")
+  MetaStoreConfigAuthenticationProviderImpl(Configuration conf) throws AuthenticationException {
+    userName = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+    password = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_PASSWORD);
+
+    if (null == userName || userName.isEmpty()) {
+      throw new AuthenticationException("No username specified in " + MetastoreConf.ConfVars.THRIFT_AUTH_CONFIG_USERNAME);
+    }
+
+    if (null == password || password.isEmpty()) {
 
 Review comment:
   password can be empty 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323238)
Time Spent: 1h 50m  (was: 1h 40m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the user and password in config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323235
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:44
Start Date: 04/Oct/19 06:44
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331359501
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -1039,8 +1136,27 @@ public static ConfVars getMetaConf(String name) {
     "More users can be added in ADMIN role later."),
     USE_SSL("metastore.use.SSL", "hive.metastore.use.SSL", false,
         "Set this to true for using SSL encryption in HMS server."),
+    // We should somehow unify next two options.
     USE_THRIFT_SASL("metastore.sasl.enabled", "hive.metastore.sasl.enabled", false,
         "If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos."),
+    METASTORE_CLIENT_USE_PLAIN_AUTH("metastore.client.use.plain.auth",
+        "metastore.client.use.plain.auth", false,
+        "If true, clients will authenticate using plain authentication, by providing username" +
+        " and password."),
+    METASTORE_CLIENT_PLAIN_USERNAME("metastore.client.plain.username",
+        "metastore.client.plain.username", "",
+        "The username used by the metastore client when " +
+            METASTORE_CLIENT_USE_PLAIN_AUTH + " is true. The password is obtained from " +
+            CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH + " using username as the " +
+            "alias."),
+    THRIFT_AUTH_CONFIG_USERNAME("metastore.authentication.config.username",
 
 Review comment:
   The first one is client side config. It will be used whenever there's PLAIN 
authentication. That specifies what username to provide to the server.
   
   The other is server side and used only when CONFIG authentication is used. 
It's the username that will be matched against the username provided by the 
client.
   
   They can't be combined.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323235)
Time Spent: 1h 40m  (was: 1.5h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The later allows to set 
> user and password in config and is used only for testing.





[jira] [Updated] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22281:
--
Status: Open  (was: Patch Available)

> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}





[jira] [Updated] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22281:
--
Status: Patch Available  (was: Open)

> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323234&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323234
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:43
Start Date: 04/Oct/19 06:43
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331359501
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -1039,8 +1136,27 @@ public static ConfVars getMetaConf(String name) {
 "More users can be added in ADMIN role later."),
 USE_SSL("metastore.use.SSL", "hive.metastore.use.SSL", false,
 "Set this to true for using SSL encryption in HMS server."),
+// We should somehow unify next two options.
 USE_THRIFT_SASL("metastore.sasl.enabled", "hive.metastore.sasl.enabled", false,
 "If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos."),
+METASTORE_CLIENT_USE_PLAIN_AUTH("metastore.client.use.plain.auth",
+"metastore.client.use.plain.auth", false,
+"If true, clients will authenticate using plain authentication, by providing username" +
+" and password."),
+METASTORE_CLIENT_PLAIN_USERNAME("metastore.client.plain.username",
+"metastore.client.plain.username",  "",
+"The username used by the metastore client when " +
+METASTORE_CLIENT_USE_PLAIN_AUTH + " is true. The password is obtained from " +
+CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH + " using username as the " +
+"alias."),
+THRIFT_AUTH_CONFIG_USERNAME("metastore.authentication.config.username",
 
 Review comment:
   The first one is a client-side config. It is used whenever PLAIN 
authentication is in effect, and it specifies the username the client presents 
to the server.
   
   The other is server-side and is used only with CONFIG authentication. It is 
the username against which the client-provided username is matched.
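   To make the distinction concrete, here is a minimal, self-contained sketch (hypothetical class and names, not the actual HMS code): server-side CONFIG authentication compares the client-supplied username/password pair against a single pair fixed in the server's configuration, while the client-side `metastore.client.plain.username` only chooses what the client sends.

```java
// Hypothetical sketch of the server-side CONFIG check. The constants stand in
// for the values behind metastore.authentication.config.username/.password;
// they are illustrative, not real defaults.
public class ConfigAuthSketch {
    // Stand-ins for the server's configured credentials.
    static final String CONFIG_USER = "hmsuser";
    static final String CONFIG_PASS = "hmspass";

    // Returns true only when the client-supplied pair matches the configured pair.
    static boolean check(String user, String pass) {
        return CONFIG_USER.equals(user) && CONFIG_PASS.equals(pass);
    }

    public static void main(String[] args) {
        System.out.println(check("hmsuser", "hmspass")); // prints true
        System.out.println(check("hmsuser", "wrong"));   // prints false
    }
}
```

   Whatever value the client puts in its own config is simply what gets passed as `user` here; only the server-side value decides acceptance.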
 



Issue Time Tracking
---

Worklog Id: (was: 323234)
Time Spent: 1.5h  (was: 1h 20m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the username and password in the config and is used only for testing.





[jira] [Updated] (HIVE-22281) Create table statement fails with "not supported NULLS LAST for ORDER BY in ASC order"

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22281:
--
Attachment: HIVE-22281.2.patch

> Create table statement fails with "not supported NULLS LAST for ORDER BY in 
> ASC order"
> --
>
> Key: HIVE-22281
> URL: https://issues.apache.org/jira/browse/HIVE-22281
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22281.1.patch, HIVE-22281.1.patch, 
> HIVE-22281.2.patch, HIVE-22281.2.patch, HIVE-22281.2.patch
>
>
> {code}
> CREATE TABLE table_core2c4ywq7yjx ( k1 STRING, f1 STRING, 
> sequence_num BIGINT, create_bsk BIGINT, change_bsk BIGINT, 
> op_code STRING ) PARTITIONED BY (run_id BIGINT) CLUSTERED BY (k1) SORTED BY 
> (k1, change_bsk, sequence_num) INTO 4 BUCKETS STORED AS ORC
> {code}
> {code}
> Error while compiling statement: FAILED: SemanticException create/alter 
> table: not supported NULLS LAST for ORDER BY in ASC order
> {code}





[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Status: Open  (was: Patch Available)

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, HIVE-22250.3.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}





[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Status: Patch Available  (was: Open)

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, HIVE-22250.3.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22250) Describe function does not provide description for rank functions

2019-10-04 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22250:
--
Attachment: HIVE-22250.3.patch

> Describe function does not provide description for rank functions
> -
>
> Key: HIVE-22250
> URL: https://issues.apache.org/jira/browse/HIVE-22250
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22250.1.patch, HIVE-22250.1.patch, 
> HIVE-22250.1.patch, HIVE-22250.2.patch, HIVE-22250.3.patch, HIVE-22250.3.patch
>
>
> {code}
> @WindowFunctionDescription(
>   description = @Description(
> name = "dense_rank",
> value = "_FUNC_(x) The difference between RANK and DENSE_RANK is that 
> DENSE_RANK leaves no " +
> "gaps in ranking sequence when there are ties. That is, if you 
> were " +
> "ranking a competition using DENSE_RANK and had three people tie 
> for " +
> "second place, you would say that all three were in second place 
> and " +
> "that the next person came in third."
>   ),
>   supportsWindow = false,
>   pivotResult = true,
>   rankingFunction = true,
>   impliesOrder = true
> )
> {code}
> {code}
> DESC FUNCTION dense_rank;
> {code}
> {code}
> PREHOOK: query: DESC FUNCTION dense_rank
> PREHOOK: type: DESCFUNCTION
> POSTHOOK: query: DESC FUNCTION dense_rank
> POSTHOOK: type: DESCFUNCTION
> There is no documentation for function 'dense_rank'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323233&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323233
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:40
Start Date: 04/Oct/19 06:40
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #793: 
HIVE-22267 : Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331358594
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -629,6 +635,97 @@ public static ConfVars getMetaConf(String name) {
 "hive-metastore/_h...@example.com",
 "The service principal for the metastore Thrift server. \n" +
"The special string _HOST will be replaced automatically with the correct host name."),
+THRIFT_METASTORE_AUTHENTICATION("metastore.authentication", "hive.metastore.authentication",
+"NOSASL",
+  new StringSetValidator("NOSASL", "NONE", "LDAP", "KERBEROS", "CUSTOM"),
+"Client authentication types.\n" +
+"  NONE: no authentication check\n" +
+"  LDAP: LDAP/AD based authentication\n" +
+"  KERBEROS: Kerberos/GSSAPI authentication\n" +
 
 Review comment:
   NONE means the SASL protocol is still executed, but no authentication check 
is performed; the user is simply accepted as is.
   
   NOSASL means there is no SASL protocol at all: no user, just a plain 
transport.
 



Issue Time Tracking
---

Worklog Id: (was: 323233)
Time Spent: 1h 20m  (was: 1h 10m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the username and password in the config and is used only for testing.





[jira] [Updated] (HIVE-22275) OperationManager.queryIdOperation does not properly clean up multiple queryIds

2019-10-04 Thread Jason Dere (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22275:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to master

> OperationManager.queryIdOperation does not properly clean up multiple queryIds
> --
>
> Key: HIVE-22275
> URL: https://issues.apache.org/jira/browse/HIVE-22275
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22275.1.patch, HIVE-22275.2.patch
>
>
> In the case that multiple statements are run by a single Session before being 
> cleaned up, it appears that OperationManager.queryIdOperation is not cleaned 
> up properly.
> See the log statements below - with the exception of the first "Removed 
> queryId:" log line, the queryId listed during cleanup is the same, when each 
> of these handles should have their own queryId. Looks like only the last 
> queryId executed is being cleaned up.
> As a result, HS2 can run out of memory as OperationManager.queryIdOperation 
> grows and never cleans these queryIds/Operations up.
> {noformat}
> 2019-09-13T08:37:36,785 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=dfed4c18-a284-4640-9f4a-1a20527105f9]
> 2019-09-13T08:37:38,432 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Removed queryId: hive_20190913083736_c49cf3cc-cfe8-48a1-bd22-8b924dfb0396 
> corresponding to operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=dfed4c18-a284-4640-9f4a-1a20527105f9] with tag: null
> 2019-09-13T08:37:38,469 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=24d0030c-0e49-45fb-a918-2276f0941cfb]
> 2019-09-13T08:37:52,662 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=b983802c-1dec-4fa0-8680-d05ab555321b]
> 2019-09-13T08:37:56,239 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=75dbc531-2964-47b2-84d7-85b59f88999c]
> 2019-09-13T08:38:02,551 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=72c79076-9d67-4894-a526-c233fa5450b2]
> 2019-09-13T08:38:10,558 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=17b30a62-612d-4b70-9ba7-4287d2d9229b]
> 2019-09-13T08:38:16,930 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=ea97e99d-cc77-470b-b49a-b869c73a4615]
> 2019-09-13T08:38:20,440 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=a277b789-ebb8-4925-878f-6728d3e8c5fb]
> 2019-09-13T08:38:26,303 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=9a023ab8-aa80-45db-af88-94790cc83033]
> 2019-09-13T08:38:30,791 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=b697c801-7da0-4544-bcfa-442eb1d3bd77]
> 2019-09-13T08:39:10,187 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - 
> Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, 
> getHandleIdentifier()=bda93c8f-0822-4592-a61c-4701720a1a5c]
> 2019-09-13T08:39:15,471 INFO  [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a 
> HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) 

[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323227&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323227
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:17
Start Date: 04/Oct/19 06:17
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331353699
 
 

 ##
 File path: 
itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/metastore/security/TestHadoopAuthBridge23.java
 ##
 @@ -94,16 +94,15 @@ public Server() throws TTransportException {
 super();
   }
   @Override
-  public TTransportFactory createTransportFactory(Map<String, String> saslProps)
-  throws TTransportException {
+  public TSaslServerTransport.Factory createSaslServerTransportFactory(Map<String, String> saslProps) {
 TSaslServerTransport.Factory transFactory =
   new TSaslServerTransport.Factory();
 transFactory.addServerDefinition(AuthMethod.DIGEST.getMechanismName(),
 null, SaslRpcServer.SASL_DEFAULT_REALM,
 saslProps,
 new SaslDigestCallbackHandler(secretManager));
 
-return new TUGIAssumingTransportFactory(transFactory, realUgi);
 
 Review comment:
   Why is this change required?
 



Issue Time Tracking
---

Worklog Id: (was: 323227)
Time Spent: 1h 10m  (was: 1h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the username and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323218&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323218
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331081442
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -1039,8 +1136,27 @@ public static ConfVars getMetaConf(String name) {
 "More users can be added in ADMIN role later."),
 USE_SSL("metastore.use.SSL", "hive.metastore.use.SSL", false,
 "Set this to true for using SSL encryption in HMS server."),
+// We should somehow unify next two options.
 USE_THRIFT_SASL("metastore.sasl.enabled", "hive.metastore.sasl.enabled", false,
 "If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos."),
+METASTORE_CLIENT_USE_PLAIN_AUTH("metastore.client.use.plain.auth",
+"metastore.client.use.plain.auth", false,
+"If true, clients will authenticate using plain authentication, by providing username" +
+" and password."),
+METASTORE_CLIENT_PLAIN_USERNAME("metastore.client.plain.username",
+"metastore.client.plain.username",  "",
+"The username used by the metastore client when " +
+METASTORE_CLIENT_USE_PLAIN_AUTH + " is true. The password is obtained from " +
+CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH + " using username as the " +
+"alias."),
+THRIFT_AUTH_CONFIG_USERNAME("metastore.authentication.config.username",
 
 Review comment:
   The two configs, METASTORE_CLIENT_PLAIN_USERNAME and 
THRIFT_AUTH_CONFIG_USERNAME, could be combined into one.
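   For illustration, a client opting into PLAIN authentication would set something like the following (the username, path, and alias are hypothetical; the password is resolved from the credential provider using the username as the alias, per the config description above):

```xml
<!-- metastore-site.xml / hive-site.xml sketch; values are illustrative only -->
<property>
  <name>metastore.client.use.plain.auth</name>
  <value>true</value>
</property>
<property>
  <name>metastore.client.plain.username</name>
  <value>hmsuser</value>
</property>
<!-- hadoop.security.credential.provider.path is the value of
     CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH -->
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://file/path/to/hms_auth.jceks</value>
</property>
```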
 



Issue Time Tracking
---

Worklog Id: (was: 323218)
Time Spent: 0.5h  (was: 20m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting 
> the username and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323221&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323221
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331331862
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreCustomAuth.java
 ##
 @@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+
+import javax.security.sasl.AuthenticationException;
+import java.util.HashMap;
+import java.util.Map;
+
+@Category(MetastoreCheckinTest.class)
+public class TestRemoteHiveMetaStoreCustomAuth extends TestRemoteHiveMetaStore {
+  private static String correctUser = "correct_user";
+  private static String correctPassword = "correct_passwd";
+  private static String wrongPassword = "wrong_password";
+  private static String wrongUser = "wrong_user";
+
+  @Before
+  public void setUp() throws Exception {
+initConf();
+MetastoreConf.setVar(conf, ConfVars.THRIFT_METASTORE_AUTHENTICATION, "CUSTOM");
+MetastoreConf.setVar(conf, ConfVars.THRIFT_CUSTOM_AUTHENTICATION_CLASS,
+"org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreCustomAuth$SimpleAuthenticationProviderImpl");
+MetastoreConf.setBoolVar(conf, ConfVars.EXECUTE_SET_UGI, false);
+super.setUp();
+  }
+
+  @Override
+  protected HiveMetaStoreClient createClient() throws Exception {
+boolean gotException = false;
+MetastoreConf.setVar(conf, ConfVars.THRIFT_URIS, "thrift://localhost:" + port);
+MetastoreConf.setBoolVar(conf, ConfVars.METASTORE_CLIENT_USE_PLAIN_AUTH, true);
+String tmpDir = System.getProperty("build.dir");
+String credentialsPath = "jceks://file" + tmpDir + "/test-classes/creds/hms_plain_auth_test.jceks";
+conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, credentialsPath);
+
+try {
+  MetastoreConf.setVar(conf, ConfVars.METASTORE_CLIENT_PLAIN_USERNAME, wrongUser);
+  HiveMetaStoreClient tmpClient = new HiveMetaStoreClient(conf);
+} catch (Exception e) {
+  gotException = true;
+}
+// Trying to log in using wrong username and password should fail
+Assert.assertTrue(gotException);
 
 Review comment:
   Add one more test for a correct user with a wrong password.
 



Issue Time Tracking
---

Worklog Id: (was: 323221)
Time Spent: 40m  (was: 0.5h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>   

[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323217&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323217
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331014847
 
 

 ##
 File path: 
itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/RemoteHiveMetaStoreDualAuthTest.java
 ##
 @@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hive.minikdc;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import org.junit.Before;
+
+public class RemoteHiveMetaStoreDualAuthTest extends TestRemoteHiveMetaStore {
+  // These names are tied with the .jceks file used by the subclasses. So, do not change those.
 
 Review comment:
   You can create a .jceks file programmatically using CredentialProviderFactory.
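As a hedged illustration of that suggestion, the JCEKS store could be built at test-setup time with Hadoop's CredentialProviderFactory rather than committing a pre-built .jceks file. The file path and credential alias below are hypothetical stand-ins, not the patch's actual values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public class JceksSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point the provider path at a file-backed JCEKS keystore (path is a stand-in).
    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
        "jceks://file/tmp/hms_plain_auth_test.jceks");
    CredentialProvider provider =
        CredentialProviderFactory.getProviders(conf).get(0);
    // Store the test user's password under its alias, then persist the store.
    provider.createCredentialEntry("correct_user",
        "correct_passwd".toCharArray());
    provider.flush();
  }
}
```

A test's @Before method could do this once and delete the file in @After, so the credentials never live in the repository.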
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 323217)
Time Spent: 20m  (was: 10m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting the
> user and password in the config and is used only for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323222&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323222
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331079727
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -629,6 +635,97 @@ public static ConfVars getMetaConf(String name) {
 "hive-metastore/_h...@example.com",
 "The service principal for the metastore Thrift server. \n" +
 "The special string _HOST will be replaced automatically with the 
correct host name."),
+THRIFT_METASTORE_AUTHENTICATION("metastore.authentication", 
"hive.metastore.authentication",
+"NOSASL",
+  new StringSetValidator("NOSASL", "NONE", "LDAP", "KERBEROS", "CUSTOM"),
+"Client authentication types.\n" +
+"  NONE: no authentication check\n" +
+"  LDAP: LDAP/AD based authentication\n" +
+"  KERBEROS: Kerberos/GSSAPI authentication\n" +
 
 Review comment:
   What is the difference between NONE and NOSASL?
 



Issue Time Tracking
---

Worklog Id: (was: 323222)
Time Spent: 40m  (was: 0.5h)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting the
> user and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323220&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323220
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331078880
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java
 ##
 @@ -629,6 +635,97 @@ public static ConfVars getMetaConf(String name) {
 "hive-metastore/_h...@example.com",
 "The service principal for the metastore Thrift server. \n" +
 "The special string _HOST will be replaced automatically with the 
correct host name."),
+THRIFT_METASTORE_AUTHENTICATION("metastore.authentication", 
"hive.metastore.authentication",
+"NOSASL",
+  new StringSetValidator("NOSASL", "NONE", "LDAP", "KERBEROS", "CUSTOM"),
+"Client authentication types.\n" +
+"  NONE: no authentication check\n" +
+"  LDAP: LDAP/AD based authentication\n" +
+"  KERBEROS: Kerberos/GSSAPI authentication\n" +
+"  CUSTOM: Custom authentication provider\n" +
+"  (Use with property 
metastore.custom.authentication.class)\n" +
+"  CONFIG: username and password is specified in the config" +
+"  NOSASL:  Raw transport"),
+THRIFT_CUSTOM_AUTHENTICATION_CLASS("metastore.custom.authentication.class",
 
 Review comment:
   It should be named METASTORE_CUSTOM_AUTHENTICATION_CLASS.
 



Issue Time Tracking
---

Worklog Id: (was: 323220)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting the
> user and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323225&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323225
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331331767
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreCustomAuth.java
 ##
 @@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.metastore;
+
+import org.apache.hadoop.hive.metastore.annotation.MetastoreCheckinTest;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.experimental.categories.Category;
+
+import javax.security.sasl.AuthenticationException;
+import java.util.HashMap;
+import java.util.Map;
+
+@Category(MetastoreCheckinTest.class)
+public class TestRemoteHiveMetaStoreCustomAuth extends TestRemoteHiveMetaStore {
+  private static String correctUser = "correct_user";
+  private static String correctPassword = "correct_passwd";
+  private static String wrongPassword = "wrong_password";
+  private static String wrongUser = "wrong_user";
+
+  @Before
+  public void setUp() throws Exception {
+initConf();
+MetastoreConf.setVar(conf, ConfVars.THRIFT_METASTORE_AUTHENTICATION, 
"CUSTOM");
+MetastoreConf.setVar(conf, ConfVars.THRIFT_CUSTOM_AUTHENTICATION_CLASS,
+
"org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreCustomAuth$SimpleAuthenticationProviderImpl");
+MetastoreConf.setBoolVar(conf, ConfVars.EXECUTE_SET_UGI, false);
+super.setUp();
+  }
+
+  @Override
+  protected HiveMetaStoreClient createClient() throws Exception {
+boolean gotException = false;
+MetastoreConf.setVar(conf, ConfVars.THRIFT_URIS, "thrift://localhost:" + 
port);
+MetastoreConf.setBoolVar(conf, ConfVars.METASTORE_CLIENT_USE_PLAIN_AUTH, 
true);
+String tmpDir = System.getProperty("build.dir");
+String credentialsPath = "jceks://file" + tmpDir + 
"/test-classes/creds/hms_plain_auth_test.jceks";
+conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, 
credentialsPath);
+
+try {
+  MetastoreConf.setVar(conf, ConfVars.METASTORE_CLIENT_PLAIN_USERNAME, 
wrongUser);
+  HiveMetaStoreClient tmpClient = new HiveMetaStoreClient(conf);
+} catch (Exception e) {
 
 Review comment:
   Use a specific exception class to validate. The exception message should also be checked.
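A minimal, self-contained sketch of that testing pattern: catch the concrete exception type and assert on its message, rather than catching bare Exception. The authenticate method below is a hypothetical stand-in for the code under test, not the patch's actual API:

```java
import javax.security.sasl.AuthenticationException;

public class AuthCheckDemo {
  // Hypothetical stand-in for the authentication path being tested.
  static void authenticate(String user, String password) throws AuthenticationException {
    if (!"correct_user".equals(user) || !"correct_passwd".equals(password)) {
      throw new AuthenticationException("Error validating user: " + user);
    }
  }

  public static void main(String[] args) throws Exception {
    try {
      authenticate("wrong_user", "correct_passwd");
      throw new AssertionError("expected AuthenticationException");
    } catch (AuthenticationException e) {
      // Check the specific type (via the catch clause) and the message content.
      if (!e.getMessage().contains("wrong_user")) {
        throw new AssertionError("unexpected message: " + e.getMessage());
      }
    }
    authenticate("correct_user", "correct_passwd"); // must not throw
  }
}
```

Catching Exception would also swallow unrelated failures (e.g. a connection refused), so the test could pass for the wrong reason.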
 



Issue Time Tracking
---

Worklog Id: (was: 323225)
Time Spent: 1h  (was: 50m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.

[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323219&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323219
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331051489
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -590,7 +592,30 @@ private void open() throws MetaException {
 transport = new TSocket(store.getHost(), store.getPort(), 
clientSocketTimeout);
   }
 
-  if (useSasl) {
+  if (usePasswordAuth) {
+// we are using PLAIN Sasl connection with user/password
+LOG.debug("HMSC::open(): Creating plain authentication thrift 
connection.");
+String userName = MetastoreConf.getVar(conf, 
ConfVars.METASTORE_CLIENT_PLAIN_USERNAME);
 
 Review comment:
   A null check for userName is required.
 



Issue Time Tracking
---

Worklog Id: (was: 323219)
Time Spent: 0.5h  (was: 20m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting the
> user and password in the config and is used only for testing.





[jira] [Work logged] (HIVE-22267) Support password based authentication in HMS

2019-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22267?focusedWorklogId=323224=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-323224
 ]

ASF GitHub Bot logged work on HIVE-22267:
-

Author: ASF GitHub Bot
Created on: 04/Oct/19 06:15
Start Date: 04/Oct/19 06:15
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #793: HIVE-22267 : 
Support password based authentication for HMS along-side kerberos 
authentication.
URL: https://github.com/apache/hive/pull/793#discussion_r331052667
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -590,7 +592,30 @@ private void open() throws MetaException {
 transport = new TSocket(store.getHost(), store.getPort(), 
clientSocketTimeout);
   }
 
-  if (useSasl) {
+  if (usePasswordAuth) {
+// we are using PLAIN Sasl connection with user/password
+LOG.debug("HMSC::open(): Creating plain authentication thrift 
connection.");
+String userName = MetastoreConf.getVar(conf, 
ConfVars.METASTORE_CLIENT_PLAIN_USERNAME);
+// The password is not directly provided. It should be obtained 
from a keystore pointed
+// by configuration "hadoop.security.credential.provider.path".
+try {
+  String passwd = null;
+  char[] pwdCharArray = conf.getPassword(userName);
+  if (null != pwdCharArray) {
+   passwd = new String(pwdCharArray);
+  }
 
 Review comment:
   I think it should throw an exception if the password is not given.
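A minimal sketch of the fail-fast behavior being asked for, assuming Hadoop's Configuration.getPassword (the helper name requirePassword is hypothetical, not the patch's code):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class PasswordLookup {
  // Fail fast when no password can be resolved for the alias, instead of
  // silently passing null on to the SASL layer.
  static String requirePassword(Configuration conf, String alias) throws IOException {
    // getPassword() consults the providers on
    // hadoop.security.credential.provider.path, then by default falls back
    // to the clear-text config value under the same key.
    char[] pwd = conf.getPassword(alias);
    if (pwd == null) {
      throw new IOException("No password found for alias " + alias);
    }
    return new String(pwd);
  }
}
```

With this shape the client fails at open() with a clear message, rather than later with an opaque SASL error caused by a null password.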
 



Issue Time Tracking
---

Worklog Id: (was: 323224)
Time Spent: 50m  (was: 40m)

> Support password based authentication in HMS
> 
>
> Key: HIVE-22267
> URL: https://issues.apache.org/jira/browse/HIVE-22267
> Project: Hive
>  Issue Type: New Feature
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22267.00.patch, HIVE-22267.01.patch, 
> HIVE-22267.02.patch, HIVE-22267.03.patch, HIVE-22267.04.patch, 
> HIVE-22267.05.patch, HIVE-22267.06.patch, HIVE-22267.07.patch, 
> HIVE-22267.08.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Similar to HS2, support password based authentication in HMS.
> Right now we provide LDAP and CONFIG based options. The latter allows setting the
> user and password in the config and is used only for testing.




