[jira] [Updated] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss

2019-06-25 Thread Wei Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhang updated HIVE-21915:
-
Attachment: HIVE-21915.04.patch

> Hive with TEZ UNION ALL and UDTF results in data loss
> -
>
> Key: HIVE-21915
> URL: https://issues.apache.org/jira/browse/HIVE-21915
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Wei Zhang
>Assignee: Wei Zhang
>Priority: Major
> Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, 
> HIVE-21915.03.patch, HIVE-21915.04.patch
>
>
> The HQL syntax is like this:
> CREATE TEMPORARY TABLE tez_union_all_loss_data AS
> SELECT xxx, yyy, zzz,1 as tag
> FROM ods_1
> UNION ALL
> SELECT xxx, yyy, zzz, tag
> FROM
> (
> SELECT xxx
> ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy
> ,zzz
> ,2 as tag
> FROM ods_2
> LATERAL VIEW EXPLODE(some_udf(uuu)) team_number AS tb
> ) tbl 
> ;
>  
> With the above HQL, we expect rows with both tag = 2 and tag = 1 to appear. 
> In our case, however, all the rows with tag = 1 are lost.
> Digging deeper, we can see that the two generated map vertices have identical 
> task tmp paths. This happens because, when a UDTF is present, the 
> FileSinkOperator is processed twice while the tmp path is generated in 
> GenTezUtils.removeUnionOperators().
>  
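For readers less familiar with this corner of the Tez planner, here is a minimal, purely illustrative sketch of the kind of guard that keeps the same FileSinkOperator instance from having its tmp path generated twice. The TempPathAssigner class is hypothetical and is not the attached patch.

{code:java}
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Hypothetical sketch: make sure each file sink gets its temp path assigned only
// once, even if the operator tree walk visits it twice (e.g. via the UNION ALL
// branch and again via the UDTF/lateral view branch).
public class TempPathAssigner {
  // Identity-based set so two visits of the *same* operator object are detected.
  private final Set<Object> seenFileSinks =
      Collections.newSetFromMap(new IdentityHashMap<>());

  /** Returns true only on the first visit, when a new temp path should be generated. */
  public boolean markVisited(Object fileSinkOperator) {
    return seenFileSinks.add(fileSinkOperator);
  }
}
{code}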



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21886) REPL - With table list - Handle rename events during replace policy

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872942#comment-16872942
 ] 

Hive QA commented on HIVE-21886:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 8 unchanged - 0 fixed 
= 9 total (was 8) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 25 new + 0 
unchanged - 0 fixed = 25 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17743/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17743/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17743/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17743/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> REPL - With table list - Handle rename events during replace policy
> ---
>
> Key: HIVE-21886
> URL: https://issues.apache.org/jira/browse/HIVE-21886
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21886.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If some rename events are found to be dumped and replayed while replace 
> policy is getting executed, it needs to take care of the policy inclusion in 
> both the 

[jira] [Commented] (HIVE-21846) Create a thread in TezAM which periodically fetches LlapDaemon metrics

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872912#comment-16872912
 ] 

Hive QA commented on HIVE-21846:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972882/HIVE-21846.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16349 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17742/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17742/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972882 - PreCommit-HIVE-Build

> Create a thread in TezAM which periodically fetches LlapDaemon metrics
> --
>
> Key: HIVE-21846
> URL: https://issues.apache.org/jira/browse/HIVE-21846
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Antal Sinkovits
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21846.01.patch, HIVE-21846.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> LlapTaskSchedulerService should start a thread which periodically fetches the 
> LlapDaemon metrics and stores them in the NodeInfo object.
> This should be just the first implementation - later we should find a way 
> that does not need NxM requests between N TezAMs and M LlapDaemons.
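A minimal sketch of what such a periodic poller could look like, assuming a hypothetical LlapMetricsFetcher interface and a plain map standing in for the real NodeInfo bookkeeping; this is not the attached patch.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a periodic metrics poller in the Tez AM.
public class DaemonMetricsPoller {
  public interface LlapMetricsFetcher {          // hypothetical interface
    Object fetchMetrics(String daemonHost) throws Exception;
  }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final Map<String, Object> latestMetricsByHost = new ConcurrentHashMap<>();

  public void start(Iterable<String> daemonHosts, LlapMetricsFetcher fetcher,
                    long periodSeconds) {
    scheduler.scheduleAtFixedRate(() -> {
      for (String host : daemonHosts) {
        try {
          // Store the newest metrics per daemon, analogous to updating NodeInfo.
          latestMetricsByHost.put(host, fetcher.fetchMetrics(host));
        } catch (Exception e) {
          // Keep polling the remaining daemons even if one fetch fails.
        }
      }
    }, 0, periodSeconds, TimeUnit.SECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}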



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21846) Create a thread in TezAM which periodically fetches LlapDaemon metrics

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872899#comment-16872899
 ] 

Hive QA commented on HIVE-21846:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} llap-tez: The patch generated 5 new + 71 unchanged - 0 
fixed = 76 total (was 71) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17742/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17742/yetus/diff-checkstyle-llap-tez.txt
 |
| modules | C: common llap-tez U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17742/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create a thread in TezAM which periodically fetches LlapDaemon metrics
> --
>
> Key: HIVE-21846
> URL: https://issues.apache.org/jira/browse/HIVE-21846
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Antal Sinkovits
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21846.01.patch, HIVE-21846.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> LlapTaskSchedulerService should start a thread which periodically fetches the 
> LlapDaemon metrics and stores them in the NodeInfo object.
> This should be just the first implementation - later we should find a way 
> that does not need NxM requests between N TezAMs and M LlapDaemons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON/SORTED ON support for materialized views

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18842:
---
Description: 
We should support defining a {{CLUSTERED ON/DISTRIBUTED ON/SORTED ON}} 
specification for materialized views. 

The syntax should be extended as follows:

{code:sql}
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
  [COMMENT materialized_view_comment]
  [PARTITIONED ON (col_name, ...)]
  [CLUSTERED ON (col_name, ...) | ( [DISTRIBUTED ON (col_name, ...)] [SORTED ON 
(col_name, ...)] ) ] -- NEW!
  [
   [ROW FORMAT row_format] 
   [STORED AS file_format]
 | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]
  AS select_statement;
{code}

  was:
We should support defining a {{CLUSTER BY/DISTRIBUTE BY/SORT BY}} specification 
for materialized views. 

The syntax should be extended as follows:

{code:sql}
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
  [COMMENT materialized_view_comment]
  [CLUSTER BY (col_name, ...) | ( [DISTRIBUTE BY (col_name, ...)] [SORT BY 
(col_name, ...)] ) ] -- NEW!
  [
   [ROW FORMAT row_format] 
   [STORED AS file_format]
 | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]
  AS select_statement;
{code}


> CLUSTERED ON/DISTRIBUTED ON/SORTED ON support for materialized views
> 
>
> Key: HIVE-18842
> URL: https://issues.apache.org/jira/browse/HIVE-18842
> Project: Hive
>  Issue Type: New Feature
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> We should support defining a {{CLUSTERED ON/DISTRIBUTED ON/SORTED ON}} 
> specification for materialized views. 
> The syntax should be extended as follows:
> {code:sql}
> CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
>   [COMMENT materialized_view_comment]
>   [PARTITIONED ON (col_name, ...)]
>   [CLUSTERED ON (col_name, ...) | ( [DISTRIBUTED ON (col_name, ...)] [SORTED 
> ON (col_name, ...)] ) ] -- NEW!
>   [
>[ROW FORMAT row_format] 
>[STORED AS file_format]
>  | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
>   ]
>   [LOCATION hdfs_path]
>   [TBLPROPERTIES (property_name=property_value, ...)]
>   AS select_statement;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON/SORTED ON support for materialized views

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18842:
---
Summary: CLUSTERED ON/DISTRIBUTED ON/SORTED ON support for materialized 
views  (was: CLUSTER BY/DISTRIBUTE BY/SORT BY support for materialized views)

> CLUSTERED ON/DISTRIBUTED ON/SORTED ON support for materialized views
> 
>
> Key: HIVE-18842
> URL: https://issues.apache.org/jira/browse/HIVE-18842
> Project: Hive
>  Issue Type: New Feature
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> We should support defining a {{CLUSTER BY/DISTRIBUTE BY/SORT BY}} 
> specification for materialized views. 
> The syntax should be extended as follows:
> {code:sql}
> CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name
>   [COMMENT materialized_view_comment]
>   [CLUSTER BY (col_name, ...) | ( [DISTRIBUTE BY (col_name, ...)] [SORT BY 
> (col_name, ...)] ) ] -- NEW!
>   [
>[ROW FORMAT row_format] 
>[STORED AS file_format]
>  | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]
>   ]
>   [LOCATION hdfs_path]
>   [TBLPROPERTIES (property_name=property_value, ...)]
>   AS select_statement;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872891#comment-16872891
 ] 

Hive QA commented on HIVE-15177:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972880/HIVE-15177.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16340 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17741/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17741/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17741/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972880 - PreCommit-HIVE-Build

> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST
> -
>
> Key: HIVE-15177
> URL: https://issues.apache.org/jira/browse/HIVE-15177
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Subrahmanya
>Assignee: Oliver Draese
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch
>
>
> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST.
> When the auth type is set to fromSubject, _HOST in the principal is not 
> resolved to the actual host name even though the correct host name is 
> available. This leads to a connection failure. If the auth type is not set to 
> fromSubject, host resolution is done correctly.
> The problem is in the getKerberosTransport method of the 
> org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
> true, the host name in the principal is not resolved. When it is false, the 
> host name is passed on to HadoopThriftAuthBridge, which takes care of 
> resolving the parameter.
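As an illustration of the missing step, here is a small sketch that resolves the _HOST placeholder via Hadoop's SecurityUtil before the transport is built. It is only a sketch of the general idea, not the attached patch.

{code:java}
import java.io.IOException;
import java.net.InetAddress;
import org.apache.hadoop.security.SecurityUtil;

// Illustrative sketch: resolve _HOST in a Kerberos principal so that the
// fromSubject path sees the same fully qualified principal as the normal path.
public class PrincipalResolver {
  public static String resolveHost(String principal, String serverHost) throws IOException {
    String host = (serverHost != null && !serverHost.isEmpty())
        ? serverHost
        : InetAddress.getLocalHost().getCanonicalHostName();
    // SecurityUtil substitutes the _HOST placeholder with the given host name.
    return SecurityUtil.getServerPrincipal(principal, host);
  }
}
{code}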



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872874#comment-16872874
 ] 

Hive QA commented on HIVE-15177:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17741/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: service U: service |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17741/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST
> -
>
> Key: HIVE-15177
> URL: https://issues.apache.org/jira/browse/HIVE-15177
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Subrahmanya
>Assignee: Oliver Draese
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch
>
>
> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST.
> When the auth type is set to fromSubject, _HOST in the principal is not 
> resolved to the actual host name even though the correct host name is 
> available. This leads to a connection failure. If the auth type is not set to 
> fromSubject, host resolution is done correctly.
> The problem is in the getKerberosTransport method of the 
> org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
> true, the host name in the principal is not resolved. When it is false, the 
> host name is passed on to HadoopThriftAuthBridge, which takes care of 
> resolving the parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872869#comment-16872869
 ] 

Hive QA commented on HIVE-21867:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972878/HIVE-21867.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16340 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=110)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query32] 
(batchId=287)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query92] 
(batchId=287)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query32]
 (batchId=287)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query92]
 (batchId=287)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17740/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17740/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17740/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972878 - PreCommit-HIVE-Build

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort 
> them, trying to accelerate filter evaluation.
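To make the idea concrete, here is a hypothetical sketch of ordering conditions by an estimated cost/selectivity product so that cheap, highly selective conditions run first. The Condition type and its estimates are invented for illustration and do not reflect the actual patch.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the general idea behind sorting filter conditions.
public class ConditionSorter {
  public static class Condition {
    final String expr;
    final double estimatedCost;        // relative evaluation cost
    final double estimatedSelectivity; // fraction of rows expected to pass

    Condition(String expr, double cost, double selectivity) {
      this.expr = expr;
      this.estimatedCost = cost;
      this.estimatedSelectivity = selectivity;
    }
  }

  /** Conditions that are cheap and filter out many rows come first. */
  public static List<Condition> sort(List<Condition> conditions) {
    List<Condition> sorted = new ArrayList<>(conditions);
    sorted.sort(Comparator
        .comparingDouble((Condition c) -> c.estimatedSelectivity * c.estimatedCost));
    return sorted;
  }
}
{code}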



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21924) Split text files even if header/footer exists

2019-06-25 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21924:
-
Summary: Split text files even if header/footer exists  (was: Split text 
files if only header/footer is present)

> Split text files even if header/footer exists
> -
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Priority: Major
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable if a header or footer is present.
> If only a header is present, we can find the offset after the first line 
> break and use that as the split start. Similarly for the footer, we can read 
> a few KBs of data at the end, find the offset of the last line break, and use 
> that to determine the data range that can be used for splitting. A few extra 
> reads during split generation are cheaper than not splitting the file at all.
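A rough sketch of how those offsets could be probed with standard Hadoop FileSystem calls; it is only an illustration of the idea above, not the eventual fix, and it ignores multi-byte line endings and headers longer than the probe window.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: find the offset just past the first line break and the
// offset where the last line begins, so splits could be limited to that range.
public class HeaderFooterOffsets {
  public static long firstLineEnd(FileSystem fs, Path file, int probeBytes) throws IOException {
    try (FSDataInputStream in = fs.open(file)) {
      byte[] buf = new byte[probeBytes];
      int read = in.read(buf, 0, buf.length);
      for (int i = 0; i < read; i++) {
        if (buf[i] == '\n') {
          return i + 1;               // data after the header starts here
        }
      }
    }
    return 0;                         // no line break found in the probe window
  }

  public static long lastLineStart(FileSystem fs, Path file, int probeBytes) throws IOException {
    long len = fs.getFileStatus(file).getLen();
    long start = Math.max(0, len - probeBytes);
    try (FSDataInputStream in = fs.open(file)) {
      in.seek(start);
      byte[] buf = new byte[(int) (len - start)];
      int read = in.read(buf, 0, buf.length);
      for (int i = read - 1; i >= 0; i--) {
        if (buf[i] == '\n') {
          return start + i + 1;       // the footer line begins here
        }
      }
    }
    return len;
  }
}
{code}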



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21924) Split text files if only header/footer is present

2019-06-25 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21924:
-
Description: 
https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
 
{code}
int headerCount = 0;
int footerCount = 0;
if (table != null) {
  headerCount = Utilities.getHeaderCount(table);
  footerCount = Utilities.getFooterCount(table, conf);
  if (headerCount != 0 || footerCount != 0) {
// Input file has header or footer, cannot be splitted.
HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, Long.MAX_VALUE);
  }
}
{code}
This piece of code makes CSV files (or any text files with a header/footer) 
non-splittable if a header or footer is present.
If only a header is present, we can find the offset after the first line break 
and use that as the split start. Similarly for the footer, we can read a few 
KBs of data at the end, find the offset of the last line break, and use that to 
determine the data range that can be used for splitting. A few extra reads 
during split generation are cheaper than not splitting the file at all.

  was:
https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
 this piece of code makes the CSV (or any text files with header/footer) files 
not splittable if header or footer is present. 
If only header is present, we can find the offset after first line break and 
use that to split. Similarly for footer, may be read few KB's of data at the 
end and find the last line break offset. Use that to determine the data range 
which can be used for splitting. Few reads during split generation are cheaper 
than not splitting the file at all.  


> Split text files if only header/footer is present
> -
>
> Key: HIVE-21924
> URL: https://issues.apache.org/jira/browse/HIVE-21924
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Affects Versions: 2.4.0, 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Priority: Major
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>  
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
> // Input file has header or footer, cannot be splitted.
> HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, 
> Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with a header/footer) 
> non-splittable if a header or footer is present.
> If only a header is present, we can find the offset after the first line 
> break and use that as the split start. Similarly for the footer, we can read 
> a few KBs of data at the end, find the offset of the last line break, and use 
> that to determine the data range that can be used for splitting. A few extra 
> reads during split generation are cheaper than not splitting the file at all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872850#comment-16872850
 ] 

Hive QA commented on HIVE-21867:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 2 new + 129 unchanged - 0 
fixed = 131 total (was 129) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17740/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17740/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17740/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort 
> them, trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872828#comment-16872828
 ] 

Hive QA commented on HIVE-21915:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972865/HIVE-21915.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17739/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17739/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17739/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12972865/HIVE-21915.03.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972865 - PreCommit-HIVE-Build

> Hive with TEZ UNION ALL and UDTF results in data loss
> -
>
> Key: HIVE-21915
> URL: https://issues.apache.org/jira/browse/HIVE-21915
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Wei Zhang
>Assignee: Wei Zhang
>Priority: Major
> Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, 
> HIVE-21915.03.patch
>
>
> The HQL syntax is like this:
> CREATE TEMPORARY TABLE tez_union_all_loss_data AS
> SELECT xxx, yyy, zzz,1 as tag
> FROM ods_1
> UNION ALL
> SELECT xxx, yyy, zzz, tag
> FROM
> (
> SELECT xxx
> ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy
> ,zzz
> ,2 as tag
> FROM ods_2
> LATERAL VIEW EXPLODE(some_udf(uuu)) team_number AS tb
> ) tbl 
> ;
>  
> With the above HQL, we expect rows with both tag = 2 and tag = 1 to appear. 
> In our case, however, all the rows with tag = 1 are lost.
> Digging deeper, we can see that the two generated map vertices have identical 
> task tmp paths. This happens because, when a UDTF is present, the 
> FileSinkOperator is processed twice while the tmp path is generated in 
> GenTezUtils.removeUnionOperators().
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872827#comment-16872827
 ] 

Hive QA commented on HIVE-17593:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931657/HIVE-17593.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 16340 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testBasicDDLCommands (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testCreateTableLike (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testDatabaseLocation (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testDropPartitionsWithPartialSpec 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testDropTableException (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testEmptyTableInstantiation 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testGetMessageBusTopicName 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testGetPartitionsWithPartialSpec 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testObjectNotFoundException 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testOtherFailure (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema
 (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSchema (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema
 (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionsHCatClientImpl 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testRenameTable (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testReplicationTaskIter 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation 
(batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testUpdateTableSchema (batchId=206)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931657 - PreCommit-HIVE-Build

> DataWritableWriter strip spaces for CHAR type before writing, but predicate 
> generator doesn't do same thing.
> 
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.0, 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, 
> HIVE-17593.4.patch, HIVE-17593.5.patch, HIVE-17593.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DataWritableWriter strips trailing spaces for the CHAR type before writing, 
> but when generating the predicate it does NOT do the same stripping, which 
> should cause missing data!
> In the current version it does not cause missing data, since the predicate is 
> not pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING as the 
> same type, which builds a predicate with trailing spaces.
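A simplified, self-contained illustration of the mismatch using plain Java strings; Hive's actual CHAR handling classes are not used here.

{code:java}
// Simplified illustration: the writer stores the stripped value, but a predicate
// built from the padded CHAR literal no longer matches it.
public class CharPaddingMismatch {
  public static void main(String[] args) {
    int charLength = 10;
    String original = "apple";

    // What DataWritableWriter effectively writes: trailing pad spaces stripped.
    String written = original;                                        // "apple"

    // A CHAR(10) literal right-padded with spaces to its declared length.
    String padded = String.format("%-" + charLength + "s", original); // "apple     "

    // A predicate comparing against the padded literal misses the stored value.
    System.out.println("written.equals(padded) = " + written.equals(padded));        // false
    System.out.println("match after stripping  = " + written.equals(padded.trim())); // true
  }
}
{code}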



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21921) Support for correlated quantified predicates

2019-06-25 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21921:
---
Status: Patch Available  (was: Open)

> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21921.1.patch, HIVE-21921.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21921) Support for correlated quantified predicates

2019-06-25 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21921:
---
Attachment: HIVE-21921.2.patch

> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21921.1.patch, HIVE-21921.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21921) Support for correlated quantified predicates

2019-06-25 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21921:
---
Status: Open  (was: Patch Available)

> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21921.1.patch, HIVE-21921.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872804#comment-16872804
 ] 

Hive QA commented on HIVE-17593:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 1 new + 58 unchanged - 1 fixed 
= 59 total (was 59) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17738/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17738/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17738/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> DataWritableWriter strip spaces for CHAR type before writing, but predicate 
> generator doesn't do same thing.
> 
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.0, 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, 
> HIVE-17593.4.patch, HIVE-17593.5.patch, HIVE-17593.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DataWritableWriter strips trailing spaces for the CHAR type before writing, 
> but when generating the predicate it does NOT do the same stripping, which 
> should cause missing data!
> In the current version it does not cause missing data, since the predicate is 
> not pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING as the 
> same type, which builds a predicate with trailing spaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872795#comment-16872795
 ] 

Hive QA commented on HIVE-21914:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972860/HIVE-21914.03.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16340 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17737/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17737/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17737/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972860 - PreCommit-HIVE-Build

> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, even though they belong in the DDL framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872780#comment-16872780
 ] 

Hive QA commented on HIVE-21914:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
17s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} llap-server in master has 82 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 1 new + 326 unchanged - 17 
fixed = 327 total (was 343) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
17s{color} | {color:red} ql generated 1 new + 2252 unchanged - 1 fixed = 2253 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA238 be a _static_ 
inner class?  At HiveParser.java:inner class?  At HiveParser.java:[lines 
48391-48404] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17737/dev-support/hive-personality.sh
 |
| git revision | master / 967a1cc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17737/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17737/yetus/new-findbugs-ql.html
 |
| modules | C: ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17737/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, even though they belong in the DDL framework.

[jira] [Updated] (HIVE-20854) Sensible Defaults: Hive's Zookeeper heartbeat interval is 20 minutes, change to 2

2019-06-25 Thread Alan Gates (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-20854:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed patch 2 to master.

> Sensible Defaults: Hive's Zookeeper heartbeat interval is 20 minutes, change 
> to 2
> -
>
> Key: HIVE-20854
> URL: https://issues.apache.org/jira/browse/HIVE-20854
> Project: Hive
>  Issue Type: Bug
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20854.1.patch, HIVE-20854.2.patch, 
> HIVE-20854.2.patch, HIVE-20854.2.patch
>
>
> {code}
> HIVE_ZOOKEEPER_SESSION_TIMEOUT("hive.zookeeper.session.timeout", 
> "1200000ms",
> new TimeValidator(TimeUnit.MILLISECONDS),
> "ZooKeeper client's session timeout (in milliseconds). The client is 
> disconnected, and as a result, all locks released, \n" +
> "if a heartbeat is not sent in the timeout."),
> {code}
> That's 1,200,000 ms, which is far too long for all practical purposes: a 20-minute 
> outage when a node fails is too long for JDBC load balancing, LLAP failure 
> tolerance, and lock manager expiry.
> Change the default to 2 minutes.
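A minimal sketch of how a deployment could apply the shorter timeout programmatically, assuming a plain Hadoop Configuration object and the property name quoted above (illustrative only; the same value can of course be set in hive-site.xml instead):

{code}
import org.apache.hadoop.conf.Configuration;

public class ZkSessionTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 2 minutes = 120000 ms; Hive's TimeValidator also accepts unit suffixes such as "2min".
    conf.set("hive.zookeeper.session.timeout", "120000ms");
    System.out.println(conf.get("hive.zookeeper.session.timeout"));
  }
}
{code}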



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872753#comment-16872753
 ] 

Hive QA commented on HIVE-21907:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972855/HIVE-21907.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17735/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17735/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17735/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12972855/HIVE-21907.2.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972855 - PreCommit-HIVE-Build

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue; that way we can disable the node by setting both 
> to 0.
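A hypothetical sketch of the shape such a call could take; the interface and method names below are illustrative only and not the actual LlapManagementProtocol API:

{code}
/** Illustrative only: not the real LlapManagementProtocol interface. */
public interface LlapDaemonCapacityClient {
  /** Resize the daemon at runtime; setting both values to 0 effectively disables the node. */
  void setCapacity(String daemonHost, int numExecutors, int waitQueueSize);
}

class LlapCapacityExample {
  static void disableNode(LlapDaemonCapacityClient client, String host) {
    // Drain the node by shrinking it to zero executors and an empty wait queue.
    client.setCapacity(host, 0, 0);
  }
}
{code}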



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872755#comment-16872755
 ] 

Hive QA commented on HIVE-18735:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972856/HIVE-18735.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17736/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17736/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17736/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12972856/HIVE-18735.03.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972856 - PreCommit-HIVE-Build

> Create table like loses transactional attribute
> ---
>
> Key: HIVE-18735
> URL: https://issues.apache.org/jira/browse/HIVE-18735
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Eugene Koifman
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, 
> HIVE-18735.03.patch
>
>
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true');
> create table T like T1;
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>  
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1518813564')
> {noformat}
> Specifying props explicitly does work 
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true');
> create table T like T1 TBLPROPERTIES ('transactional'='true');
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t'
> TBLPROPERTIES (
>   'transactional'='true',
>   'transactional_properties'='default',
>   'transient_lastDdlTime'='1518814111')
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872752#comment-16872752
 ] 

Hive QA commented on HIVE-21225:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972852/HIVE-21225.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 60 failed/errored test(s), 16339 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats5] (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] 
(batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_sizebug] 
(batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_exim] 
(batchId=186)
org.apache.hadoop.hive.ql.TestTxnCommands.testMmExim (batchId=341)
org.apache.hadoop.hive.ql.TestTxnCommands.testNonAcidToAcidConversion01 
(batchId=341)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion1 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion2 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion3 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2.testOriginalFileReaderWhenNonAcidConvertedToAcid
 (batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2.updateDeletePartitioned (batchId=322)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion1
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion2
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion3
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testOriginalFileReaderWhenNonAcidConvertedToAcid
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.updateDeletePartitioned
 (batchId=336)
org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testMmExim
 (batchId=322)
org.apache.hadoop.hive.ql.TestTxnExIm.testImport (batchId=322)
org.apache.hadoop.hive.ql.TestTxnExIm.testImportNoTarget (batchId=322)
org.apache.hadoop.hive.ql.TestTxnExIm.testMM (batchId=322)
org.apache.hadoop.hive.ql.TestTxnExIm.testMMCreate (batchId=322)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadData (batchId=298)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=298)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataUpdate (batchId=298)
org.apache.hadoop.hive.ql.TestTxnLoadData.testMultiStatement (batchId=298)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testCompactStatsGather (batchId=322)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testEmptyCompactionResult 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testCompactStatsGather 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testEmptyCompactionResult 
(batchId=322)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=322)
org.apache.hadoop.hive.ql.io.TestAcidUtils.testObsoleteOriginals (batchId=310)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderFooterSerialize
 (batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderFooterSerializeWithDeltas
 (batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderNoFooterSerialize
 (batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderNoFooterSerializeWithDeltas
 (batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testSplitGenReadOps 
(batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testSplitGenReadOpsLocalCache
 (batchId=313)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testSplitGenReadOpsLocalCacheChangeFileLen
 (batchId=313)

[jira] [Updated] (HIVE-21198) Introduce a database object reference class

2019-06-25 Thread Alan Gates (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-21198:
--
Status: Open  (was: Patch Available)

This patch looks good, but it's very out of date.  If you refresh it I'll take 
another look.

> Introduce a database object reference class
> ---
>
> Key: HIVE-21198
> URL: https://issues.apache.org/jira/browse/HIVE-21198
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21198.1.patch, HIVE-21198.2.patch, 
> HIVE-21198.3.patch, HIVE-21198.4.patch, HIVE-21198.5.patch, 
> HIVE-21198.6.patch, HIVE-21198.7.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> There are many places in which "{databasename}.{tablename}" is passed as a 
> single string; in other places the two travel as separate arguments.
> The idea is to introduce a simple immutable class with 2 fields and pass 
> this information together as one unit. Getting this right is a prerequisite for 
> enabling dots in table names (HIVE-16907, HIVE-21151).
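A minimal sketch of such an immutable holder; the class and accessor names are illustrative, not necessarily the ones the patch introduces:

{code}
import java.util.Objects;

/** Illustrative immutable reference to a database object: a (database, table) pair. */
public final class DatabaseObjectRef {
  private final String database;
  private final String table;

  public DatabaseObjectRef(String database, String table) {
    this.database = Objects.requireNonNull(database);
    this.table = Objects.requireNonNull(table);
  }

  public String getDatabase() { return database; }
  public String getTable() { return table; }

  /** Rendering kept in one place, so a future escaping scheme for dots only changes here. */
  public String getDbTable() { return database + "." + table; }

  @Override public boolean equals(Object o) {
    if (!(o instanceof DatabaseObjectRef)) {
      return false;
    }
    DatabaseObjectRef other = (DatabaseObjectRef) o;
    return database.equals(other.database) && table.equals(other.table);
  }

  @Override public int hashCode() { return Objects.hash(database, table); }

  @Override public String toString() { return getDbTable(); }
}
{code}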



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14737) Problem accessing /logs in a Kerberized Hive Server 2 Web UI

2019-06-25 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872738#comment-16872738
 ] 

Rajkumar Singh commented on HIVE-14737:
---

[~daijy] Thanks for the review. I have refactored the code as suggested; please 
review.

> Problem accessing /logs in a Kerberized Hive Server 2 Web UI
> 
>
> Key: HIVE-14737
> URL: https://issues.apache.org/jira/browse/HIVE-14737
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Matyas Orhidi
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-14737.01.patch, HIVE-14737.02.patch, 
> HIVE-14737.03.patch, HIVE-14737.patch
>
>
> The /logs menu fails with error [1] when the cluster is Kerberized. Other 
> menu items are working properly.
> [1] HTTP ERROR: 401
> Problem accessing /logs/. Reason:
> Unauthenticated users are not authorized to access this page.
> Powered by Jetty://



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-14737) Problem accessing /logs in a Kerberized Hive Server 2 Web UI

2019-06-25 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-14737:
--
Attachment: HIVE-14737.03.patch

> Problem accessing /logs in a Kerberized Hive Server 2 Web UI
> 
>
> Key: HIVE-14737
> URL: https://issues.apache.org/jira/browse/HIVE-14737
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Matyas Orhidi
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-14737.01.patch, HIVE-14737.02.patch, 
> HIVE-14737.03.patch, HIVE-14737.patch
>
>
> The /logs menu fails with error [1] when the cluster is Kerberized. Other 
> menu items are working properly.
> [1] HTTP ERROR: 401
> Problem accessing /logs/. Reason:
> Unauthenticated users are not authorized to access this page.
> Powered by Jetty://



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-14737) Problem accessing /logs in a Kerberized Hive Server 2 Web UI

2019-06-25 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-14737:
--
Status: Open  (was: Patch Available)

> Problem accessing /logs in a Kerberized Hive Server 2 Web UI
> 
>
> Key: HIVE-14737
> URL: https://issues.apache.org/jira/browse/HIVE-14737
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Matyas Orhidi
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-14737.01.patch, HIVE-14737.02.patch, 
> HIVE-14737.03.patch, HIVE-14737.patch
>
>
> The /logs menu fails with error [1] when the cluster is Kerberized. Other 
> menu items are working properly.
> [1] HTTP ERROR: 401
> Problem accessing /logs/. Reason:
> Unauthenticated users are not authorized to access this page.
> Powered by Jetty://



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=267051=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-267051
 ]

ASF GitHub Bot logged work on HIVE-21867:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 21:28
Start Date: 25/Jun/19 21:28
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #687: HIVE-21867
URL: https://github.com/apache/hive/pull/687#discussion_r297401637
 
 

 ##
 File path: ql/src/test/results/clientpositive/llap/mergejoin.q.out
 ##
 @@ -41,8 +41,8 @@ STAGE PLANS:
 Filter Vectorization:
 className: VectorFilterOperator
 native: true
-predicateExpression: FilterExprAndExpr(children: 
SelectColumnIsNotNull(col 0:string), FilterExprAndExpr(children: 
FilterStringColumnBetweenDynamicValue(col 0:string, left NULL, right NULL), 
VectorInBloomFilterColDynamicValue))
-predicate: (key is not null and (key BETWEEN 
DynamicValue(RS_7_b_key_min) AND DynamicValue(RS_7_b_key_max) and 
in_bloom_filter(key, DynamicValue(RS_7_b_key_bloom_filter (type: boolean)
 
 Review comment:
   Positive change :thumbsup:
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 267051)
Time Spent: 0.5h  (was: 20m)

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=267050=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-267050
 ]

ASF GitHub Bot logged work on HIVE-21867:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 21:28
Start Date: 25/Jun/19 21:28
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #687: HIVE-21867
URL: https://github.com/apache/hive/pull/687#discussion_r297397916
 
 

 ##
 File path: ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
 ##
 @@ -1421,7 +1421,7 @@ STAGE PLANS:
   outputColumnNames: _col1
   input vertices:
 1 Map 5
-  Statistics: Num rows: 25 Data size: 2225 Basic 
stats: COMPLETE Column stats: COMPLETE
+  Statistics: Num rows: 4 Data size: 356 Basic stats: 
COMPLETE Column stats: COMPLETE
 
 Review comment:
   I wonder why stats estimation changed
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 267050)
Time Spent: 20m  (was: 10m)

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=267052=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-267052
 ]

ASF GitHub Bot logged work on HIVE-21867:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 21:28
Start Date: 25/Jun/19 21:28
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #687: HIVE-21867
URL: https://github.com/apache/hive/pull/687#discussion_r297401189
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java
 ##
 @@ -1766,6 +1774,59 @@ private void 
removeSemijoinOptimizationByBenefit(OptimizeTezProcContext procCtx)
   GenTezUtils.removeBranch(rs);
   GenTezUtils.removeSemiJoinOperator(procCtx.parseContext, rs, ts);
 }
+
+for (Entry> e : 
globalReductionFactorMap.asMap().entrySet()) {
 
 Review comment:
   Creating a separate method for this and adding comments to explain why we are 
doing it would be nice.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 267052)
Time Spent: 40m  (was: 0.5h)

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21869) Clean up the Kafka storage handler readme and examples

2019-06-25 Thread Alan Gates (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-21869:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch 3 committed to master.  Thanks Kristopher.

> Clean up the Kafka storage handler readme and examples
> --
>
> Key: HIVE-21869
> URL: https://issues.apache.org/jira/browse/HIVE-21869
> Project: Hive
>  Issue Type: Improvement
>  Components: kafka integration
>Affects Versions: 4.0.0
>Reporter: Kristopher Kane
>Assignee: Kristopher Kane
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21869.1.patch, HIVE-21869.2.patch, 
> HIVE-21869.3.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872724#comment-16872724
 ] 

Hive QA commented on HIVE-21225:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
13s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 16 new + 169 unchanged - 1 
fixed = 185 total (was 170) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
21s{color} | {color:red} ql generated 1 new + 2253 unchanged - 0 fixed = 2254 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Unread field:AcidUtils.java:[line 1400] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17734/dev-support/hive-personality.sh
 |
| git revision | master / aed7500 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17734/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17734/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17734/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17734/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, 
> HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS api which 
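For context, a minimal sketch of the caching idea, assuming the standard Hadoop FileSystem API: fetch one recursive listing up front and answer subsequent lookups from memory. This is illustrative only, not the actual patch:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

/** Illustrative only: fetch the recursive listing once, then filter it locally. */
public class DirListingSnapshot {
  private final List<LocatedFileStatus> files = new ArrayList<>();

  public DirListingSnapshot(FileSystem fs, Path root) throws IOException {
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(root, true);  // single recursive call
    while (it.hasNext()) {
      files.add(it.next());
    }
  }

  /** Later checks (base/delta/original discovery) read the snapshot, not the file system. */
  public List<LocatedFileStatus> under(Path dir) {
    List<LocatedFileStatus> result = new ArrayList<>();
    String prefix = dir.toUri().getPath() + "/";
    for (LocatedFileStatus f : files) {
      if (f.getPath().toUri().getPath().startsWith(prefix)) {
        result.add(f);
      }
    }
    return result;
  }
}
{code}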

[jira] [Commented] (HIVE-21821) Backport HIVE-21739 to branch-3.1

2019-06-25 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872714#comment-16872714
 ] 

Alan Gates commented on HIVE-21821:
---

Committed patch to branch-3.

> Backport HIVE-21739 to branch-3.1
> -
>
> Key: HIVE-21821
> URL: https://issues.apache.org/jira/browse/HIVE-21821
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Aditya Shah
>Assignee: Aditya Shah
>Priority: Major
> Fix For: 3.1.2
>
> Attachments: HIVE-21821.branch-3.1.1.patch, 
> HIVE-21821.branch-3.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21902) HiveServer2 UI: jetty response header needs X-Frame-Options

2019-06-25 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-21902:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Patch pushed to master. Thanks Rajkumar!

> HiveServer2 UI: jetty response header needs X-Frame-Options
> ---
>
> Key: HIVE-21902
> URL: https://issues.apache.org/jira/browse/HIVE-21902
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>  Labels: security
> Fix For: 4.0.0
>
> Attachments: HIVE-21902.01.patch, HIVE-21902.patch
>
>
> Some vulnerabilities have been reported for the HiveServer2 UI:
> X-Frame-Options or Content-Security-Policy: frame-ancestors HTTP Headers 
> missing on port 10002. 
> {code}
> GET / HTTP/1.1 
> Host: HOSTNAME:10002 
> Connection: Keep-Alive 
> X-XSS-Protection HTTP Header missing on port 10002. 
> X-Content-Type-Options HTTP Header missing on port 10002. 
> {code}
> after the proposed changes
> {code}
> HTTP/1.1 200 OK
> Date: Thu, 20 Jun 2019 05:29:59 GMT
> Content-Type: text/html;charset=utf-8
> X-Content-Type-Options: nosniff
> X-FRAME-OPTIONS: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Set-Cookie: JSESSIONID=15kscuow9cmy7qms6dzaxllqt;Path=/
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Content-Length: 3824
> Server: Jetty(9.3.25.v20180904)
> {code}
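A minimal sketch of how such headers can be stamped onto every response with a plain servlet filter, assuming the javax.servlet API used by Jetty 9; this is illustrative and not necessarily how the patch wires it into the HiveServer2 web UI:

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/** Illustrative filter that adds the missing security headers to every response. */
public class SecurityHeaderFilter implements Filter {
  @Override public void init(FilterConfig filterConfig) { }

  @Override public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletResponse httpResp = (HttpServletResponse) resp;
    httpResp.setHeader("X-FRAME-OPTIONS", "SAMEORIGIN");
    httpResp.setHeader("X-XSS-Protection", "1; mode=block");
    httpResp.setHeader("X-Content-Type-Options", "nosniff");
    chain.doFilter(req, resp);
  }

  @Override public void destroy() { }
}
{code}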



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21787) Metastore table cache LRU eviction

2019-06-25 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-21787:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Closing this one. Will track the unit tests in another ticket.

> Metastore table cache LRU eviction
> --
>
> Key: HIVE-21787
> URL: https://issues.apache.org/jira/browse/HIVE-21787
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Sam An
>Assignee: Sam An
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21787.1.patch, HIVE-21787.10.patch, 
> HIVE-21787.11.patch, HIVE-21787.13.patch, HIVE-21787.14.patch, 
> HIVE-21787.15.patch, HIVE-21787.2.patch, HIVE-21787.3.patch, 
> HIVE-21787.4.patch, HIVE-21787.5.patch, HIVE-21787.6.patch, 
> HIVE-21787.7.patch, HIVE-21787.8.patch, HIVE-21787.9.patch
>
>
> The metastore currently uses black/white lists to specify patterns of tables to 
> load into the cache. The cache is loaded in a one-shot "prewarm" and updated by a 
> background thread. This is not a very efficient design. 
> In this feature, we enhance the table cache with LRU eviction to improve 
> cache utilization.
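A minimal sketch of the LRU idea using Guava's size-bounded cache; the key/value types and the capacity below are placeholders, not the actual CachedStore implementation:

{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

/** Illustrative only: a size-bounded, LRU-style table cache instead of a prewarmed full copy. */
public class TableLruCacheSketch {
  // Guava evicts the least recently accessed entries once maximumSize is exceeded.
  private final Cache<String, Object> tableCache = CacheBuilder.newBuilder()
      .maximumSize(10_000)          // hypothetical capacity
      .recordStats()
      .build();

  public Object get(String dbDotTable) {
    return tableCache.getIfPresent(dbDotTable);   // miss -> caller falls back to the RDBMS
  }

  public void put(String dbDotTable, Object table) {
    tableCache.put(dbDotTable, table);
  }
}
{code}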



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21878) Metric for AM to show whether it is currently running a DAG

2019-06-25 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-21878:
--
Attachment: HIVE-21878.1.patch

> Metric for AM to show whether it is currently running a DAG
> ---
>
> Key: HIVE-21878
> URL: https://issues.apache.org/jira/browse/HIVE-21878
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-21878.1.patch, HIVE-21878.1.patch
>
>
> Add a basic gauge metric to indicate whether a Tez AM is currently running a 
> DAG for a Hive query.
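A minimal sketch of such a gauge using the Dropwizard (Codahale) metrics API; the metric name and wiring are illustrative, not the actual Tez AM plugin code:

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

/** Illustrative only: expose 1 while a DAG is running, 0 otherwise. */
public class DagRunningGauge {
  private final AtomicBoolean dagRunning = new AtomicBoolean(false);

  public DagRunningGauge(MetricRegistry registry) {
    registry.register("am.dag.running", (Gauge<Integer>) () -> dagRunning.get() ? 1 : 0);
  }

  public void onDagStarted()  { dagRunning.set(true); }
  public void onDagFinished() { dagRunning.set(false); }
}
{code}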



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872695#comment-16872695
 ] 

Hive QA commented on HIVE-21922:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972894/HIVE-21922.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16307 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17733/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17733/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17733/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972894 - PreCommit-HIVE-Build

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch, HIVE-21922.1.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.
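A minimal sketch of the localization approach, assuming the standard YARN client API: the keytab sits on a restricted HDFS path and is registered as a LocalResource, so YARN ships it to the container instead of requiring a preinstalled copy. Illustrative only, not the code in the patch:

{code}
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class KeytabLocalizer {
  /** Registers an HDFS keytab as a YARN-localized file for the container. */
  public static void addKeytabResource(FileSystem fs, Path hdfsKeytab,
      Map<String, LocalResource> localResources) throws IOException {
    FileStatus status = fs.getFileStatus(hdfsKeytab);
    LocalResource keytab = LocalResource.newInstance(
        ConverterUtils.getYarnUrlFromPath(hdfsKeytab),
        LocalResourceType.FILE,
        LocalResourceVisibility.APPLICATION,   // visible to this application only
        status.getLen(),
        status.getModificationTime());
    // The container then reads the keytab from its local working directory.
    localResources.put("hive.keytab", keytab);
  }
}
{code}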



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14737) Problem accessing /logs in a Kerberized Hive Server 2 Web UI

2019-06-25 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872676#comment-16872676
 ] 

Daniel Dai commented on HIVE-14737:
---

It looks good in general. One small comment: can we refactor a bit to reuse 
setupSpnegoFilter? Most of the code is duplicated.
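For reference, a minimal sketch of the kind of shared helper being suggested: one method that attaches the Hadoop SPNEGO AuthenticationFilter to any Jetty context, so the UI context and the /logs context go through the same code path. Illustrative only; this is not the actual setupSpnegoFilter in Hive's HttpServer:

{code}
import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class SpnegoFilterHelper {
  /** Shared helper: protect every path in the given context with SPNEGO authentication. */
  public static void addSpnegoFilter(ServletContextHandler context,
      String principal, String keytab) {
    FilterHolder holder = new FilterHolder(AuthenticationFilter.class);
    holder.setInitParameter("type", "kerberos");
    holder.setInitParameter("kerberos.principal", principal);
    holder.setInitParameter("kerberos.keytab", keytab);
    context.addFilter(holder, "/*", EnumSet.of(DispatcherType.REQUEST));
  }
}
{code}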

> Problem accessing /logs in a Kerberized Hive Server 2 Web UI
> 
>
> Key: HIVE-14737
> URL: https://issues.apache.org/jira/browse/HIVE-14737
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Matyas Orhidi
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-14737.01.patch, HIVE-14737.02.patch, 
> HIVE-14737.patch
>
>
> The /logs menu fails with error [1] when the cluster is Kerberized. Other 
> menu items are working properly.
> [1] HTTP ERROR: 401
> Problem accessing /logs/. Reason:
> Unauthenticated users are not authorized to access this page.
> Powered by Jetty://



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21902) HiveServer2 UI: jetty response header needs X-Frame-Options

2019-06-25 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872666#comment-16872666
 ] 

Daniel Dai commented on HIVE-21902:
---

+1

> HiveServer2 UI: jetty response header needs X-Frame-Options
> ---
>
> Key: HIVE-21902
> URL: https://issues.apache.org/jira/browse/HIVE-21902
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>  Labels: security
> Attachments: HIVE-21902.01.patch, HIVE-21902.patch
>
>
> Some vulnerabilities have been reported for the HiveServer2 UI:
> X-Frame-Options or Content-Security-Policy: frame-ancestors HTTP Headers 
> missing on port 10002. 
> {code}
> GET / HTTP/1.1 
> Host: HOSTNAME:10002 
> Connection: Keep-Alive 
> X-XSS-Protection HTTP Header missing on port 10002. 
> X-Content-Type-Options HTTP Header missing on port 10002. 
> {code}
> after the proposed changes
> {code}
> HTTP/1.1 200 OK
> Date: Thu, 20 Jun 2019 05:29:59 GMT
> Content-Type: text/html;charset=utf-8
> X-Content-Type-Options: nosniff
> X-FRAME-OPTIONS: SAMEORIGIN
> X-XSS-Protection: 1; mode=block
> Set-Cookie: JSESSIONID=15kscuow9cmy7qms6dzaxllqt;Path=/
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Content-Length: 3824
> Server: Jetty(9.3.25.v20180904)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872662#comment-16872662
 ] 

Hive QA commented on HIVE-21922:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
10s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} llap-server in master has 82 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 8 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17733/dev-support/hive-personality.sh
 |
| git revision | master / aed7500 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17733/yetus/whitespace-tabs.txt
 |
| modules | C: common ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17733/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch, HIVE-21922.1.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above 

[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872632#comment-16872632
 ] 

Hive QA commented on HIVE-18735:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972856/HIVE-18735.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16339 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17732/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17732/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17732/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972856 - PreCommit-HIVE-Build

> Create table like loses transactional attribute
> ---
>
> Key: HIVE-18735
> URL: https://issues.apache.org/jira/browse/HIVE-18735
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Eugene Koifman
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, 
> HIVE-18735.03.patch
>
>
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true');
> create table T like T1;
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>  
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1518813564')
> {noformat}
> Specifying props explicitly does work 
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true');
> create table T like T1 TBLPROPERTIES ('transactional'='true');
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t'
> TBLPROPERTIES (
>   'transactional'='true',
>   'transactional_properties'='default',
>   'transient_lastDdlTime'='1518814111')
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18735) Create table like loses transactional attribute

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872594#comment-16872594
 ] 

Hive QA commented on HIVE-18735:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
19s{color} | {color:red} ql generated 1 new + 2253 unchanged - 0 fixed = 2254 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Possible null pointer dereference of likeTable in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(ASTNode, 
QB, SemanticAnalyzer$PlannerContext)  Dereferenced at 
SemanticAnalyzer.java:likeTable in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(ASTNode, 
QB, SemanticAnalyzer$PlannerContext)  Dereferenced at 
SemanticAnalyzer.java:[line 13594] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17732/dev-support/hive-personality.sh
 |
| git revision | master / aed7500 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17732/yetus/new-findbugs-ql.html
 |
| modules | C: ql hbase-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17732/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Create table like loses transactional attribute
> ---
>
> Key: HIVE-18735
> URL: https://issues.apache.org/jira/browse/HIVE-18735
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Eugene Koifman
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, 
> HIVE-18735.03.patch
>
>
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true');

[jira] [Commented] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872568#comment-16872568
 ] 

Hive QA commented on HIVE-21907:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972855/HIVE-21907.2.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16346 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17731/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17731/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17731/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972855 - PreCommit-HIVE-Build

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue; that way we can disable the node by setting both 
> to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21905) Generics improvement around the FetchOperator class

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21905:
---
Fix Version/s: 4.0.0

> Generics improvement around the FetchOperator class
> ---
>
> Key: HIVE-21905
> URL: https://issues.apache.org/jira/browse/HIVE-21905
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-21905.1.patch, HIVE-21905.1.patch, 
> HIVE-21905.2.patch
>
>
> In and around the org.apache.hadoop.hive.ql.exec.FetchOperator class the 
> generics are handled poorly. Lots of declarations are missing generics, 
> which makes lots of noise in the IDE and makes it hard to be sure of the 
> correctness of the code.
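A small before/after illustration of the kind of cleanup involved; the declarations are hypothetical, not actual lines from FetchOperator:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.mapred.InputSplit;

public class GenericsExample {
  // Raw type: the compiler warns, and callers can insert anything.
  @SuppressWarnings("rawtypes")
  static List rawSplits() {
    List splits = new ArrayList();
    return splits;
  }

  // With generics: the element type is explicit and checked at compile time.
  static List<InputSplit> typedSplits() {
    List<InputSplit> splits = new ArrayList<>();
    return splits;
  }
}
{code}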



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21905) Generics improvement around the FetchOperator class

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21905:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master, thanks [~isuller].

> Generics improvement around the FetchOperator class
> ---
>
> Key: HIVE-21905
> URL: https://issues.apache.org/jira/browse/HIVE-21905
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-21905.1.patch, HIVE-21905.1.patch, 
> HIVE-21905.2.patch
>
>
> In and around the org.apache.hadoop.hive.ql.exec.FetchOperator class the 
> generics are handled poorly. Lots of declarations are missing generics, 
> which makes lots of noise in the IDE and makes it hard to be sure of the 
> correctness of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21905) Generics improvement around the FetchOperator class

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872560#comment-16872560
 ] 

Jesus Camacho Rodriguez commented on HIVE-21905:


+1

> Generics improvement around the FetchOperator class
> ---
>
> Key: HIVE-21905
> URL: https://issues.apache.org/jira/browse/HIVE-21905
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-21905.1.patch, HIVE-21905.1.patch, 
> HIVE-21905.2.patch
>
>
> In and around the org.apache.hadoop.hive.ql.exec.FetchOperator class the 
> generics are handled poorly. Lots of declarations are missing generics, 
> which makes lots of noise in the IDE and makes it hard to be sure of the 
> correctness of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872539#comment-16872539
 ] 

Adam Szita commented on HIVE-21922:
---

Thanks [~pvary]

I amended my patch with:
 * clearer documentation parts
 * in TezSessionState, I'm no longer writing the keytab file path to this.conf 
but rather to tezConf. This is required so that, when opening a new Tez session, we 
will still see "" for hive.llap.task.scheduler.am.registry.keytab.file if it was "" 
before.

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch, HIVE-21922.1.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.
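
For reference, a sketch of how these settings look under the current pre-installed-keytab 
strategy; the paths below are made-up examples, not shipped defaults:

{noformat}
# Both values are local filesystem paths today, so the keytab must already
# exist on every node that could host a Tez AM or an LLAP daemon.
hive.llap.daemon.keytab.file=/etc/security/keytabs/hive.service.keytab
hive.llap.task.scheduler.am.registry.keytab.file=/etc/security/keytabs/hive.service.keytab
{noformat}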



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21922:
--
Attachment: HIVE-21922.1.patch

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch, HIVE-21922.1.patch
>
>
> In secure clusters, LLAP has to be able to reach keytab files for Kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21886:
--
Labels: DR Replication pull-request-available  (was: DR Replication)

> REPL - With table list - Handle rename events during replace policy
> ---
>
> Key: HIVE-21886
> URL: https://issues.apache.org/jira/browse/HIVE-21886
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21886.01.patch
>
>
> If some rename events are found to be dumped and replayed while the replace 
> policy is getting executed, the handler needs to take the policy inclusion into 
> account in both the old and the new policy for each table name.
>  1. Create a list of tables to be bootstrapped. 
>   2. During handling of alter table, if the alter type is rename: 
>       1. If the old table name is present in the list of tables to be 
> bootstrapped, remove it.
>        2. If the new table name matches the new policy, add it to the list 
> of tables to be bootstrapped.
>   3. During handling of drop table: 
>        1. If the table is in the list of tables to be bootstrapped, then 
> remove it and ignore the event.
>   4. During other event handling: 
>        1. If the table is in the list of tables to be bootstrapped, 
> then ignore the event.
>  
> Rename handling during replace policy
>  # Old name not matching old policy – The old table will not be there at the 
> target cluster. The table will not be returned by get-all-table.
>  ## Old name is not matching new policy
>  ### New name not matching old policy
>   New name not matching new policy
>  * Ignore the event, no need to do anything.
>   New name matching new policy
>  * The table will be returned by get-all-table. Replace policy handler 
> will bootstrap this table as its matching new policy and not matching old 
> policy.
>  * All the future events will be ignored as part of check added by 
> replace policy handling.
>  * All the event with old table name will anyways be ignored as the old 
> name is not matching the new policy.
>  ### New name matching old policy
>   New name not matching new policy
>  * As the new name is not matching the new policy, the table need not be 
> replicated.
>  * As the old name is not matching the new policy, the rename events will 
> be ignored.
>  * So nothing to be done for this scenario.
>   New name matching new policy
>  * As the new name is matching both old and new policy, replace handler 
> will not bootstrap the table.
>  * Add the table to the list of tables to be bootstrapped.
>  * Ignore all the events with new name.
>  * If there is a drop event for the table (with new name), then remove 
> the table from the list of tables to be bootstrapped.
>  * In case of rename event (double rename)
>  ** If the new name satisfies the table pattern, then add the new name to 
> the list of tables to be bootstrapped and remove the old name from the list 
> of tables to be bootstrapped.
>  ** If the new name does not satisfy the table pattern, then just remove the table name 
> from the list of tables to be bootstrapped.
>  ## Old name is matching new policy – As per the replace policy handler, which 
> checks based on the old table, the table should be bootstrapped and the event should 
> be ignored. But the rename handler should decide based on the new name. The old table 
> name will not be returned by get-all-table, so the replace handler will not do 
> anything for the old table.
>  ### New name not matching old policy
>   New name not matching new policy
>  * As the old table is not there at target and new name is not matching 
> new policy. Ignore the event.
>  * No need to add the table to the list of tables to be bootstrapped.
>  * All the subsequent events will be ignored as the new name is not 
> matching the new policy.
>   New name matching new policy
>  * As the new name is not matching old policy but matching new policy, 
> the table will be bootstrapped by replace policy handler. So rename event 
> need not add this table to list of table to be bootstrapped.
>  * All the future events will be ignored by replace policy handler.
>  * For rename event (double rename)
>  ** If there is a rename, the table (with intermittent new name) will not 
> be present and thus replace handler will not bootstrap the table.
>  ** So if the new name (the latest one) is matching the new policy, then 
> add it to the list of table to be bootstrapped.
>  ** And If the new name (the latest one)  is not matching the new policy, 
> then just ignore the event as the  
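
A minimal, hypothetical sketch of the bookkeeping described in steps 1-4 above; the class, field, and method names (tablesToBootstrap, newPolicy, etc.) are illustrative, not from the attached patch:

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative only: tracks which tables still need a bootstrap dump while
// rename/drop events are replayed under a replaced (old -> new) policy.
public class RenameEventBookkeeping {
  private final Set<String> tablesToBootstrap = new HashSet<>();
  private final Predicate<String> newPolicy;   // e.g. a table-name pattern match

  public RenameEventBookkeeping(Predicate<String> newPolicy) {
    this.newPolicy = newPolicy;
  }

  /** Step 2: alter-table event of type RENAME. */
  public void onRename(String oldName, String newName) {
    tablesToBootstrap.remove(oldName);          // 2.1: drop the stale entry
    if (newPolicy.test(newName)) {              // 2.2: new name is now in scope
      tablesToBootstrap.add(newName);
    }
  }

  /** Step 3: drop-table event. Returns true if the event should be ignored. */
  public boolean onDrop(String name) {
    return tablesToBootstrap.remove(name);      // 3.1: the table will never be bootstrapped
  }

  /** Step 4: any other event. Returns true if the event should be ignored. */
  public boolean shouldIgnore(String name) {
    return tablesToBootstrap.contains(name);    // 4.1: the bootstrap dump will cover it
  }
}
{code}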

[jira] [Work logged] (HIVE-21886) REPL - With table list - Handle rename events during replace policy

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21886?focusedWorklogId=266841=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266841
 ]

ASF GitHub Bot logged work on HIVE-21886:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 16:47
Start Date: 25/Jun/19 16:47
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #688: HIVE-21886 : 
REPL - With table list - Handle rename events during replace policy
URL: https://github.com/apache/hive/pull/688
 
 
   …
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266841)
Time Spent: 10m
Remaining Estimate: 0h

> REPL - With table list - Handle rename events during replace policy
> ---
>
> Key: HIVE-21886
> URL: https://issues.apache.org/jira/browse/HIVE-21886
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-21886.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If some rename events are found to be dumped and replayed while the replace 
> policy is getting executed, the handler needs to take the policy inclusion into 
> account in both the old and the new policy for each table name.
>  1. Create a list of tables to be bootstrapped. 
>   2. During handling of alter table, if the alter type is rename: 
>       1. If the old table name is present in the list of tables to be 
> bootstrapped, remove it.
>        2. If the new table name matches the new policy, add it to the list 
> of tables to be bootstrapped.
>   3. During handling of drop table: 
>        1. If the table is in the list of tables to be bootstrapped, then 
> remove it and ignore the event.
>   4. During other event handling: 
>        1. If the table is in the list of tables to be bootstrapped, 
> then ignore the event.
>  
> Rename handling during replace policy
>  # Old name not matching old policy – The old table will not be there at the 
> target cluster. The table will not be returned by get-all-table.
>  ## Old name is not matching new policy
>  ### New name not matching old policy
>   New name not matching new policy
>  * Ignore the event, no need to do anything.
>   New name matching new policy
>  * The table will be returned by get-all-table. Replace policy handler 
> will bootstrap this table as its matching new policy and not matching old 
> policy.
>  * All the future events will be ignored as part of check added by 
> replace policy handling.
>  * All the event with old table name will anyways be ignored as the old 
> name is not matching the new policy.
>  ### New name matching old policy
>   New name not matching new policy
>  * As the new name is not matching the new policy, the table need not be 
> replicated.
>  * As the old name is not matching the new policy, the rename events will 
> be ignored.
>  * So nothing to be done for this scenario.
>   New name matching new policy
>  * As the new name is matching both old and new policy, replace handler 
> will not bootstrap the table.
>  * Add the table to the list of tables to be bootstrapped.
>  * Ignore all the events with new name.
>  * If there is a drop event for the table (with new name), then remove 
> the table from the list of tables to be bootstrapped.
>  * In case of rename event (double rename)
>  ** If the new name satisfies the table pattern, then add the new name to 
> the list of tables to be bootstrapped and remove the old name from the list 
> of tables to be bootstrapped.
>  ** If the new name does not satisfy the table pattern, then just remove the table name 
> from the list of tables to be bootstrapped.
>  ## Old name is matching new policy – As per the replace policy handler, which 
> checks based on the old table, the table should be bootstrapped and the event should 
> be ignored. But the rename handler should decide based on the new name. The old table 
> name will not be returned by get-all-table, so the replace handler will not do 
> anything for the old table.
>  ### New name not matching old policy
>   New name not matching new policy
>  * As the old table is not there at target and new name is not matching 
> new policy. Ignore the event.
>  * No need to add the table to the list of tables to be bootstrapped.
>  * All the subsequent events 

[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy

2019-06-25 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21886:
---
Status: Patch Available  (was: Open)

> REPL - With table list - Handle rename events during replace policy
> ---
>
> Key: HIVE-21886
> URL: https://issues.apache.org/jira/browse/HIVE-21886
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR, Replication
> Attachments: HIVE-21886.01.patch
>
>
> If some rename events are found to be dumped and replayed while the replace 
> policy is getting executed, the handler needs to take the policy inclusion into 
> account in both the old and the new policy for each table name.
>  1. Create a list of tables to be bootstrapped. 
>   2. During handling of alter table, if the alter type is rename: 
>       1. If the old table name is present in the list of tables to be 
> bootstrapped, remove it.
>        2. If the new table name matches the new policy, add it to the list 
> of tables to be bootstrapped.
>   3. During handling of drop table: 
>        1. If the table is in the list of tables to be bootstrapped, then 
> remove it and ignore the event.
>   4. During other event handling: 
>        1. If the table is in the list of tables to be bootstrapped, 
> then ignore the event.
>  
> Rename handling during replace policy
>  # Old name not matching old policy – The old table will not be there at the 
> target cluster. The table will not be returned by get-all-table.
>  ## Old name is not matching new policy
>  ### New name not matching old policy
>   New name not matching new policy
>  * Ignore the event, no need to do anything.
>   New name matching new policy
>  * The table will be returned by get-all-table. Replace policy handler 
> will bootstrap this table as its matching new policy and not matching old 
> policy.
>  * All the future events will be ignored as part of check added by 
> replace policy handling.
>  * All the event with old table name will anyways be ignored as the old 
> name is not matching the new policy.
>  ### New name matching old policy
>   New name not matching new policy
>  * As the new name is not matching the new policy, the table need not be 
> replicated.
>  * As the old name is not matching the new policy, the rename events will 
> be ignored.
>  * So nothing to be done for this scenario.
>   New name matching new policy
>  * As the new name is matching both old and new policy, replace handler 
> will not bootstrap the table.
>  * Add the table to the list of tables to be bootstrapped.
>  * Ignore all the events with new name.
>  * If there is a drop event for the table (with new name), then remove 
> the table from the list of tables to be bootstrapped.
>  * In case of rename event (double rename)
>  ** If the new name satisfies the table pattern, then add the new name to 
> the list of tables to be bootstrapped and remove the old name from the list 
> of tables to be bootstrapped.
>  ** If the new name does not satisfy the table pattern, then just remove the table name 
> from the list of tables to be bootstrapped.
>  ## Old name is matching new policy – As per the replace policy handler, which 
> checks based on the old table, the table should be bootstrapped and the event should 
> be ignored. But the rename handler should decide based on the new name. The old table 
> name will not be returned by get-all-table, so the replace handler will not do 
> anything for the old table.
>  ### New name not matching old policy
>   New name not matching new policy
>  * As the old table is not there at target and new name is not matching 
> new policy. Ignore the event.
>  * No need to add the table to the list of tables to be bootstrapped.
>  * All the subsequent events will be ignored as the new name is not 
> matching the new policy.
>   New name matching new policy
>  * As the new name is not matching old policy but matching new policy, 
> the table will be bootstrapped by replace policy handler. So rename event 
> need not add this table to list of table to be bootstrapped.
>  * All the future events will be ignored by replace policy handler.
>  * For rename event (double rename)
>  ** If there is a rename, the table (with intermittent new name) will not 
> be present and thus replace handler will not bootstrap the table.
>  ** So if the new name (the latest one) is matching the new policy, then 
> add it to the list of table to be bootstrapped.
>  ** And If the new name (the latest one)  is not matching the new policy, 
> then just ignore the event as the  intermittent new name would not have added 
> to the 

[jira] [Updated] (HIVE-21886) REPL - With table list - Handle rename events during replace policy

2019-06-25 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21886:
---
Attachment: HIVE-21886.01.patch

> REPL - With table list - Handle rename events during replace policy
> ---
>
> Key: HIVE-21886
> URL: https://issues.apache.org/jira/browse/HIVE-21886
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: DR, Replication
> Attachments: HIVE-21886.01.patch
>
>
> If some rename events are found to be dumped and replayed while the replace 
> policy is getting executed, the handler needs to take the policy inclusion into 
> account in both the old and the new policy for each table name.
>  1. Create a list of tables to be bootstrapped. 
>   2. During handling of alter table, if the alter type is rename: 
>       1. If the old table name is present in the list of tables to be 
> bootstrapped, remove it.
>        2. If the new table name matches the new policy, add it to the list 
> of tables to be bootstrapped.
>   3. During handling of drop table: 
>        1. If the table is in the list of tables to be bootstrapped, then 
> remove it and ignore the event.
>   4. During other event handling: 
>        1. If the table is in the list of tables to be bootstrapped, 
> then ignore the event.
>  
> Rename handling during replace policy
>  # Old name not matching old policy – The old table will not be there at the 
> target cluster. The table will not be returned by get-all-table.
>  ## Old name is not matching new policy
>  ### New name not matching old policy
>   New name not matching new policy
>  * Ignore the event, no need to do anything.
>   New name matching new policy
>  * The table will be returned by get-all-table. Replace policy handler 
> will bootstrap this table as its matching new policy and not matching old 
> policy.
>  * All the future events will be ignored as part of check added by 
> replace policy handling.
>  * All the event with old table name will anyways be ignored as the old 
> name is not matching the new policy.
>  ### New name matching old policy
>   New name not matching new policy
>  * As the new name is not matching the new policy, the table need not be 
> replicated.
>  * As the old name is not matching the new policy, the rename events will 
> be ignored.
>  * So nothing to be done for this scenario.
>   New name matching new policy
>  * As the new name is matching both old and new policy, replace handler 
> will not bootstrap the table.
>  * Add the table to the list of tables to be bootstrapped.
>  * Ignore all the events with new name.
>  * If there is a drop event for the table (with new name), then remove 
> the table from the list of tables to be bootstrapped.
>  * In case of rename event (double rename)
>  ** If the new name satisfies the table pattern, then add the new name to 
> the list of tables to be bootstrapped and remove the old name from the list 
> of tables to be bootstrapped.
>  ** If the new name does not satisfy the table pattern, then just remove the table name 
> from the list of tables to be bootstrapped.
>  ## Old name is matching new policy – As per the replace policy handler, which 
> checks based on the old table, the table should be bootstrapped and the event should 
> be ignored. But the rename handler should decide based on the new name. The old table 
> name will not be returned by get-all-table, so the replace handler will not do 
> anything for the old table.
>  ### New name not matching old policy
>   New name not matching new policy
>  * As the old table is not there at target and new name is not matching 
> new policy. Ignore the event.
>  * No need to add the table to the list of tables to be bootstrapped.
>  * All the subsequent events will be ignored as the new name is not 
> matching the new policy.
>   New name matching new policy
>  * As the new name is not matching old policy but matching new policy, 
> the table will be bootstrapped by replace policy handler. So rename event 
> need not add this table to list of table to be bootstrapped.
>  * All the future events will be ignored by replace policy handler.
>  * For rename event (double rename)
>  ** If there is a rename, the table (with intermittent new name) will not 
> be present and thus replace handler will not bootstrap the table.
>  ** So if the new name (the latest one) is matching the new policy, then 
> add it to the list of table to be bootstrapped.
>  ** And If the new name (the latest one)  is not matching the new policy, 
> then just ignore the event as the  intermittent new name would not have added 
> to the list of 

[jira] [Commented] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872517#comment-16872517
 ] 

Hive QA commented on HIVE-21907:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} llap-common in master has 84 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} llap-server in master has 82 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} llap-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} llap-common generated 6 new + 84 unchanged - 0 fixed = 
90 total (was 84) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} llap-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:llap-common |
|  |  
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityRequestProto.PARSER
 isn't final but should be  At LlapDaemonProtocolProtos.java:be  At 
LlapDaemonProtocolProtos.java:[line 21807] |
|  |  Class 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityRequestProto
 defines non-transient non-serializable instance field unknownFields  In 
LlapDaemonProtocolProtos.java:instance field unknownFields  In 
LlapDaemonProtocolProtos.java |
|  |  Useless control flow in 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityRequestProto$Builder.maybeForceBuilderInitialization()
  At LlapDaemonProtocolProtos.java: At LlapDaemonProtocolProtos.java:[line 
22048] |
|  |  
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityResponseProto.PARSER
 isn't final but should be  At LlapDaemonProtocolProtos.java:be  At 
LlapDaemonProtocolProtos.java:[line 22300] |
|  |  Class 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityResponseProto
 defines non-transient non-serializable instance field unknownFields  In 
LlapDaemonProtocolProtos.java:instance field unknownFields  In 
LlapDaemonProtocolProtos.java |
|  |  Useless control flow in 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$SetCapacityResponseProto$Builder.maybeForceBuilderInitialization()
  At LlapDaemonProtocolProtos.java: At LlapDaemonProtocolProtos.java:[line 
22474] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  

[jira] [Commented] (HIVE-21923) Disabling n-way joins caused some resultset changes

2019-06-25 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872505#comment-16872505
 ] 

Gopal V commented on HIVE-21923:


[~kgyrtkirk]: FYI, my notes have HIVE-9038, which has issues with map-joins and 
n-way joins together.


> Disabling n-way joins caused some resultset changes
> ---
>
> Key: HIVE-21923
> URL: https://issues.apache.org/jira/browse/HIVE-21923
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-21189 has introduced some resultset changes
> in ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
> https://github.com/apache/hive/commit/5799398450c17d06e8ef144ce835a8524f5abec9#diff-56b3ab96b6c90fdbebe2c4f84e8595afL500



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872490#comment-16872490
 ] 

Hive QA commented on HIVE-21915:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972865/HIVE-21915.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17730/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17730/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17730/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-06-25 16:17:41.035
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17730/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-06-25 16:17:41.039
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   84b5ba7..18a5dcb  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 84b5ba7 HIVE-21913: GenericUDTFGetSplits should handle usernames 
in the same way as LLAP (Prasanth Jayachandran reviewed by Jason Dere)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 18a5dcb HIVE-21857: Sort conditions in a filter predicate to 
accelerate query processing (Jesus Camacho Rodriguez, reviewed by Vineet Garg)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-06-25 16:17:43.315
+ rm -rf ../yetus_PreCommit-HIVE-Build-17730
+ mkdir ../yetus_PreCommit-HIVE-Build-17730
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17730
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17730/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/itests/src/test/resources/testconfiguration.properties: does not exist 
in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java: does not 
exist in index
error: patch failed: itests/src/test/resources/testconfiguration.properties:339
Falling back to three-way merge...
Applied patch to 'itests/src/test/resources/testconfiguration.properties' with 
conflicts.
Going to apply patch with: git apply -p1
error: patch failed: itests/src/test/resources/testconfiguration.properties:339
Falling back to three-way merge...
Applied patch to 'itests/src/test/resources/testconfiguration.properties' with 
conflicts.
U itests/src/test/resources/testconfiguration.properties
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-17730
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972865 - PreCommit-HIVE-Build

> Hive with TEZ UNION ALL and UDTF results in data loss
> -
>
> Key: HIVE-21915
> URL: https://issues.apache.org/jira/browse/HIVE-21915
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Wei Zhang
>Assignee: Wei Zhang
>Priority: Major
> Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, 
> HIVE-21915.03.patch
>
>
> The HQL syntax is like this:
> CREATE TEMPORARY TABLE tez_union_all_loss_data AS
> SELECT xxx, yyy, zzz,1 as tag
> FROM ods_1
> UNION ALL
> SELECT xxx, yyy, zzz, tag
> FROM
> (
> SELECT xxx
> ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy
> ,zzz
> ,2 as tag
> FROM ods_2
> LATERAL VIEW 

[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872487#comment-16872487
 ] 

Hive QA commented on HIVE-21874:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972830/HIVE-21874.03.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16597 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17729/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17729/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17729/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972830 - PreCommit-HIVE-Build

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.
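
For illustration, a hedged sketch of how a caller exercises one of these methods; the database/table names and partition value are made up, and the partition reuses the table's StorageDescriptor only to keep the example short:

{code:java}
import java.util.Collections;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

public class AddPartitionExample {
  // Adds partition ds=2019-06-25 to a table assumed to be partitioned by ds.
  static Partition addDsPartition(IMetaStoreClient client) throws Exception {
    Table table = client.getTable("default", "temp_sales");        // hypothetical temp table
    StorageDescriptor sd = table.getSd().deepCopy();
    sd.setLocation(table.getSd().getLocation() + "/ds=2019-06-25");

    Partition p = new Partition();
    p.setDbName(table.getDbName());
    p.setTableName(table.getTableName());
    p.setValues(Collections.singletonList("2019-06-25"));
    p.setSd(sd);
    return client.add_partition(p);                                // should behave the same for temp tables
  }
}
{code}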



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21846) Create a thread in TezAM which periodically fetches LlapDaemon metrics

2019-06-25 Thread Antal Sinkovits (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Sinkovits updated HIVE-21846:
---
Attachment: HIVE-21846.02.patch

> Create a thread in TezAM which periodically fetches LlapDaemon metrics
> --
>
> Key: HIVE-21846
> URL: https://issues.apache.org/jira/browse/HIVE-21846
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Antal Sinkovits
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21846.01.patch, HIVE-21846.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> LlapTaskSchedulerService should start a thread which periodically fetches the 
> LlapDaemon metrics and stores them in the NodeInfo object.
> This should be just the first implementation - later we should find a way 
> that does not need NxM requests between N Tez AMs and M LLAP daemons.
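
A hedged sketch of the periodic-fetch pattern being described; the class and method names here are illustrative placeholders, not the actual LlapTaskSchedulerService or NodeInfo API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DaemonMetricsPoller {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final Map<String, Long> lastMetricByNode = new ConcurrentHashMap<>();

  // Start polling every intervalSeconds; fetchMetric stands in for the real daemon call.
  public void start(Iterable<String> nodes, long intervalSeconds) {
    scheduler.scheduleAtFixedRate(() -> {
      for (String node : nodes) {
        lastMetricByNode.put(node, fetchMetric(node));   // store per node, like NodeInfo would
      }
    }, 0, intervalSeconds, TimeUnit.SECONDS);
  }

  private long fetchMetric(String node) {
    return System.currentTimeMillis();                   // placeholder for an RPC to the daemon
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}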



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST

2019-06-25 Thread Oliver Draese (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872473#comment-16872473
 ] 

Oliver Draese commented on HIVE-15177:
--

Trying rerun. Test case failure is unrelated to patch.

> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST
> -
>
> Key: HIVE-15177
> URL: https://issues.apache.org/jira/browse/HIVE-15177
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Subrahmanya
>Assignee: Oliver Draese
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch
>
>
> Authentication with Hive fails when the Kerberos auth type is set to fromSubject 
> and the principal contains _HOST.
> When the auth type is set to fromSubject, _HOST in the principal is not resolved to 
> the actual host name even though the correct host name is available. This 
> leads to a connection failure. If the auth type is not set to fromSubject, host 
> resolution is done correctly.
> The problem is in the getKerberosTransport method of the 
> org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
> true, the host name in the principal is not resolved. When it is false, the host name 
> is passed on to HadoopThriftAuthBridge, which takes care of resolving the 
> parameter.
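
A hedged illustration of the kind of _HOST resolution that gets skipped; SecurityUtil.getServerPrincipal is Hadoop's standard helper for this, and the principal string below is only an example:

{code:java}
import java.io.IOException;
import java.net.InetAddress;

import org.apache.hadoop.security.SecurityUtil;

public class HostResolutionExample {
  // Expands the _HOST placeholder in a Kerberos principal to the local host name.
  static String resolvePrincipal() throws IOException {
    String configured = "hive/_HOST@EXAMPLE.COM";                  // example principal
    String host = InetAddress.getLocalHost().getCanonicalHostName();
    return SecurityUtil.getServerPrincipal(configured, host);      // e.g. hive/node1.example.com@EXAMPLE.COM
  }
}
{code}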



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-15177) Authentication with hive fails when kerberos auth type is set to fromSubject and principal contains _HOST

2019-06-25 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-15177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese updated HIVE-15177:
-
Attachment: HIVE-15177.2.patch

> Authentication with hive fails when kerberos auth type is set to fromSubject 
> and principal contains _HOST
> -
>
> Key: HIVE-15177
> URL: https://issues.apache.org/jira/browse/HIVE-15177
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication
>Reporter: Subrahmanya
>Assignee: Oliver Draese
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HIVE-15177.1.patch, HIVE-15177.2.patch
>
>
> Authentication with Hive fails when the Kerberos auth type is set to fromSubject 
> and the principal contains _HOST.
> When the auth type is set to fromSubject, _HOST in the principal is not resolved to 
> the actual host name even though the correct host name is available. This 
> leads to a connection failure. If the auth type is not set to fromSubject, host 
> resolution is done correctly.
> The problem is in the getKerberosTransport method of the 
> org.apache.hive.service.auth.KerberosSaslHelper class. When assumeSubject is 
> true, the host name in the principal is not resolved. When it is false, the host name 
> is passed on to HadoopThriftAuthBridge, which takes care of resolving the 
> parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14888) SparkClientImpl checks for "kerberos" string in hiveconf only when determining whether to use keytab file.

2019-06-25 Thread David McGinnis (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872472#comment-16872472
 ] 

David McGinnis commented on HIVE-14888:
---

Fix is ready and has been submitted in Review Board and on GitHub for review, 
but no committers seem interested in reviewing it. Links to both are attached 
to the JIRA.

 

[~trega], [~xuefuz]: If you can find a committer that is willing to give up 10 
minutes to get this reviewed so we can get it in, that would help immensely. 
Emails to the dev group have not gotten any activity on it.

> SparkClientImpl checks for "kerberos" string in hiveconf only when 
> determining whether to use keytab file.
> --
>
> Key: HIVE-14888
> URL: https://issues.apache.org/jira/browse/HIVE-14888
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.1.0
>Reporter: Thomas Rega
>Assignee: David McGinnis
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-14888.1-spark.patch, HIVE-14888.2.patch, 
> HIVE-14888.3.patch, HIVE-14888.4.patch, HIVE-14888.5.patch
>
>   Original Estimate: 5m
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The SparkClientImpl will only provide a principal and keytab argument if the 
> HADOOP_SECURITY_AUTHENTICATION setting in the Hive conf is set to "kerberos". This will 
> not work on clusters with Hadoop security enabled that are not configured as 
> "kerberos", for example, a cluster which is configured for "ldap".
> The solution is to call UserGroupInformation.isSecurityEnabled() instead.
>  
> Code Review: [https://reviews.apache.org/r/70718/]
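
A hedged sketch of the proposed check; the surrounding method and variable names are illustrative, not the actual SparkClientImpl code:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

public class SecurityCheckExample {
  // Old behaviour: only pass principal/keytab when the auth type string is exactly "kerberos".
  static boolean shouldUseKeytabOld(String hadoopSecurityAuthentication) {
    return "kerberos".equalsIgnoreCase(hadoopSecurityAuthentication);
  }

  // Proposed behaviour: let Hadoop decide whether security is enabled, which also
  // covers setups (e.g. "ldap") that still run with Hadoop security turned on.
  static boolean shouldUseKeytabNew() {
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}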



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872446#comment-16872446
 ] 

Hive QA commented on HIVE-21874:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
12s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17729/dev-support/hive-personality.sh
 |
| git revision | master / 84b5ba7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17729/yetus/diff-checkstyle-ql.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17729/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21867:
---
Attachment: HIVE-21867.03.patch

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.03.patch, 
> HIVE-21867.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872440#comment-16872440
 ] 

Jesus Camacho Rodriguez commented on HIVE-21867:


[~vgarg], could you review this one? Thanks
https://github.com/apache/hive/pull/687



> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?focusedWorklogId=266784=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266784
 ]

ASF GitHub Bot logged work on HIVE-21867:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 15:22
Start Date: 25/Jun/19 15:22
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #687: HIVE-21867
URL: https://github.com/apache/hive/pull/687
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266784)
Time Spent: 10m
Remaining Estimate: 0h

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21867) Sort semijoin conditions to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21867:
--
Labels: pull-request-available  (was: )

> Sort semijoin conditions to accelerate query processing
> ---
>
> Key: HIVE-21867
> URL: https://issues.apache.org/jira/browse/HIVE-21867
> Project: Hive
>  Issue Type: New Feature
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21867.02.patch, HIVE-21867.patch
>
>
> The problem was tackled for CBO in HIVE-21857. Semijoin filters are 
> introduced later in the planning phase. Follow a similar approach to sort them, 
> trying to accelerate filter evaluation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21857) Sort conditions in a filter predicate to accelerate query processing

2019-06-25 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21857:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks for reviewing [~vgarg]!

> Sort conditions in a filter predicate to accelerate query processing
> 
>
> Key: HIVE-21857
> URL: https://issues.apache.org/jira/browse/HIVE-21857
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21857.01.patch, HIVE-21857.02.patch, 
> HIVE-21857.03.patch, HIVE-21857.04.patch, HIVE-21857.05.patch, 
> HIVE-21857.06.patch, HIVE-21857.07.patch, HIVE-21857.08.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Following an approach similar to 
> http://db.cs.berkeley.edu/jmh/miscpapers/sigmod93.pdf .
> To reorder predicates in AND conditions, we could rank each of the elements in 
> the clauses in increasing order based on the following formula:
> {code}
> rank = (selectivity - 1) / cost per tuple
> {code}
> Similarly, for OR conditions:
> {code}
> rank = (-selectivity) / cost per tuple
> {code}
> Selectivity can be computed with FilterSelectivityEstimator. For cost per 
> tuple, we will need to come up with some heuristic based on how expensive the 
> evaluation of the functions contained in that predicate is. Custom UDFs 
> could be annotated.
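
For concreteness, a small worked sketch of the ranking formulas above with made-up selectivity and cost-per-tuple numbers; it is not taken from the patch:

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class PredicateRankExample {
  static double andRank(double selectivity, double costPerTuple) {
    return (selectivity - 1.0) / costPerTuple;   // rank = (selectivity - 1) / cost per tuple
  }

  static double orRank(double selectivity, double costPerTuple) {
    return -selectivity / costPerTuple;          // rank = (-selectivity) / cost per tuple
  }

  public static void main(String[] args) {
    // Two hypothetical conjuncts: a cheap, selective comparison and an expensive UDF call.
    double[][] predicates = { {0.1, 1.0}, {0.5, 20.0} };   // {selectivity, costPerTuple}
    List<double[]> sorted = Arrays.asList(predicates);
    sorted.sort(Comparator.comparingDouble(p -> andRank(p[0], p[1])));
    // andRank(0.1, 1.0) = -0.9 and andRank(0.5, 20.0) = -0.025, so the cheap,
    // selective predicate is evaluated first, which is the intended acceleration.
    for (double[] p : sorted) {
      System.out.printf("selectivity=%.2f costPerTuple=%.1f rank=%.3f%n",
          p[0], p[1], andRank(p[0], p[1]));
    }
  }
}
{code}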



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21857) Sort conditions in a filter predicate to accelerate query processing

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21857?focusedWorklogId=266773=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266773
 ]

ASF GitHub Bot logged work on HIVE-21857:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 15:09
Start Date: 25/Jun/19 15:09
Worklog Time Spent: 10m 
  Work Description: asfgit commented on pull request #671: HIVE-21857
URL: https://github.com/apache/hive/pull/671
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266773)
Time Spent: 1h 20m  (was: 1h 10m)

> Sort conditions in a filter predicate to accelerate query processing
> 
>
> Key: HIVE-21857
> URL: https://issues.apache.org/jira/browse/HIVE-21857
> Project: Hive
>  Issue Type: New Feature
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21857.01.patch, HIVE-21857.02.patch, 
> HIVE-21857.03.patch, HIVE-21857.04.patch, HIVE-21857.05.patch, 
> HIVE-21857.06.patch, HIVE-21857.07.patch, HIVE-21857.08.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Following an approach similar to 
> http://db.cs.berkeley.edu/jmh/miscpapers/sigmod93.pdf .
> To reorder predicates in AND conditions, we could rank each of the elements in 
> the clauses in increasing order based on the following formula:
> {code}
> rank = (selectivity - 1) / cost per tuple
> {code}
> Similarly, for OR conditions:
> {code}
> rank = (-selectivity) / cost per tuple
> {code}
> Selectivity can be computed with FilterSelectivityEstimator. For cost per 
> tuple, we will need to come up with some heuristic based on how expensive the 
> evaluation of the functions contained in that predicate is. Custom UDFs 
> could be annotated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21868) Vectorize CAST...FORMAT

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872413#comment-16872413
 ] 

Hive QA commented on HIVE-21868:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972831/HIVE-21868.04.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16346 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands.testMergeOnTezEdges (batchId=341)
org.apache.hive.hcatalog.mapreduce.TestHCatPartitionPublish.org.apache.hive.hcatalog.mapreduce.TestHCatPartitionPublish
 (batchId=211)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17728/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17728/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17728/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972831 - PreCommit-HIVE-Build

> Vectorize CAST...FORMAT
> ---
>
> Key: HIVE-21868
> URL: https://issues.apache.org/jira/browse/HIVE-21868
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21868.01.patch, HIVE-21868.01.patch, 
> HIVE-21868.02.patch, HIVE-21868.03.patch, HIVE-21868.04.patch
>
>
> Vectorize UDFs for CAST (... AS STRING/CHAR/VARCHAR FORMAT ...) 
> and CAST (... AS TIMESTAMP/DATE FORMAT ...).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21923) Disabling n-way joins caused some resultset changes

2019-06-25 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872389#comment-16872389
 ] 

Zoltan Haindrich commented on HIVE-21923:
-

Disabling auto.convert.join changes the resultset from 0 to 5680 for the 
query:
{code}
SELECT COUNT(*)
 FROM src1 x JOIN srcpart z ON (x.key = z.key)
 JOIN srcpart w ON (x.key = w.key)
 JOIN src y ON (y.key = x.key)
{code}
Because there is no LIMIT or similar clause in the above query, I think this 
means that there is a bug somewhere...

> Disabling n-way joins caused some resultset changes
> ---
>
> Key: HIVE-21923
> URL: https://issues.apache.org/jira/browse/HIVE-21923
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-21189 has introduced some resultset changes
> in ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
> https://github.com/apache/hive/commit/5799398450c17d06e8ef144ce835a8524f5abec9#diff-56b3ab96b6c90fdbebe2c4f84e8595afL500



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss

2019-06-25 Thread Wei Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872385#comment-16872385
 ] 

Wei Zhang commented on HIVE-21915:
--

UPDATE:

We have to set hive.merge.tezfiles=true to reproduce this issue, and we have 
updated the test case to turn on file merge.

In our settings, hive.merge.tezfiles defaults to true; we had ignored this 
factor before.

> Hive with TEZ UNION ALL and UDTF results in data loss
> -
>
> Key: HIVE-21915
> URL: https://issues.apache.org/jira/browse/HIVE-21915
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Wei Zhang
>Assignee: Wei Zhang
>Priority: Major
> Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, 
> HIVE-21915.03.patch
>
>
> The HQL syntax is like this:
> CREATE TEMPORARY TABLE tez_union_all_loss_data AS
> SELECT xxx, yyy, zzz,1 as tag
> FROM ods_1
> UNION ALL
> SELECT xxx, yyy, zzz, tag
> FROM
> (
> SELECT xxx
> ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy
> ,zzz
> ,2 as tag
> FROM ods_2
> LATERAL VIEW EXPLODE(some_udf(uuu)) team_number AS tb
> ) tbl 
> ;
>  
> With the above HQL, we expect rows with both tag = 2 and tag = 1 to 
> appear. In our case, however, all the rows with tag = 1 are lost.
> Digging deeper, we can find that the two generated maps have identical task tmp 
> paths. That happens because, when a UDTF is present, the FileSinkOperator is 
> processed twice while generating the tmp path in 
> GenTezUtils.removeUnionOperators();
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17593?focusedWorklogId=266688=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266688
 ]

ASF GitHub Bot logged work on HIVE-17593:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 14:10
Start Date: 25/Jun/19 14:10
Worklog Time Spent: 10m 
  Work Description: chenjunjiedada commented on pull request #383: 
HIVE-17593: DataWritableWriter strip spaces for CHAR type which cause…
URL: https://github.com/apache/hive/pull/383
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266688)
Time Spent: 10m
Remaining Estimate: 0h

> DataWritableWriter strip spaces for CHAR type before writing, but predicate 
> generator doesn't do same thing.
> 
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.0, 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, 
> HIVE-17593.4.patch, HIVE-17593.5.patch, HIVE-17593.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DataWritableWriter strips spaces for the CHAR type before writing, but when 
> generating the predicate it does NOT do the same stripping, which should cause 
> missing data!
> In the current version it doesn't cause missing data, since the predicate is 
> not properly pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstTosearchArg.java: getTypes treats CHAR and STRING as the 
> same, which builds a predicate with trailing spaces.
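
A tiny self-contained Java illustration of the mismatch, not Hive code: the 
stored value has its CHAR padding stripped, while a predicate literal built 
without stripping keeps the trailing spaces, so an exact comparison misses the 
row.

{code}
public class CharPaddingMismatchSketch {
  public static void main(String[] args) {
    // What gets written for CHAR(10) 'apple' once padding is stripped.
    String storedValue = "apple";

    // A predicate literal built without stripping keeps the CHAR(10) padding.
    String predicateLiteral = String.format("%-10s", "apple"); // "apple     "

    System.out.println(storedValue.equals(predicateLiteral));        // false -> row missed
    System.out.println(storedValue.equals(predicateLiteral.trim())); // true once stripped
  }
}
{code}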



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21868) Vectorize CAST...FORMAT

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872381#comment-16872381
 ] 

Hive QA commented on HIVE-21868:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
14s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} ql: The patch generated 0 new + 412 unchanged - 1 
fixed = 412 total (was 413) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} common generated 1 new + 62 unchanged - 0 fixed = 63 
total (was 62) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
18s{color} | {color:red} ql generated 2 new + 2254 unchanged - 0 fixed = 2256 
total (was 2254) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:common |
|  |  Class 
org.apache.hadoop.hive.common.format.datetime.HiveSqlDateTimeFormatter defines 
non-transient non-serializable instance field tokens  In 
HiveSqlDateTimeFormatter.java:instance field tokens  In 
HiveSqlDateTimeFormatter.java |
| FindBugs | module:ql |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hive.ql.exec.vector.expressions.CastDateToString.sqlFormat(BytesColumnVector,
 long[], int, HiveSqlDateTimeFormatter):in 
org.apache.hadoop.hive.ql.exec.vector.expressions.CastDateToString.sqlFormat(BytesColumnVector,
 long[], int, HiveSqlDateTimeFormatter): String.getBytes()  At 
CastDateToString.java:[line 70] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hive.ql.exec.vector.expressions.CastTimestampToString.sqlFormat(BytesColumnVector,
 TimestampColumnVector, int, HiveSqlDateTimeFormatter):in 
org.apache.hadoop.hive.ql.exec.vector.expressions.CastTimestampToString.sqlFormat(BytesColumnVector,
 TimestampColumnVector, int, HiveSqlDateTimeFormatter): String.getBytes()  At 
CastTimestampToString.java:[line 79] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17728/dev-support/hive-personality.sh
 |
| git revision 

[jira] [Updated] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss

2019-06-25 Thread Wei Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhang updated HIVE-21915:
-
Attachment: HIVE-21915.03.patch

> Hive with TEZ UNION ALL and UDTF results in data loss
> -
>
> Key: HIVE-21915
> URL: https://issues.apache.org/jira/browse/HIVE-21915
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1
>Reporter: Wei Zhang
>Assignee: Wei Zhang
>Priority: Major
> Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, 
> HIVE-21915.03.patch
>
>
> The HQL syntax is like this:
> CREATE TEMPORARY TABLE tez_union_all_loss_data AS
> SELECT xxx, yyy, zzz,1 as tag
> FROM ods_1
> UNION ALL
> SELECT xxx, yyy, zzz, tag
> FROM
> (
> SELECT xxx
> ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy
> ,zzz
> ,2 as tag
> FROM ods_2
> LATERAL VIEW EXPLODE(some_udf(uuu)) team_number AS tb
> ) tbl 
> ;
>  
> With the above HQL, we expect rows with both tag = 2 and tag = 1 to 
> appear. In our case, however, all the rows with tag = 1 are lost.
> Digging deeper, we can find that the two generated maps have identical task tmp 
> paths. That happens because, when a UDTF is present, the FileSinkOperator is 
> processed twice while generating the tmp path in 
> GenTezUtils.removeUnionOperators();
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21914:
--
Attachment: HIVE-21914.03.patch

> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, while they belong to the DDL framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-21914:
--
Attachment: (was: HIVE-21914.03.patch)

> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, while they belong to the DDL framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872346#comment-16872346
 ] 

Hive QA commented on HIVE-21914:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972823/HIVE-21914.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17727/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17727/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17727/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12972823/HIVE-21914.03.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972823 - PreCommit-HIVE-Build

> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, while they belong to the DDL framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872340#comment-16872340
 ] 

Hive QA commented on HIVE-21914:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972823/HIVE-21914.03.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16339 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeader.testHouseKeepingThreadExistence
 (batchId=240)
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=242)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17726/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17726/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17726/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972823 - PreCommit-HIVE-Build

> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask and 
> FunctionWork, while they belong to the DDL framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21923) Disabling n-way joins caused some resultset changes

2019-06-25 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872338#comment-16872338
 ] 

Zoltan Haindrich commented on HIVE-21923:
-

This could be either a bug triggered by n-way joins, or an existing issue that 
is only now being surfaced...

> Disabling n-way joins caused some resultset changes
> ---
>
> Key: HIVE-21923
> URL: https://issues.apache.org/jira/browse/HIVE-21923
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-21189 has introduced some resultset changes
> in ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
> https://github.com/apache/hive/commit/5799398450c17d06e8ef144ce835a8524f5abec9#diff-56b3ab96b6c90fdbebe2c4f84e8595afL500



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21923) Disabling n-way joins caused some resultset changes

2019-06-25 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-21923:
---


> Disabling n-way joins caused some resultset changes
> ---
>
> Key: HIVE-21923
> URL: https://issues.apache.org/jira/browse/HIVE-21923
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-21189 has introduced some resultset changes
> in ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
> https://github.com/apache/hive/commit/5799398450c17d06e8ef144ce835a8524f5abec9#diff-56b3ab96b6c90fdbebe2c4f84e8595afL500



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18735) Create table like loses transactional attribute

2019-06-25 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-18735:
-
Attachment: HIVE-18735.03.patch

> Create table like loses transactional attribute
> ---
>
> Key: HIVE-18735
> URL: https://issues.apache.org/jira/browse/HIVE-18735
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Eugene Koifman
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch, 
> HIVE-18735.03.patch
>
>
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true')";
> create table T like T1;
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>  
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1518813564')
> {noformat}
> Specifying props explicitly does work 
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true')";
> create table T like T1 TBLPROPERTIES ('transactional'='true');
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t'
> TBLPROPERTIES (
>   'transactional'='true',
>   'transactional_properties'='default',
>   'transient_lastDdlTime'='1518814111')
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872313#comment-16872313
 ] 

Peter Vary commented on HIVE-21922:
---

+1 pending tests.

Do not forget to add the new config to the wiki: 
[https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties]

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21907:
--
Attachment: HIVE-21907.2.patch

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting them 
> to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21907:
--
Attachment: (was: HIVE-21907.2.patch)

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting them 
> to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?focusedWorklogId=266600=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266600
 ]

ASF GitHub Bot logged work on HIVE-21907:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 12:33
Start Date: 25/Jun/19 12:33
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #685: HIVE-21907: Add a 
new LlapDaemon Management API method to set the daemon capacity
URL: https://github.com/apache/hive/pull/685#discussion_r297162456
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/TaskExecutorService.java
 ##
 @@ -176,6 +181,31 @@ public TaskExecutorService(int numExecutors, int 
waitQueueSize,
 Futures.addCallback(future, new WaitQueueWorkerCallback());
   }
 
+  /**
+   * Sets the TaskExecutorService capacity to the new values. Both the number 
of executors and the
+   * queue size should be smaller than that original values, so we do not mess 
up with the other
+   * settings. Setting smaller capacity will not cancel or reject already 
executing or queued tasks
+   * in itself.
+   * @param newNumExecutors The new number of executors
+   * @param newWaitQueueSize The new number of wait queue size
+   */
+  @Override
+  public synchronized void setCapacity(int newNumExecutors, int 
newWaitQueueSize) {
+if (newNumExecutors > configuredMaxExecutors) {
 
 Review comment:
   Added check and test
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266600)
Time Spent: 1h 10m  (was: 1h)

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting them 
> to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21914) Move Function and Macro related DDL operations into the DDL framework

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872307#comment-16872307
 ] 

Hive QA commented on HIVE-21914:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
49s{color} | {color:blue} llap-server in master has 82 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 1 new + 326 unchanged - 17 
fixed = 327 total (was 343) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
27s{color} | {color:red} ql generated 1 new + 2253 unchanged - 1 fixed = 2254 
total (was 2254) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA238 be a _static_ 
inner class?  At HiveParser.java:inner class?  At HiveParser.java:[lines 
48391-48404] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17726/dev-support/hive-personality.sh
 |
| git revision | master / 84b5ba7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17726/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17726/yetus/new-findbugs-ql.html
 |
| modules | C: ql llap-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17726/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Move Function and Macro related DDL operations into the DDL framework
> -
>
> Key: HIVE-21914
> URL: https://issues.apache.org/jira/browse/HIVE-21914
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21914.01.patch, HIVE-21914.02.patch, 
> HIVE-21914.03.patch
>
>
> Some Function and Macro related operations are handled by FunctionTask, and 
> FunctionWork while 

[jira] [Commented] (HIVE-21921) Support for correlated quantified predicates

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872291#comment-16872291
 ] 

Hive QA commented on HIVE-21921:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972798/HIVE-21921.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16340 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_ANY]
 (batchId=176)
org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testMergeOnTezEdges
 (batchId=322)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17725/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17725/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17725/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972798 - PreCommit-HIVE-Build

> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21921.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-06-25 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-21225:

Attachment: HIVE-21225.4.patch

> ACID: getAcidState() should cache a recursive dir listing locally
> -
>
> Key: HIVE-21225
> URL: https://issues.apache.org/jira/browse/HIVE-21225
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Gopal V
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, 
> HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, async-pid-44-2.svg
>
>
> Currently getAcidState() makes 3 calls into the FS API that could instead be 
> answered by a single recursive listDir call, reusing the same data to check 
> isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed 
> directory snapshot instead of interacting with the NameNode or ObjectStore 
> within the inner loop.
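
A rough sketch of the idea, assuming Hadoop's standard FileSystem API rather 
than the actual patch: list the partition directory recursively once, keep the 
snapshot, and answer the base/delta lookups from it instead of going back to 
the NameNode.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

/** Per-partition snapshot of a single recursive listing. */
public class AcidDirSnapshotSketch {
  private final List<LocatedFileStatus> files = new ArrayList<>();

  /** One recursive listing; the later checks reuse this snapshot. */
  public AcidDirSnapshotSketch(FileSystem fs, Path partitionDir) throws IOException {
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(partitionDir, true);
    while (it.hasNext()) {
      files.add(it.next());
    }
  }

  /**
   * Directories whose name starts with the given prefix, e.g. "base_" or "delta_".
   * Simplification: directories are derived from file parents, so empty dirs are not seen.
   */
  public List<Path> dirsWithPrefix(String prefix) {
    List<Path> result = new ArrayList<>();
    for (LocatedFileStatus f : files) {
      Path parent = f.getPath().getParent();
      if (parent.getName().startsWith(prefix) && !result.contains(parent)) {
        result.add(parent);
      }
    }
    return result;
  }
}
{code}

Checks like isRawFormat() and isValidBase() could then read from the same 
cached FileStatus entries instead of issuing further FS calls per directory.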



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?focusedWorklogId=266541=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266541
 ]

ASF GitHub Bot logged work on HIVE-21907:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 11:06
Start Date: 25/Jun/19 11:06
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #685: HIVE-21907: Add a 
new LlapDaemon Management API method to set the daemon capacity
URL: https://github.com/apache/hive/pull/685#discussion_r297131670
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/TaskExecutorService.java
 ##
 @@ -176,6 +181,31 @@ public TaskExecutorService(int numExecutors, int 
waitQueueSize,
 Futures.addCallback(future, new WaitQueueWorkerCallback());
   }
 
+  /**
+   * Sets the TaskExecutorService capacity to the new values. Both the number 
of executors and the
+   * queue size should be smaller than that original values, so we do not mess 
up with the other
 
 Review comment:
   Container memory sizes are calculated based on the number of executors. Adding 
more executors, and thereby executing more containers, can result in memory 
oversubscription.
   
   Added a more specific comment.
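
   A condensed, hypothetical version of the guard being discussed; setCapacity 
and configuredMaxExecutors follow the diff above, while the other fields and the 
IllegalArgumentException rejection are assumptions of this sketch, not 
necessarily what the patch does:

{code}
/** Hypothetical, condensed stand-in for the capacity guard discussed above. */
public class CapacityGuardSketch {
  private final int configuredMaxExecutors;
  private final int configuredMaxWaitQueueSize;
  private int numExecutors;
  private int waitQueueSize;

  public CapacityGuardSketch(int maxExecutors, int maxWaitQueueSize) {
    this.configuredMaxExecutors = maxExecutors;
    this.configuredMaxWaitQueueSize = maxWaitQueueSize;
    this.numExecutors = maxExecutors;
    this.waitQueueSize = maxWaitQueueSize;
  }

  /**
   * Capacity may only shrink: container memory was sized for the original executor
   * count, so allowing more executors could oversubscribe memory.
   */
  public synchronized void setCapacity(int newNumExecutors, int newWaitQueueSize) {
    if (newNumExecutors > configuredMaxExecutors || newWaitQueueSize > configuredMaxWaitQueueSize) {
      throw new IllegalArgumentException("Capacity can only be lowered, not raised");
    }
    this.numExecutors = newNumExecutors;
    this.waitQueueSize = newWaitQueueSize;
    // setCapacity(0, 0) effectively disables the daemon for new work; tasks that are
    // already running or queued are not cancelled by this call alone.
  }
}
{code}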
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266541)
Time Spent: 1h  (was: 50m)

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting them 
> to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21921) Support for correlated quantified predicates

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872264#comment-16872264
 ] 

Hive QA commented on HIVE-21921:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
17s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 19 new + 152 unchanged - 1 
fixed = 171 total (was 153) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17725/dev-support/hive-personality.sh
 |
| git revision | master / 84b5ba7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17725/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17725/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21921.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872237#comment-16872237
 ] 

Hive QA commented on HIVE-21225:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972793/HIVE-21225.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17724/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17724/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17724/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-06-25 10:21:10.591
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17724/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-06-25 10:21:10.595
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 84b5ba7 HIVE-21913: GenericUDTFGetSplits should handle usernames 
in the same way as LLAP (Prasanth Jayachandran reviewed by Jason Dere)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 84b5ba7 HIVE-21913: GenericUDTFGetSplits should handle usernames 
in the same way as LLAP (Prasanth Jayachandran reviewed by Jason Dere)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-06-25 10:21:11.747
+ rm -rf ../yetus_PreCommit-HIVE-Build-17724
+ mkdir ../yetus_PreCommit-HIVE-Build-17724
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17724
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17724/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCrudCompactorOnTez.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/HdfsUtils.java: does not 
exist in index
error: a/ql/src/test/org/apache/hadoop/hive/ql/io/TestAcidUtils.java: does not 
exist in index
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:10: trailing whitespace.
  
/data/hiveptest/working/scratch/build.patch:46: trailing whitespace.

/data/hiveptest/working/scratch/build.patch:85: trailing whitespace.
  // Okay, we're going to need these originals.  
/data/hiveptest/working/scratch/build.patch:100: trailing whitespace.
  
/data/hiveptest/working/scratch/build.patch:115: trailing whitespace.
} 
warning: squelched 21 whitespace errors
warning: 26 lines add whitespace errors.
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc2366243295147497200.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc2366243295147497200.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc8182810356046681495.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output 

[jira] [Commented] (HIVE-21920) Extract command authorisation from the Driver

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872236#comment-16872236
 ] 

Hive QA commented on HIVE-21920:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12972786/HIVE-21920.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16339 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17723/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17723/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17723/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12972786 - PreCommit-HIVE-Build

> Extract command authorisation from the Driver
> -
>
> Key: HIVE-21920
> URL: https://issues.apache.org/jira/browse/HIVE-21920
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21920.01.patch
>
>
> There are ~400 lines of command authorisation in the Driver class, which are 
> also used by ExplainTask. Extract them into a separate package under  
> org.apache.hadoop.hive.ql.security.authorization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872206#comment-16872206
 ] 

Adam Szita commented on HIVE-21922:
---

The patch introduces the following new options:
 * In Hive conf:
 ** *hive.llap.use.hs2.keytab.for.am.registry.keytab*: if set to true and 
hive.llap.task.scheduler.am.registry.keytab.file is empty, the HS2 keytab will be 
added to Yarn as a resource to be localized for Tez AM use
 * In LLAP's yarn service descriptor file compiler python script:
 ** *service-keytab-localized-path*: if set, Yarn will make sure LLAP daemons 
can reach the keytab file on this path, which was uploaded earlier to an HDFS path 
as per the service-keytab-dir / service-keytab options

[~pvary] can you take a look please?

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21922:
--
Status: Patch Available  (was: Open)

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for kerberos 
> login.
> Currently _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both presume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon...) have to have Hive's keytab 
> preinstalled on them.
> The above is described by this strategy: 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
> Another approach can be 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN]
>  where we rely on HDFS and Yarn resource localization, and no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18735) Create table like loses transactional attribute

2019-06-25 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-18735:
-
Attachment: HIVE-18735.02.patch

> Create table like loses transactional attribute
> ---
>
> Key: HIVE-18735
> URL: https://issues.apache.org/jira/browse/HIVE-18735
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.0.0
>Reporter: Eugene Koifman
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.01.patch, HIVE-18735.02.patch
>
>
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true')";
> create table T like T1;
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>  
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518813536099/warehouse/t'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1518813564')
> {noformat}
> Specifying props explicitly does work 
> {noformat}
> create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc 
> TBLPROPERTIES ('transactional'='true')";
> create table T like T1 TBLPROPERTIES ('transactional'='true');
> show create table T ;
> CREATE TABLE `T`(
>   `a` int,
>   `b` int)
> CLUSTERED BY (
>   a)
> INTO 2 BUCKETS
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   
> 'file:/Users/ekoifman/IdeaProjects/hive/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands-1518814098564/warehouse/t'
> TBLPROPERTIES (
>   'transactional'='true',
>   'transactional_properties'='default',
>   'transient_lastDdlTime'='1518814111')
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-21922:
--
Attachment: HIVE-21922.0.patch

> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-21922.0.patch
>
>
> In secure clusters LLAP has to be able to reach keytab files for Kerberos 
> login.
> Currently the _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both assume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon) have to have Hive's keytab 
> preinstalled on them.
> The above corresponds to the 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
>  strategy.
> Another approach is 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN],
>  where we rely on HDFS and Yarn resource localization, so no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.
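
To make the linked YARN strategy concrete, a rough sketch of registering a keytab that has already been uploaded to HDFS as a LocalResource, so the NodeManager copies it into each container's working directory and no per-node pre-installation is needed. This is not the attached patch; the resource key, paths and the use of URL.fromPath (available in recent Hadoop versions) are assumptions.
{noformat}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;

public class KeytabLocalizer {
  /** Builds the localResources entry for a keytab already copied to HDFS. */
  public static Map<String, LocalResource> keytabResource(Configuration conf,
      Path keytabOnHdfs) throws Exception {
    FileSystem fs = keytabOnHdfs.getFileSystem(conf);
    FileStatus status = fs.getFileStatus(keytabOnHdfs);
    LocalResource resource = LocalResource.newInstance(
        URL.fromPath(keytabOnHdfs),           // where the NodeManager downloads from
        LocalResourceType.FILE,               // a plain file, not an archive
        LocalResourceVisibility.APPLICATION,  // visible to this application only
        status.getLen(),
        status.getModificationTime());
    Map<String, LocalResource> resources = new HashMap<>();
    // The map key is the file name the keytab will have in the container directory.
    resources.put("hive.keytab", resource);
    return resources;
  }
}
{noformat}
The returned map would be passed to the ContainerLaunchContext of the AM/daemon containers, and the keytab configs could then point at the localized file name instead of an absolute local path.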



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21920) Extract command authorisation from the Driver

2019-06-25 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872191#comment-16872191
 ] 

Hive QA commented on HIVE-21920:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 7 new + 186 unchanged - 4 
fixed = 193 total (was 190) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} ql generated 0 new + 2253 unchanged - 1 fixed = 2253 
total (was 2254) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17723/dev-support/hive-personality.sh
 |
| git revision | master / 84b5ba7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17723/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17723/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Extract command authorisation from the Driver
> -
>
> Key: HIVE-21920
> URL: https://issues.apache.org/jira/browse/HIVE-21920
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.1.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21920.01.patch
>
>
> There are ~400 lines of command authorisation code in the Driver class, which 
> are also used by ExplainTask. Extract them into a separate package under 
> org.apache.hadoop.hive.ql.security.authorization.
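
As an illustration only - the real class, package and method names are whatever the patch introduces - the extraction could look roughly like this, with Driver and ExplainTask both delegating to the new class:
{noformat}
// Assumed sub-package; the patch decides the final location.
package org.apache.hadoop.hive.ql.security.authorization.command;

import java.util.Set;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.hooks.ReadEntity;
import org.apache.hadoop.hive.ql.hooks.WriteEntity;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.session.SessionState;

public final class CommandAuthorizer {
  private CommandAuthorizer() {
  }

  /**
   * Checks that the current user may run the analyzed command.
   * Inputs/outputs are the entities collected during semantic analysis.
   */
  public static void doAuthorization(HiveConf conf, SessionState ss,
      Set<ReadEntity> inputs, Set<WriteEntity> outputs) throws HiveException {
    // The ~400 lines currently living in Driver would move here, dispatching
    // to the configured HiveAuthorizer / HiveAuthorizationProvider.
  }
}
{noformat}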



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21922) Allow keytabs to be reused in LLAP yarn applications through Yarn localization

2019-06-25 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita reassigned HIVE-21922:
-


> Allow keytabs to be reused in LLAP yarn applications through Yarn localization
> --
>
> Key: HIVE-21922
> URL: https://issues.apache.org/jira/browse/HIVE-21922
> Project: Hive
>  Issue Type: New Feature
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>
> In secure clusters LLAP has to be able to reach keytab files for Kerberos 
> login.
> Currently the _hive.llap.task.scheduler.am.registry.keytab.file_ and 
> _hive.llap.daemon.keytab.file_ configs are used to define the path of such 
> keytabs on the Tez AM and LLAP daemon side respectively. Both assume local 
> file system paths only - hence all nodes in the LLAP cluster (even those that 
> eventually don't end up executing a daemon) have to have Hive's keytab 
> preinstalled on them.
> The above corresponds to the 
> [Pre-installed_Keytabs_for_AM_and_containers|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Pre-installed_Keytabs_for_AM_and_containers]
>  strategy.
> Another approach is 
> [Keytabs_for_AM_and_containers_distributed_via_YARN|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html#Keytabs_for_AM_and_containers_distributed_via_YARN],
>  where we rely on HDFS and Yarn resource localization, so no prior keytab 
> distribution is required. I intend to make this strategy an option for 
> Hive-LLAP in this jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?focusedWorklogId=266473&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-266473
 ]

ASF GitHub Bot logged work on HIVE-21907:
-

Author: ASF GitHub Bot
Created on: 25/Jun/19 09:17
Start Date: 25/Jun/19 09:17
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #685: HIVE-21907: Add a 
new LlapDaemon Management API method to set the daemon capacity
URL: https://github.com/apache/hive/pull/685#discussion_r297087947
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/ContainerRunnerImpl.java
 ##
 @@ -40,6 +40,7 @@
 import org.apache.hadoop.hive.llap.daemon.QueryFailedHandler;
 import org.apache.hadoop.hive.llap.daemon.SchedulerFragmentCompletingListener;
 import org.apache.hadoop.hive.llap.daemon.impl.LlapTokenChecker.LlapTokenInfo;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
 
 Review comment:
   Removed
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 266473)
Time Spent: 50m  (was: 40m)

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP 
> node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting 
> them to 0.
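
For illustration, a plain-Java sketch of the capacity call described above; the actual API in the attached patch goes through the LLAP protobuf management protocol, so the interface and method names below are assumptions.
{noformat}
/**
 * Illustrative only: the kind of management call the ticket asks for.
 */
public interface LlapDaemonCapacityManagement {

  /**
   * Dynamically resizes a daemon. Setting both values to 0 corresponds to the
   * "disable the node" case: no new fragments are accepted or queued.
   */
  void setCapacity(int numExecutors, int waitQueueSize);
}
{noformat}
A cluster management tool could then call setCapacity(0, 0) to drain a node before maintenance and restore the original values afterwards.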



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21907) Add a new LlapDaemon Management API method to set the daemon capacity

2019-06-25 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21907:
--
Attachment: HIVE-21907.2.patch

> Add a new LlapDaemon Management API method to set the daemon capacity
> -
>
> Key: HIVE-21907
> URL: https://issues.apache.org/jira/browse/HIVE-21907
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21907.2.patch, HIVE-21907.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Add a new method to the LlapManagementProtocol API which can disable an LLAP 
> node.
> It would be even better if we could dynamically set the number of executors 
> and the size of the wait queue. This way we can disable the node by setting 
> them to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

