[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Attachment: HIVE-22087.6.patch

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.
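
As a rough, non-authoritative sketch of the translation described above (the field and method names below are hypothetical, not the HMS Thrift API or the attached patch): a processor without ACID-write capability gets the external location back, while an ACID-capable writer sees the database location unchanged.

{code:java}
import java.util.Set;

// Illustrative sketch only -- field and method names are hypothetical,
// not the actual HMS Thrift objects or the HIVE-22087 patch.
public class GetDatabaseTranslation {

  static class Db {
    String locationUri;          // managed warehouse location
    String externalLocationUri;  // external warehouse location
  }

  /** Swap the returned location to the external path for non-ACID writers. */
  static Db translateForProcessor(Db db, Set<String> processorCapabilities) {
    boolean canWriteManaged = processorCapabilities.stream()
        .anyMatch(c -> c.toUpperCase().contains("ACIDWRITE"));
    if (!canWriteManaged) {
      // Processor cannot write to the managed warehouse: expose the external location.
      db.locationUri = db.externalLocationUri;
    }
    // With any AcidWrite capability the location is left unchanged.
    return db;
  }
}
{code}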



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Status: Patch Available  (was: Open)

Fixing test failures.

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Status: Open  (was: Patch Available)

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat reassigned HIVE-22110:
-


> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Fix For: 4.0.0
>
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.
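
A hedged illustration of the ordering the description asks for (the types and method names below are made up for the sketch, not the real ReplChangeManager API):

{code:java}
// Illustrative only -- shows the "initialize before use" ordering described above.
public class ReplDumpSketch {

  interface ChangeManager {
    String encodeFileUri(String fileUri, String checksum);
  }

  private static ChangeManager cm;

  /** Hypothetical init hook; the real code would pass the metastore Configuration. */
  static synchronized void initChangeManager(ChangeManager instance) {
    if (cm == null) {
      cm = instance;
    }
  }

  static String dumpFileEntry(String fileUri, String checksum) {
    if (cm == null) {
      throw new IllegalStateException("ReplChangeManager not initialized before REPL DUMP");
    }
    // encodeFileUri() needs cmroot/checksum handling, hence the init above.
    return cm.encodeFileUri(fileUri, checksum);
  }
}
{code}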



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907002#comment-16907002
 ] 

Hive QA commented on HIVE-22087:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
30s{color} | {color:blue} standalone-metastore/metastore-common in master has 
32 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
15s{color} | {color:blue} standalone-metastore/metastore-server in master has 
180 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 2 new + 206 unchanged - 0 fixed = 208 total (was 206) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 33 new + 800 unchanged - 8 fixed = 833 total (was 808) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 4 new + 139 
unchanged - 1 fixed = 143 total (was 140) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 179 unchanged - 1 fixed = 180 total (was 180) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} metastore-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} standalone-metastore_metastore-server generated 0 
new + 25 unchanged - 1 fixed = 25 total (was 26) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} hive-unit in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  instanceof will always return true for all non-null values in 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_req(GetDatabaseRequest), 
since all RuntimeException are instances of RuntimeException.  At 
HiveMetaStore.java:[line 1556] |
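
For context, the flagged pattern is the classic always-true {{instanceof}} inside a catch block; a minimal standalone reproduction (not the actual HiveMetaStore code) is:

{code:java}
// Minimal reproduction of the FindBugs warning above, not the actual HiveMetaStore code.
public class InstanceofAlwaysTrue {
  static void getDatabaseReq() {
    try {
      throw new RuntimeException("boom");
    } catch (RuntimeException e) {
      // FindBugs: "instanceof will always return true" -- the catch type already
      // guarantees e is a RuntimeException, so this test is redundant.
      if (e instanceof RuntimeException) {
        throw e;
      }
    }
  }
}
{code}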
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 

[jira] [Commented] (HIVE-22109) Hive.renamePartition expects catalog name to be set instead of using default

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906935#comment-16906935
 ] 

Hive QA commented on HIVE-22109:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18332/dev-support/hive-personality.sh
 |
| git revision | master / fba9c20 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18332/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive.renamePartition expects catalog name to be set instead of using default
> 
>
> Key: HIVE-22109
> URL: https://issues.apache.org/jira/browse/HIVE-22109
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22109.patch
>
>
> This behavior is inconsistent with other APIs in this class, which use the 
> default catalog name set in the HiveConf when the catalog is null on the Table 
> object.
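
The consistent behavior referred to above is roughly the following fallback; this is a sketch with hypothetical helper names, not the actual Hive.renamePartition signature:

{code:java}
// Sketch of the "fall back to the default catalog" convention described above;
// names are illustrative, not the exact Hive.renamePartition code.
public class CatalogFallbackSketch {

  static String resolveCatalog(String catalogFromTable, String defaultCatalogFromConf) {
    // Other Hive APIs use the HiveConf default when the Table carries no catalog name;
    // renamePartition should do the same instead of requiring the caller to set it.
    return (catalogFromTable != null && !catalogFromTable.isEmpty())
        ? catalogFromTable
        : defaultCatalogFromConf;
  }
}
{code}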



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22109) Hive.renamePartition expects catalog name to be set instead of using default

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906974#comment-16906974
 ] 

Hive QA commented on HIVE-22109:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977552/HIVE-22109.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16735 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18332/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18332/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18332/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977552 - PreCommit-HIVE-Build

> Hive.renamePartition expects catalog name to be set instead of using default
> 
>
> Key: HIVE-22109
> URL: https://issues.apache.org/jira/browse/HIVE-22109
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22109.patch
>
>
> This behavior is inconsistent with other APIs in this class where it uses the 
> default catalog name set in the HiveConf when catalog is null on the Table 
> object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907029#comment-16907029
 ] 

Hive QA commented on HIVE-22087:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977558/HIVE-22087.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16737 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testListener 
(batchId=229)
org.apache.hadoop.hive.ql.security.TestAuthorizationPreEventListener.testListener
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18333/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18333/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18333/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977558 - PreCommit-HIVE-Build

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22100) Hive generates a add partition event with empty partition list

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-22100:


Assignee: Naveen Gangam

> Hive generates a add partition event with empty partition list
> --
>
> Key: HIVE-22100
> URL: https://issues.apache.org/jira/browse/HIVE-22100
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Naveen Gangam
>Priority: Major
>
> If the user issues an {{alter table  add if not exists partition 
> }} and the partition already exists, no partition is 
> added. However, the metastore still generates an {{ADD_PARTITION}} event with 
> an empty partition list. An {{alter table  drop if exists partition 
> }} does not generate a {{DROP_PARTITION}} event when the 
> partition does not exist.
> This behavior is inconsistent and misleading. The metastore should not generate 
> such add_partition events.
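
The behavior the description asks for amounts to skipping notification when nothing was added. A hedged sketch, with a hypothetical listener interface rather than the real HMSHandler code:

{code:java}
import java.util.List;

// Hedged sketch of the suggested behaviour, not the actual HMSHandler code:
// only fire ADD_PARTITION when at least one partition was actually created.
public class AddPartitionEventSketch {

  interface EventNotifier {
    void fireAddPartitionEvent(List<String> newPartitions);
  }

  static void maybeNotify(EventNotifier notifier, List<String> newPartitions) {
    if (newPartitions == null || newPartitions.isEmpty()) {
      // "add if not exists" hit an existing partition: nothing added, no event,
      // mirroring how "drop if exists" skips DROP_PARTITION for missing partitions.
      return;
    }
    notifier.fireAddPartitionEvent(newPartitions);
  }
}
{code}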



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907226#comment-16907226
 ] 

Hive QA commented on HIVE-22110:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977590/HIVE-22110.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16735 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18335/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18335/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18335/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977590 - PreCommit-HIVE-Build

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22110.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907123#comment-16907123
 ] 

Hive QA commented on HIVE-13457:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} service: The patch generated 0 new + 39 unchanged - 
1 fixed = 39 total (was 40) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18334/dev-support/hive-personality.sh
 |
| git revision | master / fba9c20 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: service U: service |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18334/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs like admin tools or Hue could access and display this information as 
> well. Hence, we will create some REST endpoints to expose it.
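
As a generic illustration only (this is not the attached patch and not the actual HS2 endpoint paths), a minimal read-only monitoring endpoint that another UI could poll might look like:

{code:java}
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Generic illustration only -- not the HIVE-13457 patch or the real HS2 endpoints;
// it just shows the shape of a read-only monitoring endpoint other UIs could poll.
public class MonitoringEndpointSketch {
  public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);
    server.createContext("/api/v1/queries", exchange -> {
      // In HS2 this would serialize the same data the web UI shows (open sessions,
      // running queries, ...); here we return a fixed JSON stub.
      byte[] body = "{\"openSessions\":0,\"runningQueries\":[]}"
          .getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().add("Content-Type", "application/json");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
  }
}
{code}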



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22111) Materialized view based on replicated table might not get refreshed

2019-08-14 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907177#comment-16907177
 ] 

Peter Vary commented on HIVE-22111:
---

CC: [~jcamachorodriguez], [~sankarh]

> Materialized view based on replicated table might not get refreshed
> ---
>
> Key: HIVE-22111
> URL: https://issues.apache.org/jira/browse/HIVE-22111
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, repl
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
>
> Consider the following scenario:
> * create a base table which we replicate
> * create a materialized view in the target hive based on the base table
> * modify (delete/update) the base table in the source hive
> * replicate the changes (delete/update) to the target hive
> * query the materialized view in the target hive
>  
> We do not refresh the data, since when the transaction is created by 
> replication we set ctc_update_delete to 'N'.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907192#comment-16907192
 ] 

Hive QA commented on HIVE-22110:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18335/dev-support/hive-personality.sh
 |
| git revision | master / fba9c20 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18335/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22110.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22111) Materialized view based on replicated table might not get refreshed

2019-08-14 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907221#comment-16907221
 ] 

Peter Vary commented on HIVE-22111:
---

In the TxnHandler.commitTxn method when we store the new commit generated by a 
replication event we do this:
{code:java}
  s = "insert into COMPLETED_TXN_COMPONENTS (ctc_txnid, ctc_database, " 
+
  "ctc_table, ctc_partition, ctc_writeid, ctc_update_delete) 
select tc_txnid," +
  " tc_database, tc_table, tc_partition, tc_writeid, '" + 
isUpdateDelete +
  "' from TXN_COMPONENTS where tc_txnid = " + txnid +
  //we only track compactor activity in TXN_COMPONENTS to handle 
the case where the
  //compactor txn aborts - so don't bother copying it to 
COMPLETED_TXN_COMPONENTS
  " AND tc_operation_type <> " + 
quoteChar(OperationType.COMPACT.sqlConst);
{code}
See: 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L1227-L1233]

In case of replication {{isUpdateDelete}} is always 'N'.

{{TxnHandler.getMaterializationInvalidationInfo}} filters out components based 
on {{ctc_update_delete}}.
{code:java}
  query.append("select ctc_update_delete from COMPLETED_TXN_COMPONENTS 
where ctc_update_delete='Y' AND (");
{code}
See: 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L2021]

As far as I understand, this means the Materialized View misses the change: it 
is not refreshed, so queries against it might return wrong results.

We do not have correct update/delete information in the case of replication, so 
the quick fix would be to set {{isUpdateDelete}} to 'Y' whenever the commit comes 
from a replication event. If everything works as I expect, this means we might 
end up regenerating the Materialized View unnecessarily on the target cluster, 
but we could ensure correct results even in this edge case. 
[~jcamachorodriguez]: Would this be an acceptable tradeoff?
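
A minimal sketch of that quick fix, assuming a flag that marks replicated commits (names are hypothetical, not the actual TxnHandler change):

{code:java}
// Hedged sketch of the quick fix discussed above, not the actual TxnHandler change:
// when the commit comes from replication we cannot tell whether it carried
// updates/deletes, so conservatively record 'Y' to keep materialized views correct.
public class CommitTxnSketch {

  static char resolveUpdateDeleteFlag(boolean isReplicatedCommit, boolean sawUpdateOrDelete) {
    if (isReplicatedCommit) {
      // May force an unnecessary MV rebuild on the target, but never a stale MV.
      return 'Y';
    }
    return sawUpdateOrDelete ? 'Y' : 'N';
  }
}
{code}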

Thanks,
 Peter

> Materialized view based on replicated table might not get refreshed
> ---
>
> Key: HIVE-22111
> URL: https://issues.apache.org/jira/browse/HIVE-22111
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, repl
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
>
> Consider the following scenario:
> * create a base table which we replicate
> * create a materialized view in the target hive based on the base table
> * modify (delete/update) the base table in the source hive
> * replicate the changes (delete/update) to the target hive
> * query the materialized view in the target hive
>  
> We do not refresh the data, since when the transaction is created by 
> replication we set ctc_update_delete to 'N'.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907607#comment-16907607
 ] 

Hive QA commented on HIVE-22112:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977628/HIVE-22112.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16739 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18338/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18338/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18338/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977628 - PreCommit-HIVE-Build

> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-22112.patch
>
>
> The Jackson version was updated in the top-level pom via HIVE-22089.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907640#comment-16907640
 ] 

Jason Dere commented on HIVE-22113:
---

+1 pending tests

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 [Wait-Queue-Scheduler-0 ()] : [Wait-Queue-Scheduler-0,5,main]
> java.lang.RuntimeException: ..._1563528877295_18872_3728_01_03_0't
>   at ...$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at ...$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at ...$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at ...(Thread.java:748) [?:1.8.0_161]
> {noformat}
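
The fix implied by the description above is defensive handling around the cleanup call. A hedged sketch with hypothetical interface and method names, not the attached patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch only -- illustrates "don't let an AMReporter cleanup failure kill LLAP";
// the interface and method names here are hypothetical, not the real AMReporter API.
public class TaskCleanupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(TaskCleanupSketch.class);

  interface Reporter {
    void removeTaskAttempt(String taskAttemptId); // may throw RuntimeException if not found
  }

  static void safeRemove(Reporter amReporter, String taskAttemptId) {
    try {
      amReporter.removeTaskAttempt(taskAttemptId);
    } catch (RuntimeException e) {
      // A missing task attempt is not fatal for the daemon: log and keep serving.
      LOG.warn("Could not remove task attempt {} from AMReporter", taskAttemptId, e);
    }
  }
}
{code}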



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if too many Table/partitions are eligible for compaction

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907661#comment-16907661
 ] 

Hive QA commented on HIVE-22081:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 4 new + 24 unchanged - 1 fixed 
= 28 total (was 25) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18340/dev-support/hive-personality.sh
 |
| git revision | master / 71605e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18340/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18340/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if too 
> many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs several checks in a 
> for loop before requesting compaction for the eligible ones. Although the 
> Initiator thread is configured to run at a 5-minute interval by default, with 
> many objects it keeps on running because these checks are IO intensive and hog 
> CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects up front, based 
> on the conditions we currently check within the loop.
> 2. Do the compaction-type determination as an async call using a future (this is 
> where we do the FileSystem calls).
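
A rough sketch of the two proposed changes (hypothetical interface and method names; the actual patch may differ):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Hedged sketch of the two proposed changes, not the actual Initiator patch:
// (1) pre-filter the candidate list cheaply, (2) run the FileSystem-heavy
// "which compaction type?" check asynchronously via futures.
public class InitiatorSketch {

  enum CompactionType { MAJOR, MINOR, NONE }

  interface CandidateChecker {
    boolean isEligible(String tablePartition);                     // cheap metadata check
    CompactionType determineCompactionType(String tablePartition); // IO-heavy check
  }

  static Map<String, CompactionType> findCompactions(List<String> candidates,
                                                     CandidateChecker checker,
                                                     ExecutorService pool)
      throws InterruptedException, ExecutionException {
    // (1) Pass fewer objects to the expensive stage by filtering up front.
    List<String> eligible = new ArrayList<>();
    for (String c : candidates) {
      if (checker.isEligible(c)) {
        eligible.add(c);
      }
    }

    // (2) Fan the IO-heavy checks out to the pool instead of doing them inline.
    Map<String, Future<CompactionType>> pending = new HashMap<>();
    for (String c : eligible) {
      Callable<CompactionType> task = () -> checker.determineCompactionType(c);
      pending.put(c, pool.submit(task));
    }

    Map<String, CompactionType> toCompact = new HashMap<>();
    for (Map.Entry<String, Future<CompactionType>> e : pending.entrySet()) {
      CompactionType type = e.getValue().get();
      if (type != CompactionType.NONE) {
        toCompact.put(e.getKey(), type);
      }
    }
    return toCompact;
  }
}
{code}

Pushing the FileSystem-heavy check onto a pool keeps each Initiator cycle bounded by cheap metadata work, which is the point of the proposal.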



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if too many Table/partitions are eligible for compaction

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907679#comment-16907679
 ] 

Hive QA commented on HIVE-22081:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977629/HIVE-21917.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16739 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18340/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18340/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18340/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977629 - PreCommit-HIVE-Build

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if too 
> many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-22081.patch
>
>
> If Automatic Compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs several checks in a 
> for loop before requesting compaction for the eligible ones. Although the 
> Initiator thread is configured to run at a 5-minute interval by default, with 
> many objects it keeps on running because these checks are IO intensive and hog 
> CPU.
> In the proposed changes, I am planning to:
> 1. Pass fewer objects to the for loop by filtering out objects up front, based 
> on the conditions we currently check within the loop.
> 2. Do the compaction-type determination as an async call using a future (this is 
> where we do the FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907736#comment-16907736
 ] 

Hive QA commented on HIVE-22114:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977656/HIVE-22114.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18342/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18342/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18342/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977656 - PreCommit-HIVE-Build

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> 

[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907700#comment-16907700
 ] 

Hive QA commented on HIVE-22113:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977638/HIVE-22113.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 16739 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18341/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18341/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18341/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977638 - PreCommit-HIVE-Build

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 [Wait-Queue-Scheduler-0 ()] : [Wait-Queue-Scheduler-0,5,main]
> java.lang.RuntimeException: ..._1563528877295_18872_3728_01_03_0't
>   at ...$AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...(TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at ...$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at ...$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at ...(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at ...$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at ...(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907707#comment-16907707
 ] 

Hive QA commented on HIVE-22114:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18342/dev-support/hive-personality.sh
 |
| git revision | master / ff463aa |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18342/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 

[jira] [Commented] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907761#comment-16907761
 ] 

Hive QA commented on HIVE-22107:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18343/dev-support/hive-personality.sh
 |
| git revision | master / f9bd589 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18343/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}
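
For context, the expected result of the query above is a single row with eid = 'empno' and a.id = NULL, since only the NULL id survives the NOT EXISTS filter; in the output shown the two columns appear swapped, which is the wrong schema the summary refers to.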



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907638#comment-16907638
 ] 

Hive QA commented on HIVE-20442:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977631/HIVE-20442.4-branch-1.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 155 failed/errored test(s), 7897 tests 
executed
*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=339)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=370)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=349)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=355)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=393)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=369)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=359)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=358)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=378)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=357)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=327)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=336)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=331)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=364)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=396)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=397)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=385)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=373)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=372)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=375)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=351)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=341)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=354)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=399)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=356)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=387)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) 
(batchId=398)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed 
out) (batchId=392)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=389)
TestJdbcWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestJdbcWithMiniKdcCookie - did not produce a TEST-*.xml file (likely timed 
out) (batchId=424)
TestJdbcWithMiniKdcSQLAuthBinary - did not produce a TEST-*.xml file (likely 
timed out) (batchId=422)
TestJdbcWithMiniKdcSQLAuthHttp - did not produce a TEST-*.xml file (likely 
timed out) (batchId=427)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=388)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely 
timed out) (batchId=394)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=395)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=362)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=360)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=348)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely timed 
out) 

[jira] [Updated] (HIVE-22109) Hive.renamePartition expects catalog name to be set instead of using default

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22109:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Fix has been committed to master. Thank you for the review [~thejas]. Closing 
the jira.

> Hive.renamePartition expects catalog name to be set instead of using default
> 
>
> Key: HIVE-22109
> URL: https://issues.apache.org/jira/browse/HIVE-22109
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22109.patch
>
>
> This behavior is inconsistent with other APIs in this class where it uses the 
> default catalog name set in the HiveConf when catalog is null on the Table 
> object.
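
A minimal sketch of that fallback, with illustrative names only; the actual change lives in Hive.renamePartition and reads the default catalog from HiveConf, which this standalone snippet does not pull in:

{code:java}
// Illustrative helper only, not the real Hive.renamePartition code.
public class CatalogFallbackSketch {
  static String catalogOrDefault(String tableCatalog, String confDefaultCatalog) {
    // Fall back to the conf-level default when the Table object carries no catalog.
    return (tableCatalog == null || tableCatalog.isEmpty()) ? confDefaultCatalog : tableCatalog;
  }

  public static void main(String[] args) {
    System.out.println(catalogOrDefault(null, "hive"));    // hive  -> falls back to the conf default
    System.out.println(catalogOrDefault("spark", "hive")); // spark -> explicit catalog wins
  }
}
{code}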



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20057) For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute change not reflecting for non-CAPS

2019-08-14 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907778#comment-16907778
 ] 

Alan Gates commented on HIVE-20057:
---

Patch also committed to branch-3.

> For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute 
> change not reflecting for non-CAPS
> 
>
> Key: HIVE-20057
> URL: https://issues.apache.org/jira/browse/HIVE-20057
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: All Versions
>Reporter: Anirudh
>Assignee: Anirudh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.1.0
>
> Attachments: hive20057.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Hive EXTERNAL table shown as MANAGED after conversion using 
> {code} ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='True')
> {code}
>  
> The DESCRIBE FORMATTED shows:
> {code}
> Table Type:            MANAGED_TABLE
> Table Parameters:
>                                EXTERNAL           True
> {code}
>  
> This is actually an EXTERNAL table but is shown wrongly, as 'True' was used in 
> place of 'TRUE' in the ALTER statement.
> Issue explained here: 
> [StackOverflow - Hive Table is MANAGED or 
> EXTERNAL|https://stackoverflow.com/questions/51103317/hive-table-is-managed-or-external/51142873#51142873]
>  
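
A minimal sketch of the case-insensitive check this calls for; the class and method names are illustrative, not the actual metastore alter-table code:

{code:java}
import java.util.Collections;
import java.util.Map;

// Illustrative only: shows the property comparison, not the metastore conversion logic.
public class ExternalFlagSketch {
  static boolean isExternalRequested(Map<String, String> tblProps) {
    String v = tblProps.get("EXTERNAL");
    // Compare case-insensitively so 'TRUE', 'True' and 'true' all convert the table.
    return v != null && v.equalsIgnoreCase("TRUE");
  }

  public static void main(String[] args) {
    System.out.println(isExternalRequested(Collections.singletonMap("EXTERNAL", "True"))); // true
    System.out.println(isExternalRequested(Collections.singletonMap("EXTERNAL", "no")));   // false
  }
}
{code}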



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Attachment: HIVE-22107.2.patch

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Status: Patch Available  (was: Open)

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22107:
---
Status: Open  (was: Patch Available)

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22105) Update ORC to 1.5.6.

2019-08-14 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907658#comment-16907658
 ] 

Alan Gates commented on HIVE-22105:
---

I get errors for the following qfile tests with TestMiniLlapLocalCliDriver:

acid_vectorization_original.q,
change_allowincompatible_vectorization_false_date.q,
default_constraint.q,
deleteAnalyze.q,
enforce_constraint_notnull.q,
extrapolate_part_stats_partial_ndv.q,
materialized_view_create.q,
materialized_view_create_rewrite.q,
materialized_view_create_rewrite_4.q,
materialized_view_create_rewrite_5.q,
materialized_view_create_rewrite_dummy.q,
materialized_view_create_rewrite_multi_db.q,
materialized_view_create_rewrite_time_window.q,
materialized_view_describe.q,
orc_merge11.q,
orc_merge9.q

> Update ORC to 1.5.6.
> 
>
> Key: HIVE-22105
> URL: https://issues.apache.org/jira/browse/HIVE-22105
> Project: Hive
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ORC has had some important fixes in the 1.5 branch and they should be picked 
> up by Hive.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22114:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at 
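
A minimal sketch of the general pattern involved here, under the assumption that the commit path should treat a missing partition directory as having nothing to commit; the class and method names are illustrative and this is not the HIVE-22114.1.patch itself:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: not Hive's file-sink commit code.
public class EmptyPartitionCommitSketch {
  // Treat a missing output directory as "nothing to commit" instead of failing the job;
  // this is the situation the empty insert-only bucket case runs into on S3.
  static FileStatus[] listOrEmpty(FileSystem fs, Path dir) throws IOException {
    try {
      if (!fs.exists(dir)) {
        return new FileStatus[0];
      }
      return fs.listStatus(dir);
    } catch (FileNotFoundException e) {
      // Directory disappeared between the check and the listing; still nothing to commit.
      return new FileStatus[0];
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    System.out.println(listOrEmpty(fs, new Path("/tmp/does-not-exist")).length); // 0
  }
}
{code}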

[jira] [Commented] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907622#comment-16907622
 ] 

Vineet Garg commented on HIVE-22112:


+1 LGTM

> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-22112.patch
>
>
> Jackson version was updated in the top-level pom via HIVE-22089; the disconnected poms need the same update.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22112:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22112.patch
>
>
> Jackson version was updated in the top-level pom via HIVE-22089; the disconnected poms need the same update.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907688#comment-16907688
 ] 

Hive QA commented on HIVE-22113:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} llap-server in master has 83 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 1 new + 47 unchanged 
- 0 fixed = 48 total (was 47) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18341/dev-support/hive-personality.sh
 |
| git revision | master / 71605e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18341/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18341/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> 

[jira] [Updated] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese updated HIVE-22113:
-
Attachment: HIVE-22113.1.patch

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$WaitQueueWorker.run(TaskExecutorService.java:396)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$RunnableAdapter.call(Executors.java:511)~[?:1.8.0_161]}}{{
> 
> at$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(InterruptibleTask.java:41)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(TrustedListenableFutureTask.java:77)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(ThreadPoolExecutor.java:1149)[?:1.8.0_161]}}{{
> 
> at$Worker.run(ThreadPoolExecutor.java:624)[?:1.8.0_161]}}{{
> at(Thread.java:748)[?:1.8.0_161]}}
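
A minimal sketch of containing that RuntimeException instead of letting it reach the executor's uncaught-exception handler; the types below are illustrative stand-ins, not the actual AMReporter or TaskRunnerCallable code:

{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative stand-ins for the LLAP cleanup path; not the real llap-server classes.
public class TaskCleanupSketch {
  private static final Logger LOG = Logger.getLogger(TaskCleanupSketch.class.getName());

  interface Reporter {
    void removeTaskAttempt(String attemptId); // may throw RuntimeException if the attempt is unknown
  }

  static void cleanUp(Reporter reporter, String attemptId) {
    try {
      reporter.removeTaskAttempt(attemptId);
    } catch (RuntimeException e) {
      // The attempt was already gone (or never registered): log and continue
      // rather than letting the exception propagate and shut the daemon down.
      LOG.log(Level.WARNING, "Could not remove task attempt " + attemptId, e);
    }
  }

  public static void main(String[] args) {
    Reporter failing = attemptId -> {
      throw new RuntimeException("attempt " + attemptId + " not found");
    };
    cleanUp(failing, "some-attempt-id"); // logs a warning; the JVM keeps running
  }
}
{code}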



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-14 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned HIVE-22115:
-


> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Avoid the creation and registration of the query-router logger if the HiveServer2 
> property below is set to false by the user:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}
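
A minimal sketch of the guard being proposed, assuming a boolean already resolved from HIVE_SERVER2_LOGGING_OPERATION_ENABLED; the real change would sit in HiveServer2's Log4j2 routing setup rather than java.util.logging:

{code:java}
import java.util.logging.Logger;

// Illustrative only: the real registration happens in HS2's Log4j2 configuration.
public class QueryRouterLoggerGuard {
  static Logger maybeCreateQueryRouter(boolean operationLoggingEnabled) {
    if (!operationLoggingEnabled) {
      // Property is false: skip building and registering the per-query routing logger.
      return null;
    }
    return Logger.getLogger("query-routing");
  }

  public static void main(String[] args) {
    System.out.println(maybeCreateQueryRouter(false));          // null -> nothing registered
    System.out.println(maybeCreateQueryRouter(true).getName()); // query-routing
  }
}
{code}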



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907797#comment-16907797
 ] 

Ashutosh Chauhan commented on HIVE-22114:
-

+1

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> 

[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22114:
---
Attachment: HIVE-22114.1.patch

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> 

[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22114:
---
Status: Patch Available  (was: Open)

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22114.1.patch
>
>
> Following insert query fails when all buckets are empty
> {code:sql}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> create table src1(name string, age int, gpa decimal(3,2));
> insert into src1 values("name", 56, 4);
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from src1 limit 0;
> {code}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> 

[jira] [Updated] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-14 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-22115:
--
Attachment: HIVE-22115.patch

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-22115.patch
>
>
> Avoid the creation and registration of the query-router logger if the HiveServer2 
> property below is set to false by the user:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-14 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-22115:
--
Status: Patch Available  (was: In Progress)

> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Avoid the creation and registration of the query-router logger if the HiveServer2 
> property below is set to false by the user:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work started] (HIVE-22115) Prevent the creation of query-router logger in HS2 as per property

2019-08-14 Thread slim bouguerra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22115 started by slim bouguerra.
-
> Prevent the creation of query-router logger in HS2 as per property
> --
>
> Key: HIVE-22115
> URL: https://issues.apache.org/jira/browse/HIVE-22115
> Project: Hive
>  Issue Type: Improvement
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> Avoid the creation and registration of the query-router logger if the HiveServer2 
> property below is set to false by the user:
> {code}
> HiveConf.ConfVars.HIVE_SERVER2_LOGGING_OPERATION_ENABLED
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22107) Correlated subquery producing wrong schema

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907784#comment-16907784
 ] 

Hive QA commented on HIVE-22107:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977657/HIVE-22107.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 16739 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_notexists] 
(batchId=99)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[external_jdbc_table_perf]
 (batchId=184)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi]
 (batchId=165)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=120)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query10] 
(batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query16] 
(batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query35] 
(batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query69] 
(batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query94] 
(batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query10]
 (batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query16]
 (batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query35]
 (batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query69]
 (batchId=296)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query94]
 (batchId=296)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18343/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18343/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18343/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977657 - PreCommit-HIVE-Build

> Correlated subquery producing wrong schema
> --
>
> Key: HIVE-22107
> URL: https://issues.apache.org/jira/browse/HIVE-22107
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22107.1.patch, HIVE-22107.2.patch
>
>
> *Repro*
> {code:sql}
> create table test(id int, name string,dept string);
> insert into test values(1,'a','it'),(2,'b','eee'),(NULL, 'c', 'cse');
> select distinct 'empno' as eid, a.id from test a where NOT EXISTS (select 
> c.id from test c where a.id=c.id);
> {code}
> {code}
> +-------+--------+
> |  eid  |  a.id  |
> +-------+--------+
> | NULL  | empno  |
> +-------+--------+
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907819#comment-16907819
 ] 

Hive QA commented on HIVE-22113:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} llap-server in master has 83 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18344/dev-support/hive-personality.sh
 |
| git revision | master / a501e6e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18344/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$WaitQueueWorker.run(TaskExecutorService.java:396)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$RunnableAdapter.call(Executors.java:511)~[?:1.8.0_161]}}{{
> 
> 

[jira] [Commented] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907840#comment-16907840
 ] 

Hive QA commented on HIVE-22113:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977658/HIVE-22113.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16740 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets]
 (batchId=179)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18344/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18344/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18344/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977658 - PreCommit-HIVE-Build

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.1.patch, HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {noformat}
> 2019-08-08T23:34:39,748 [Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]
> java.lang.RuntimeException: _1563528877295_18872_3728_01_03_0't
>   at $AMNodeInfo.removeTaskAttempt(AMReporter.java:524) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at (AMReporter.java:243) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at (TaskRunnerCallable.java:384) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at (TaskExecutorService.java:739) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at $1100(TaskExecutorService.java:91) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at $WaitQueueWorker.run(TaskExecutorService.java:396) ~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]
>   at $RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_161]
>   at $TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at (InterruptibleTask.java:41) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at (TrustedListenableFutureTask.java:77) [hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]
>   at (ThreadPoolExecutor.java:1149) [?:1.8.0_161]
>   at $Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
>   at (Thread.java:748) [?:1.8.0_161]
> {noformat}
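
A minimal sketch of the handling this description calls for, assuming the failure surfaces from the AMReporter cleanup call on the wait-queue path shown above. The class and identifiers below are stand-ins, not the actual HIVE-22113 patch.

{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch only: treat a failed "remove task attempt" notification
// as a non-fatal, per-task condition instead of letting the RuntimeException
// escape the scheduler thread and shut the LLAP daemon down.
public class NonFatalTaskCleanup {
  private static final Logger LOG = Logger.getLogger("llap");

  // Stand-in for AMReporter/AMNodeInfo.removeTaskAttempt(), which throws when
  // the task attempt is not registered (as in the trace above).
  static void removeTaskAttempt(String attemptId) {
    throw new RuntimeException("Unknown task attempt " + attemptId);
  }

  public static void main(String[] args) {
    try {
      removeTaskAttempt("sample-task-attempt");
    } catch (RuntimeException e) {
      // The attempt is already gone; log and continue rather than propagate.
      LOG.log(Level.WARNING, "Could not remove task attempt, ignoring", e);
    }
  }
}
{code}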



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22114:
---
Summary: insert query for partitioned insert only table failing when all 
buckets are empty  (was: insert query for partitioned table failing when all 
buckets are empty, s3 storage location)

> insert query for partitioned insert only table failing when all buckets are 
> empty
> -
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
>
> Following insert query fails when all buckets are empty
> {noformat}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc;
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from studenttab10k limit 0;
> {noformat}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
>   at 

[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22114:
---
Description: 
Following insert query fails when all buckets are empty

{code:sql}
set hive.create.as.insert.only=true;
create table src_emptybucket_partitioned_1 (name string, age int, gpa 
decimal(3,2))
   partitioned by(year int)
   clustered by (age)
   sorted by (age)
   into 100 buckets
   stored as orc;
insert into table src_emptybucket_partitioned_1
   partition(year=2015)
   select * from studenttab10k limit 0;
{code}

Error:

{noformat}
ERROR : Job Commit failed with exception 
'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
 No such file or directory: 
s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
# org.apache.hadoop.hive.ql.metadata.HiveException: 
java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1880)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1922)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getMmDirectoryCandidates(Utilities.java:4185)
at 
org.apache.hadoop.hive.ql.exec.Utilities.handleMmTableFinalPath(Utilities.java:4386)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1397)
... 26 more

ERROR : FAILED: Execution Error, return code 3 from 
org.apache.hadoop.hive.ql.exec.tez.TezTask
{noformat}


  was:
Following insert query fails when all buckets are empty

{noformat}
create table src_emptybucket_partitioned_1 (name string, age int, gpa 
decimal(3,2))
   partitioned 

[jira] [Updated] (HIVE-22114) insert query for partitioned insert only table failing when all buckets are empty

2019-08-14 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22114:
---
Description: 
Following insert query fails when all buckets are empty

{code:sql}
create table src_emptybucket_partitioned_1 (name string, age int, gpa 
decimal(3,2))
   partitioned by(year int)
   clustered by (age)
   sorted by (age)
   into 100 buckets
   stored as orc tblproperties 
("transactional"="true", "transactional_properties"="insert_only");




create table src1(name string, age int, gpa decimal(3,2));
insert into src1 values("name", 56, 4);


insert into table src_emptybucket_partitioned_1
   partition(year=2015)
   select * from src1 limit 0;
{code}

Error:

{noformat}
ERROR : Job Commit failed with exception 
'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
 No such file or directory: 
s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
# org.apache.hadoop.hive.ql.metadata.HiveException: 
java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at 
org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1880)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1922)
at 
org.apache.hadoop.hive.ql.exec.Utilities.getMmDirectoryCandidates(Utilities.java:4185)
at 
org.apache.hadoop.hive.ql.exec.Utilities.handleMmTableFinalPath(Utilities.java:4386)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1397)
... 26 more

ERROR : FAILED: Execution Error, return code 3 from 
org.apache.hadoop.hive.ql.exec.tez.TezTask
{noformat}


  was:
Following insert query fails when all buckets are empty
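
A hedged sketch of the kind of guard the job-commit path needs for the failure above: when the insert writes zero rows, the partition directory of the insert-only (MM) table may never be created, so listing it during commit should tolerate a missing directory instead of failing with FileNotFoundException. The class and method names are illustrative, not the actual Utilities/FileSinkOperator change.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: list a partition directory if it exists, otherwise
// return an empty listing so an all-empty-buckets insert can still commit.
public class EmptyPartitionGuard {
  static FileStatus[] listPartitionIfPresent(Configuration conf, Path partitionDir)
      throws IOException {
    FileSystem fs = partitionDir.getFileSystem(conf);
    if (!fs.exists(partitionDir)) {
      return new FileStatus[0];   // nothing was written, nothing to finalize
    }
    return fs.listStatus(partitionDir);
  }
}
{code}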


[jira] [Updated] (HIVE-22068) Add more logging to notification cleaner and replication to track events

2019-08-14 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-22068:
--
Description: 
In repl load, update the status of the target database to the last event dumped so 
that repl status returns that and the next incremental dump can specify it as the 
event from which to start. Without that, repl status might return an old event, 
which might cause older events to be dumped again and/or a notification event 
missing error if the older events have already been cleaned by the cleaner.

While at it
 * Add more logging to DB notification listener cleaner thread
 ** The time when it considered cleaning, the interval and time before which 
events were cleared, the min and max id at that time
 ** how many events were cleared
 ** min and max id after the cleaning.
 * In REPL::START document the starting event, end event if specified and the 
maximum number of events, if specified.
 *

  was:
* Add more logging to DB notification listener cleaner thread
 ** The time when it considered cleaning, the interval and time before which 
events were cleared, the min and max id at that time
 ** how many events were cleared
 ** min and max id after the cleaning.
 * In REPL::START document the starting event, end event if specified and the 
maximum number of events, if specified.


> Add more logging to notification cleaner and replication to track events
> 
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old 
> event, which might cause older events to be dumped again and/or a notification 
> event missing error if the older events have already been cleaned by the 
> cleaner.
> While at it
>  * Add more logging to the DB notification listener cleaner thread (see the 
> sketch after this list)
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *
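
A minimal sketch (SLF4J, names assumed) of the cleaner-side logging described in the bullets above; it is not the actual DbNotificationListener cleaner code, just the shape of the messages.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: log the cleaning window plus the min/max event id and
// the number of events removed, before and after each cleaner run.
public class NotificationCleanerLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(NotificationCleanerLogging.class);

  void logCleanerRun(long intervalSecs, long olderThanEpochSecs,
      long minIdBefore, long maxIdBefore,
      long removed, long minIdAfter, long maxIdAfter) {
    LOG.info("Considering cleanup of notification events older than {} (interval {}s); "
        + "min/max event id before cleaning: {}/{}",
        olderThanEpochSecs, intervalSecs, minIdBefore, maxIdBefore);
    LOG.info("Cleaned {} notification events; min/max event id after cleaning: {}/{}",
        removed, minIdAfter, maxIdAfter);
  }
}
{code}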



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22068) Return the last event id dumped as repl status to avoid notification event missing error.

2019-08-14 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-22068:
--
Summary: Return the last event id dumped as repl status to avoid 
notification event missing error.  (was: Add more logging to notification 
cleaner and replication to track events)

> Return the last event id dumped as repl status to avoid notification event 
> missing error.
> -
>
> Key: HIVE-22068
> URL: https://issues.apache.org/jira/browse/HIVE-22068
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22068.01.patch, HIVE-22068.02.patch, 
> HIVE-22068.03.patch, HIVE-22068.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In repl load, update the status of the target database to the last event dumped 
> so that repl status returns that and the next incremental dump can specify it as 
> the event from which to start. Without that, repl status might return an old 
> event, which might cause older events to be dumped again and/or a notification 
> event missing error if the older events have already been cleaned by the 
> cleaner.
> While at it
>  * Add more logging to DB notification listener cleaner thread
>  ** The time when it considered cleaning, the interval and time before which 
> events were cleared, the min and max id at that time
>  ** how many events were cleared
>  ** min and max id after the cleaning.
>  * In REPL::START document the starting event, end event if specified and the 
> maximum number of events, if specified.
>  *



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907174#comment-16907174
 ] 

Hive QA commented on HIVE-13457:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977569/HIVE-13457.12.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16707 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=110)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18334/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18334/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18334/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977569 - PreCommit-HIVE-Build

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs, like admin tools or Hue, could access and display this information as 
> well. Hence, we will create some REST endpoints to expose this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-13457:
-
Attachment: HIVE-13457.12.patch

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs, like admin tools or Hue, could access and display this information as 
> well. Hence, we will create some REST endpoints to expose this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Szehon Ho (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907078#comment-16907078
 ] 

Szehon Ho commented on HIVE-13457:
--

More checkstyle fixes

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs, like admin tools or Hue, could access and display this information as 
> well. Hence, we will create some REST endpoints to expose this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-22110:
--
Attachment: HIVE-22110.01.patch
Status: Patch Available  (was: Open)

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22110.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.
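
A hedged sketch of the ordering the description asks for; the types and method names below are stand-ins rather than the real ReplChangeManager/ReplDumpTask API, so the only point being made is "initialize the change manager before the dump starts encoding file URIs".

{code:java}
// Hypothetical sketch: initialize the change manager first, then encode URIs.
public class DumpInitOrder {
  interface ChangeManager {
    String encodeFileUri(String fileUri, String checksum);
  }

  static ChangeManager initChangeManager() {
    // In Hive this would be ReplChangeManager initialization from the metastore
    // conf; a stub is enough here because the ordering is the whole point.
    return (uri, checksum) -> uri + "#cmroot=/cmroot&checksum=" + checksum;
  }

  public static void main(String[] args) {
    ChangeManager cm = initChangeManager();                 // 1. initialize
    String encoded = cm.encodeFileUri("hdfs:///warehouse/db/t1/f1", "abc123");
    System.out.println(encoded);                            // 2. dump encodes URIs
  }
}
{code}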



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22110:
--
Labels: pull-request-available  (was: )

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22110?focusedWorklogId=294676=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294676
 ]

ASF GitHub Bot logged work on HIVE-22110:
-

Author: ASF GitHub Bot
Created on: 14/Aug/19 10:50
Start Date: 14/Aug/19 10:50
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #752: 
HIVE-22110 : Initialize ReplChangeManager during dump.
URL: https://github.com/apache/hive/pull/752
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294676)
Time Spent: 10m
Remaining Estimate: 0h

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22111) Materialized view based on replicated table might not get refreshed

2019-08-14 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-22111:
-


> Materialized view based on replicated table might not get refreshed
> ---
>
> Key: HIVE-22111
> URL: https://issues.apache.org/jira/browse/HIVE-22111
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, repl
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Minor
>
> Consider the following scenario:
> * create a base table which we replicate
> * create a materialized view in the target hive based on the base table
> * modify (delete/update) the base table in the source hive
> * replicate the changes (delete/update) to the target hive
> * query the materialized view in the target hive
>  
> We do not refresh the data, since when the transaction is created by 
> replication we set ctc_update_delete to 'N'.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21550) TestObjectStore tests are flaky - A lock could not be obtained within the time requested

2019-08-14 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907246#comment-16907246
 ] 

Laszlo Bodor commented on HIVE-21550:
-

Thanks [~vgarg]. Looking at the current state of TestObjectStore:
https://github.com/apache/hive/blob/4510efd15f44cc4c217bbc65ad2147c14261bccc/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestObjectStore.java#L147
An NPE is thrown while calling a method on a recently created object; I'm not 
really sure how this is possible.

> TestObjectStore tests are flaky -  A lock could not be obtained within the 
> time requested
> -
>
> Key: HIVE-21550
> URL: https://issues.apache.org/jira/browse/HIVE-21550
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21550.01.patch, HIVE-21550.02.patch, 
> HIVE-21550.repro.patch, 
> TEST-230_UTBatch_standalone-metastore__metastore-server_20_tests-TEST-org.apache.hadoop.hive.metastore.TestObjectStore.xml,
>  maven-test.txt, org.apache.hadoop.hive.metastore.TestObjectStore-output.txt, 
> screenshot-builds.apache.org-2019.03.30-12-38-32.png, 
> surefire_derby_stacktrace.log
>
>
> found in HIVE-21396
> TestObjectStore contains 24 tests, but 14 of them failed, the same ones, 
> twice in a row
>  [https://builds.apache.org/job/PreCommit-HIVE-Build/16744/testReport]
>  [https://builds.apache.org/job/PreCommit-HIVE-Build/16774/testReport]
> {code:java}
> org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
>  (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
>  (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
>  (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
> (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
> org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
> (batchId=230)
> {code}
> all of the tests fail while initializing (see [^maven-test.txt]), dropping 
> all objects (TestObjectStore.setUp:141->dropAllStoreObjects:776)
> {code:java}
> SELECT DISTINCT 'org.apache.hadoop.hive.metastore.model.MPartition' AS 
> NUCLEUS_TYPE,A0.CREATE_TIME,A0.LAST_ACCESS_TIME,A0.PART_NAME,A0.WRITE_ID,A0.PART_ID,A0.PART_NAME
>  AS NUCORDER0 FROM PARTITIONS A0 LEFT OUTER JOIN TBLS B0 ON A0.TBL_ID = 
> B0.TBL_ID LEFT OUTER JOIN DBS C0 ON B0.DB_ID = C0.DB_ID WHERE B0.TBL_NAME = ? 
> AND C0."NAME" = ? AND C0.CTLG_NAME = ? ORDER BY NUCORDER0 FETCH NEXT 100 ROWS 
> ONLY
> {code}
> This looks like a deadlock or something similar; all the tests fail within 
> 2 min 0 sec, so an increased timeout wouldn't help here, I think.
> {code:java}
> javax.jdo.JDODataStoreException: Error executing SQL query "select 
> "PARTITIONS"."PART_ID" from "PARTITIONS" inner join "TBLS" on 
> "PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID" and "TBLS"."TBL_NAME" = ? inner join 
> "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID" and "DBS"."NAME" = ? where 
> "DBS"."CTLG_NAME" = ? order by "PART_NAME" asc". at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
>  ~[datanucleus-api-jdo-4.2.4.jar:?] at 
> org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) 
> ~[datanucleus-api-jdo-4.2.4.jar:?] at 
> org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) 
> ~[datanucleus-api-jdo-4.2.4.jar:?] at 
> org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils.executeWithArray(MetastoreDirectSqlUtils.java:61)
>  [classes/:?] at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1882)
>  [classes/:?] at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionIdsViaSqlFilter(MetaStoreDirectSql.java:759)
>  [classes/:?] at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitions(MetaStoreDirectSql.java:673)
>  [classes/:?] at 
> 

[jira] [Commented] (HIVE-22110) Initialize ReplChangeManager before starting actual dump

2019-08-14 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907263#comment-16907263
 ] 

Sankar Hariappan commented on HIVE-22110:
-

+1, LGTM

> Initialize ReplChangeManager before starting actual dump
> 
>
> Key: HIVE-22110
> URL: https://issues.apache.org/jira/browse/HIVE-22110
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22110.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> REPL DUMP calls ReplChangeManager.encodeFileUri() to add the cmroot and checksum 
> to the URL. This requires ReplChangeManager to be initialized, so initialize 
> ReplChangeManager when taking a dump.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-14 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-20683:

Attachment: HIVE-20683.8.patch

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, 
> HIVE-20683.6.patch, HIVE-20683.8.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (added via 
> https://github.com/apache/incubator-druid/pull/6222).
> Implementation details (see the sketch after this list):
> # Hive generates and passes the filters as part of 'filterExpr' in the TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During the execution phase, before sending the query to Druid, 
> DruidQueryBasedRecordReader will deserialize this filter, translate it 
> into a DruidDimFilter and add it to the existing DruidQuery. The Tez executor 
> already ensures that all the dynamic values are initialized by the time we 
> start reading results from the record reader. 
> # Explaining a Druid query also prints the query sent to Druid as 
> {{druid.json.query}}. We also need to make sure to update the Druid query 
> with the filters. During explain we do not have the actual values for the 
> dynamic values, so instead of the values we will print the dynamic expression 
> itself as part of the Druid query. 
> Note: this work needs Druid to be updated to version 0.13.0.
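
A hedged sketch of step 3 of the list above: read the serialized filter from the configuration, translate it, and attach it to the Druid query before it is sent. The interfaces and the translation body are placeholders, the conf key is an assumption mirroring Hive's filter-pushdown property, and the real work lives in DruidQueryBasedRecordReader.

{code:java}
import java.util.Map;

// Hypothetical sketch of the record-reader-side translation; DimFilter and
// DruidQuery are stand-ins for the real Druid/Calcite types.
public class DynamicFilterPushdown {
  interface DimFilter { }
  interface DruidQuery { DruidQuery withFilter(DimFilter filter); }

  // Serialized 'filterExpr' that Hive attached to the TableScan (conf key
  // assumed here for illustration).
  static String readSerializedFilter(Map<String, String> conf) {
    return conf.get("hive.io.filter.expr.serialized");
  }

  // Deserialize the Hive expression (dynamic BETWEEN min/max, bloom filter)
  // and map it to the equivalent Druid dimension filter; body elided.
  static DimFilter translate(String serializedExpr) {
    return new DimFilter() { };
  }

  static DruidQuery rewrite(DruidQuery query, Map<String, String> conf) {
    String expr = readSerializedFilter(conf);
    return expr == null ? query : query.withFilter(translate(expr));
  }
}
{code}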



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22109) Hive.renamePartition expects catalog name to be set instead of using default

2019-08-14 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907395#comment-16907395
 ] 

Naveen Gangam commented on HIVE-22109:
--

[~thejas] Could you please review? It is a minor fix.

> Hive.renamePartition expects catalog name to be set instead of using default
> 
>
> Key: HIVE-22109
> URL: https://issues.apache.org/jira/browse/HIVE-22109
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22109.patch
>
>
> This behavior is inconsistent with other APIs in this class where it uses the 
> default catalog name set in the HiveConf when catalog is null on the Table 
> object.
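
A small sketch of the intended fallback, assuming the usual metastore default-catalog helper; it is not the actual Hive.renamePartition change, and the conf key below is an assumption standing in for MetaStoreUtils.getDefaultCatalog(conf).

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: use the table's catalog if set, otherwise fall back to
// the default catalog from the configuration instead of failing.
public class CatalogFallback {
  static String resolveCatalog(String tableCatName, Configuration conf) {
    if (tableCatName != null && !tableCatName.isEmpty()) {
      return tableCatName;
    }
    // Key name assumed for illustration; Hive would call
    // MetaStoreUtils.getDefaultCatalog(conf) here.
    return conf.get("metastore.catalog.default", "hive");
  }
}
{code}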



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22114) insert query for partitioned table failing when all buckets are empty, s3 storage location

2019-08-14 Thread Aswathy Chellammal Sreekumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aswathy Chellammal Sreekumar reassigned HIVE-22114:
---


> insert query for partitioned table failing when all buckets are empty, s3 
> storage location
> --
>
> Key: HIVE-22114
> URL: https://issues.apache.org/jira/browse/HIVE-22114
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
>
> Following insert query fails when all buckets are empty
> {noformat}
> create table src_emptybucket_partitioned_1 (name string, age int, gpa 
> decimal(3,2))
>partitioned by(year int)
>clustered by (age)
>sorted by (age)
>into 100 buckets
>stored as orc;
> insert into table src_emptybucket_partitioned_1
>partition(year=2015)
>select * from studenttab10k limit 0;
> {noformat}
> Error:
> {noformat}
> ERROR : Job Commit failed with exception 
> 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException:
>  No such file or directory: 
> s3a://warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015)'
> # org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1403)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:798)
>   at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:803)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:590)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:327)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2335)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2002)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1674)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1372)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1366)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a:///warehouse/tablespace/managed/hive/src_emptybucket_partitioned/year=2015
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2805)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2694)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2587)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2388)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2367)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1880)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1922)
>   at 
> 

[jira] [Commented] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907554#comment-16907554
 ] 

Hive QA commented on HIVE-22087:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977625/HIVE-22087.7.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16733 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosExternalTables.org.apache.hadoop.hive.ql.parse.TestReplicationScenariosExternalTables
 (batchId=258)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18337/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18337/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18337/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977625 - PreCommit-HIVE-Build

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch, HIVE-22087.7.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907573#comment-16907573
 ] 

Hive QA commented on HIVE-22112:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18338/dev-support/hive-personality.sh
 |
| git revision | master / 71605e6 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18338/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore testutils/ptest2 U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18338/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-22112.patch
>
>
> was updated in top level pom via HIVE-22089



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) GenericUDFDateFormat can't handle Julian dates properly

2019-08-14 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Status: In Progress  (was: Patch Available)

> GenericUDFDateFormat can't handle Julian dates properly
> ---
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: backward-incompatible
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on 
> printing them according to the Gregorian calendar, causing a difference of 
> multiple days in some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) GenericUDFDateFormat can't handle Julian dates properly

2019-08-14 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Attachment: HIVE-22099.1.patch

> GenericUDFDateFormat can't handle Julian dates properly
> ---
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: backward-incompatible
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on 
> printing them according to the Gregorian calendar, causing a difference of 
> multiple days in some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-14 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Labels:   (was: backward-incompatible)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on 
> printing them according to the Gregorian calendar, causing a difference of 
> multiple days in some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-14 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Description: 
Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
handled improperly by date/timestamp UDFs.

E.g. the DateFormat UDF:

Although the dates are in the Julian calendar, the formatter insists on printing 
them according to the Gregorian calendar, causing a difference of multiple days 
in some cases:

{code:java}
beeline> select date_format('1001-01-05','dd---MM--');
+----------------+
|      _c0       |
+----------------+
| 30---12--1000  |
+----------------+{code}
 I've observed similar problems in the following UDFs:
 * add_months
 * date_format
 * day
 * month
 * months_between
 * weekofyear
 * year

 

  was:
Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
handled improperly by the DateFormat UDF:

Although the dates are in the Julian calendar, the formatter insists on printing 
them according to the Gregorian calendar, causing a difference of multiple days 
in some cases:

{code:java}
beeline> select date_format('1001-01-05','dd---MM--');
+----------------+
|      _c0       |
+----------------+
| 30---12--1000  |
+----------------+{code}
 

 


> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: backward-incompatible
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently, dates that belong to the Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. the DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on 
> printing them according to the Gregorian calendar, causing a difference of 
> multiple days in some cases (illustrated below):
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  
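
Not the Hive code path itself, but a self-contained illustration of the calendar mismatch described above: java.time interprets the literal on the proleptic Gregorian calendar, while the legacy SimpleDateFormat/GregorianCalendar stack switches to the Julian calendar before Oct 15, 1582, so the same instant ends up labelled six days apart.

{code:java}
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.util.Date;
import java.util.TimeZone;

public class CalendarMismatch {
  public static void main(String[] args) {
    // Proleptic Gregorian reading of the literal (the post-HIVE-20007 behaviour).
    LocalDate gregorian = LocalDate.parse("1001-01-05");
    long epochMillis = gregorian.toEpochDay() * 86_400_000L;   // UTC midnight

    // Legacy formatter backed by the hybrid Julian/Gregorian calendar: the very
    // same instant is rendered with its Julian label, six days earlier.
    SimpleDateFormat legacy = new SimpleDateFormat("yyyy-MM-dd");
    legacy.setTimeZone(TimeZone.getTimeZone("UTC"));
    System.out.println(legacy.format(new Date(epochMillis)));  // 1000-12-30
  }
}
{code}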



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-14 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22099:
--
Summary: Several date related UDFs can't handle Julian dates properly since 
HIVE-20007  (was: GenericUDFDateFormat can't handle Julian dates properly)

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: backward-incompatible
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently dates that belong to Julian calendar (before Oct 15, 1582) are 
> handled improperly by DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> these according to the Gregorian calendar, causing a difference of multiple days in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22099) Several date related UDFs can't handle Julian dates properly since HIVE-20007

2019-08-14 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907316#comment-16907316
 ] 

Adam Szita commented on HIVE-22099:
---

I'm reusing the original Jira created for fixing the date_format UDF, as multiple 
UDFs are suffering from the same issue. As per discussions with Karen and Jesus, 
I'm also changing the approach and trying to fix the UDFs with a temporary solution 
so that we don't break backward compatibility.

> Several date related UDFs can't handle Julian dates properly since HIVE-20007
> -
>
> Key: HIVE-22099
> URL: https://issues.apache.org/jira/browse/HIVE-22099
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-22099.0.patch, HIVE-22099.1.patch
>
>
> Currently dates that belong to Julian calendar (before Oct 15, 1582) are 
> handled improperly by date/timestamp UDFs.
> E.g. DateFormat UDF:
> Although the dates are in the Julian calendar, the formatter insists on printing 
> these according to the Gregorian calendar, causing a difference of multiple days in 
> some cases:
>  
> {code:java}
> beeline> select date_format('1001-01-05','dd---MM--');
> +----------------+
> |      _c0       |
> +----------------+
> | 30---12--1000  |
> +----------------+{code}
>  I've observed similar problems in the following UDFs:
>  * add_months
>  * date_format
>  * day
>  * month
>  * months_between
>  * weekofyear
>  * year
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Status: Open  (was: Patch Available)

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.
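For illustration, a rough sketch of the capability-based translation idea with hypothetical types, field names, and capability strings (the real HMS transformer and metastore Database objects are not reproduced here):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class GetDatabaseTranslationSketch {

  // Hypothetical stand-in for the metastore Database object; field names are illustrative.
  static class DatabaseInfo {
    final String managedLocation;   // location under the managed warehouse
    final String externalLocation;  // location under the external warehouse

    DatabaseInfo(String managed, String external) {
      this.managedLocation = managed;
      this.externalLocation = external;
    }
  }

  // Pick the location a caller should see, based on its declared processor capabilities.
  static String resolveLocation(DatabaseInfo db, List<String> processorCapabilities) {
    boolean canWriteManaged = processorCapabilities != null
        && processorCapabilities.stream()
            .anyMatch(c -> c.toUpperCase(Locale.ROOT).contains("ACIDWRITE"));
    // Callers with an ACID-write capability keep the managed location unchanged;
    // everyone else is pointed at the external location.
    return canWriteManaged ? db.managedLocation : db.externalLocation;
  }

  public static void main(String[] args) {
    DatabaseInfo db =
        new DatabaseInfo("/warehouse/managed/demo.db", "/warehouse/external/demo.db");
    System.out.println(resolveLocation(db, Arrays.asList("HIVEFULLACIDWRITE")));  // managed
    System.out.println(resolveLocation(db, Arrays.asList("EXTWRITE")));           // external
  }
}
{code}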



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Attachment: HIVE-22087.7.patch

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch, HIVE-22087.7.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22087:
-
Status: Patch Available  (was: Open)

Some refactoring in the get_database* methods to address the extra pre-event 
notifications on internal calls.

> HMS Translation: Translate getDatabase() API to alter warehouse location
> 
>
> Key: HIVE-22087
> URL: https://issues.apache.org/jira/browse/HIVE-22087
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22087.1.patch, HIVE-22087.2.patch, 
> HIVE-22087.3.patch, HIVE-22087.5.patch, HIVE-22087.6.patch, HIVE-22087.7.patch
>
>
> It makes sense to translate getDatabase() calls as well, to alter the 
> location for the Database based on whether or not the processor has 
> capabilities to write to the managed warehouse directory. Every DB has 2 
> locations, one external and the other in the managed warehouse directory. If 
> the processor has any AcidWrite capability, then the location remains 
> unchanged for the database.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-22112:
---


> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
>
> The Jackson version was updated in the top-level pom via HIVE-22089.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22112:

Attachment: HIVE-22112.patch

> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-22112.patch
>
>
> The Jackson version was updated in the top-level pom via HIVE-22089.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22112) update jackson version in disconnected poms

2019-08-14 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22112:

Status: Patch Available  (was: Open)

> update jackson version in disconnected poms 
> 
>
> Key: HIVE-22112
> URL: https://issues.apache.org/jira/browse/HIVE-22112
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-22112.patch
>
>
> The Jackson version was updated in the top-level pom via HIVE-22089.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-14 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Status: Open  (was: Patch Available)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs some checks in a 
> for loop before requesting compaction for the eligible ones. Though the initiator 
> thread is configured to run at a 5-minute interval by default, with many objects it 
> keeps on running, as these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to do the following (see the sketch below):
> 1. Pass fewer objects to the for loop by filtering out objects up front, based on 
> the conditions currently checked within the loop.
> 2. Determine the compaction type via an async call using a Future (this is where 
> we make FileSystem calls).
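A compact sketch of the two proposed changes referenced above (an editor's illustration with placeholder checks, not the Initiator's actual code): cheap filtering before the loop, then asynchronous compaction-type determination via futures so the IO-heavy part does not block the initiator thread.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class InitiatorSketch {
  enum CompactionType { MAJOR, MINOR }

  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  // Cheap, metadata-only check used to shrink the candidate list up front (placeholder).
  private boolean passesCheapChecks(String tableOrPartition) {
    return !tableOrPartition.endsWith("_tmp");
  }

  // Stand-in for the IO-heavy delta/base inspection that normally hits the FileSystem.
  private CompactionType determineCompactionType(String tableOrPartition) {
    return tableOrPartition.length() % 2 == 0 ? CompactionType.MAJOR : CompactionType.MINOR;
  }

  public void run(List<String> candidates) {
    // 1. Filter before the loop so fewer objects reach the expensive phase.
    List<String> eligible = candidates.stream()
        .filter(this::passesCheapChecks)
        .collect(Collectors.toList());

    // 2. Determine the compaction type asynchronously so the initiator thread is not
    //    blocked on FileSystem calls.
    List<CompletableFuture<CompactionType>> futures = eligible.stream()
        .map(c -> CompletableFuture.supplyAsync(() -> determineCompactionType(c), pool))
        .collect(Collectors.toList());

    for (int i = 0; i < eligible.size(); i++) {
      System.out.println(eligible.get(i) + " -> " + futures.get(i).join());
    }
    pool.shutdown();
  }

  public static void main(String[] args) {
    new InitiatorSketch().run(Arrays.asList("db.t1", "db.part=2019", "db.scratch_tmp"));
  }
}
{code}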



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-14 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22081:
--
Attachment: HIVE-21917.02.patch
Status: Patch Available  (was: Open)

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-21917.02.patch, 
> HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs some checks in a 
> for loop before requesting compaction for the eligible ones. Though the initiator 
> thread is configured to run at a 5-minute interval by default, with many objects it 
> keeps on running, as these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to do the following:
> 1. Pass fewer objects to the for loop by filtering out objects up front, based on 
> the conditions currently checked within the loop.
> 2. Determine the compaction type via an async call using a Future (this is where 
> we make FileSystem calls).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-14 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-20442:
--
Status: Open  (was: Patch Available)

> Hive stale lock when the hiveserver2 background thread died with NPE
> 
>
> Key: HIVE-20442
> URL: https://issues.apache.org/jira/browse/HIVE-20442
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 2.1.1, 1.2.0
> Environment: Hive-2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20442.01.branch-2.patch, 
> HIVE-20442.1-branch-1.2.patch, HIVE-20442.2-branch-1.2.patch, 
> HIVE-20442.3-branch-1.2.patch, HIVE-20442.4-branch-1.2.patch
>
>
> This looks like a race condition where the background thread is not able to 
> release the lock it acquired.
> 1. The HiveServer2 background thread requests a lock
> {code}
> 2018-08-20T14:13:38,813 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbLockManager (DbLockManager.java:lock(100)) - Requesting: 
> queryId=hive_xxx LockRequest(component:[LockComponent(type:SHARED_READ, 
> level:TABLE, dbname:testdb, tablename:test_table, operationType:SELECT)], 
> txnid:0, user:hive, hostname:HOSTNAME, agentInfo:hive_xxx)
> {code}
> 2. Acquired the lock and started heartbeating
> {code}
> 2018-08-20T14:36:30,233 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbTxnManager (DbTxnManager.java:startHeartbeat(517)) - Started 
> heartbeat with delay/interval = 15/15 MILLISECONDS for 
> query: agentInfo:hive_xxx
> {code}
> 3. In the time between events #1 and #2, the client disconnected and deleteContext 
> cleaned up the session dir
> {code}
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-XXX]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(136)) - 
> Session disconnected without closing properly.
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(140)) - 
> Closing the session: SessionHandle [3be07faf-5544-4178-8b50-8173002b171a]
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> service.CompositeService (SessionManager.java:closeSession(363)) - Session 
> closed, SessionHandle [xxx], current sessions:2
> {code}
> 4. The background thread died with an NPE while trying to get the query id 
> {code}
> java.lang.NullPointerException: null
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1568) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at java.security.AccessController.doPrivileged(Native Method) 
> [?:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_77]
> {code}
> It did not get a chance to release the lock, and the heartbeater thread continues 
> to heartbeat indefinitely.
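A minimal sketch of the defensive pattern the description points at (an editor's illustration with hypothetical names, not the actual Driver/DbTxnManager code): do the background work inside try/finally so the heartbeat is stopped and the locks are released even when the thread dies with an unchecked exception.

{code:java}
public class BackgroundQuerySketch {

  // Hypothetical stand-in for the txn/lock manager; method names are illustrative only.
  static class TxnResources {
    void acquireLocks()   { System.out.println("locks acquired"); }
    void startHeartbeat() { System.out.println("heartbeat started"); }
    void stopHeartbeat()  { System.out.println("heartbeat stopped"); }
    void releaseLocks()   { System.out.println("locks released"); }
  }

  static void runQuery(TxnResources txn, Runnable query) {
    txn.acquireLocks();
    txn.startHeartbeat();
    try {
      query.run();           // may blow up (e.g. NPE) if the session was torn down concurrently
    } finally {
      txn.stopHeartbeat();   // always stop heartbeating,
      txn.releaseLocks();    // and always release the locks, even on failure
    }
  }

  public static void main(String[] args) {
    try {
      runQuery(new TxnResources(), () -> { throw new NullPointerException("session gone"); });
    } catch (NullPointerException expected) {
      System.out.println("query failed, but locks were still released");
    }
  }
}
{code}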



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20442) Hive stale lock when the hiveserver2 background thread died with NPE

2019-08-14 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-20442:
--
Attachment: HIVE-20442.4-branch-1.2.patch
Status: Patch Available  (was: Open)

> Hive stale lock when the hiveserver2 background thread died with NPE
> 
>
> Key: HIVE-20442
> URL: https://issues.apache.org/jira/browse/HIVE-20442
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 2.1.1, 1.2.0
> Environment: Hive-2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20442.01.branch-2.patch, 
> HIVE-20442.1-branch-1.2.patch, HIVE-20442.2-branch-1.2.patch, 
> HIVE-20442.3-branch-1.2.patch, HIVE-20442.4-branch-1.2.patch
>
>
> This looks like a race condition where the background thread is not able to 
> release the lock it acquired.
> 1. The HiveServer2 background thread requests a lock
> {code}
> 2018-08-20T14:13:38,813 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbLockManager (DbLockManager.java:lock(100)) - Requesting: 
> queryId=hive_xxx LockRequest(component:[LockComponent(type:SHARED_READ, 
> level:TABLE, dbname:testdb, tablename:test_table, operationType:SELECT)], 
> txnid:0, user:hive, hostname:HOSTNAME, agentInfo:hive_xxx)
> {code}
> 2. Acquired the lock and started heartbeating
> {code}
> 2018-08-20T14:36:30,233 INFO  [HiveServer2-Background-Pool: Thread-X]: 
> lockmgr.DbTxnManager (DbTxnManager.java:startHeartbeat(517)) - Started 
> heartbeat with delay/interval = 15/15 MILLISECONDS for 
> query: agentInfo:hive_xxx
> {code}
> 3. In the time between events #1 and #2, the client disconnected and deleteContext 
> cleaned up the session dir
> {code}
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-XXX]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(136)) - 
> Session disconnected without closing properly.
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> thrift.ThriftCLIService (ThriftBinaryCLIService.java:deleteContext(140)) - 
> Closing the session: SessionHandle [3be07faf-5544-4178-8b50-8173002b171a]
> 2018-08-21T15:39:57,820 INFO  [HiveServer2-Handler-Pool: Thread-]: 
> service.CompositeService (SessionManager.java:closeSession(363)) - Session 
> closed, SessionHandle [xxx], current sessions:2
> {code}
> 4. The background thread died with an NPE while trying to get the query id 
> {code}
> java.lang.NullPointerException: null
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1568) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) 
> ~[hive-exec-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336)
>  [hive-service-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
> at java.security.AccessController.doPrivileged(Native Method) 
> [?:1.8.0_77]
> at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_77]
> {code}
> It did not get a chance to release the lock, and the heartbeater thread continues 
> to heartbeat indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907488#comment-16907488
 ] 

Hive QA commented on HIVE-20683:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2251 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} druid-handler in master has 3 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} itests/qtest-druid in master has 7 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18336/dev-support/hive-personality.sh
 |
| git revision | master / 71605e6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql druid-handler . itests itests/qtest-druid U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18336/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, 
> HIVE-20683.6.patch, HIVE-20683.8.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have 

[jira] [Commented] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907494#comment-16907494
 ] 

Hive QA commented on HIVE-20683:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12977619/HIVE-20683.8.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16740 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18336/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18336/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18336/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12977619 - PreCommit-HIVE-Build

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, 
> HIVE-20683.6.patch, HIVE-20683.8.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (added via 
> https://github.com/apache/incubator-druid/pull/6222)
> Implementation details (an illustrative sketch follows below):
> # Hive generates and passes the filters as part of 'filterExpr' in TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During the execution phase, before sending the query to Druid in 
> DruidQueryBasedRecordReader, we will deserialize this filter, translate it 
> into a DruidDimFilter, and add it to the existing DruidQuery. The Tez executor 
> already ensures that, when we start reading results from the record reader, 
> all the dynamic values are initialized. 
> # Explaining a Druid query also prints the query sent to Druid as 
> {{druid.json.query}}. We also need to make sure to update the Druid query 
> with the filters. During explain we do not have the actual values for the 
> dynamic values, so instead of the values we will print the dynamic expression 
> itself as part of the Druid query. 
> Note: this work needs Druid to be updated to version 0.13.0.
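For illustration only, a tiny sketch of what the two dynamic filters could look like once rendered as Druid JSON; the field names follow the public Druid filter documentation, the values are made up, and this is not Hive's translator output, so treat the exact shape as an assumption.

{code:java}
public class DruidFilterSketch {
  public static void main(String[] args) {
    // A BETWEEN min/max condition rendered as a Druid "bound" filter (illustrative values).
    String between = "{\"type\":\"bound\",\"dimension\":\"page_views\","
        + "\"lower\":\"100\",\"upper\":\"200\","
        + "\"lowerStrict\":false,\"upperStrict\":false,\"ordering\":\"numeric\"}";

    // A semi-join side rendered as a Druid "bloom" filter carrying a serialized bloom filter.
    String bloom = "{\"type\":\"bloom\",\"dimension\":\"user_id\","
        + "\"bloomKFilter\":\"<base64-encoded filter bytes>\"}";

    System.out.println(between);
    System.out.println(bloom);
  }
}
{code}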



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21931) Slow compaction for tiny tables

2019-08-14 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907513#comment-16907513
 ] 

Rajkumar Singh commented on HIVE-21931:
---

Compaction should be affected by the wait time 
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/AlterTableCompactOperation.java#L102
 only in the case of a blocking compaction command (alter table compact 'major' 
and wait); if that is the case, then increasing the wait time exponentially would be 
a good idea (see the sketch below).
[~csringhofer] can you confirm that you are seeing this issue with the blocking 
compaction call?
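A minimal sketch of the exponential-wait idea (an editor's illustration; isCompactionFinished() below is a hypothetical stand-in for polling the compaction state, not the actual AlterTableCompactOperation code):

{code:java}
public class BlockingCompactionWaitSketch {
  private static int polls = 0;

  // Hypothetical stand-in for checking the requested compaction's state (e.g. via SHOW COMPACTIONS).
  private static boolean isCompactionFinished() {
    return ++polls >= 5;   // pretend the compaction finishes after a few polls
  }

  public static void main(String[] args) throws InterruptedException {
    long waitMs = 1_000L;           // start with a short wait
    final long maxWaitMs = 60_000L; // cap the backoff
    while (!isCompactionFinished()) {
      Thread.sleep(waitMs);
      waitMs = Math.min(waitMs * 2, maxWaitMs);   // back off exponentially, capped
    }
    System.out.println("compaction finished after " + polls + " polls");
  }
}
{code}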


> Slow compaction for tiny tables
> ---
>
> Key: HIVE-21931
> URL: https://issues.apache.org/jira/browse/HIVE-21931
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Csaba Ringhofer
>Priority: Major
>  Labels: compaction
>
> I observed the issue in Impala development environment when (major) 
> compacting insert_only transactional tables in Hive. The compaction could 
> take ~10 minutes even when it only had to merge 2 rows from 2 inserts. The 
> actual work was done much earlier, the new base file was correctly written to 
> HDFS, and Hive seemed to wait without doing any work.
> The compactions are started manually, hive.compactor.initiator.on=false to 
> avoid "surprise compaction" during tests.
> {code}
> hive.compactor.abortedtxn.threshold=1000
> hive.compactor.check.interval=300s
> hive.compactor.cleaner.run.interval=5000ms
> hive.compactor.compact.insert.only=true
> hive.compactor.crud.query.based=false
> hive.compactor.delta.num.threshold=10
> hive.compactor.delta.pct.threshold=0.1
> hive.compactor.history.reaper.interval=2m
> hive.compactor.history.retention.attempted=2
> hive.compactor.history.retention.failed=3
> hive.compactor.history.retention.succeeded=3
> hive.compactor.initiator.failed.compacts.threshold=2
> hive.compactor.initiator.on=false
> hive.compactor.max.num.delta=500
> hive.compactor.worker.threads=4
> hive.compactor.worker.timeout=86400s
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22096) Backport HIVE-21584 to branch-2.3

2019-08-14 Thread Alan Gates (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907517#comment-16907517
 ] 

Alan Gates commented on HIVE-22096:
---

Tested on branch-2 and committed there as well.

> Backport HIVE-21584 to branch-2.3
> -
>
> Key: HIVE-22096
> URL: https://issues.apache.org/jira/browse/HIVE-22096
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 2.3.6
>
> Attachments: HIVE-22096.branch-2.3.patch
>
>
> Backport HIVE-21584 to make Spark support JDK 11.
> https://www.mail-archive.com/dev@hive.apache.org/msg137001.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22087) HMS Translation: Translate getDatabase() API to alter warehouse location

2019-08-14 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907527#comment-16907527
 ] 

Hive QA commented on HIVE-22087:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
22s{color} | {color:blue} standalone-metastore/metastore-common in master has 
32 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
180 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 2 new + 206 unchanged - 0 fixed = 208 total (was 206) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 33 new + 800 unchanged - 8 fixed = 833 total (was 808) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 4 new + 139 
unchanged - 1 fixed = 143 total (was 140) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 180 unchanged - 0 fixed = 181 total (was 180) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} metastore-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} standalone-metastore_metastore-server generated 0 
new + 25 unchanged - 1 fixed = 25 total (was 26) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} hive-unit in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  instanceof will always return true for all non-null values in 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_req(GetDatabaseRequest),
 since all RuntimeException are instances of RuntimeException  At 
HiveMetaStore.java:for all non-null values in 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_req(GetDatabaseRequest),
 since all RuntimeException are instances of RuntimeException  At 
HiveMetaStore.java:[line 1559] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 

[jira] [Assigned] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese reassigned HIVE-22113:



> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$WaitQueueWorker.run(TaskExecutorService.java:396)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$RunnableAdapter.call(Executors.java:511)~[?:1.8.0_161]}}{{
> 
> at$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(InterruptibleTask.java:41)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(TrustedListenableFutureTask.java:77)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(ThreadPoolExecutor.java:1149)[?:1.8.0_161]}}{{
> 
> at$Worker.run(ThreadPoolExecutor.java:624)[?:1.8.0_161]}}{{
> at(Thread.java:748)[?:1.8.0_161]}}
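A hedged sketch of the kind of guard this suggests (an editor's illustration; the interface and method names below are hypothetical, not the actual TaskRunnerCallable/AMReporter code): treat a failed task-attempt removal as non-fatal so the RuntimeException cannot propagate and shut the daemon down.

{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

public class AmReporterGuardSketch {
  private static final Logger LOG = Logger.getLogger(AmReporterGuardSketch.class.getName());

  // Hypothetical reporter interface; the real AMReporter API is not reproduced here.
  interface Reporter {
    void removeTaskAttempt(String attemptId);
  }

  static void unregisterTaskAttempt(Reporter reporter, String attemptId) {
    try {
      reporter.removeTaskAttempt(attemptId);
    } catch (RuntimeException e) {
      // The attempt may already be gone; log and continue instead of letting the
      // exception trickle up and terminate the daemon.
      LOG.log(Level.WARNING, "Could not remove task attempt " + attemptId, e);
    }
  }

  public static void main(String[] args) {
    unregisterTaskAttempt(id -> { throw new RuntimeException(id + " not found"); },
        "attempt_0001");
    System.out.println("daemon still running");
  }
}
{code}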



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work started] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22113 started by Oliver Draese.

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$WaitQueueWorker.run(TaskExecutorService.java:396)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$RunnableAdapter.call(Executors.java:511)~[?:1.8.0_161]}}{{
> 
> at$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(InterruptibleTask.java:41)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(TrustedListenableFutureTask.java:77)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(ThreadPoolExecutor.java:1149)[?:1.8.0_161]}}{{
> 
> at$Worker.run(ThreadPoolExecutor.java:624)[?:1.8.0_161]}}{{
> at(Thread.java:748)[?:1.8.0_161]}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22113) Prevent LLAP shutdown on AMReporter related RuntimeException

2019-08-14 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese updated HIVE-22113:
-
Attachment: HIVE-22113.patch
Status: Patch Available  (was: In Progress)

> Prevent LLAP shutdown on AMReporter related RuntimeException
> 
>
> Key: HIVE-22113
> URL: https://issues.apache.org/jira/browse/HIVE-22113
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.1
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Major
>  Labels: llap
> Attachments: HIVE-22113.patch
>
>
> If a task attempt cannot be removed from AMReporter (i.e. task attempt was 
> not found), the AMReporter throws a RuntimeException. This exception is not 
> caught and trickles up, causing an LLAP shutdown:
> {{2019-08-08T23:34:39,748[Wait-Queue-Scheduler-0()]:[Wait-Queue-Scheduler-0,5,main]}}{{java.lang.RuntimeException:_1563528877295_18872_3728_01_03_0't}}{{
> 
> at$AMNodeInfo.removeTaskAttempt(AMReporter.java:524)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(AMReporter.java:243)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskRunnerCallable.java:384)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at(TaskExecutorService.java:739)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$1100(TaskExecutorService.java:91)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$WaitQueueWorker.run(TaskExecutorService.java:396)~[hive-llap-server-3.1.0.3.1.0.103-1.jar:3.1.0.3.1.0.103-1]}}{{
> 
> at$RunnableAdapter.call(Executors.java:511)~[?:1.8.0_161]}}{{
> 
> at$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(InterruptibleTask.java:41)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(TrustedListenableFutureTask.java:77)[hive-exec-3.1.0.3.1.0.103-1.jar:3.1.0-SNAPSHOT]}}{{
> 
> at(ThreadPoolExecutor.java:1149)[?:1.8.0_161]}}{{
> 
> at$Worker.run(ThreadPoolExecutor.java:624)[?:1.8.0_161]}}{{
> at(Thread.java:748)[?:1.8.0_161]}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Szehon Ho (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907408#comment-16907408
 ] 

Szehon Ho commented on HIVE-13457:
--

Test failures look unrelated. Committing the original patch from Pawel with just 
checkstyle fixes.

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs, such as admin tools or Hue, could access and display this information as 
> well. Hence, we will create some REST endpoints to expose this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-13457) Create HS2 REST API endpoints for monitoring information

2019-08-14 Thread Szehon Ho (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-13457:
-
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

> Create HS2 REST API endpoints for monitoring information
> 
>
> Key: HIVE-13457
> URL: https://issues.apache.org/jira/browse/HIVE-13457
> Project: Hive
>  Issue Type: Improvement
>Reporter: Szehon Ho
>Assignee: Pawel Szostek
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-13457.10.patch, HIVE-13457.11.patch, 
> HIVE-13457.12.patch, HIVE-13457.3.patch, HIVE-13457.4.patch, 
> HIVE-13457.5.patch, HIVE-13457.6.patch, HIVE-13457.6.patch, 
> HIVE-13457.7.patch, HIVE-13457.8.patch, HIVE-13457.9.patch, HIVE-13457.patch, 
> HIVE-13457.patch
>
>
> Similar to what is exposed in the HS2 web UI in HIVE-12338, it would be nice if 
> other UIs, such as admin tools or Hue, could access and display this information as 
> well. Hence, we will create some REST endpoints to expose this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22109) Hive.renamePartition expects catalog name to be set instead of using default

2019-08-14 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907445#comment-16907445
 ] 

Thejas M Nair commented on HIVE-22109:
--

+1


> Hive.renamePartition expects catalog name to be set instead of using default
> 
>
> Key: HIVE-22109
> URL: https://issues.apache.org/jira/browse/HIVE-22109
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22109.patch
>
>
> This behavior is inconsistent with the other APIs in this class, which use the 
> default catalog name set in the HiveConf when the catalog is null on the Table 
> object.
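A small sketch of the fallback being described (an editor's illustration; the helper below is hypothetical and does not reproduce the real metastore utility):

{code:java}
public class RenamePartitionCatalogSketch {
  // Hypothetical fallback mirroring what the other APIs in the class reportedly do:
  // use the table's catalog if set, otherwise the default catalog from the configuration.
  static String resolveCatalog(String tableCatName, String defaultCatalogFromConf) {
    return (tableCatName != null && !tableCatName.isEmpty())
        ? tableCatName
        : defaultCatalogFromConf;
  }

  public static void main(String[] args) {
    System.out.println(resolveCatalog(null, "hive"));      // -> hive (default)
    System.out.println(resolveCatalog("spark", "hive"));   // -> spark
  }
}
{code}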



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)