[jira] [Commented] (HIVE-18997) Hive column casting from decimal to double is resulting in NULL

2018-03-19 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405839#comment-16405839
 ] 

Gopal V commented on HIVE-18997:


Have you tested with Hive-3.0 instead of 1.1?

> Hive column casting from decimal to double is resulting in NULL
> ---
>
> Key: HIVE-18997
> URL: https://issues.apache.org/jira/browse/HIVE-18997
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, distribution
>Affects Versions: 1.1.0
> Environment: Hive CLI  and cloudera 5.8.3 distribution
>Reporter: Rangaswamy Narayan
>Priority: Major
>
> I have a Hive table, table1, whose schema looks like this:
> {{CREATE TABLE table1(p_decimal1 DECIMAL(38,5)) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY ',' STORED AS TEXTFILE}}
> and the table contains the following value:
> {{row : col(p_decimal1) row1 : 12345123451234512345123.45123}}
> If I later execute the query
> {{select CAST(p_decimal1 AS DOUBLE) from table1;}}
> I get {{NULL}} as the output. 
> The expected output is a non-NULL value.
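For reference, a minimal end-to-end reproduction sketch of the above (the local file path is a placeholder, not from the report):

{code:sql}
-- Reproduction sketch; the CSV path below is hypothetical.
CREATE TABLE table1 (p_decimal1 DECIMAL(38,5))
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- The file contains a single line: 12345123451234512345123.45123
LOAD DATA LOCAL INPATH '/tmp/table1.csv' INTO TABLE table1;

-- Reported to return NULL on 1.1.0; a non-NULL double (about 1.2345E22) is expected.
SELECT CAST(p_decimal1 AS DOUBLE) FROM table1;
{code}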



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405835#comment-16405835
 ] 

Hive QA commented on HIVE-18780:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915249/HIVE-18780.2.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13417 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-18997) Hive column casting from decimal to double is resulting in NULL

2018-03-19 Thread Rangaswamy Narayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rangaswamy Narayan updated HIVE-18997:
--
Environment: Hive CLI  and cloudera 5.8.3 distribution  (was: i have 
checked this is hive CLI. )

> Hive column casting from decimal to double is resulting in NULL
> ---
>
> Key: HIVE-18997
> URL: https://issues.apache.org/jira/browse/HIVE-18997
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, distribution
>Affects Versions: 1.1.0
> Environment: Hive CLI  and cloudera 5.8.3 distribution
>Reporter: Rangaswamy Narayan
>Priority: Major
>
> I have a Hive table, table1, whose schema looks like this:
> {{CREATE TABLE table1(p_decimal1 DECIMAL(38,5)) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY ',' STORED AS TEXTFILE}}
> and the table contains the following value:
> {{row : col(p_decimal1) row1 : 12345123451234512345123.45123}}
> If I later execute the query
> {{select CAST(p_decimal1 AS DOUBLE) from table1;}}
> I get {{NULL}} as the output. 
> The expected output is a non-NULL value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18997) Hive column casting from decimal to double is resulting in NULL

2018-03-19 Thread Rangaswamy Narayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rangaswamy Narayan updated HIVE-18997:
--
Component/s: distribution

> Hive column casting from decimal to double is resulting in NULL
> ---
>
> Key: HIVE-18997
> URL: https://issues.apache.org/jira/browse/HIVE-18997
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, distribution
>Affects Versions: 1.1.0
> Environment: i have checked this is hive CLI. 
>Reporter: Rangaswamy Narayan
>Priority: Major
>
> I have a Hive table, table1, whose schema looks like this:
> {{CREATE TABLE table1(p_decimal1 DECIMAL(38,5)) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY ',' STORED AS TEXTFILE}}
> and the table contains the following value:
> {{row : col(p_decimal1) row1 : 12345123451234512345123.45123}}
> If I later execute the query
> {{select CAST(p_decimal1 AS DOUBLE) from table1;}}
> I get {{NULL}} as the output. 
> The expected output is a non-NULL value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405822#comment-16405822
 ] 

Hive QA commented on HIVE-18780:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
44s{color} | {color:red} root: The patch generated 14 new + 401 unchanged - 41 
fixed = 415 total (was 442) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} druid-handler: The patch generated 14 new + 205 
unchanged - 41 fixed = 219 total (was 246) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9715/dev-support/hive-personality.sh
 |
| git revision | master / 68459cf |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9715/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9715/yetus/diff-checkstyle-druid-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9715/yetus/patch-asflicense-problems.txt
 |
| modules | C: common . druid-handler itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9715/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.2.patch, HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.
[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Status: Patch Available  (was: Open)

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Status: Open  (was: Patch Available)

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18953) Implement CHECK constraint

2018-03-19 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18953:
---
Attachment: HIVE-18953.3.patch

> Implement CHECK constraint
> --
>
> Key: HIVE-18953
> URL: https://issues.apache.org/jira/browse/HIVE-18953
> Project: Hive
>  Issue Type: New Feature
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18953.1.patch, HIVE-18953.2.patch, 
> HIVE-18953.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-18780:
--
Attachment: HIVE-18780.2.patch

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.2.patch, HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.
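For context, a sketch of the kind of Druid-backed table the handler serves; the datasource name is assumed, and the class and property names follow the Hive Druid integration:

{code:sql}
-- Illustration only: external table over an existing Druid datasource (name assumed).
CREATE EXTERNAL TABLE druid_events
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "events");

-- Per the description, a Select/Scan query like this currently triggers its own
-- Druid Segment Metadata query, and so does every map-side split using the same SerDe.
SELECT `__time`, COUNT(*) FROM druid_events GROUP BY `__time`;
{code}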



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18974) Wrong UTC time while converting from CST by to_utc_timestamp UDFs

2018-03-19 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405762#comment-16405762
 ] 

Rui Li commented on HIVE-18974:
---

Could be the same issue as HIVE-14305. Does the machine's system timezone use 
DST?
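A quick check along those lines, assuming the query is run on the affected cluster: the abbreviation 'CST' is resolved through the JVM's legacy zone mapping and may observe DST, while 'Etc/GMT+6' is a fixed UTC-6 offset, so comparing the two across the 2017-03-12 transition shows whether DST handling is involved:

{code:sql}
SELECT to_utc_timestamp('2017-03-11 20:00:00', 'CST')       AS cst_before_transition,
       to_utc_timestamp('2017-03-12 04:00:00', 'CST')       AS cst_after_transition,
       to_utc_timestamp('2017-03-12 04:00:00', 'Etc/GMT+6') AS fixed_utc_minus_6;
{code}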

> Wrong UTC time while converting from CST by to_utc_timestamp UDFs 
> --
>
> Key: HIVE-18974
> URL: https://issues.apache.org/jira/browse/HIVE-18974
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1
>Reporter: swayam
>Assignee: Hive QA
>Priority: Critical
>
> {color:#FF}Error on Daylight Saving 2017{color}
>  
> select to_utc_timestamp("2017-03-11 19:00:00",'CST');
> OK
> 2017-03-12 01:00:00 --> expected 6 hr difference 
> Time taken: 0.08 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-11 20:00:00",'CST');
> OK
> 2017-03-12 03:00:00 --> wrong 7 hr difference 
> Time taken: 0.088 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-11 21:00:00",'CST');
> OK
> 2017-03-12 04:00:00--> wrong 7 hr difference 
> Time taken: 2.884 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-11 22:00:00",'CST');
> OK
> 2017-03-12 05:00:00--> wrong 7 hr difference 
> Time taken: 0.075 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-11 23:00:00",'CST');
> OK
> 2017-03-12 06:00:00 --> wrong 7 hr difference 
> Time taken: 0.068 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-12 00:00:00",'CST');
> OK
> 2017-03-12 07:00:00 --> wrong 7 hr difference 
> Time taken: 4.769 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-12 01:00:00",'CST');
> OK
> 2017-03-12 08:00:00 --> wrong 7 hr difference 
> Time taken: 0.066 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-12 02:00:00",'CST');
> OK
> 2017-03-12 08:00:00 --> wrong 7 hr difference 
> Time taken: 0.066 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-12 03:00:00",'CST');
> OK
> 2017-03-12 08:00:00 --> expected 5 hr 
> Time taken: 0.061 seconds, Fetched: 1 row(s)
> hive> select to_utc_timestamp("2017-03-12 04:00:00",'CST');
> OK
> 2017-03-12 09:00:00--> expected 5 hr 
> Time taken: 0.065 seconds, Fetched: 1 row(s)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405726#comment-16405726
 ] 

Hive QA commented on HIVE-18780:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915219/HIVE-18780.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 38 failed/errored test(s), 12999 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover

2018-03-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405724#comment-16405724
 ] 

Sergey Shelukhin commented on HIVE-18952:
-

RB posted. Actually, with the WM patch adding update support to the registry, it 
might be easy to include better AM age reporting in this patch rather than in a 
followup.

> Tez session disconnect and reconnect on HS2 HA failover
> ---
>
> Key: HIVE-18952
> URL: https://issues.apache.org/jira/browse/HIVE-18952
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18952.patch
>
>
> Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session 
> disconnect and reconnect on HA failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405707#comment-16405707
 ] 

Hive QA commented on HIVE-18780:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
44s{color} | {color:red} root: The patch generated 14 new + 401 unchanged - 41 
fixed = 415 total (was 442) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} druid-handler: The patch generated 14 new + 205 
unchanged - 41 fixed = 219 total (was 246) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9714/dev-support/hive-personality.sh
 |
| git revision | master / 26c0ab6 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9714/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9714/yetus/diff-checkstyle-druid-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9714/yetus/patch-asflicense-problems.txt
 |
| modules | C: common . druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9714/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.

[jira] [Commented] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover

2018-03-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405706#comment-16405706
 ] 

Sergey Shelukhin commented on HIVE-18952:
-

Preliminary patch.
1) Need to test it on a cluster.
2) Need to see if I can test it in unit tests... maybe via some tests added to 
HIVE-18281, although I don't know whether they create (or can create) AMs.

Also, this won't be committable without a Tez release that provides the 
getClient API.

[~ewohlstadter] [~hagleitn] can you take a look? I will post an RB shortly.

> Tez session disconnect and reconnect on HS2 HA failover
> ---
>
> Key: HIVE-18952
> URL: https://issues.apache.org/jira/browse/HIVE-18952
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18952.patch
>
>
> Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session 
> disconnect and reconnect on HA failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18952) Tez session disconnect and reconnect on HS2 HA failover

2018-03-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18952:

Attachment: HIVE-18952.patch

> Tez session disconnect and reconnect on HS2 HA failover
> ---
>
> Key: HIVE-18952
> URL: https://issues.apache.org/jira/browse/HIVE-18952
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18952.patch
>
>
> Now that TEZ-3892 is committed, HIVE-18281 can make use of tez session 
> disconnect and reconnect on HA failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18968) LLAP: report guaranteed tasks count in AM registry to check for consistency

2018-03-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18968:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> LLAP: report guaranteed tasks count in AM registry to check for consistency
> ---
>
> Key: HIVE-18968
> URL: https://issues.apache.org/jira/browse/HIVE-18968
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18968.01.patch, HIVE-18968.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405679#comment-16405679
 ] 

Hive QA commented on HIVE-17843:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915223/HIVE-17843.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 29 failed/errored test(s), 13020 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405666#comment-16405666
 ] 

Hive QA commented on HIVE-17843:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
42s{color} | {color:red} root: The patch generated 1 new + 17 unchanged - 2 
fixed = 18 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 17 unchanged - 2 fixed 
= 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9713/dev-support/hive-personality.sh
 |
| git revision | master / 26c0ab6 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9713/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9713/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9713/yetus/patch-asflicense-problems.txt
 |
| modules | C: . ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9713/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.
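To make the symptom concrete, a hedged illustration (the table name and location are assumptions): Hive has no unsigned types, so a UINT_32 value that does not fit in a signed int32 comes back reinterpreted as negative.

{code:sql}
-- Illustration only; the underlying Parquet schema declares:
--   optional int32 uint_32_col (UINT_32)
CREATE EXTERNAL TABLE parquet_uint_table (uint_32_col INT)
STORED AS PARQUET
LOCATION '/warehouse/parquet_uint_table';

SELECT uint_32_col FROM parquet_uint_table;
-- A stored unsigned value of 3000000000 is read back as 3000000000 - 2^32 = -1294967296.
{code}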



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16669) Fine tune Compaction to take advantage of Acid 2.0

2018-03-19 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405648#comment-16405648
 ] 

Eugene Koifman commented on HIVE-16669:
---

If the compactor runs in a txn, it must write to min_history.  Suppose txnid:7 is 
aborted and txnid:70 is the compactor.  Txn 71 starts and sees 7 as aborted and 70 
as open.  If 70 makes 7 empty, we need to make sure {{cleanEmptyAbortedTxns()}} 
doesn't remove 7's entry from TXNS: the files produced by 70 are not visible to 71, 
so if 71 reads the older files it will treat 7's data as committed (if 
{{cleanEmptyAbortedTxns()}} has run).



> Fine tune Compaction to take advantage of Acid 2.0
> --
>
> Key: HIVE-16669
> URL: https://issues.apache.org/jira/browse/HIVE-16669
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-16669.wip.patch
>
>
> * There is little point using 2.0 vectorized reader since there is no 
> operator pipeline in compaction
> * If minor compaction just concats delete_delta files together, then the 2 
> stage compaction should always ensure that we have a limited number of Orc 
> readers to do the merging and current OrcRawRecordMerger should be fine
> * ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18940) Hive notifications serialize all write DDL operations

2018-03-19 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405643#comment-16405643
 ] 

Thejas M Nair commented on HIVE-18940:
--

[~vihangk1]'s approach of accumulating the events and only acquiring the lock 
towards the end seems like a reasonable way to keep the lock held for only a 
very short time for metastore calls that lead to several events. Other parts 
of the transactions can go ahead in parallel.
It doesn't have to use the same commitID for all events; getting new values should 
be OK, as long as the lock on NOTIFICATION_SEQUENCE is obtained at the end.



> Hive notifications serialize all write DDL operations
> -
>
> Key: HIVE-18940
> URL: https://issues.apache.org/jira/browse/HIVE-18940
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> The implementation of DbNotificationListener uses a single row to store the 
> current notification ID and uses {{SELECT FOR UPDATE}} to lock that row. This 
> serializes all write DDL operations, which isn't good.
> We should consider using a database auto-increment for the notification ID 
> instead. Especially on MySQL/InnoDB it is supported natively with relatively 
> light-weight locking. 
> This creates a potential issue for consumers, though, because such IDs may have 
> holes. There are two types of holes: transient holes for transactions that have 
> not committed yet but will commit shortly, and permanent holes for transactions 
> that fail. Consumers need to deal with this. It may be useful to add a 
> DB-generated timestamp as well to assist in recovery from holes.
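As a rough sketch of the auto-increment idea, assuming a MySQL/InnoDB backing store and a made-up table name (this is not the actual metastore schema change):

{code:sql}
-- Sketch only: DB-generated ID and timestamp for notification events.
-- IDs may have holes (uncommitted or failed transactions), as noted above.
CREATE TABLE NOTIFICATION_LOG_SKETCH (
  NL_ID      BIGINT NOT NULL AUTO_INCREMENT,               -- assigned by the database
  EVENT_TIME TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, -- helps recover across holes
  EVENT_TYPE VARCHAR(32) NOT NULL,
  MESSAGE    TEXT,
  PRIMARY KEY (NL_ID)
) ENGINE=InnoDB;
{code}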



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18786) NPE in Hive windowing functions

2018-03-19 Thread alal alal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405633#comment-16405633
 ] 

alal alal commented on HIVE-18786:
--

Has anybody encountered this issue? We are hitting it as well, with the following stack trace:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row (tag=0) 
\{"key":{"reducesinkkey0":11,"reducesinkkey1":"2018-03-16 
16:09:52"},"value":\{"_col73":"112018-03-16 
16:09:52U076567680762270765676807622800021078400021120436"}}
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:365)
 at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:287)
 ... 16 more

 

Caused by: java.lang.NullPointerException at 
org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115)
 at org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114) 
at 
org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200)
 at 
org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155)
 at 
org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538)
 at 
org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349)
 at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123) at 
org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897) at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) 
at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356)

 

 

 

 

> NPE in Hive windowing functions
> ---
>
> Key: HIVE-18786
> URL: https://issues.apache.org/jira/browse/HIVE-18786
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.2
>Reporter: Michael Bieniosek
>Priority: Major
>
> When I run a Hive query with windowing functions, if there's enough data I 
> get an NPE.
> For example something like this query might break:
> select id, created_date, max(created_date) over (partition by id) 
> latest_created_any from ...
> The only workaround I've found is to remove the windowing functions entirely.
> The stacktrace looks suspiciously similar to +HIVE-15278+, but I'm on 
> hive-2.3.2, which appears to have that bugfix applied.
>  
>  Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
>        at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:297)
>         at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:317)
>         at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
>        ... 14 more
>  Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row (tag=0) 
>         at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:365)
>        at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:287)
>         ... 16 more
> Caused by: java.lang.NullPointerException
>           at 
> org.apache.hadoop.hive.ql.exec.persistence.PTFRowContainer.first(PTFRowContainer.java:115)
>           at 
> org.apache.hadoop.hive.ql.exec.PTFPartition.iterator(PTFPartition.java:114)
>           at 
> org.apache.hadoop.hive.ql.udf.ptf.BasePartitionEvaluator.getPartitionAgg(BasePartitionEvaluator.java:200)
>           at 
> org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.evaluateFunctionOnPartition(WindowingTableFunction.java:155)
>           at 
> org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.iterator(WindowingTableFunction.java:538)
>           at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:349)
>           at 
> org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:123)
>           at 
> org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>           at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>           at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18995) Vectorization: Add option to suppress "Execution mode: vectorized" for testing purposes

2018-03-19 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-18995:
---


> Vectorization: Add option to suppress "Execution mode: vectorized" for 
> testing purposes
> ---
>
> Key: HIVE-18995
> URL: https://issues.apache.org/jira/browse/HIVE-18995
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> In order to see Q file differences in large runs it is helpful to eliminate 
> change noise from "Execution mode: vectorized" in EXPLAIN output.
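For illustration, the kind of plan line in question, using the standard q-file test table (plan output abridged):

{code:sql}
EXPLAIN SELECT COUNT(*) FROM src;
-- Each vectorized stage in the printed plan carries the line
--   Execution mode: vectorized
-- which churns q-file diffs whenever vectorization coverage changes.
{code}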



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17843:
---
Attachment: HIVE-17843.4.patch

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch, HIVE-17843.4.patch
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18984) Make time window configurable per materialized view

2018-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405585#comment-16405585
 ] 

Ashutosh Chauhan commented on HIVE-18984:
-

+1

> Make time window configurable per materialized view
> ---
>
> Key: HIVE-18984
> URL: https://issues.apache.org/jira/browse/HIVE-18984
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18984.patch
>
>
> Currently, {{hive.materializedview.rewriting.time.window}} can be used to 
> specify a time window after which outdated materialized views become invalid 
> for automatic query rewriting (default value is 0). We would like to be able 
> to specify this property for each individual materialized view too via 
> tblproperties.
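A hedged sketch of what the per-view override could look like; the property key is assumed here to mirror the global config name and may differ in the actual patch, and the view definition is illustrative only:

{code:sql}
CREATE MATERIALIZED VIEW mv_daily_sales
TBLPROPERTIES ('hive.materializedview.rewriting.time.window' = '10min')
AS
SELECT sold_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sold_date;
{code}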



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18925) Hive doesn't work when JVM is America/Bahia_Banderas time zone

2018-03-19 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405584#comment-16405584
 ] 

Ashutosh Chauhan commented on HIVE-18925:
-

This is only used to parse timestamps, not dates. So this change is good.

> Hive doesn't work when JVM is America/Bahia_Banderas time zone
> --
>
> Key: HIVE-18925
> URL: https://issues.apache.org/jira/browse/HIVE-18925
> Project: Hive
>  Issue Type: Bug
> Environment: JVM in America/Bahia_Banderas zone
>Reporter: Piotr Findeisen
>Assignee: Piotr Findeisen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18925.patch
>
>
> Hive Server2 doesn't  work if started with 
> {{-Duser.timezone=America/Bahia_Banderas}}
>  
> Steps to reproduce
>  # use [https://github.com/big-data-europe/docker-hive]
>  # Add {{HADOOP_CLIENT_OPTS: '-Duser.timezone=America/Bahia_Banderas'}} to 
> {{hive-server}} docker container environment configuration
>  # {{docker-compose up}}
>  # 
> {code:java}
> host# docker-compose exec hive-server bash
> container# /opt/hive/bin/beeline -u jdbc:hive2://localhost:1 
> --verbose=true
> ...
> jdbc:hive2://localhost:1> select 1;{code}
> The above fails and prints
> {noformat}
> Error: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas) (state=08S01,code=0)
> java.sql.SQLException: java.lang.IllegalStateException: Can't overwrite cause 
> with org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:323)
> at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:253)
> at org.apache.hive.beeline.Commands.executeInternal(Commands.java:997)
> at org.apache.hive.beeline.Commands.execute(Commands.java:1205)
> at org.apache.hive.beeline.Commands.sql(Commands.java:1134)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1314)
> at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1178)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1033)
> at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:519)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at java.lang.Throwable.initCause(Throwable.java:457)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toCause(HiveSQLException.java:198)
> at 
> org.apache.hive.service.cli.HiveSQLException.<init>(HiveSQLException.java:108)
> at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:267)
> at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:253)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:313)
> ... 15 more
> Caused by: java.lang.ExceptionInInitializerError: null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hive.service.cli.HiveSQLException.newInstance(HiveSQLException.java:245)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:211)
> ... 21 more{noformat}
> The cause is not visible from the above stacktrace, but I think it is the 
> initialization of 
> {{org.apache.hive.common.util.TimestampParser#startingDateValue}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18994) Handle client connections on failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405579#comment-16405579
 ] 

Prasanth Jayachandran commented on HIVE-18994:
--

This depends on HIVE-18982 for the tests. Won't be able to submit until 
HIVE-18982 is committed. It is up for review though.

cc/ [~sershe]

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch
>
>
> When leader failover happens (either automatically or manually), tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance 
> (this is to avoid clients reusing the same connection and submitting queries to 
> the passive HS2). In the future, some timeout or other policies (maybe WM will 
> run everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18994) Handle client connections on failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18994:
-
Attachment: HIVE-18994.1.patch

> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18994.1.patch
>
>
> When leader failover happens (either automatically or manually), tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance 
> (this is to avoid clients reusing the same connection and submitting queries to 
> the passive HS2). In the future, some timeout or other policies (maybe WM will 
> run everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18994) Handle client connections on failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-18994:



> Handle client connections on failover
> -
>
> Key: HIVE-18994
> URL: https://issues.apache.org/jira/browse/HIVE-18994
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>
> When leader failover happens (either automatically or manually), tez sessions 
> are closed, but client connections are not. We need to close the client 
> connections explicitly so that the workload manager revokes all the guaranteed 
> slots and, upon reconnection, the client connects to the active HS2 instance 
> (this is to avoid clients reusing the same connection and submitting queries to 
> the passive HS2). In the future, some timeout or other policies (maybe WM will 
> run everything speculatively) can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-18780:
--
Attachment: HIVE-18780.patch

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.patch, HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-18780:
--
Status: Patch Available  (was: Open)

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18780) Improve schema discovery For Druid Storage Handler

2018-03-19 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-18780:
--
Attachment: HIVE-18780.patch

> Improve schema discovery For Druid Storage Handler
> --
>
> Key: HIVE-18780
> URL: https://issues.apache.org/jira/browse/HIVE-18780
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18780.patch
>
>
> Currently, the Druid Storage adapter issues a Segment Metadata query every time 
> the query is of type Select or Scan. Moreover, every input split (map) does the 
> same, since it uses the same SerDe; this is very expensive and puts a lot of 
> pressure on the Druid cluster. The way to fix this is to take the schema from 
> the Calcite plan instead of serializing the query itself as part of the Hive 
> query context.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405537#comment-16405537
 ] 

Hive QA commented on HIVE-17843:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915158/HIVE-17843.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 13416 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18925) Hive doesn't work when JVM is America/Bahia_Banderas time zone

2018-03-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405529#comment-16405529
 ] 

Sergey Shelukhin commented on HIVE-18925:
-

Hmm... I'm not familiar enough with this code to tell. cc [~ashutoshc]

> Hive doesn't work when JVM is America/Bahia_Banderas time zone
> --
>
> Key: HIVE-18925
> URL: https://issues.apache.org/jira/browse/HIVE-18925
> Project: Hive
>  Issue Type: Bug
> Environment: JVM in America/Bahia_Banderas zone
>Reporter: Piotr Findeisen
>Assignee: Piotr Findeisen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18925.patch
>
>
> Hive Server2 doesn't  work if started with 
> {{-Duser.timezone=America/Bahia_Banderas}}
>  
> Steps to reproduce
>  # use [https://github.com/big-data-europe/docker-hive]
>  # Add {{HADOOP_CLIENT_OPTS: '-Duser.timezone=America/Bahia_Banderas'}} to 
> {{hive-server}} docker container environment configuration
>  # {{docker-compose up}}
>  # 
> {code:java}
> host# docker-compose exec hive-server bash
> container# /opt/hive/bin/beeline -u jdbc:hive2://localhost:1 
> --verbose=true
> ...
> jdbc:hive2://localhost:1> select 1;{code}
> The above fails and prints
> {noformat}
> Error: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas) (state=08S01,code=0)
> java.sql.SQLException: java.lang.IllegalStateException: Can't overwrite cause 
> with org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:323)
> at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:253)
> at org.apache.hive.beeline.Commands.executeInternal(Commands.java:997)
> at org.apache.hive.beeline.Commands.execute(Commands.java:1205)
> at org.apache.hive.beeline.Commands.sql(Commands.java:1134)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1314)
> at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1178)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1033)
> at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:519)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at java.lang.Throwable.initCause(Throwable.java:457)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toCause(HiveSQLException.java:198)
> at 
> org.apache.hive.service.cli.HiveSQLException.(HiveSQLException.java:108)
> at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:267)
> at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:253)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:313)
> ... 15 more
> Caused by: java.lang.ExceptionInInitializerError: null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hive.service.cli.HiveSQLException.newInstance(HiveSQLException.java:245)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:211)
> ... 21 more{noformat}
> From the above stacktrace it's not visible what the cause is, but I think 
> it's the initialization of 
> {{org.apache.hive.common.util.TimestampParser#startingDateValue}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405526#comment-16405526
 ] 

Aihua Xu commented on HIVE-18986:
-

OK. We rely on DN to overwrite the stats (we update the object and then, during 
transaction commit, the data is saved in the data store). So I can't simply use 
DirectSQL, since the object won't get updated easily. I will try to use batching 
for the DN operations instead.
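
For context, a hedged sketch of the batching idea (illustrative helper, not the 
actual patch): instead of one JDOQL filter with thousands of colName terms, the 
column names are processed in fixed-size chunks, so DataNucleus never builds an 
SQL text deep enough to overflow the stack.

{code:java}
// Illustrative only; the names and the batch size are assumptions for the sketch.
import java.util.List;
import java.util.function.Consumer;

public class BatchedColumnOps {
  private static final int BATCH_SIZE = 100;

  public static void forEachBatch(List<String> colNames, Consumer<List<String>> op) {
    for (int start = 0; start < colNames.size(); start += BATCH_SIZE) {
      int end = Math.min(start + BATCH_SIZE, colNames.size());
      op.accept(colNames.subList(start, end)); // each chunk becomes one small query
    }
  }
}
{code}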

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... terms.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18925) Hive doesn't work when JVM is America/Bahia_Banderas time zone

2018-03-19 Thread Piotr Findeisen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405524#comment-16405524
 ] 

Piotr Findeisen commented on HIVE-18925:


[~sershe] could you please check if this is a correct fix for 
{{TimestampParser#startingDateValue}}?
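
For reference, one way such a fix could look, assuming the field is a Joda-Time 
{{DateTime}}; this is only a sketch of the idea, not necessarily what the 
attached patch does. Building the instant in a fixed zone such as UTC avoids the 
DST "gap" that exists at the epoch in America/Bahia_Banderas.

{code:java}
// Sketch under assumptions -- not necessarily HIVE-18925.patch. Constructing
// 1970-01-01T00:00 in the JVM default zone throws IllegalInstantException when
// that local time falls into a DST gap; a fixed-offset zone never has gaps.
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class StartingDateValue {
  static final DateTime STARTING_DATE_VALUE =
      new DateTime(1970, 1, 1, 0, 0, 0, DateTimeZone.UTC);
}
{code}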

> Hive doesn't work when JVM is America/Bahia_Banderas time zone
> --
>
> Key: HIVE-18925
> URL: https://issues.apache.org/jira/browse/HIVE-18925
> Project: Hive
>  Issue Type: Bug
> Environment: JVM in America/Bahia_Banderas zone
>Reporter: Piotr Findeisen
>Assignee: Piotr Findeisen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-18925.patch
>
>
> Hive Server2 doesn't work if started with 
> {{-Duser.timezone=America/Bahia_Banderas}}
>  
> Steps to reproduce
>  # use [https://github.com/big-data-europe/docker-hive]
>  # Add {{HADOOP_CLIENT_OPTS: '-Duser.timezone=America/Bahia_Banderas'}} to 
> {{hive-server}} docker container environment configuration
>  # {{docker-compose up}}
>  # 
> {code:java}
> host# docker-compose exec hive-server bash
> container# /opt/hive/bin/beeline -u jdbc:hive2://localhost:1 
> --verbose=true
> ...
> jdbc:hive2://localhost:1> select 1;{code}
> The above fails and prints
> {noformat}
> Error: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas) (state=08S01,code=0)
> java.sql.SQLException: java.lang.IllegalStateException: Can't overwrite cause 
> with org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:323)
> at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:253)
> at org.apache.hive.beeline.Commands.executeInternal(Commands.java:997)
> at org.apache.hive.beeline.Commands.execute(Commands.java:1205)
> at org.apache.hive.beeline.Commands.sql(Commands.java:1134)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1314)
> at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1178)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1033)
> at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:519)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.IllegalStateException: Can't overwrite cause with 
> org.joda.time.IllegalInstantException: Illegal instant due to time zone 
> offset transition (daylight savings time 'gap'): 1970-01-01T00:00:00.000 
> (America/Bahia_Banderas)
> at java.lang.Throwable.initCause(Throwable.java:457)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:237)
> at 
> org.apache.hive.service.cli.HiveSQLException.toCause(HiveSQLException.java:198)
> at 
> org.apache.hive.service.cli.HiveSQLException.(HiveSQLException.java:108)
> at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:267)
> at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:253)
> at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:313)
> ... 15 more
> Caused by: java.lang.ExceptionInInitializerError: null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hive.service.cli.HiveSQLException.newInstance(HiveSQLException.java:245)
> at 
> org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:211)
> ... 21 more{noformat}
> From the above stacktrace it's not visible what the cause is, but I think 
> it's the initialization of 
> {{org.apache.hive.common.util.TimestampParser#startingDateValue}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405498#comment-16405498
 ] 

Hive QA commented on HIVE-17843:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} root: The patch generated 0 new + 17 unchanged - 2 
fixed = 17 total (was 19) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} ql: The patch generated 0 new + 17 unchanged - 2 
fixed = 17 total (was 19) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9712/dev-support/hive-personality.sh
 |
| git revision | master / 26c0ab6 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9712/yetus/patch-asflicense-problems.txt
 |
| modules | C: . ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9712/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.
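
For illustration, the widening a correct UINT_32 reader needs (hypothetical 
helper, not the patch itself): the 32 raw bits are kept but reinterpreted as an 
unsigned value in a wider Java type.

{code:java}
// Minimal sketch: mask-and-widen so the sign bit is not interpreted as negative.
public class Uint32 {
  public static long toUnsignedLong(int rawInt32) {
    return rawInt32 & 0xFFFFFFFFL;  // e.g. -1 (0xFFFFFFFF) becomes 4294967295
  }
}
{code}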



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18264) CachedStore: Store cached partitions/col stats within the table cache and make prewarm non-blocking

2018-03-19 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405480#comment-16405480
 ] 

Alexander Kolbasov commented on HIVE-18264:
---

[~vgumashta] It wasn't clear that you had the final version ready - there were 
multiple patches posted here, and you never confirmed which one you intended 
to commit.

> CachedStore: Store cached partitions/col stats within the table cache and 
> make prewarm non-blocking
> ---
>
> Key: HIVE-18264
> URL: https://issues.apache.org/jira/browse/HIVE-18264
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18264.1.patch, HIVE-18264.2.patch, 
> HIVE-18264.3.patch, HIVE-18264.4.patch, HIVE-18264.5.patch, 
> HIVE-18264.6.patch, HIVE-18264.7.patch, HIVE-18264.8.patch, HIVE-18264.8.patch
>
>
> Currently we have a separate cache for partitions and partition col stats 
> which results in some calls iterating through each of these for 
> retrieving/updating. For example, to modify a partition col stat, currently 
> we need to lock table, partition and partition col stats caches which are all 
> separate hashmaps. We can get better performance by organizing 
> hierarchically. For example, we can have a partition, partition col stats and 
> table col stats cache per table to improve on the previous mechanisms. This 
> will also result in better concurrency, since now instead of locking the 
> whole cache, we can selectively lock the table cache and modify multiple 
> tables in parallel. 
> In addition, currently, the prewarm mechanism populates all the caches 
> initially (it skips tables that do not pass whitelist/blacklist filter) and 
> it is a blocking call. This patch also makes prewarm non-blocking so that the 
> calls for tables that are already cached can be served from the memory and 
> the ones that are not can be served from the rdbms. 
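
For illustration, the hierarchy described above could look roughly like this 
(names are invented for the sketch, not the real CachedStore classes): each 
table owns its partitions and stats plus its own lock, so two tables can be 
updated concurrently without locking the whole cache.

{code:java}
// Illustrative only -- not the actual CachedStore implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TableCacheEntry {
  final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  final Map<String, Object> partitions = new ConcurrentHashMap<>();
  final Map<String, Object> partitionColStats = new ConcurrentHashMap<>();
  final Map<String, Object> tableColStats = new ConcurrentHashMap<>();
}

class SharedCache {
  private final Map<String, TableCacheEntry> tables = new ConcurrentHashMap<>();

  void updatePartitionColStat(String tblKey, String statKey, Object stat) {
    TableCacheEntry t = tables.computeIfAbsent(tblKey, k -> new TableCacheEntry());
    t.lock.writeLock().lock();           // only this table is locked
    try {
      t.partitionColStats.put(statKey, stat);
    } finally {
      t.lock.writeLock().unlock();
    }
  }
}
{code}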



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-15566) Schema tool upgrade schema fails from 1.2.1 to 2.1.1 because COMPACTION_QUEUE does not exist

2018-03-19 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali resolved HIVE-15566.
-
Resolution: Cannot Reproduce

> Schema tool upgrade schema fails from 1.2.1 to 2.1.1 because COMPACTION_QUEUE 
> does not exist
> 
>
> Key: HIVE-15566
> URL: https://issues.apache.org/jira/browse/HIVE-15566
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Vihang Karajgaonkar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
>
> When we use schematool to upgrade metastore schema from 1.2.1 to 2.x* it 
> fails with the error 
> "ALTER TABLE' cannot be performed on 'COMPACTION_QUEUE' because it does not 
> exist"
> The table COMPACTION_QUEUE is created by hive-txn-schema-2.1.0.derby.sql, but 
> the upgrade script does not seem to call it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18992) enable synthetic file IDs by default in LLAP

2018-03-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405447#comment-16405447
 ] 

Prasanth Jayachandran commented on HIVE-18992:
--

+1, pending tests.

> enable synthetic file IDs by default in LLAP
> 
>
> Key: HIVE-18992
> URL: https://issues.apache.org/jira/browse/HIVE-18992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18992.patch
>
>
> The file IDs are much more reliable than they were initially (hash+len+date 
> instead of just one hash of everything) so they should be enabled by default.
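
For context, a rough sketch of what a synthetic file ID composed of several 
signals looks like (hypothetical helper, not the actual LLAP code): mixing the 
path hash with length and modification time makes accidental collisions far 
less likely than a single hash.

{code:java}
// Illustrative composition only; the real LLAP implementation may combine the
// signals differently.
public class SyntheticFileId {
  public static long of(String path, long length, long modificationTimeMs) {
    long id = path.hashCode();
    id = id * 31 + length;
    id = id * 31 + modificationTimeMs;
    return id;
  }
}
{code}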



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18982) Provide a CLI option to manually trigger failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405439#comment-16405439
 ] 

Prasanth Jayachandran commented on HIVE-18982:
--

Output of CLI commands

{code:title=hive --service hiveserver2 --listHAPeers}
{
  "hiveServer2Instances" : [ {
"host" : "localhost",
"rpcPort" : 10003,
"workerIdentity" : "b98b7faa-c35a-43e3-b6be-d51c75f7524e",
"properties" : {
  "hive.server2.instance.uri" : "localhost:10003",
  "hive.server2.authentication" : "NONE",
  "hive.server2.transport.mode" : "binary",
  "hive.server2.thrift.sasl.qop" : "auth",
  "hive.server2.thrift.bind.host" : "localhost",
  "hive.server2.thrift.port" : "10003",
  "hive.server2.use.SSL" : "false",
  "registry.unique.id" : "b98b7faa-c35a-43e3-b6be-d51c75f7524e",
  "hive.server2.webui.port" : "10030"
},
"transportMode" : "binary",
"httpEndpoint" : "",
"leader" : true
  }, {
"host" : "localhost",
"rpcPort" : 10002,
"workerIdentity" : "38067edc-62ad-4be1-9d3c-718f241b76b4",
"properties" : {
  "hive.server2.instance.uri" : "localhost:10002",
  "hive.server2.authentication" : "NONE",
  "hive.server2.transport.mode" : "binary",
  "hive.server2.thrift.sasl.qop" : "auth",
  "hive.server2.thrift.bind.host" : "localhost",
  "hive.server2.thrift.port" : "10002",
  "hive.server2.use.SSL" : "false",
  "registry.unique.id" : "38067edc-62ad-4be1-9d3c-718f241b76b4",
  "hive.server2.webui.port" : "10020"
},
"transportMode" : "binary",
"httpEndpoint" : "",
"leader" : false
  }, {
"host" : "localhost",
"rpcPort" : 10004,
"workerIdentity" : "ad296e8c-73dc-4f8c-b4d5-9487c125b8ef",
"properties" : {
  "hive.server2.instance.uri" : "localhost:10004",
  "hive.server2.authentication" : "NONE",
  "hive.server2.transport.mode" : "binary",
  "hive.server2.thrift.sasl.qop" : "auth",
  "hive.server2.thrift.bind.host" : "localhost",
  "hive.server2.thrift.port" : "10004",
  "hive.server2.use.SSL" : "false",
  "registry.unique.id" : "ad296e8c-73dc-4f8c-b4d5-9487c125b8ef",
  "hive.server2.webui.port" : "10040"
},
"transportMode" : "binary",
"httpEndpoint" : "",
"leader" : false
  } ]
}
{code}

{code:title=hive --service hiveserver2 --failover 
b98b7faa-c35a-43e3-b6be-d51c75f7524e}
{  "success" : true,  "message" : "Failover successful!"}
{code}

> Provide a CLI option to manually trigger failover
> -
>
> Key: HIVE-18982
> URL: https://issues.apache.org/jira/browse/HIVE-18982
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18982.1.patch, HIVE-18982.2.patch
>
>
> HIVE-18281 added active-passive HA. There might be an administrative need to 
> trigger a manual failover of the active HS2 server. Add a command line tool to 
> view the list of all HS2 instances and trigger a manual failover (only under 
> force mode). The clients currently connected to the active HS2 will be closed. 
> In future, more options for existing client connections can be handled via 
> configs/options (like wait until timeout, wait until current sessions are 
> closed, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405437#comment-16405437
 ] 

Aihua Xu commented on HIVE-18986:
-

Thanks for taking a look. I notice that some tests are affected by this. 
Looking into that. Will post a new patch later.

[~ychena] I think we just have a badly named method, getPartitionColStats. It 
actually gets table column stats. I'm changing it to use getTableColumnStatistics 
so we can use DirectSQL as well.

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... terms.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Alexander Kolbasov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405431#comment-16405431
 ] 

Alexander Kolbasov commented on HIVE-18986:
---

[~aihuaxu] Can you post reviewboard request?

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... terms.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18992) enable synthetic file IDs by default in LLAP

2018-03-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18992:

Attachment: HIVE-18992.patch

> enable synthetic file IDs by default in LLAP
> 
>
> Key: HIVE-18992
> URL: https://issues.apache.org/jira/browse/HIVE-18992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18992.patch
>
>
> The file IDs are much more reliable than they were initially (hash+len+date 
> instead of just one hash of everything) so they should be enabled by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18992) enable synthetic file IDs by default in LLAP

2018-03-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405418#comment-16405418
 ] 

Sergey Shelukhin commented on HIVE-18992:
-

[~prasanth_j] can you take a look? one line patch

> enable synthetic file IDs by default in LLAP
> 
>
> Key: HIVE-18992
> URL: https://issues.apache.org/jira/browse/HIVE-18992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18992.patch
>
>
> The file IDs are much more reliable than they were initially (hash+len+date 
> instead of just one hash of everything) so they should be enabled by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18992) enable synthetic file IDs by default in LLAP

2018-03-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18992:

Status: Patch Available  (was: Open)

> enable synthetic file IDs by default in LLAP
> 
>
> Key: HIVE-18992
> URL: https://issues.apache.org/jira/browse/HIVE-18992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18992.patch
>
>
> The file IDs are much more reliable than they were initially (hash+len+date 
> instead of just one hash of everything) so they should be enabled by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18992) enable synthetic file IDs by default in LLAP

2018-03-19 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18992:
---


> enable synthetic file IDs by default in LLAP
> 
>
> Key: HIVE-18992
> URL: https://issues.apache.org/jira/browse/HIVE-18992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> The file IDs are much more reliable than they were initially (hash+len+date 
> instead of just one hash of everything) so they should be enabled by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18825) Define ValidTxnList before starting query optimization

2018-03-19 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405367#comment-16405367
 ] 

Jesus Camacho Rodriguez commented on HIVE-18825:


[~ekoifman], could you take a look? We are planning to push the rest of 
materialized view work that relies on this patch by EOW. Thanks

> Define ValidTxnList before starting query optimization
> --
>
> Key: HIVE-18825
> URL: https://issues.apache.org/jira/browse/HIVE-18825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18825.01.patch, HIVE-18825.02.patch, 
> HIVE-18825.03.patch, HIVE-18825.04.patch, HIVE-18825.05.patch, 
> HIVE-18825.patch
>
>
> Consider a set of tables used by a materialized view where inserts happened 
> after the materialization was created. To compute incremental view 
> maintenance, we need to be able to filter only new rows from those base 
> tables. That can be done by inserting a filter operator with condition e.g. 
> {{ROW\_\_ID.transactionId < highwatermark and ROW\_\_ID.transactionId NOT 
> IN()}} on top of the MVs query definition and triggering the 
> rewriting (which should in turn produce a partial rewriting). However, to do 
> that, we need to have a value for {{ValidTxnList}} during query compilation 
> so we know the snapshot that we are querying.
> This patch aims to generate {{ValidTxnList}} before query optimization. There 
> should not be any visible changes for end user.
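
For context, a hedged sketch of what "define ValidTxnList before optimization" 
can amount to, assuming the standard {{HiveTxnManager}} and {{ValidTxnList}} 
APIs; the exact hook point used by the patch may differ.

{code:java}
// Sketch under assumptions: HiveTxnManager#getValidTxns() and
// ValidTxnList#writeToString() are the standard APIs; where the real patch
// hooks this in may differ. The point is that the transaction snapshot is
// recorded in the conf before optimization, so MV rewriting can compare
// ROW__ID transaction ids against a known high watermark.
import org.apache.hadoop.hive.common.ValidTxnList;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;

public class TxnSnapshotBeforeOptimization {
  public static void recordSnapshot(HiveTxnManager txnMgr, HiveConf conf) throws Exception {
    ValidTxnList validTxns = txnMgr.getValidTxns();
    conf.set(ValidTxnList.VALID_TXNS_KEY, validTxns.writeToString());
    // ... logical optimization / materialized view rewriting runs after this ...
  }
}
{code}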



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-03-19 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405342#comment-16405342
 ] 

Jesus Camacho Rodriguez commented on HIVE-18991:


Thanks [~alangates]. Also related HIVE-18620. I will take a look at both asap.

> Drop database cascade doesn't work with materialized views
> --
>
> Key: HIVE-18991
> URL: https://issues.apache.org/jira/browse/HIVE-18991
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> Create a database, add a table and then a materialized view that depends on 
> the table.  Then drop the database with cascade set.  Sometimes this will 
> fail because when HiveMetaStore.drop_database_core goes to drop all of the 
> tables it may drop the base table before the materialized view, which will 
> cause an integrity constraint violation in the RDBMS.  To resolve this that 
> method should change to fetch and drop materialized views before tables.
> cc [~jcamachorodriguez]
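
For illustration, the reordering described above amounts to something like the 
following generic sketch; the real fix lives in HiveMetaStore.drop_database_core 
and would use the metastore's own table-type information.

{code:java}
// Illustrative only: drop materialized views first, then base tables, so the
// RDBMS never sees a base table removed while a dependent view still exists.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DropOrdering {
  public static List<String> orderForDrop(List<String> tableNames,
                                          Predicate<String> isMaterializedView) {
    List<String> ordered = new ArrayList<>();
    tableNames.stream().filter(isMaterializedView).forEach(ordered::add);
    tableNames.stream().filter(t -> !isMaterializedView.test(t)).forEach(ordered::add);
    return ordered;
  }
}
{code}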



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18926) Imporve operator-tree matching

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405340#comment-16405340
 ] 

Hive QA commented on HIVE-18926:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915159/HIVE-18926.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 29 failed/errored test(s), 13009 tests 
executed
*Failed tests:*
{noformat}
TestJdbcWithDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=92)

[infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q]
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Assigned] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-03-19 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-18991:
--

Assignee: Jesus Camacho Rodriguez

> Drop database cascade doesn't work with materialized views
> --
>
> Key: HIVE-18991
> URL: https://issues.apache.org/jira/browse/HIVE-18991
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> Create a database, add a table and then a materialized view that depends on 
> the table.  Then drop the database with cascade set.  Sometimes this will 
> fail because when HiveMetaStore.drop_database_core goes to drop all of the 
> tables it may drop the base table before the materialized view, which will 
> cause an integrity constraint violation in the RDBMS.  To resolve this that 
> method should change to fetch and drop materialized views before tables.
> cc [~jcamachorodriguez]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18990) Hive doesn't close Tez session properly

2018-03-19 Thread Kryvenko Igor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kryvenko Igor updated HIVE-18990:
-
Status: Patch Available  (was: Open)

> Hive doesn't close Tez session properly
> ---
>
> Key: HIVE-18990
> URL: https://issues.apache.org/jira/browse/HIVE-18990
> Project: Hive
>  Issue Type: Bug
>Reporter: Kryvenko Igor
>Assignee: Kryvenko Igor
>Priority: Major
> Attachments: HIVE-18990.01.patch
>
>
> Hive doesn't close Tez session properly if AM isn't ready for accepting DAG.
> *STR*
> This can be easily reproduced using the following steps:
> *1) configure cluster on Tez;*
> *2) create file test.hql*
> cat ~/test.hql
> show databases;
> *3) run the job*
> $ hive --hiveconf hive.root.logger=DEBUG,console --hiveconf 
> hive.execution.engine=tez -f ~/test.hql
> If we log into the YARN UI, we will see that the job's status is FAILED even 
> though it finished successfully.
> It happens because Hive creates a Tez session by default, and if the query 
> finishes very quickly, we can't close the Tez session properly because the AM 
> isn't ready to accept any requests.
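
For context, a hedged sketch of one way to avoid the race (whether 
HIVE-18990.01.patch does exactly this is not shown here): wait for the AM to 
become ready before asking it to shut down. {{TezClient#waitTillReady()}} and 
{{TezClient#stop()}} are existing Tez client APIs.

{code:java}
// Sketch only -- not necessarily what the attached patch does.
import org.apache.tez.client.TezClient;

public class SafeTezShutdown {
  public static void closeQuietly(TezClient tezClient) {
    try {
      tezClient.waitTillReady();   // block until the AM can accept requests
    } catch (Exception e) {
      // log and fall through; we still attempt to stop the session
    } finally {
      try {
        tezClient.stop();
      } catch (Exception ignored) {
      }
    }
  }
}
{code}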



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18990) Hive doesn't close Tez session properly

2018-03-19 Thread Kryvenko Igor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405320#comment-16405320
 ] 

Kryvenko Igor commented on HIVE-18990:
--

[~sershe] Could you review, please?

> Hive doesn't close Tez session properly
> ---
>
> Key: HIVE-18990
> URL: https://issues.apache.org/jira/browse/HIVE-18990
> Project: Hive
>  Issue Type: Bug
>Reporter: Kryvenko Igor
>Assignee: Kryvenko Igor
>Priority: Major
> Attachments: HIVE-18990.01.patch
>
>
> Hive doesn't close Tez session properly if AM isn't ready for accepting DAG.
> *STR*
> This can be easily reproduced using the following steps:
> *1) configure cluster on Tez;*
> *2) create file test.hql*
> cat ~/test.hql
> show databases;
> *3) run the job*
> $ hive --hiveconf hive.root.logger=DEBUG,console --hiveconf 
> hive.execution.engine=tez -f ~/test.hql
> If we log into the YARN UI, we will see that the job's status is FAILED even 
> though it finished successfully.
> It happens because Hive creates a Tez session by default, and if the query 
> finishes very quickly, we can't close the Tez session properly because the AM 
> isn't ready to accept any requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18990) Hive doesn't close Tez session properly

2018-03-19 Thread Kryvenko Igor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kryvenko Igor updated HIVE-18990:
-
Attachment: HIVE-18990.01.patch

> Hive doesn't close Tez session properly
> ---
>
> Key: HIVE-18990
> URL: https://issues.apache.org/jira/browse/HIVE-18990
> Project: Hive
>  Issue Type: Bug
>Reporter: Kryvenko Igor
>Assignee: Kryvenko Igor
>Priority: Major
> Attachments: HIVE-18990.01.patch
>
>
> Hive doesn't close Tez session properly if AM isn't ready for accepting DAG.
> *STR*
> This can be easily reproduced using the following steps:
> *1) configure cluster on Tez;*
> *2) create file test.hql*
> cat ~/test.hql
> show databases;
> *3) run the job*
> $ hive --hiveconf hive.root.logger=DEBUG,console --hiveconf 
> hive.execution.engine=tez -f ~/test.hql
> If we log into the YARN UI, we will see that the job's status is FAILED even 
> though it finished successfully.
> It happens because Hive creates a Tez session by default, and if the query 
> finishes very quickly, we can't close the Tez session properly because the AM 
> isn't ready to accept any requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18990) Hive doesn't close Tez session properly

2018-03-19 Thread Kryvenko Igor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kryvenko Igor reassigned HIVE-18990:



> Hive doesn't close Tez session properly
> ---
>
> Key: HIVE-18990
> URL: https://issues.apache.org/jira/browse/HIVE-18990
> Project: Hive
>  Issue Type: Bug
>Reporter: Kryvenko Igor
>Assignee: Kryvenko Igor
>Priority: Major
>
> Hive doesn't close Tez session properly if AM isn't ready for accepting DAG.
> *STR*
> This can be easily reproduced using the following steps:
> *1) configure cluster on Tez;*
> *2) create file test.hql*
> cat ~/test.hql
> show databases;
> *3) run the job*
> $ hive --hiveconf hive.root.logger=DEBUG,console --hiveconf 
> hive.execution.engine=tez -f ~/test.hql
> If we log into the YARN UI, we will see that the job's status is FAILED even 
> though it finished successfully.
> It happens because Hive creates a Tez session by default, and if the query 
> finishes very quickly, we can't close the Tez session properly because the AM 
> isn't ready to accept any requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-19 Thread Kryvenko Igor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405302#comment-16405302
 ] 

Kryvenko Igor commented on HIVE-18727:
--

Patch #2: 
  Adds APL license headers. 
  Renames {{HiveError}} to {{DataConstraintViolationError}} and moves it to the 
{{org.apache.hadoop.hive.ql.exec.errors}} package.
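
For illustration, a minimal sketch of what the renamed class amounts to (the 
actual patch contents may differ): an {{Error}} is not retried by TezProcessor, 
so a NOT NULL violation fails the query immediately instead of burning retries.

{code:java}
// Sketch of the idea only; the real patch may differ in details.
package org.apache.hadoop.hive.ql.exec.errors;

public class DataConstraintViolationError extends Error {
  public DataConstraintViolationError(String message) {
    super(message);
  }
}

// Inside GenericUDFEnforceNotNullConstraint#evaluate(), conceptually:
//   if (constraintViolated) {
//     throw new DataConstraintViolationError("NOT NULL constraint violated!");
//   }
{code}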

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.patch
>
>
> Throwing an exception makes TezProcessor stop retrying the task. Since this 
> is NOT NULL constraint violation we don't want TezProcessor to keep retrying 
> on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-19 Thread Kryvenko Igor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kryvenko Igor updated HIVE-18727:
-
Attachment: HIVE-18727.02.patch

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.02.patch, HIVE-18727.patch
>
>
> Throwing an exception makes TezProcessor stop retrying the task. Since this 
> is NOT NULL constraint violation we don't want TezProcessor to keep retrying 
> on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405285#comment-16405285
 ] 

Eugene Koifman commented on HIVE-18739:
---

https://reviews.apache.org/r/66148/

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Summary: Add support for Export from Acid table  (was: Add support for 
Export from unpartitioned Acid table)

> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18740) Add support for Export from partitioned Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman resolved HIVE-18740.
---
Resolution: Won't Fix

This was done as part of HIVE-18739.

> Add support for Export from partitioned Acid table
> --
>
> Key: HIVE-18740
> URL: https://issues.apache.org/jira/browse/HIVE-18740
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>
> figure out how to translate (partial) partition spec from Export command into 
> a "where" clause



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405277#comment-16405277
 ] 

Sergey Shelukhin commented on HIVE-18739:
-

Is it possible to post a RB?

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18982) Provide a CLI option to manually trigger failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405280#comment-16405280
 ] 

Prasanth Jayachandran commented on HIVE-18982:
--

Test failures are unrelated. [~sershe] can you please review this patch?

> Provide a CLI option to manually trigger failover
> -
>
> Key: HIVE-18982
> URL: https://issues.apache.org/jira/browse/HIVE-18982
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18982.1.patch, HIVE-18982.2.patch
>
>
> HIVE-18281 added active-passive HA. There might be an administrative need to 
> trigger a manual failover of the active HS2 server. Add a command line tool to 
> view the list of all HS2 instances and trigger a manual failover (only under 
> force mode). The clients currently connected to the active HS2 will be closed. 
> In future, more options for existing client connections can be handled via 
> configs/options (like wait until timeout, wait until current sessions are 
> closed, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18982) Provide a CLI option to manually trigger failover

2018-03-19 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18982:
-
Attachment: HIVE-18982.2.patch

> Provide a CLI option to manually trigger failover
> -
>
> Key: HIVE-18982
> URL: https://issues.apache.org/jira/browse/HIVE-18982
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18982.1.patch, HIVE-18982.2.patch
>
>
> HIVE-18281 added active-passive HA. There might be an administrative need to 
> trigger a manual failover of the active HS2 server. Add a command line tool to 
> view the list of all HS2 instances and trigger a manual failover (only under 
> force mode). The clients currently connected to the active HS2 will be closed. 
> In future, more options for existing client connections can be handled via 
> configs/options (like wait until timeout, wait until current sessions are 
> closed, etc.).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.12.patch

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405273#comment-16405273
 ] 

Eugene Koifman commented on HIVE-18739:
---

The TestCommands.testNoopReplEximCommands failure is related.

Patch 12 addresses it.

[~sershe], could you review please?

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, 
> HIVE-18739.12.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18926) Imporve operator-tree matching

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405271#comment-16405271
 ] 

Hive QA commented on HIVE-18926:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 3 new + 137 unchanged - 12 
fixed = 140 total (was 149) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9711/dev-support/hive-personality.sh
 |
| git revision | master / 26c0ab6 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9711/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9711/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9711/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Imporve operator-tree matching
> --
>
> Key: HIVE-18926
> URL: https://issues.apache.org/jira/browse/HIVE-18926
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18926.01.patch, HIVE-18926.02.patch, 
> HIVE-18926.03.patch, HIVE-18926.04.patch
>
>
> currently joins are not matched



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405222#comment-16405222
 ] 

Hive QA commented on HIVE-18739:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915141/HIVE-18739.11.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 13033 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-18264) CachedStore: Store cached partitions/col stats within the table cache and make prewarm non-blocking

2018-03-19 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405190#comment-16405190
 ] 

Vaibhav Gumashta commented on HIVE-18264:
-

Committed to master. Thanks [~akolb] and [~daijy] for the reviews.

> CachedStore: Store cached partitions/col stats within the table cache and 
> make prewarm non-blocking
> ---
>
> Key: HIVE-18264
> URL: https://issues.apache.org/jira/browse/HIVE-18264
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18264.1.patch, HIVE-18264.2.patch, 
> HIVE-18264.3.patch, HIVE-18264.4.patch, HIVE-18264.5.patch, 
> HIVE-18264.6.patch, HIVE-18264.7.patch, HIVE-18264.8.patch, HIVE-18264.8.patch
>
>
> Currently we have a separate cache for partitions and partition col stats 
> which results in some calls iterating through each of these for 
> retrieving/updating. For example, to modify a partition col stat, currently 
> we need to lock table, partition and partition col stats caches which are all 
> separate hashmaps. We can get better performance by organizing 
> hierarchically. For example, we can have a partition, partition col stats and 
> table col stats cache per table to improve on the previous mechanisms. This 
> will also result in better concurrency, since now instead of locking the 
> whole cache, we can selectively lock the table cache and modify multiple 
> tables in parallel. 
> In addition, currently, the prewarm mechanism populates all the caches 
> initially (it skips tables that do not pass whitelist/blacklist filter) and 
> it is a blocking call. This patch also makes prewarm non-blocking so that the 
> calls for tables that are already cached can be served from the memory and 
> the ones that are not can be served from the rdbms. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18264) CachedStore: Store cached partitions/col stats within the table cache and make prewarm non-blocking

2018-03-19 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-18264:

  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: 3.0.0
  Status: Resolved  (was: Patch Available)

> CachedStore: Store cached partitions/col stats within the table cache and 
> make prewarm non-blocking
> ---
>
> Key: HIVE-18264
> URL: https://issues.apache.org/jira/browse/HIVE-18264
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-18264.1.patch, HIVE-18264.2.patch, 
> HIVE-18264.3.patch, HIVE-18264.4.patch, HIVE-18264.5.patch, 
> HIVE-18264.6.patch, HIVE-18264.7.patch, HIVE-18264.8.patch, HIVE-18264.8.patch
>
>
> Currently we have a separate cache for partitions and partition col stats 
> which results in some calls iterating through each of these for 
> retrieving/updating. For example, to modify a partition col stat, currently 
> we need to lock table, partition and partition col stats caches which are all 
> separate hashmaps. We can get better performance by organizing 
> hierarchically. For example, we can have a partition, partition col stats and 
> table col stats cache per table to improve on the previous mechanisms. This 
> will also result in better concurrency, since now instead of locking the 
> whole cache, we can selectively lock the table cache and modify multiple 
> tables in parallel. 
> In addition, currently, the prewarm mechanism populates all the caches 
> initially (it skips tables that do not pass whitelist/blacklist filter) and 
> it is a blocking call. This patch also makes prewarm non-blocking so that the 
> calls for tables that are already cached can be served from the memory and 
> the ones that are not can be served from the rdbms. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405169#comment-16405169
 ] 

Vineet Garg commented on HIVE-18727:


[~vbeshka] I guess package org.apache.hadoop.hive.ql.exec.errors would be more 
appropriate.

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.patch
>
>
> Throwing an exception makes TezProcessor stop retrying the task. Since this 
> is NOT NULL constraint violation we don't want TezProcessor to keep retrying 
> on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18727) Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of Exception on failure

2018-03-19 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405169#comment-16405169
 ] 

Vineet Garg edited comment on HIVE-18727 at 3/19/18 5:34 PM:
-

[~vbeshka] I guess {{org.apache.hadoop.hive.ql.exec.errors}} would be more 
appropriate.


was (Author: vgarg):
[~vbeshka] I guess package org.apache.hadoop.hive.ql.exec.errors would be more 
appropriate.

> Update GenericUDFEnforceNotNullConstraint to throw an ERROR instead of 
> Exception on failure
> ---
>
> Key: HIVE-18727
> URL: https://issues.apache.org/jira/browse/HIVE-18727
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Kryvenko Igor
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18727.patch
>
>
> Throwing an exception makes TezProcessor stop retrying the task. Since this 
> is NOT NULL constraint violation we don't want TezProcessor to keep retrying 
> on failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18940) Hive notifications serialize all write DDL operations

2018-03-19 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405155#comment-16405155
 ] 

Andrew Sherman commented on HIVE-18940:
---

The restriction that the EVENT_ID has to be in the order of commit is 
presumably having a major impact on concurrency in the HMS DBMS. It effectively 
serializes every transaction. Has there been any thought to relaxing this 
restriction? 

> Hive notifications serialize all write DDL operations
> -
>
> Key: HIVE-18940
> URL: https://issues.apache.org/jira/browse/HIVE-18940
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Priority: Major
>
> The implementation of DbNotificationListener uses a single row to store 
> current notification ID and uses {{SELECT FOR UPDATE}} to lock the row. This 
> serializes all write DDL operations which isn't good.
> We should consider using database auto-increment for notification ID instead. 
> Especially on mMySQL/innoDb it is supported natively with relatively 
> light-weight locking. 
> This creates potential issue for consumers though because such IDs may have 
> holes. There are two types of holes - transient hole for a transaction which 
> have not committed yet and will be committed shortly and permanent holes for 
> transactions that fail. Consumers need to deal with it. It may be useful to 
> add DB-generated timestamp as well to assist in recovery from holes.
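
For illustration only, a hedged JDBC sketch of the two approaches (table and column names are simplified stand-ins, not necessarily the exact metastore schema): the current pattern locks a single sequence row with SELECT ... FOR UPDATE inside every DDL transaction, while a database auto-increment key hands out IDs without cross-transaction blocking, at the price of possible gaps.

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical JDBC sketch; table/column names are illustrative only.
class NotificationIdSketch {

  // Current style: serialize writers on one sequence row.
  long nextIdWithRowLock(Connection conn) throws SQLException {
    try (PreparedStatement sel = conn.prepareStatement(
            "SELECT NEXT_EVENT_ID FROM NOTIFICATION_SEQUENCE FOR UPDATE");
         ResultSet rs = sel.executeQuery()) {
      rs.next();
      long id = rs.getLong(1);
      try (PreparedStatement upd = conn.prepareStatement(
              "UPDATE NOTIFICATION_SEQUENCE SET NEXT_EVENT_ID = ?")) {
        upd.setLong(1, id + 1);
        upd.executeUpdate();
      }
      return id; // every concurrent DDL waits for this row lock
    }
  }

  // Proposed style: let the database assign the ID; consumers must tolerate gaps.
  long insertWithAutoIncrement(Connection conn, String message) throws SQLException {
    try (PreparedStatement ins = conn.prepareStatement(
            "INSERT INTO NOTIFICATION_LOG (MESSAGE) VALUES (?)",
            Statement.RETURN_GENERATED_KEYS)) {
      ins.setString(1, message);
      ins.executeUpdate();
      try (ResultSet keys = ins.getGeneratedKeys()) {
        keys.next();
        return keys.getLong(1); // no serialization on a shared sequence row
      }
    }
  }
}
{code}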



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405150#comment-16405150
 ] 

Hive QA commented on HIVE-18739:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 35 new + 669 unchanged - 7 
fixed = 704 total (was 676) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9710/dev-support/hive-personality.sh
 |
| git revision | master / 79e8869 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9710/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9710/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9710/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405149#comment-16405149
 ] 

Yongzhi Chen commented on HIVE-18986:
-

The change looks good. +1
Just one question:
Why delete the private Map getPartitionColStats method 
instead of changing it by adding a new public static Map convertToMTableColumnStatistics method? 

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g., 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> the query with lots of colName='c1' && colName='c2' && ... .
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  
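
One hedged way to picture a workaround (a sketch only, not the attached patch): rather than building a single filter with thousands of colName == 'cN' terms, process the column names in bounded batches so DataNucleus never has to render one enormous expression tree. The updateBatch() helper below is hypothetical, and real code would bind the values as parameters rather than concatenating them.

{code}
import java.util.List;

// Hypothetical sketch: split a huge column-name filter into bounded batches
// so the generated query text stays small.
class ColumnStatsRenameSketch {
  private static final int BATCH_SIZE = 100;

  void renameInBatches(List<String> colNames) {
    for (int i = 0; i < colNames.size(); i += BATCH_SIZE) {
      List<String> batch = colNames.subList(i, Math.min(i + BATCH_SIZE, colNames.size()));
      updateBatch(batch); // one moderate-sized query per batch instead of one giant one
    }
  }

  private void updateBatch(List<String> batch) {
    StringBuilder filter = new StringBuilder();
    for (int i = 0; i < batch.size(); i++) {
      if (i > 0) {
        filter.append(" || ");
      }
      filter.append("colName == '").append(batch.get(i)).append("'"); // bind in real code
    }
    // execute the per-batch update with 'filter' here (omitted in this sketch)
  }
}
{code}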



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18926) Improve operator-tree matching

2018-03-19 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-18926:

Attachment: HIVE-18926.04.patch

> Improve operator-tree matching
> --
>
> Key: HIVE-18926
> URL: https://issues.apache.org/jira/browse/HIVE-18926
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18926.01.patch, HIVE-18926.02.patch, 
> HIVE-18926.03.patch, HIVE-18926.04.patch
>
>
> Currently, joins are not matched.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405121#comment-16405121
 ] 

Hive QA commented on HIVE-18858:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915140/HIVE-18858.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 13027 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Updated] (HIVE-17843) UINT32 Parquet columns are handled as signed INT32-s, silently reading incorrect data

2018-03-19 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17843:
---
Attachment: HIVE-17843.3.patch

> UINT32 Parquet columns are handled as signed INT32-s, silently reading 
> incorrect data
> -
>
> Key: HIVE-17843
> URL: https://issues.apache.org/jira/browse/HIVE-17843
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Ivanfi
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17843.1.patch, HIVE-17843.1.patch, 
> HIVE-17843.2.patch, HIVE-17843.3.patch
>
>
> An unsigned 32 bit Parquet column, such as
> {noformat}
> optional int32 uint_32_col (UINT_32)
> {noformat}
> is read by Hive as if it were signed, leading to incorrect results.
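
The effect can be shown with a small self-contained Java example (illustrative only, not the patch): a UINT_32 value lives in a signed 32-bit slot, so values above Integer.MAX_VALUE come back negative unless they are widened with an unsigned mask.

{code}
// Illustrative only: how an unsigned 32-bit Parquet value must be widened.
public class Uint32Demo {
  public static void main(String[] args) {
    int raw = (int) 3_000_000_000L;   // UINT_32 value 3,000,000,000 stored in a signed slot
    System.out.println(raw);          // prints -1294967296 when treated as signed INT32
    long widened = raw & 0xFFFFFFFFL; // unsigned widening
    System.out.println(widened);      // prints 3000000000, the intended value
  }
}
{code}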



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18703) Make Operator comparison to be based on some primitive

2018-03-19 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich resolved HIVE-18703.
-
   Resolution: Fixed
 Assignee: Zoltan Haindrich
Fix Version/s: 3.0.0

HIVE-17626 contained this

> Make Operator comparison to be based on some primitive
> ---
>
> Key: HIVE-18703
> URL: https://issues.apache.org/jira/browse/HIVE-18703
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 3.0.0
>
>
> Currently we have {{Operator.isSame(op)}}, which can tell whether two 
> operators are equal; it would be great to introduce a simple object on which 
> the comparison happens, and that could also enable looking up operators 
> in a set.
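
A hedged sketch of the idea (class names here are invented, not Hive classes): give each operator a small immutable signature object with proper equals/hashCode, so equality is no longer a bespoke isSame() walk and operators can be looked up in a HashSet or HashMap.

{code}
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch: an immutable "signature" value object for an operator.
// Equality is defined by the signature, so operators can live in hash-based sets.
final class OpSignature {
  private final String operatorType; // e.g. "FIL", "SEL", "GBY"
  private final String exprDigest;   // canonical form of the operator's expressions

  OpSignature(String operatorType, String exprDigest) {
    this.operatorType = operatorType;
    this.exprDigest = exprDigest;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof OpSignature)) return false;
    OpSignature other = (OpSignature) o;
    return operatorType.equals(other.operatorType) && exprDigest.equals(other.exprDigest);
  }

  @Override
  public int hashCode() {
    return Objects.hash(operatorType, exprDigest);
  }
}

class SignatureLookupDemo {
  public static void main(String[] args) {
    Set<OpSignature> seen = new HashSet<>();
    seen.add(new OpSignature("FIL", "(col1 > 10)"));
    // constant-time membership test instead of pairwise isSame() comparisons
    System.out.println(seen.contains(new OpSignature("FIL", "(col1 > 10)"))); // true
  }
}
{code}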



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Attachment: (was: HIVE-18986.1.patch)

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g., 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> the query with lots of colName='c1' && colName='c2' && ... .
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Attachment: HIVE-18986.1.patch

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g., 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> the query with lots of colName='c1' && colName='c2' && ... .
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-03-19 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-18986:

Description: 
If the table contains a lot of columns, e.g., 5k, a simple table rename would fail 
with the following stack trace. The issue is that DataNucleus can't handle the query 
with lots of colName='c1' && colName='c2' && ... .

 

2018-03-13 17:19:52,770 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: db=default 
tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 17:20:00,495 
ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-200]: 
java.lang.StackOverflowError at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)

 

  was:
If the table contains a lot of columns e.g, 5k, simple table rename would fail 
with the following stack trace. The issue is datanucleus can't handle the query 
with lots of colName='c1' && colName='c2'.

 

2018-03-13 17:19:52,770 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: db=default 
tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 17:20:00,495 
ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-200]: 
java.lang.StackOverflowError at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)

 


> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18986.1.patch
>
>
> If the table contains a lot of columns, e.g., 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> the query with lots of colName='c1' && colName='c2' && ... .
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405079#comment-16405079
 ] 

Hive QA commented on HIVE-18858:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9709/dev-support/hive-personality.sh
 |
| git revision | master / 94152c9 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9709/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api . ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9709/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> System properties in job configuration not resolved when submitting MR job
> --
>
> Key: HIVE-18858
> URL: https://issues.apache.org/jira/browse/HIVE-18858
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-18858.1.patch, HIVE-18858.2.patch
>
>
> Since [this hadoop 
> commit|https://github.com/apache/hadoop/commit/5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1]
>  that was first released in 3.0.0, Configuration has a restricted mode, that 
> disables the resolution of system properties (that happens when retrieving a 
> configuration option).
> This leads to test failures when switching to Hadoop 3.0.0 (instead of 
> 3.0.0-beta1), since we're relying on the [substitution of 
> test.tmp.dir|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/data/conf/hive-site.xml#L37]
>  during the [maven 
> build|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/pom.xml#L83].
>  See test results on HIVE-18327.
> When we're passing job configurations to Hadoop, I 

[jira] [Assigned] (HIVE-14388) Add number of rows inserted message after insert command in Beeline

2018-03-19 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali reassigned HIVE-14388:
---

Assignee: Bharathkrishna Guruvayoor Murali  (was: Ke Jia)

> Add number of rows inserted message after insert command in Beeline
> ---
>
> Key: HIVE-14388
> URL: https://issues.apache.org/jira/browse/HIVE-14388
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
> Attachments: HIVE-14388-WIP.patch
>
>
> Currently, when you run an insert command in Beeline, it returns a message 
> saying "No rows affected .."
> A better and more intuitive message would be "xxx rows inserted (26.068 seconds)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14388) Add number of rows inserted message after insert command in Beeline

2018-03-19 Thread Bharathkrishna Guruvayoor Murali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16405037#comment-16405037
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-14388:
-

Thanks for letting me know. Assigning to myself.

> Add number of rows inserted message after insert command in Beeline
> ---
>
> Key: HIVE-14388
> URL: https://issues.apache.org/jira/browse/HIVE-14388
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Ke Jia
>Priority: Minor
> Attachments: HIVE-14388-WIP.patch
>
>
> Currently, when you run an insert command in Beeline, it returns a message 
> saying "No rows affected .."
> A better and more intuitive message would be "xxx rows inserted (26.068 seconds)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18342) Remove LinkedList from HiveAlterHandler.java

2018-03-19 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18342:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks Beluga Behr for the contribution!

> Remove LinkedList from HiveAlterHandler.java
> 
>
> Key: HIVE-18342
> URL: https://issues.apache.org/jira/browse/HIVE-18342
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HIVE-18342.1.patch
>
>
> Remove {{LinkedList}} in favor of {{ArrayList}} for class 
> {{org.apache.hadoop.hive.metastore.HiveAlterHandler}}.
> {quote}
> The size, isEmpty, get, set, iterator, and listIterator operations run in 
> constant time. The add operation runs in amortized constant time, that is, 
> adding n elements requires O(n) time. All of the other operations run in 
> linear time (roughly speaking). *The constant factor is low compared to that 
> for the LinkedList implementation.*
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18014) HIVE UDF jar_ Error ST_Point _ GIS ESRI

2018-03-19 Thread Venkat Atmuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkat Atmuri updated HIVE-18014:
-
Priority: Critical  (was: Major)

> HIVE UDF jar_Error ST_Point _  GIS ESRI
> ---
>
> Key: HIVE-18014
> URL: https://issues.apache.org/jira/browse/HIVE-18014
> Project: Hive
>  Issue Type: Bug
> Environment: CDH-5.10.1 
> Hive 1.1.0
>Reporter: Venkat Atmuri
>Priority: Critical
>
> When I'm trying to create a geometric query with ST_Point:
> hive> add jar esri-geometry-api-1.1-sources.jar;
> Added [esri-geometry-api-1.1-sources.jar] to class path
> Added resources: [esri-geometry-api-1.1-sources.jar]
> hive> add jar esri-geometry-api-1.1.jar;
> Added [esri-geometry-api-1.1.jar] to class path
> Added resources: [esri-geometry-api-1.1.jar]
> hive> add jar spatial-sdk-hadoop.jar;
> Added [spatial-sdk-hadoop.jar] to class path
> Added resources: [spatial-sdk-hadoop.jar]
> hive> add jar spatial-sdk-hive.jar ;
> Added [spatial-sdk-hive.jar] to class path
> Added resources: [spatial-sdk-hive.jar]
> hive> add jar spatial-sdk-json.jar;
> Added [spatial-sdk-json.jar] to class path
> Added resources: [spatial-sdk-json.jar]
> hive>create temporary function ST_AsText as 'com.esri.hadoop.hive.ST_AsText';
> OK
> Time taken: 0.309 seconds
> hive> create temporary function ST_Intersects as 
> 'com.esri.hadoop.hive.ST_Intersects';
> OK
> Time taken: 0.005 seconds
> hive> create temporary function ST_Length as 'com.esri.hadoop.hive.ST_Length';
> OK
> Time taken: 0.003 seconds
> hive> create temporary function ST_LineString as 
> 'com.esri.hadoop.hive.ST_LineString';
> OK
> Time taken: 0.004 seconds
> hive> create temporary function ST_Point as 'com.esri.hadoop.hive.ST_Point';
> OK
> Time taken: 0.004 seconds
> hive> create temporary function ST_Polygon as 
> 'com.esri.hadoop.hive.ST_Polygon';
> OK
> Time taken: 0.003 seconds
> hive> create temporary function ST_SetSRID as 
> 'com.esri.hadoop.hive.ST_SetSRID';
> OK
> Time taken: 0.002 seconds
> hive> create temporary function st_geomfromtext as 
> 'com.esri.hadoop.hive.ST_GeomFromText';
> OK
> Time taken: 0.002 seconds
> hive> create temporary function st_geometrytype as 
> 'com.esri.hadoop.hive.ST_GeometryType';
> OK
> Time taken: 0.002 seconds
> hive> create temporary function st_asjson as 'com.esri.hadoop.hive.ST_AsJson';
> OK
> Time taken: 0.003 seconds
> hive> create temporary function st_asbinary as 
> 'com.esri.hadoop.hive.ST_AsBinary';
> OK
> Time taken: 0.003 seconds
> hive> create temporary function st_x as 'com.esri.hadoop.hive.ST_X';
> OK
> Time taken: 0.003 seconds
> hive> create temporary function st_y as 'com.esri.hadoop.hive.ST_Y';
> OK
> Time taken: 0.002 seconds
> hive> create temporary function st_srid as 'com.esri.hadoop.hive.ST_SRID';
> OK
> Time taken: 0.004 seconds
> #Query
> hive>SELECT ST_Point(longitude, latitude) FROM testing_point LIMIT 1;
>  
> I get  the below error:
> Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: Unable to 
> instantiate UDF implementation class com.esri.hadoop.hive.ST_Point: 
> java.lang.IllegalAccessException: Class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge can not access a 
> member of class com.esri.hadoop.hive.ST_Point with modifiers 
> I've used different jars (the latest one, and a jar that works fine on my own cluster). 
> The error occurs in my development cluster.
> Any ideas on this error?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18344) Remove LinkedList from SharedWorkOptimizer.java

2018-03-19 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18344:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks Beluga Behr for the contribution!

> Remove LinkedList from SharedWorkOptimizer.java
> ---
>
> Key: HIVE-18344
> URL: https://issues.apache.org/jira/browse/HIVE-18344
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HIVE-18344.1.patch, HIVE-18344.2.patch
>
>
> Prefer {{ArrayList}} over {{LinkedList}} especially in this class because the 
> initial size of the collection is known.
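
As a small generic illustration of the point (not the patch itself): when the final size is known up front, an ArrayList can be pre-sized so it never reallocates, and its add/get constant factors stay low compared to LinkedList's node traversal.

{code}
import java.util.ArrayList;
import java.util.List;

// Generic illustration: pre-size the ArrayList when the final size is known.
public class PreSizedListDemo {
  public static void main(String[] args) {
    int expected = 10_000;
    List<Integer> values = new ArrayList<>(expected); // single backing array, no resizing
    for (int i = 0; i < expected; i++) {
      values.add(i); // amortized O(1) with a low constant factor
    }
    System.out.println(values.get(expected / 2)); // O(1) random access; LinkedList would walk nodes
  }
}
{code}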



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16858) Accumulo Utils Improvements

2018-03-19 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-16858:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks Beluga Behr for the contribution!

> Accumulo Utils Improvements
> --
>
> Key: HIVE-16858
> URL: https://issues.apache.org/jira/browse/HIVE-16858
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HIVE-16858.1.patch, HIVE-16858.2.patch
>
>
> # Use Apache library for copy routine
> # Use Apache Commons where advantageous
> # Improve debug logging
> # Fix some spellcheck validations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404979#comment-16404979
 ] 

Hive QA commented on HIVE-18140:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915126/HIVE-18140.01wip04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 112 failed/errored test(s), 13425 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-18739) Add support for Export from unpartitioned Acid table

2018-03-19 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18739:
--
Attachment: HIVE-18739.11.patch

> Add support for Export from unpartitioned Acid table
> 
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, 
> HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-19 Thread Daniel Voros (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404969#comment-16404969
 ] 

Daniel Voros commented on HIVE-18858:
-

Attached patch #2. This uses {{Configuration#iterator()}} directly instead of 
{{HiveConf#getProperties()}} to skip the extra conversion. Hadoop version is 
still 3.0.0 to make sure tests will pass. If they do, I'll upload a patch 
without bumping the Hadoop version.
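
To make the approach easier to follow, a hedged sketch (simplified, not the actual patch): resolve variables while the source Configuration is still unrestricted, then copy the already-resolved values into the JobConf that Hadoop may treat as restricted.

{code}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

// Hedged sketch (not the actual patch): expand variables against the
// unrestricted source config, then set the resolved values on the JobConf.
public class ResolvedCopySketch {
  static JobConf copyResolved(Configuration source) {
    JobConf jobConf = new JobConf(false);
    for (Map.Entry<String, String> entry : source) { // Configuration is Iterable
      // get() performs variable expansion against the unrestricted source config
      String resolved = source.get(entry.getKey());
      if (resolved != null) {
        jobConf.set(entry.getKey(), resolved);
      }
    }
    return jobConf;
  }
}
{code}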

> System properties in job configuration not resolved when submitting MR job
> --
>
> Key: HIVE-18858
> URL: https://issues.apache.org/jira/browse/HIVE-18858
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-18858.1.patch, HIVE-18858.2.patch
>
>
> Since [this hadoop 
> commit|https://github.com/apache/hadoop/commit/5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1]
>  that was first released in 3.0.0, Configuration has a restricted mode, that 
> disables the resolution of system properties (that happens when retrieving a 
> configuration option).
> This leads to test failures when switching to Hadoop 3.0.0 (instead of 
> 3.0.0-beta1), since we're relying on the [substitution of 
> test.tmp.dir|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/data/conf/hive-site.xml#L37]
>  during the [maven 
> build|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/pom.xml#L83].
>  See test results on HIVE-18327.
> When we're passing job configurations to Hadoop, I believe there's no way to 
> disable the restricted mode, since we go through some Hadoop MR calls first, 
> see here:
> {code}
> "HiveServer2-Background-Pool: Thread-105@9500" prio=5 tid=0x69 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:970)
> - locked <0x2fe6> (a org.apache.hadoop.mapred.JobConf)
> at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:895)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:476)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:162)
> at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:788)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1432)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:248)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:90)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> 

[jira] [Updated] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-19 Thread Daniel Voros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros updated HIVE-18858:

Attachment: HIVE-18858.2.patch

> System properties in job configuration not resolved when submitting MR job
> --
>
> Key: HIVE-18858
> URL: https://issues.apache.org/jira/browse/HIVE-18858
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-18858.1.patch, HIVE-18858.2.patch
>
>
> Since [this hadoop 
> commit|https://github.com/apache/hadoop/commit/5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1]
>  that was first released in 3.0.0, Configuration has a restricted mode, that 
> disables the resolution of system properties (that happens when retrieving a 
> configuration option).
> This leads to test failures when switching to Hadoop 3.0.0 (instead of 
> 3.0.0-beta1), since we're relying on the [substitution of 
> test.tmp.dir|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/data/conf/hive-site.xml#L37]
>  during the [maven 
> build|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/pom.xml#L83].
>  See test results on HIVE-18327.
> When we're passing job configurations to Hadoop, I believe there's no way to 
> disable the restricted mode, since we go through some Hadoop MR calls first, 
> see here:
> {code}
> "HiveServer2-Background-Pool: Thread-105@9500" prio=5 tid=0x69 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:970)
> - locked <0x2fe6> (a org.apache.hadoop.mapred.JobConf)
> at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:895)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:476)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:162)
> at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:788)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1432)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:248)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:90)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:353)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404909#comment-16404909
 ] 

Hive QA commented on HIVE-18140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 21 new + 77 unchanged - 6 
fixed = 98 total (was 83) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} ql generated 1 new + 99 unchanged - 1 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9708/dev-support/hive-personality.sh
 |
| git revision | master / 94152c9 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9708/yetus/diff-checkstyle-ql.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9708/yetus/diff-javadoc-javadoc-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9708/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9708/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01wip01.patch, HIVE-18140.01wip03.patch, 
> HIVE-18140.01wip04.patch
>
>
> Suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces estimations for the whole 
> partitioned table of {{RC=10,DS=1K}}
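
A hedged sketch of a safer aggregation (simplified pseudo-logic, not the StatsUtils code): track how many partitions actually reported basic stats and extrapolate from them, instead of letting a single partition's {{RC=10,DS=1K}} stand in for the whole table.

{code}
import java.util.List;

// Hedged sketch, not StatsUtils: aggregate partition row counts but record
// whether the result covers every partition, so callers can extrapolate
// instead of treating a partial sum as the table total.
class PartitionStatsSketch {
  static final long UNKNOWN = -1;

  static long estimateRowCount(List<Long> partitionRowCounts) {
    long sum = 0;
    int known = 0;
    for (long rc : partitionRowCounts) {
      if (rc >= 0) {   // -1 marks a partition with no basic stats
        sum += rc;
        known++;
      }
    }
    if (known == 0) {
      return UNKNOWN;
    }
    if (known < partitionRowCounts.size()) {
      // extrapolate from the known partitions rather than returning the partial sum
      return Math.round((double) sum / known * partitionRowCounts.size());
    }
    return sum;        // all partitions had stats; the sum is exact
  }
}
{code}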



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-03-19 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-18140:

Attachment: HIVE-18140.01wip04.patch

> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01wip01.patch, HIVE-18140.01wip03.patch, 
> HIVE-18140.01wip04.patch
>
>
> Suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces estimations for the whole 
> partitioned table of {{RC=10,DS=1K}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18975) NPE when inserting NULL value in structure and array with HBase table

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404870#comment-16404870
 ] 

Hive QA commented on HIVE-18975:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915109/HIVE-18975.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 13027 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-16889) Improve Performance Of VARCHAR

2018-03-19 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404794#comment-16404794
 ] 

BELUGA BEHR commented on HIVE-16889:


Would it be possible to create an HS2 configuration that would disable these 
checks, even if they are defined in the schema? This would allow third-party 
applications to continue to use the VARCHAR data type, but it would not have this 
overhead.

> Improve Performance Of VARCHAR
> --
>
> Key: HIVE-16889
> URL: https://issues.apache.org/jira/browse/HIVE-16889
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 2.1.1, 3.0.0
>Reporter: BELUGA BEHR
>Assignee: Janaki Lahorani
>Priority: Major
>
> Oftentimes, organizations use tools that create table schemas on the fly and 
> specify a VARCHAR column with precision. In these scenarios, 
> performance suffers even though one could assume it should be better, 
> since there is pre-existing knowledge about the size of the data and buffers 
> could be set up more efficiently than in the case where no such knowledge 
> exists.
> Most of the performance cost seems to be caused by reading a STRING from a file 
> into a byte buffer, checking the length of the STRING, truncating the STRING 
> if needed, and then serializing the STRING back into bytes again.
> From the code, I have identified several areas where developers left notes 
> about later improvements.
> # org.apache.hadoop.hive.serde2.io.HiveVarcharWritable.enforceMaxLength(int)
> # org.apache.hadoop.hive.serde2.lazy.LazyHiveVarchar.init(ByteArrayRef, int, 
> int)
> # 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getHiveVarchar(Object,
>  PrimitiveObjectInspector)
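
For orientation, a simplified sketch of the per-value round trip the description refers to (not Hive's serde code; the method name is illustrative): each VARCHAR(n) value is decoded, length-checked in characters, truncated if needed, and re-encoded.

{code}
import java.nio.charset.StandardCharsets;

// Simplified illustration (not Hive's serde code) of the per-value work a
// VARCHAR(n) column implies: decode, count characters, truncate, re-encode.
public class VarcharEnforceDemo {
  static byte[] enforceMaxLength(byte[] utf8Bytes, int maxChars) {
    String s = new String(utf8Bytes, StandardCharsets.UTF_8); // decode
    if (s.codePointCount(0, s.length()) > maxChars) {         // length check in characters
      int end = s.offsetByCodePoints(0, maxChars);
      s = s.substring(0, end);                                // truncate
    }
    return s.getBytes(StandardCharsets.UTF_8);                // re-encode
  }

  public static void main(String[] args) {
    byte[] out = enforceMaxLength("hello world".getBytes(StandardCharsets.UTF_8), 5);
    System.out.println(new String(out, StandardCharsets.UTF_8)); // "hello"
  }
}
{code}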



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18975) NPE when inserting NULL value in structure and array with HBase table

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404774#comment-16404774
 ] 

Hive QA commented on HIVE-18975:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} hbase-handler: The patch generated 0 new + 1 
unchanged - 80 fixed = 1 total (was 81) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9707/dev-support/hive-personality.sh
 |
| git revision | master / 94152c9 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9707/yetus/patch-asflicense-problems.txt
 |
| modules | C: hbase-handler U: hbase-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9707/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> NPE when inserting NULL value in structure and array with HBase table
> -
>
> Key: HIVE-18975
> URL: https://issues.apache.org/jira/browse/HIVE-18975
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18975.1.patch, HIVE-18975.2.patch
>
>
> STR (Structure)
> *STEP 1. Create tables*
> {code}
> CREATE TABLE IF NOT EXISTS t1 (id INT);
> INSERT INTO TABLE t1 VALUES (1),(2),(3),(4),(5);
> CREATE TABLE IF NOT EXISTS `htable`(
>   `id` INT, 
>   `map_column` STRUCT<s_int:INT,s_string:STRING,s_date:DATE>) ROW FORMAT 
> SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe'  STORED BY 
> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'  WITH SERDEPROPERTIES (   
> 'hbase.columns.mapping'=':key,id:id','serialization.format'='1') 
> TBLPROPERTIES ( 'hbase.table.name'='tmp/h');
> {code}
> *STEP 2. Insert a struct with a NULL value in it into the table stored in HBase*
> {code}
> INSERT INTO `htable` SELECT 2,NAMED_STRUCT("s_int",CAST(NULL AS 
> INT),"s_string","s1","s_date",CAST('2018-03-12' AS DATE)) FROM t1 LIMIT 1;
> {code}
> *ACTUAL RESULT*
> The query fails with NPE.
> {code}
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{},"value":{"_col0":2,"_col1":{"s_int":null,"s_string":"s1","s_date":"2018-03-12"}}}
>   at 
> 
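
As a general illustration of the failure mode (and not the HBase SerDe's actual code), the self-contained Java sketch below shows how serializing a struct that contains a NULL field throws a NullPointerException when the serializer dereferences the field without a null guard, and how writing an explicit null marker avoids it. All class and method names here are hypothetical.

{code}
import java.util.Arrays;
import java.util.List;

/** Hypothetical field serializer used only to illustrate the failure mode. */
public class NullStructFieldSketch {

  // Naive serialization: dereferences every field value, so a null field
  // (like s_int in the reproduction above) throws a NullPointerException.
  static String serializeUnsafe(List<Object> structFields) {
    StringBuilder sb = new StringBuilder();
    for (Object field : structFields) {
      sb.append(field.toString()).append((char) 0x02); // NPE when field is null
    }
    return sb.toString();
  }

  // Null-guarded variant: writes an explicit null marker instead of failing.
  static String serializeSafe(List<Object> structFields) {
    StringBuilder sb = new StringBuilder();
    for (Object field : structFields) {
      sb.append(field == null ? "\\N" : field.toString()).append((char) 0x02);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    List<Object> struct = Arrays.asList(null, "s1", "2018-03-12");
    System.out.println(serializeSafe(struct));   // prints the row with a null marker
    System.out.println(serializeUnsafe(struct)); // throws NullPointerException
  }
}
{code}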

[jira] [Commented] (HIVE-18926) Improve operator-tree matching

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404751#comment-16404751
 ] 

Hive QA commented on HIVE-18926:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12915106/HIVE-18926.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 75 failed/errored test(s), 13424 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Work started] (HIVE-18988) Support bootstrap replication of ACID tables

2018-03-19 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-18988 started by Sankar Hariappan.
---
> Support bootstrap replication of ACID tables
> 
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, replication
> Fix For: 3.0.0
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable 
> state of the data.
>  - If the ACID feature is enabled, then perform the bootstrap dump for ACID 
> tables within a read txn.
> -> Dump table/partition metadata.
> -> Get the list of valid data files for a table using the same logic as a 
> read txn does.
> -> Dump the latest valid table Write ID as per the current read txn.
>  - Find the valid last replication state such that it points to the event ID 
> of the open_txn event of the oldest on-going txn.
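
To make the ordering of these steps concrete, here is a minimal Java sketch of the bootstrap-dump flow; every type and method in it is hypothetical and only stands in for whatever the real replication code uses.

{code}
import java.util.List;

/** Hypothetical skeleton of the bootstrap dump for ACID tables. */
public class AcidBootstrapDumpSketch {

  /** What the current read txn is allowed to see (hypothetical). */
  interface ReadTxnSnapshot {
    List<String> validDataFiles(String table);   // same logic a read txn uses
    long latestValidWriteId(String table);
  }

  /** Sink that writes the dump to the staging location (hypothetical). */
  interface DumpWriter {
    void writeMetadata(String table);
    void writeDataFileList(String table, List<String> files);
    void writeWriteId(String table, long writeId);
    void writeLastReplicationState(long eventId);
  }

  static void bootstrapDump(List<String> acidTables,
                            ReadTxnSnapshot snapshot,
                            DumpWriter writer,
                            long openTxnEventIdOfOldestOngoingTxn) {
    for (String table : acidTables) {
      // Dump table/partition metadata.
      writer.writeMetadata(table);
      // Dump only the data files the current read txn considers valid.
      writer.writeDataFileList(table, snapshot.validDataFiles(table));
      // Dump the latest valid write ID as per the current read txn.
      writer.writeWriteId(table, snapshot.latestValidWriteId(table));
    }
    // Record a last replication state that points at the open_txn event
    // of the oldest on-going transaction.
    writer.writeLastReplicationState(openTxnEventIdOfOldestOngoingTxn);
  }
}
{code}

Anchoring the replication state at the oldest on-going transaction's open_txn event presumably lets the subsequent incremental replication replay any events from transactions that were still open during the dump, rather than assuming their data was captured.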



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18926) Improve operator-tree matching

2018-03-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16404672#comment-16404672
 ] 

Hive QA commented on HIVE-18926:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
34s{color} | {color:red} ql: The patch generated 3 new + 137 unchanged - 12 
fixed = 140 total (was 149) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9706/dev-support/hive-personality.sh
 |
| git revision | master / 94152c9 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9706/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9706/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9706/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Improve operator-tree matching
> --
>
> Key: HIVE-18926
> URL: https://issues.apache.org/jira/browse/HIVE-18926
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18926.01.patch, HIVE-18926.02.patch, 
> HIVE-18926.03.patch
>
>
> Currently, joins are not matched.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18988) Support bootstrap replication of ACID tables

2018-03-19 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan reassigned HIVE-18988:
---


> Support bootstrap replication of ACID tables
> 
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, ACID, replication
> Fix For: 3.0.0
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable 
> state of the data.
>  - If the ACID feature is enabled, then perform the bootstrap dump for ACID 
> tables within a read txn.
> -> Dump table/partition metadata.
> -> Get the list of valid data files for a table using the same logic as a 
> read txn does.
> -> Dump the latest valid table Write ID as per the current read txn.
>  - Find the valid last replication state such that it points to the event ID 
> of the open_txn event of the oldest on-going txn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   >